We are moving from rented servers to our own hardware in a data center.
The server side is more or less clear, but the network design is still very vague — I'm a server administrator, not a network engineer.
Let me clarify a few points that may be taken for granted in such cases.
The hosting provider gives us power and an Internet uplink; all internal connections and the entire internal network we have to build ourselves.
We'll start with 3 to 6 racks, roughly 60-70 servers.
At the moment I'm planning the following network logic:
2 WAN routers (one as standby) -------> the rest of the switches.
For the internal network I want 10GBase-T for the database cluster and 1Gbps for all the remaining servers.
For 10GBase-T I'm considering the Netgear M7100; the NICs on the database servers would be Intel E10G42BT X520-T2 10 Gigabit Ethernet cards.
To connect the other servers, something like the M5300-28G.
1. Should I put something at the center of the network, e.g. a Netgear M7300 XSM7224S, or should I interconnect all the switches through their 10Gbps SFP+ ports and tie them straight into the WAN routers?
2. Which WAN edge routers should I choose?
3. Is a NIC with two Ethernet interfaces intended for link aggregation? I.e., can the Intel card give up to 20Gbps to a single server?
4. For IPMI, should I use a separate switch with its own subnet, isolated from the main network?
5. Do I need VLANs, given that the server infrastructure will grow?
6. How should I organize failover in the network?
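On the link-aggregation question: the two ports of a dual-port NIC can be bonded with LACP (802.3ad), but that gives 20Gbps only as an aggregate across many flows — any single TCP connection is still limited to one 10Gbps port, because the hash policy pins each flow to one member link. A minimal Linux sketch, assuming a Debian-style system with ifupdown and the ifenslave package; the interface names eth0/eth1 and the address are made-up examples, not your actual setup:

```
# /etc/network/interfaces fragment: LACP bond over both 10G ports.
# The two switch ports must be configured as a matching LAG on the M7100 side.
auto bond0
iface bond0 inet static
    address 10.0.10.11
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast
    bond-xmit-hash-policy layer3+4   # spread flows by IP+port, not just MAC
```

With `layer3+4` hashing, parallel database connections spread across both links, which is usually what matters for replication traffic between several DB hosts.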
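On the VLAN question: with 60-70 servers and growth planned, tagged VLANs are cheap insurance — e.g. separate VLANs for public traffic, the database interconnect, and management. On the server side a tagged sub-interface is a few lines of config; this sketch assumes Debian-style ifupdown with the `vlan` package, an interface named eth0, and an arbitrary example VLAN ID of 20:

```
# /etc/network/interfaces fragment: tagged VLAN 20 on eth0
# (VLAN ID and address are made-up examples)
auto eth0.20
iface eth0.20 inet static
    address 10.0.20.11
    netmask 255.255.255.0
    vlan-raw-device eth0
```

The switch port then carries VLAN 20 tagged, and new segments later are a config change rather than recabling.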
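On the failover question: the usual pattern for the router pair is VRRP — all servers point at one virtual gateway IP, and the standby router takes it over if the master dies. Most vendor edge routers speak VRRP natively; if the edge were Linux-based, keepalived does the same thing. A keepalived sketch with made-up interface names and addresses (everything here is an assumption, not a recommendation of specific gear):

```
# /etc/keepalived/keepalived.conf on the primary router
vrrp_instance WAN_GW {
    state MASTER             # the standby router uses state BACKUP
    interface eth0
    virtual_router_id 51
    priority 150             # standby gets a lower value, e.g. 100
    advert_int 1             # advertise every second
    virtual_ipaddress {
        192.0.2.1/24         # virtual gateway IP the servers point at
    }
}
```

For the switch layer, the equivalent is dual uplinks from each access switch to two core/aggregation switches, so no single switch failure isolates a rack.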
Any additional tips and advice are welcome.
Thanks in advance!
There is no vendor lock-in and no vendor preference — I mentioned Netgear only as an example of what I have in mind; suggest whatever you would use.
BGP traffic is not planned. VPN: only one or two tunnels for administrative access to the network.
Traffic profile: a website with about 400k visitors per day. Fairly heavy traffic is expected between the database servers.
The other servers will fit within 1Gbps. The uplink is 1Gbps; traffic averages around 300 Mbps with peaks up to 700 Mbps.