Network design for 3-6 racks in a data center

We are moving from rented servers in a data center to buying our own hardware.
The server side is more or less clear to me, but the network side is still very vague, since I do server administration, not networking.
Let me clarify a few points that may well be elementary in a case like this.

The hosting provider gives us power and Internet connectivity. All internal connections and the entire internal network we have to build ourselves.
There will be 3 to 6 racks to start with, and roughly 60-70 servers.

At the moment I am thinking of building the network with the following logic:

2 WAN routers (one as a backup) -------> the rest of the switches.

For the internal network I want 10GBase-T for the database cluster and 1 Gbps for all remaining servers.
For 10GBase-T I am considering the Netgear M7100; the NICs in the database servers would be Intel E10G42BT X520-T2 10 Gigabit Ethernet cards.
To connect the other servers, something like the Netgear M5300-28G.

Questions:
1. Do I need to put something at the center of the network, for example a Netgear M7300 XSM7224S, or should I just interconnect all the switches via their 10 Gbps SFP ports and terminate them on the WAN routers?
2. Which WAN edge routers should I choose?
3. Is a network card with two Ethernet interfaces meant for link aggregation? I.e. with the Intel card, can you get up to 20 Gbps to the server?
4. For IPMI, should I use a separate switch with its own subnet, isolated from the main network?
5. Do I need VLANs, given that the server infrastructure will grow?
6. How should failover be organized in the network?

Any additional tips and advice are welcome.

Thanks in advance!

UPD1:
There is no vendor lock-in and no vendor preference. I brought up Netgear just as an example, to show roughly what I have in mind.
BGP is not planned. VPN: only one or two tunnels for administrative access to the network.
Traffic profile: a website with about 400k visitors per day. Fairly heavy traffic is expected between the database servers.
The other servers will fit within 1 Gbps. The uplink is 1 Gbps; traffic averages 300 Mbps with peaks up to 700 Mbps.
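
As a quick sanity check on those numbers (a back-of-the-envelope sketch; the per-visitor figure is just derived arithmetic, not measured data):

```python
# Back-of-the-envelope check of the uplink figures quoted above.
uplink_mbps = 1000          # 1 Gbps uplink
avg_mbps = 300              # average traffic
peak_mbps = 700             # peak traffic
visitors_per_day = 400_000

peak_utilization = peak_mbps / uplink_mbps
bytes_per_day = avg_mbps * 1e6 / 8 * 86_400
mb_per_visitor = bytes_per_day / visitors_per_day / 1e6

print(f"peak uplink utilization: {peak_utilization:.0%}")    # ~70%
print(f"average data per visitor: {mb_per_visitor:.1f} MB")  # ~8 MB
```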
October 3rd 19 at 03:07
5 answers
October 3rd 19 at 03:09
1. Look at the Cisco 4900M; it is a great solution for a lot of servers.
2. It depends on how much traffic is expected on the uplink and what you need from the routers. Maybe you will have a bunch of VPNs? Maybe a BGP full view? Maybe just a plain static route?
3. Yes.
4. I don't understand the question.
5. Of course you do. At the very least, a separate VLAN for UPS monitoring is an obvious one, and a separate segment for access to the database cluster suggests itself (see the subnet sketch right after this answer).
6. Failover depends on what exactly it is needed for. "For everything" is not an acceptable answer.
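
A minimal sketch of how the VLAN split hinted at in point 5 could be planned; the VLAN IDs, role names, and the 10.0.0.0/16 supernet are assumptions for illustration, not anything stated in the thread:

```python
# Carve one private supernet into per-role subnets, one per VLAN.
# VLAN IDs, roles and the 10.0.0.0/16 block are hypothetical examples.
import ipaddress

supernet = ipaddress.ip_network("10.0.0.0/16")
subnets = supernet.subnets(new_prefix=24)  # one /24 per role

vlans = {
    10: "management / IPMI",
    20: "database cluster (10G)",
    30: "cache",
    40: "API",
    50: "web frontend",
}

plan = {vlan_id: (role, next(subnets)) for vlan_id, role in vlans.items()}

for vlan_id, (role, net) in plan.items():
    print(f"VLAN {vlan_id:>3}  {net}  {role}")
```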

In general, if budget is not a concern, take a Cisco 6500-series chassis with modules giving the number of ports of each type you need, and you will be happy. Everything in it can be made redundant if you do not skimp on the money )))
4. The question is whether I need a separate physical network for managing the servers' IPMI cards.
The point is that even if the main network is overloaded, we do not lose control over the servers. - rosemarie.Okuneva commented on October 3rd 19 at 03:12
Make a separate VLAN and put management for your servers, network equipment, and UPSes into it. - sincere_Hoppe commented on October 3rd 19 at 03:15
When the network is saturated, a separate VLAN in the same trunk gets clogged as well, unless you deal with QoS settings.
So it is either a separate physical network or VLAN + QoS. - rosemarie.Okuneva commented on October 3rd 19 at 03:18
1. Will one 4900M be enough, or do I need a pair?
2. Just a static route at the moment.
3. Thank you.
4. Explained below.
5. We will not have our own UPS; the hosting company provides guaranteed power.
6. Failover is needed to keep the hosts reachable if the currently active switch goes down.
At the application level on the servers this is easy. As I wrote above, I deal with servers and have only a very general idea about networks.
I assume there must be solutions for network failover just as there are for application failover. - Roosevelt.Lars commented on October 3rd 19 at 03:21
1. Well, look at how many ports of each type you need. It comes with 8x10GE ports on board, and you add cards on top of that; there is a card with 20x1GE ports, put two of those in and you are set. If you need more, buy a second chassis. Ours has been running for 2 years and has never once been rebooted.
2. If it is just a static route, then you do not need a dedicated router at all; this 4900 can act as the router too. Unless, of course, you want to run all sorts of routing protocols and tunnels inside.
6. If you want that kind of failover, you duplicate the hardware, but there is usually no real need. Take a good core box with two power supplies; barring force majeure, everything will be fine. Again, Cisco works well and for a long time. In our experience, devices in the core fail only when someone tries to configure something crookedly. Well, or if the roof caves in ))) Failover with network hardware is more of a router-level thing; there is a whole bunch of protocols for that (a host-side sketch follows below). - sincere_Hoppe commented on October 3rd 19 at 03:24
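
For the host-connectivity side of failover raised above (staying reachable when the active switch dies), a common host-side approach is to dual-home each server to two switches with a Linux active-backup bond. A minimal monitoring sketch, assuming a Linux host with the bonding driver loaded and a bond named bond0 (the interface name and the use of bonding are assumptions for illustration, not something specified in the thread):

```python
# Report the state of a Linux active-backup bond by parsing
# /proc/net/bonding/bond0 (present when the bonding driver is loaded).
# The interface name "bond0" is a hypothetical example.
from pathlib import Path

def bond_status(bond: str = "bond0") -> dict:
    text = Path(f"/proc/net/bonding/{bond}").read_text()
    status = {"active_slave": None, "slaves": {}}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Currently Active Slave:"):
            status["active_slave"] = line.split(":", 1)[1].strip()
        elif line.startswith("Slave Interface:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and current:
            status["slaves"][current] = line.split(":", 1)[1].strip()
    return status

if __name__ == "__main__":
    # e.g. {'active_slave': 'eth0', 'slaves': {'eth0': 'up', 'eth1': 'up'}}
    print(bond_status())
```

If the switch carrying the active slave dies, the bond fails over to the other slave on its own; the script only reports which link is currently in use.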
Switches themselves are rarely made redundant; what gets made redundant are the links between them, or the redundancy is handled at the routing level. - sincere_Hoppe commented on October 3rd 19 at 03:27
vvpoloskin, so how does that work out? The switch dies (or its firmware needs updating) and that just does not count? - rosemarie.Okuneva commented on October 3rd 19 at 03:30
How often do your switches actually die? And re-flashing the firmware once a year fits within the notorious "99.999% reliability". - sincere_Hoppe commented on October 3rd 19 at 03:33
Honestly, I cannot remember a Cisco dying on its own, either mine or my clients'.
But I do know that five nines of reliability means roughly five minutes of downtime per year. Are you willing to guarantee that on a single switch? I would not sign up for it. What if there is an urgent update, or the fan inside stops spinning? And so on.

Besides, if a security update comes out, are you going to wait until next year to upload the firmware? - rosemarie.Okuneva commented on October 3rd 19 at 03:36
Well, if you really need five nines, then such a system will cost more than the 6 racks of servers themselves ))) Judging by the links posted below, you would have to move to a 3-tier network model and shell out a gazillion. That kind of solution probably suits large data centers, like hosting providers, rather than someone renting rack space.
And the uplink, as I understand it, is a single physical cable anyway. - sincere_Hoppe commented on October 3rd 19 at 03:39
It will not be that expensive if done properly.

A pair of standard dumb switches in each rack, plus two less dumb L3 10G switches for the rack with the database servers.
Neither the former nor the latter have to be Cisco. They can be Cisco, too. And everything will work, with five nines.

Decent data centers usually give you two uplink cables for the Internet. And if you ask, at least another ten (often for a fee). - rosemarie.Okuneva commented on October 3rd 19 at 03:42
October 3rd 19 at 03:11
1. It depends on how the data flows are organized in your case. If you expect heavy traffic between the racks, put two switches at the center.
2. What does your traffic look like? How many packets per second, is it TCP or UDP? More details are needed. If your provider hands you the Internet over Ethernet, why do you need a router at all? What would you route, and to where?
3. In theory, yes. It needs to be checked.
4. At a minimum, make a separate VLAN with priority. I would build a separate physical network.
5. First you need to understand what tasks run on the servers and what the data flows are, and only then set up VLANs.
6. Hmm. The traditional way. Look at how networks for data centers are built.

If anything is unclear, ask again.
1. Two switches at the center, stacked, with everything connected to them?
2. The traffic is all TCP; the router separates my network from the provider's network.
3. As far as I know, you can. The current hosting provider offers that option.
4. One VLAN for all the servers?
5. The servers are divided into groups: DB, cache, API, web.
6. Are there any resources you can recommend?

Thank you - rosemarie.Okuneva commented on October 3rd 19 at 03:14
1. Give the number of servers you plan to connect. Write separately how many servers will connect at 1G and how many at 10G. Will the 10G servers sit in a single rack, or end up scattered around as the server farm grows?
2. Will there be a complicated firewall or NAT on top? Do you need any traffic accounting of who went where?
3. Does the current hoster actually guarantee that a 20G aggregated link can be brought up to your servers, and that the hardware in your server can push 20 Gbit/s through its bus and through the network card? (See the aggregation sketch after this comment.)
4.-5. If you do not need to separate the servers with a firewall (now or in the future), you can put them all in one VLAN. If you want to play it safe and separate the web frontends from the web backends, you can put them in different VLANs. The nice thing is that routing between VLANs runs at wire speed on L3 switches.
6. Read these design documents from Cisco on how data centers should be built:
www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/nx_7000_dc.html
www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/VMDC/2.2/design_guide/vmdcDesign22.html
Not everything in them deserves your attention, but I suggest reading them to understand what it is all about.
As for other resources, ask Google; there is no single place, there are plenty of blogs that write about this. - sincere_Hoppe commented on October 3rd 19 at 03:17
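
On the 20 Gbps question: with 802.3ad (LACP) aggregation each individual flow is hashed onto a single member link, so one TCP connection tops out at the speed of one 10G port, and only many parallel flows can approach 20 Gbps in aggregate. A toy sketch of that idea (the hash and the interface names below are simplified illustrations, not the kernel's actual xmit_hash_policy):

```python
# Illustrate how per-flow hashing in an LACP bond pins each flow to one
# member link: a single TCP connection never exceeds one member's speed,
# while many parallel flows spread across both.
from collections import Counter
from zlib import crc32

LINKS = ["ten_gig_0", "ten_gig_1"]  # two hypothetical 10G members of the bond

def link_for_flow(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return LINKS[crc32(key) % len(LINKS)]

# One replication stream between two DB servers: always the same link (10G cap).
print(link_for_flow("10.0.20.11", "10.0.20.12", 5432, 40000))

# A hundred parallel flows: a roughly even spread, so ~20G in aggregate.
spread = Counter(link_for_flow("10.0.20.11", "10.0.20.12", 5432, port)
                 for port in range(40000, 40100))
print(spread)
```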
October 3rd 19 at 03:13
I have updated the initial post with clarifications on the questions raised.
October 3rd 19 at 03:15
Switches for the racks that need 10G: D-Link DGS-3420-28TC, dlink.ru/ru/products/1/1468_b.html

For the Gigabit racks: DGS-3120-24TC, dlink.ru/ru/products/1/1366_b.html (no SFP+, i.e. if the core box has no Gigabit ports you will need an intermediate switch).

Core: Extreme Summit X650, extremenetworks.com/products/summit-x650.aspx
October 3rd 19 at 03:17
If you doubt your ability to plan the network for your servers correctly, I recommend turning to more experienced people. They will analyse your system (flows, type of traffic, peak loads on critical services, etc.) and offer solutions; insist on at least two. You choose the one that suits you best, and they will do the detailed design for it.
Next, sit down and think:
take the cost of the project, divide it by 15 years (the average life cycle of a network), and add the annual cost of maintaining your own server infrastructure:
- the administrator's salary
- Internet channels (primary and backup)
- the uninterruptible power system: battery replacement and servicing of the diesel generator
- the cooling system (i.e. air conditioning)
- SMARTNET or its equivalent, depending on the vendor, for the key pieces of hardware
- signatures for IPS + DDoS protection
- the power consumption of this whole setup

Compare the result with what you pay for colocation, and only then make the decision about your own hardware. If the amounts differ only slightly, in my opinion it is preferable to stay in the DC.
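
A minimal sketch of that comparison, with purely made-up numbers (every figure below is a placeholder for illustration, not data from the thread):

```python
# Toy own-hardware vs. rented-servers comparison following the recipe above.
# Every number is a hypothetical placeholder; plug in your own quotes.
capex = 250_000           # one-time cost of the project
lifecycle_years = 15      # average life cycle assumed above

annual_opex = {
    "admin salary": 30_000,
    "internet channels (primary + backup)": 12_000,
    "UPS batteries + diesel servicing": 5_000,
    "cooling": 6_000,
    "SMARTNET or equivalent": 8_000,
    "IPS signatures + DDoS protection": 7_000,
    "power consumption": 10_000,
}

own_per_year = capex / lifecycle_years + sum(annual_opex.values())
rented_per_year = 90_000  # what renting currently costs per year (placeholder)

print(f"own hardware: {own_per_year:,.0f} per year")
print(f"rented:       {rented_per_year:,.0f} per year")
print(f"difference:   {own_per_year - rented_per_year:+,.0f} per year")
```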

Without knowledge of your system it is very difficult to give specific recommendations.

P. S. If it is not too much trouble, write up what you eventually ended up with; it would be a very useful addition to the question.
