Brocade’s BR-MLX-10Gx24-DM isn’t as I expected

Brocade’s BR-MLX-10Gx24-DM line card for the MLXe Chassis isn’t as I expected.

Brocade recently released a line card with 24 ports of 10Gig (BR-MLX-10Gx24-DM) in a half-slot form factor for the MLXe chassis. To run this card, the Ironware version needs to be 5.4 or higher. While reading the upgrade guide I found this note on page 73:

For maximum performance you must operate your BR-MLX-10Gx24-DM interface module with high speed switch fabric modules in turbo mode.


How do you enable Turbo mode? I went to the configuration guide to find out, and this is what I found:

The module can support up to 200 Gbps when the system fabric mode is in Turbo mode (i.e., the system contains only Gen 2 and Gen 3 modules such as the MLX 8x10G, 100G, or 24x10G modules). The module can support up to 12 10G wire-speed ports when the system fabric mode is in Normal mode (i.e., the system also contains any Gen 1 modules such as the MLX 1G or 4x10G modules).
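To put those figures in perspective, here is a quick back-of-envelope sketch of the oversubscription math. The 200 Gbps Turbo-mode figure and the 12 wire-speed ports in Normal mode come from the quote above; the 120 Gbps Normal-mode fabric number is inferred from the 12-port figure, not stated in the guide, and real per-port accounting (the post mentions 18 line-rate ports in some configurations) evidently includes overhead this arithmetic ignores.

```python
# Back-of-envelope oversubscription math for the BR-MLX-10Gx24-DM.
# Guide figures: up to 200 Gbps fabric in Turbo mode, 12 wire-speed
# 10G ports in Normal mode (so ~120 Gbps is inferred, not documented).

PORTS = 24
PORT_SPEED_GBPS = 10

def wire_speed_ports(fabric_gbps: int) -> int:
    """Max ports that can run at line rate given fabric bandwidth."""
    return min(PORTS, fabric_gbps // PORT_SPEED_GBPS)

demand = PORTS * PORT_SPEED_GBPS  # 240 Gbps of front-panel capacity
for mode, fabric in (("Turbo", 200), ("Normal", 120)):
    ratio = demand / fabric
    print(f"{mode}: {wire_speed_ports(fabric)} line-rate ports max, "
          f"{ratio:.1f}:1 oversubscribed")
```

Either way the card is oversubscribed at full port count; the fabric mode only decides by how much.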


Below is the link to find out if your line cards are Gen 1, Gen 2, or Gen 3. Depending on the other line cards in your chassis, you may get 18 or 12 line-rate 10G ports with this card.

Brocade is not the only vendor doing this with their high-density 10G blades. Cisco has a blade on the market that needs to be in its own VDC on the Nexus 7000 platform for it to work.

Always ask lots of questions and read up on how the hardware will perform so you have the correct expectations when the cards arrive.

Has anybody purchased this card? If so, how is it working for you?

This entry was posted in Brocade by Scape.

About Scape

Over 10 years in the networking field. I have worked in service provider and enterprise environments with Cisco, Foundry/Brocade, F5, Riverbed, and Scientific Atlanta gear: routers, switches, firewalls, load balancers, WAN accelerators, DWDM, SONET, multicast, etc.

One thought on “Brocade’s BR-MLX-10Gx24-DM isn’t as I expected”

  1. I’m planning to purchase it. Even with 12 line-rate ports, in a datacenter environment you usually have either traffic in or traffic out, but not both at the same time, so it is still an improvement over the normal 8x10G cards.

    One other thing to watch out for: LAGs can only be formed between 24x10G cards and cannot be mixed with the other card types.

    What does annoy me is that there is no G2 1G blade available, and in today’s world there are still plenty of customers on 1G BGP ports, etc., where it doesn’t make sense cost-wise to move them up to a 10G port. So even if the rest of your chassis is G2+, if you need to provide 1G ports you’re stuck in Normal mode.
