
Basics: Three Types of Cisco UCS Network Adapters

13th August 2012 By Greg Ferro Filed Under: Basics, Tech Notes

In Cisco UCS, there are three categories of network adapters for B-Series and C-Series servers.

The network adapters all connect to a PCI slot on the motherboard in the form of a proprietary card called a “mezzanine adapter”, which plugs into a “mezzanine slot” or “mezz slot” for short. The term mezzanine is widely used by most vendors and refers to the way the card physically ‘stacks’ onto the motherboard, like a mezzanine level in a building.

The three categories are:

Three categories of Cisco UCS Network Adapters

Ethernet Adapters

Description: Network silicon from Intel and Broadcom providing conventional networking capabilities.

My Observations: Can present up to two Ethernet interfaces to a server when two cards are installed. When using a hypervisor, the resources of the card are handled in software.

Converged Network Adapters

Description: Network silicon from Emulex and QLogic that combines FC and Ethernet, with software drivers for the FC interfaces that storage admins care about.

My Observations: Network cards that have FibreChannel and Ethernet adapters on a single card, with fancy silicon for hardware acceleration. One Ethernet interface and one FC interface per card. When using a hypervisor, it shares the resources of the card in software.

Virtual Interface Cards

Description: Network silicon from Cisco that is bathed in Unicorn Tears in the final production phase.

My Observations: Supports FEX features via VN-Link/802.1BR technology, which allows each virtual adapter to appear as a separate Virtual Interface (VIF) on the fabric interconnects.

The architecture of the Virtual Interface Card can support up to 128 virtual network adapters in total, split between vNICs and vHBAs.

Configuration of the VIC is handled in UCS Manager. The hypervisor sees as many “physical NICs” as are configured.

Can allocate multiple NICs to multiple software switches according to whatever policy you need to create.
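To make the VIC behaviour concrete, here is a rough sketch in Python. This is my own illustration, not Cisco code or any real API: the class and method names are invented, and the only facts carried over are the 128-VIF budget per card, the A/B fabric paths (per the correction in the comments below), and the idea that the hypervisor sees every configured VIF as a physical adapter.

    # Illustrative model only. Invented names, not a Cisco API.
    # A VIC presents admin-defined vNICs and vHBAs to the hypervisor
    # as if they were physical adapters, up to 128 per card in total.

    class VirtualInterfaceCard:
        MAX_VIFS = 128  # total vNICs + vHBAs per card, per the table above

        def __init__(self):
            self.vnics = []  # virtual Ethernet interfaces
            self.vhbas = []  # virtual FibreChannel interfaces

        def _check_capacity(self):
            if len(self.vnics) + len(self.vhbas) >= self.MAX_VIFS:
                raise RuntimeError("VIC limit: 128 virtual interfaces per card")

        def add_vnic(self, name, fabric):
            # Each mezz card has a path to both the A and B fabric
            # (see the correction in the comments below).
            assert fabric in ("A", "B")
            self._check_capacity()
            self.vnics.append((name, fabric))

        def add_vhba(self, name, fabric):
            assert fabric in ("A", "B")
            self._check_capacity()
            self.vhbas.append((name, fabric))

        def devices_seen_by_hypervisor(self):
            # The UCS Manager configuration determines how many
            # "physical NICs" the hypervisor actually sees.
            return [name for name, _ in self.vnics + self.vhbas]

    # Example: a host profile with redundant vNICs and vHBAs on both fabrics.
    vic = VirtualInterfaceCard()
    vic.add_vnic("eth0-mgmt", "A")
    vic.add_vnic("eth1-mgmt", "B")
    vic.add_vhba("fc0", "A")
    vic.add_vhba("fc1", "B")
    print(vic.devices_seen_by_hypervisor())
    # -> ['eth0-mgmt', 'eth1-mgmt', 'fc0', 'fc1']

The point is that the 128-interface budget on the card is shared between vNICs and vHBAs, and each one appears to the operating system or hypervisor as if it were a discrete physical adapter.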

I write this down because it’s pretty hard to find this summary anywhere in Cisco’s documentation, and I keep having to tell people the difference. Now I can point them at this blog post.

About Greg Ferro

Human Infrastructure for Data Networks. 25-year survivor of Corporate IT in many verticals and tens of employers, working on a wide range of networking solutions and products.

Host of the Packet Pushers Podcast on data networking at http://packetpushers.net, now the largest networking podcast on the Internet.

My personal blog at http://gregferro.com

Comments

  1. stu says

    13th August 2012 at 18:43 +0100

    When Cisco first launched UCS, the partner ecosystem made a big deal about all of the options. I’ve heard that around 95% of adapters sold over the last couple of quarters have been of the Cisco (VIC) variety. Since UCS is still only 1-2% of overall server sales, the Cisco adapters don’t have significant market adoption compared to Intel, Broadcom, Emulex and QLogic, but I was still a bit surprised at how dominant the Cisco adapter is on the UCS platform.

    • Etherealmind says

      13th August 2012 at 21:45 +0100

      The VIC is the “magic sauce” of the UCS platform and about the only thing that is a clear differentiator. “Stateless computing” is nice, but it’s a hard sell. The network integration is what makes UCS attractive to customers, I think.

      • Dmitri Kalintsev says

        14th August 2012 at 08:50 +0100

        From what I understand, UCS is much more stateless than other “stateless” alternatives. I had a document somewhere that compared the number of configuration points in a UCS server profile with those in HP’s blade system. UCS had about twice the number, IIRC.

  2. Jim Leach says

    13th August 2012 at 19:34 +0100

    Don’t forget about the new VIC-1240 mLOM (modular LAN on Motherboard doohickey) and the VIC-1280, which supports 256 virtualised interfaces. The mLOM initially has a 40GE connection to the fabric, which can be extended up to 80GE by replacing the mezz card with a ‘port extender’. There’s also a VIC-1225, which is an actual PCIe card for the C-Series UCS.

    How many more options could you possibly ask for??

    • Etherealmind says

      13th August 2012 at 21:46 +0100

      Fewer options! That’s too many choices and causes decision paralysis when buying.

      Keep it simple.

    • Dmitri Kalintsev says

      14th August 2012 at 08:52 +0100

      Apparently with 8 uplinks you can have up to 502 VIFs per blade. See the vmguru (dot) nl website for a nice summary article on that.

  3. Daniel Bowers says

    13th August 2012 at 20:20 +0100

    If you don’t like vendor docs, write your own!

    The adapters that plug into C-Series servers (pizza-box type rack servers) are physically shaped differently from the adapters that plug into B-Series servers (blades).

    The adapters for B-Series are ‘proprietary’ in their shape; they physically can only connect to Cisco B-Series servers.

    The Ethernet and CNA adapters for C-Series are standard half- or full-height PCI Express cards, so perhaps they aren’t ‘proprietary’. (Unless Cisco is doing something strange, those C-Series Ethernet and CNA adapters are physically the same Broadcom, Intel, Emulex, and QLogic cards that other vendors use, but with firmware customized for Cisco.)

    Until recently, I’d only heard vendors call the adapter cards that plug into blades “mezzanines”. Cisco docs still don’t call the C-Series adapters “mezzanines”; they’re still just “cards”.

    • Etherealmind says

      13th August 2012 at 21:46 +0100

      I just did.

      I still call them “mezz cards” but I’m rebellious like that.

  4. tbourke says

    13th August 2012 at 21:26 +0100

    Small correction: all mezzanine cards have access to two connections on the backplane, so all cards have at least the ability to present two NICs, and the CNAs/VICs can also have (at least) two native FC interfaces. One connection goes to the A IOM and thus the A Fabric Interconnect, and the other goes to the B IOM and thus the B Fabric Interconnect.
