TCAM – A Deeper Look and the Impact of IPv6

I’ve learned quite a bit about TCAM over the years, but there are a number of areas I’ve never researched. This is another set of scratch notes where I’m scratching the itch to know a little bit more about TCAM. The main question I’m looking to answer: why can TCAM memory be a big killer for IPv6 migration?

TCAM = Ternary Content Addressable Memory.

  • special type of computer memory used in certain very high speed searching applications.
  • Content Addressable Memory describes a chip design that allows a search of the entire memory in a single operation.
  • there are Binary CAMs for binary searches, where registers contain only 1 or 0 – two-state memory.
  • there are Ternary CAMs for ternary searches, where registers contain 1, 0 or X (Don’t Care) – three-state memory.
  • Ternary is another way of saying ‘three’ similar to Binary meaning ‘two’ (in case you didn’t know)
  • because the memory lookup can be achieved very quickly, it is perfect for lookups where a decision is needed on which interface to send a packet or frame.
  • the search algorithm is deterministic – important because of the time sensitivity of a route or forwarding lookup.
  • TCAM can perform a wide search in memory in a very short fixed period of time, typically less than 20ns. Reference: CEENET
  • in 2001, 5-10 lookups per packet were needed to deliver feature-rich forwarding. CEENET
  • “using simultaneous parallel operation to compare data strings input from an external device with data strings stored in the memory and outputting the matches.”
  • TCAM memory is expensive to build therefore manufacturers use as little as possible.
  • TCAM chips use a lot of power and have high heat dissipation
  • In 2009 a single Renesas TCAM chip of the day cost about USD$350.
  • CAM memory is implemented in Sun SPARC, MIPS CPUs and lately Intel Core CPUs as a ‘translation lookaside buffer’ to improve virtual memory translation. Source
  • Cisco has used multiple smaller TCAMs (to save money and power) together with Patricia tree precomputation.
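The X (Don’t Care) state is what makes a single-operation prefix or ACL match possible. Here is a minimal software model of a TCAM entry as a value/mask pair – a sketch only, since real TCAM hardware compares every entry in parallel while this Python loop can only mimic the logic serially:

```python
# Sketch (not vendor code): each TCAM entry stores a value and a mask.
# A mask bit of 0 marks that bit position as "don't care" (the X state),
# so one entry can match a whole range of keys.

class TcamEntry:
    def __init__(self, value: int, mask: int):
        self.value = value
        self.mask = mask  # 1 = must match, 0 = don't care (X)

    def matches(self, key: int) -> bool:
        return (key & self.mask) == (self.value & self.mask)

def lookup(entries, key):
    # Real TCAMs resolve multiple simultaneous matches by physical
    # priority (lowest address wins); ordering entries longest-prefix
    # first gives the same result here.
    for i, e in enumerate(entries):
        if e.matches(key):
            return i
    return None

# Two IPv4 prefixes as value/mask pairs: 10.1.0.0/16 before 10.0.0.0/8.
entries = [
    TcamEntry(0x0A010000, 0xFFFF0000),  # 10.1.0.0/16
    TcamEntry(0x0A000000, 0xFF000000),  # 10.0.0.0/8
]
print(lookup(entries, 0x0A010203))  # 10.1.2.3   -> 0 (the /16 wins)
print(lookup(entries, 0x0A990203))  # 10.153.2.3 -> 1 (falls to the /8)
```

The deterministic part is visible even in the model: the hardware answer arrives in one clock of parallel comparison regardless of how many entries are programmed.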

Cisco Implementation

  • The ternary content-addressable memory (TCAM) contains ACLs in a compiled form so that a decision can be made on whether to forward a frame in a single table lookup. Reference: CCNP SWITCH 642-813 Official Certification Guide.
  • Cisco uses TCAM for L2 forwarding, L3 forwarding, and QoS and security ACLs.
  • IOS handles available TCAM resources in two key ways.
  • Feature Manager (FM)— After an access list has been created or configured, the Feature Manager software compiles, or merges, the ACEs into entries in the TCAM table. The TCAM then can be consulted at full frame-forwarding speed.
  • Switching Database Manager (SDM)— You can partition the TCAM on some Catalyst switches into areas for different functions. The SDM software configures or tunes the TCAM partitions, if needed. (The TCAM is fixed on Catalyst 4500 and 6500 platforms and cannot be repartitioned.)
  • The TCAM architecture varies from platform to platform, and from model to model. For chassis-based devices, the TCAM is located on the Supervisor engine, and each engine differs from the others. Therefore you will need to research each platform and model on its own merits. Example on 4500 Supervisor Example on C6500 Sups
  • Understanding and Configuring Switching Database Manager on Catalyst 3750 Series Switches
  • I can’t find many details on the C3750-E or -X.
  • The Nexus 7K has TCAM on every module (not just the Supervisor), and different modules can hold different amounts.
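To make the SDM partitioning idea concrete, here is a toy model of carving a fixed TCAM pool into per-function regions. The template names and numbers are entirely made up for illustration; they are not Cisco’s actual SDM templates, which you should look up per platform:

```python
# Hypothetical sketch of SDM-style partitioning: one fixed pool of TCAM
# entries divided between functions, as on the smaller Catalyst switches.
# Template names and fractions below are illustrative only.

TCAM_SIZE = 8192  # total entries on an imaginary switch

templates = {
    # fraction of the TCAM pool given to each function
    "default": {"l2": 0.25,  "ipv4_routes": 0.50, "acl": 0.25},
    "routing": {"l2": 0.125, "ipv4_routes": 0.75, "acl": 0.125},
}

def partition(template: str) -> dict:
    """Return the number of TCAM entries assigned to each region."""
    shares = templates[template]
    return {region: int(TCAM_SIZE * frac) for region, frac in shares.items()}

print(partition("routing"))
# {'l2': 1024, 'ipv4_routes': 6144, 'acl': 1024}
```

The point of the model: the pool is zero-sum. Giving the routing region more entries takes them away from L2 or ACL space, which is exactly the trade-off `sdm prefer` asks you to make.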


Nexus Modules – EARL 8 Chipset

Protocol                         Protocol Entries   TCAM Entries
IPv4 Unicast                     80K                80K
IPv4 Multicast & IPv6 Unicast    20K                40K
IPv6 Multicast                   2K                 8K
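The ratios in that table are worth a quick sanity check: dividing TCAM entries by protocol entries gives the per-route TCAM cost, which is the interesting pattern even if the absolute numbers are disputed in the comments. A throwaway check using the figures as printed:

```python
# Per-route TCAM cost from the table above: TCAM entries divided by
# protocol entries (numbers exactly as printed, accuracy not guaranteed).
rows = [
    ("IPv4 unicast",              80_000, 80_000),
    ("IPv4 mcast / IPv6 ucast",   20_000, 40_000),
    ("multicast (last row)",       2_000,  8_000),
]
for name, proto_entries, tcam_entries in rows:
    print(name, "->", tcam_entries // proto_entries, "TCAM slot(s) per route")
```

So an IPv4 unicast route costs one slot, an IPv6 unicast route two, and the last row four – wider keys eat the fixed pool faster.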

The EtherealMind View

  • worth noting that because an IPv6 address is four times longer than an IPv4 address (128 bits versus 32), an IPv6 entry can consume four times as much TCAM memory.
  • This means that your router may hold only 25% as many IPv6 routes as its maximum IPv4 capacity.
  • And since you will probably have both IPv4 AND IPv6 addresses in your network, TCAM exhaustion could be a concern in larger networks, since the gross size of the routing table may expand significantly.
  • Therefore the networking vendors are going to be excited about the extra sales.
  • Speak to your Cisco account manager about getting more information about your hardware. There is a lot of good documentation inside Cisco that they can give you under NDA that will give you more detail on the TCAM utilisation. Account Managers are a resource and you should use them.
  • The documents referenced here have a lot of detail about HOW TCAM works, but it is not possible to cover that detail here – some of those documents run to twenty or thirty pages on the topic.
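The arithmetic behind those points can be sketched quickly. This is a back-of-envelope model only, assuming a flat 4x width penalty for IPv6; as the comments note, real platforms vary (the 6500, for example, is closer to 2x), and every number here is invented for illustration:

```python
# Back-of-envelope: if an IPv6 entry consumes four times the TCAM width
# of an IPv4 entry, a fixed pool fills up much faster when dual-stacked.
# Pool size and route counts below are illustrative, not any real platform.

TCAM_BITS = 512_000 * 32  # imaginary pool sized for 512K IPv4 entries

def capacity(ipv4_routes: int, ipv6_routes: int) -> float:
    """Fraction of the TCAM pool consumed, with IPv6 at 4x IPv4 width."""
    used = ipv4_routes * 32 + ipv6_routes * 128
    return used / TCAM_BITS

# Dual-stack enterprise carrying the same 5,000 routes in each table:
print(f"{capacity(5_000, 5_000):.1%} used")  # the IPv6 share dominates
# All-IPv6 worst case: only a quarter as many routes fit.
print(capacity(0, 512_000 // 4) == 1.0)
```

Even in this crude model the dual-stack table costs 5x what IPv4 alone did for the same route count, which is why the gross routing table size, not just the route count, is the number to watch.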


Source: Wikipedia

  • David Rothera

    Nice article Greg. Knowing that quite a few of our core switches are already pretty high on TCAM util is going to be interesting in the coming months. I’m sure as you have said that a lot of people will be spending some serious $$$ to get IPv6 running fully.

  • Alex S

    Thanks Greg, great article.
    Do you think software developers may turn to a portion of slower RAM to compensate for size as a last resort? Judging from what Ivan said in PPP Show 31 about processing instructions and how memory works in general, it may not even be comparable at all, and working with RAM instead could slow it down to a crawl.

    • Greg Ferro

      From what I read, once the L3 TCAM is overloaded the process path is used. Since all Cisco devices have underpowered CPUs to increase profits, this is effectively a failure since it will overload the CPU and the device will stop. For L2 TCAM I think that the system stops accepting new entries and throws “CAM TABLE FULL” error messages (or something similar). To some extent, each platform will do this differently, so “it depends” is in force here and you’ll need to research your own hardware.

      • Chris Crawley

        We had this exact issue a couple of years ago on some 7613’s. We busted the TCAM limit and all hell broke loose. Spent the next two nights stripping down one node at a time and replacing the Sup720-3B’s with 3B-XL’s and 3B-XL DFC’s on all the line cards so that we could up the TCAM limit on IPv4.

        With IPv6 taking off and the IPv4 table still growing at an incredible rate I think that some serious investment is going to be needed over the next couple of years to cope with it all.

        Fun times ahead.

  • Roman

    Nice summary Greg.

    – You can partition 6500 TCAM for number of IPv4/IPv6/MPLS/MCAST entries
    – 6500 can be distributed platform (similar to N7K), with DFC/TCAM on linecards. On some linecards, DFC is mandatory.
    – Numbers for N7K are wrong. It’s 128K routes for IPv4 etc. There are also XL linecards, they can handle 1M IPv4 routes. You need special (per box) license to enable XL.
    – IPv4 vs IPv6 number of routes is not 25%. It’s very platform specific (e.g. 50% for non-XL 6500 – 128K vs 64K)

  • Eliot


    I think the theory on IPv6 is that prefixes will be aggregated a lot better than in IPv4, and because of that the number of routes in the global routing table will be smaller (in theory using the same or less TCAM than in v4). The problem comes when IPv4 gets extended further out as the years go on, and all of the v4 prefixes take up a large part of the TCAM due to massive de-aggregation of larger prefixes (private companies selling the then-depleted IPv4 space out of their larger aggregations).

    • Greg Ferro

      That’s possible. I guess I’m seeing that most people will just do what they already do, where each subnet is a route in your network. If your IPv4 routing table is 5000 entries, then your IPv6 table will also be 5000 entries, but IPv6 will consume more TCAM memory than an equivalent IPv4 route because it is four times larger in binary.

      Note that I’m mostly looking at the Enterprise problem here. Most Service Providers have already overloaded their equipment to the point where they have discovered TCAM and its limitations and are carefully monitoring it.

      • Eliot

        Ahhh, yes I see. I was looking at this more from a service provider or large enterprise who is accepting a full table on edge/backbone gear. I can see how internal routes in an enterprise might have this issue w/lower end gear. I guess that’s where good summarization and management of routes comes into play 😉

        • Greg Ferro

          I would also bet on the fact that many more companies will dual home with their IPv6 allocation, and the IPv6 tables will be MUCH larger as a result. Given that everyone will have their own public IPv6 allocation – because you do NOT want to be using a provider-dependent IPv6 address and have to re-address every desktop when you change providers.

  • Pingback: Show 40 – Openflow – Upending the Network Industry – Packet Pushers()

  • Pingback: Show 40 – Openflow – Upending the Network Industry – Gestalt IT()

  • Pingback: Routing Protocols and Computation in Silicon – My Etherealmind()

  • Pingback: What’s Happening Inside an Ethernet Switch ? ( Or Network Switches for Virtualization People ) – EtherealMind()

  • Pingback: OpenFlow: Proactive vs Reactive()
