
EtherealMind

Software Defined & Intent Based Networking


Archives for June 2016

Custom Silicon Isn’t That Hard After All

30th June 2016 By Greg Ferro Filed Under: Blog

I’ve just been speaking with Barefoot Networks about their Tofino ASIC for flexible packet processing. During the briefing it became clear that:

  1. It took three years from start to finish to design the ASIC. They claim that volume shipping is expected mid-2017, so four years time-to-market. (I’m a little dubious of this claim.)
  2. The current team size is 80 people. That’s less than the average product management team at a big vendor.
  3. The Tofino ASIC is equivalent to or better than the Broadcom Tomahawk in almost every way. Tofino is a 6.5 Tb/s, 260 x 25G I/O channel (65 x 100GE or 260 x 25GE, or some other combination) user-programmable networking ASIC (see the quick port arithmetic after this list).
  4. The pricing is expected to be similar to existing products.
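For those who like to sanity-check the headline numbers, aggregate bandwidth is simply lane count times lane speed. A rough sketch in Python (round numbers only, not a vendor datasheet):

```python
# Rough port arithmetic for a ~6.5 Tb/s switching ASIC (illustrative only).
aggregate_gbps = 6500
for lane_gbps, label in [(100, "100GE"), (50, "50GE"), (25, "25GE")]:
    print(f"{aggregate_gbps // lane_gbps} x {label}")
```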

Working with networking vendors over the years, especially Cisco, I’ve been told by product managers and marketing that silicon is hard: “hundreds of people”, “hundreds of millions of dollars”, “many years to develop”, “you can’t change the silicon”.

It’s clear that something has changed. Here are some thoughts on this transition.

Small Teams are Better Than Big

Barefoot has demonstrated that a small company can build networking silicon for moderate-volume use. It can do so in half the time, at a quarter of the cost, and with genuine innovation compared to the rest of the market.

In my view, Barefoot Networks has a major competitive advantage over the big networking companies: it draws experienced, smart people away from them and focuses them on one specific thing.

Customer Input

Barefoot has developed an open source community around its product. P4.org has been working quietly to gather input from a wide range of potential customers. Previously, only big companies had that kind of access to customer input.

Customer input is about reducing the risk of product development. Investors can measure the value of their spend during development by monitoring open source engagement. It’s low cost, and it also provides marketing and exposure for the company’s product.

It’s also worth noting that the rise of the cloud companies has enabled Barefoot. Facebook, Azure, Google and Amazon are sure to be involved in early testing because programmable forwarding opens up a vast new range of possibilities for their networks.

The EtherealMind View

  1. Don’t look to incumbent vendors for genuine change.
  2. Look for incumbent vendors to buy these companies to bring proven products into their portfolio/strategy.
  3. The old way of needing “years, dollars and expertise” no longer applies to building new products.
  4. Access to customers for product development has been replaced by open source communities and well-known individuals who engage them.
  5. This is why the Cisco “spin in” model isn’t working anymore (if it ever did).

Upgrade Your Data Centre To A Closet

21st June 2016 By Greg Ferro Filed Under: Analysis, Blog

I’ve recently reviewed a data centre design for a mid-sized company that can fit into a closet. This prompts the question: do you really need a data centre anymore?

TL;DR: we can put enough CPU, storage and networking into a single rack to meet the infrastructure needs of most companies, especially if your applications are cloud-architected.

I’ve heard of and spoken to a few companies that are moving services out of the public cloud because of the high recurring cost of public cloud services. Because their applications were “cloud ready” or “cloud architected”, they achieved major infrastructure cost reductions by designing a “data centre” at closet-scale.

History

When I first started my career in IT infrastructure in the mid-1990s, I was “installing” servers and networks into spare closets and broom cupboards in offices. Later it was a “spare” office, eventually converted with a raised floor and extra air conditioning.

While the “big” companies with mainframes had already built data centres (at vast expense), there was a time in the late 1990s/early 2000s when building a data centre made sense. It was “accounting fashionable” to boost the balance sheet with hard assets like real estate instead of “corporate goodwill”.

Back then, building a DC on a 20-year depreciation schedule was cost- and tax-effective. Naturally, once the idea took hold everyone started overdoing it, leading to the costly and vastly over-specified data centres of today. Now the money-fashion is to own nothing, rent everything, produce intangibles and focus on short-term returns, so the tide has turned against owning and operating a data centre.

For now, at least, it’s practical to consider returning to building closets instead of renting co-location space, because of major changes to density, power, cooling and weight.

Density

The overall trend in Enterprise is increasing utilisation and efficiency, and thus the “density of utilisation” (I made that up). Mostly this is because the existing generation of technology is highly inefficient: manual operations, one server per app, zero or limited re-use of existing platforms. The last five years have seen some improvement, with a slow but steady migration to hypervisors to improve server utilisation, plus flash storage and overlay networks.

A single rack of equipment has enough compute and storage to drive the applications of a large company when using virtualisation, automation and orchestration (à la software defined).
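As a rough sketch of that claim, here is some back-of-envelope arithmetic with deliberately illustrative numbers (not any particular vendor’s hardware); plug in your own server specs and oversubscription ratio:

```python
# Back-of-envelope rack capacity with illustrative, not vendor, numbers.
servers_per_rack = 20           # 1RU dual-socket servers, leaving room for switches and UPS
cores_per_server = 2 * 22       # two 22-core CPUs
ram_per_server_gb = 512
vcpu_oversubscription = 4       # a common ratio for general-purpose VMs

total_vcpus = servers_per_rack * cores_per_server * vcpu_oversubscription
total_ram_tb = servers_per_rack * ram_per_server_gb / 1024

print(f"~{total_vcpus} vCPUs and ~{total_ram_tb:.0f} TB RAM in a single rack")
```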

Less Power

If you can keep power consumption at a reasonable level then you don’t need a complex power infrastructure. A key driver for dedicated data centres was supporting the power infrastructure of diesel generators, fuel tanks, battery rooms and so on. But if you can keep the load under 20-40 kW then you can avoid all of that with battery backup. Modern battery systems require a lot less space and last longer.

Power failure? You can readily automate a power-down of non-critical assets to extend battery life and reach three nines of availability. If you have something that needs better than that (you probably don’t), then get half a rack in a colo where the power is someone else’s problem.
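To illustrate why shedding non-critical load matters, here is the simple runtime arithmetic with hypothetical figures (your battery capacity and loads will differ):

```python
# Illustrative battery runtime arithmetic (hypothetical figures).
battery_kwh = 40          # assumed usable battery capacity
full_load_kw = 20         # whole closet running
critical_load_kw = 6      # after automated shutdown of non-critical hosts

print(f"Full load runtime:     {battery_kwh / full_load_kw:.1f} hours")
print(f"Critical-only runtime: {battery_kwh / critical_load_kw:.1f} hours")
```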

Higher Temperatures

Modern hardware doesn’t need to be refrigerated to 17ºC. Running a closet at 30ºC substantially reduces the cooling load, with a concomitant reduction in both CapEx and OpEx. While it’s a bit unpleasant to work in there, that’s also strong encouragement to implement automation.

Less Space & Weight

We reached peak space consumption around 2009, and virtualisation has solved this problem. The increase in CPU performance has reduced the number of servers needed. With only a handful of servers, networking doesn’t need chassis switches and is reduced to a couple of 1RU switches (again, less power and cooling too).

Weight has been a major problem, especially for storage arrays with hundreds of disk drives. The transition to All Flash Arrays has reduced space and weight while increasing performance compared to disk arrays.

The EtherealMind View

As I said above, a few companies I’ve spoken to are moving services out of the public cloud because of its high recurring costs. Because their applications were “cloud ready” or “cloud architected”, they achieved significant infrastructure cost reductions by designing a “data centre” at closet-scale.

Their “cloud only” processes meant that a design approach of thinking small, minimalist and software-first produced a small-scale data centre. They already have dashboards and monitoring that can predict their consumption and resources.

Many companies can easily fund a few racks of equipment from cash flow. Importantly, it reduces the total overhead of the business.

NOTE: this assumes that you have the right skills in your organisation. Not technical skills (those are easy to sort out) but management skills to comprehend how to deliver this. In my experience, finding competent managers in technology is exceedingly rare.

 

Basics: What is the difference between routing, switching, bridging and forwarding?

20th June 2016 By Greg Ferro Filed Under: Basics, Blog

The terms routing, switching, forwarding and bridging have different meanings that have changed over time.

Routing is the process of forwarding packets at L3 of the OSI model.

Bridging is the process of forwarding frames at Layer 2 of the OSI Model.

Switching is the process of forwarding frames at Layer 2 of the OSI model based on the destination address.

So forwarding is the general term for moving data from a device’s input to its output; routing and switching are specific forms of forwarding.
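As a toy illustration of those definitions (not how real hardware works): a switch does an exact match on the destination MAC address, while a router does a longest-prefix match on the destination IP. A minimal Python sketch:

```python
# Toy illustration only: real devices do this in hardware tables, not Python dicts.
import ipaddress

# Bridging/switching: exact match on the destination MAC address.
mac_table = {"aa:bb:cc:dd:ee:01": "port1", "aa:bb:cc:dd:ee:02": "port2"}

def switch_frame(dst_mac: str) -> str:
    # An unknown unicast would be flooded; simplified here.
    return mac_table.get(dst_mac, "flood")

# Routing: longest-prefix match on the destination IP address.
route_table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",     # default route
}

def route_packet(dst_ip: str) -> str:
    addr = ipaddress.ip_address(dst_ip)
    matches = [net for net in route_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return route_table[best]

print(switch_frame("aa:bb:cc:dd:ee:02"))   # port2
print(route_packet("10.1.2.3"))            # eth1 (more specific than /8)
```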

Once upon a time, there were many L3 protocols (IP, IPX, AppleTalk, Banyan VINES etc.) and many L2 protocols (Ethernet, FDDI, ATM, Token Ring, ARCNET etc.). The term forwarding was used to describe moving any protocol across the network. Today we have converged on Ethernet as the only L2 protocol to survive, and IPv4/IPv6 as the only L3 protocols, so the concept of “forwarding” has merged with the concepts of switching and routing.

Historically (in the 1980s and 1990s), switching was done with “switches” and routing was done with routers. Switching of Ethernet frames could be performed in silicon while routing was done in the CPU.

Another difference is that Ethernet requires a CRC check on every frame, while IP routing requires that the IP header carry a checksum that must be recalculated at every hop. Because the Ethernet header is a fixed size, the processing is simple, whereas the IP header is more complex and recalculating its checksum was difficult to do in silicon. There are other differences, but this is the major one.
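To make the checksum point concrete, here is a minimal sketch of the RFC 791 one’s-complement header checksum that an IPv4 router has to recompute after decrementing the TTL (Python, illustration only; real routers do this in hardware or highly optimised code):

```python
import struct

def ipv4_header_checksum(header: bytes) -> int:
    """RFC 791 one's-complement checksum over an IPv4 header.

    The checksum field itself (bytes 10-11) is assumed to be zeroed
    before calling this function.
    """
    if len(header) % 2:                 # pad to an even number of bytes
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        word, = struct.unpack("!H", header[i:i + 2])
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

# A minimal 20-byte IPv4 header (no options) with the checksum field zeroed:
# version/IHL, ToS, length, ID, flags/frag, TTL, protocol, checksum, src, dst.
header = bytes.fromhex("4500003c1c4640004006" + "0000" + "ac100a63ac100a0c")
print(hex(ipv4_header_checksum(header)))
```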

Basics: Is there always a feasible successor in EIGRP?

16th June 2016 By Greg Ferro Filed Under: Basics, Blog

Question: Is there always a feasible successor in EIGRP?

Background: An operational route is passive. If the path is lost, the router examines the topology table to find a feasible successor (FS). If there is an FS, it is placed in the routing table; otherwise, the router queries its neighbors, sending the route into active mode.

Answer: No. There has to be more than one path between two routers for a feasible successor route to exist. If there is only one path, then there can be only a successor and, of course, no “feasible successor”, because there is no alternate path.

Note: As Gary notes in the comments below, a route must pass several pre-conditions. In order to be considered a feasible successor, the Reported Distance (RD) of the alternate route must be lower than the successor route’s Feasible Distance (FD). RD is what your neighbor reports as its cost to reach a specific network. FD is your cost to reach that neighbor plus that neighbor’s RD.
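To tie the definitions together, here is a minimal sketch of the feasibility condition; the routers and metrics are made up purely for illustration:

```python
# Minimal sketch of the EIGRP feasibility condition (illustrative metrics,
# not how a real router stores its topology table).
from dataclasses import dataclass

@dataclass
class Path:
    neighbor: str
    reported_distance: int      # RD: the neighbor's own metric to the network
    cost_to_neighbor: int       # metric of the link to that neighbor

    @property
    def feasible_distance(self) -> int:
        return self.reported_distance + self.cost_to_neighbor

paths = [
    Path("R2", reported_distance=100, cost_to_neighbor=50),   # becomes the successor
    Path("R3", reported_distance=120, cost_to_neighbor=40),
    Path("R4", reported_distance=200, cost_to_neighbor=10),
]

successor = min(paths, key=lambda p: p.feasible_distance)
feasible_successors = [
    p for p in paths
    if p is not successor and p.reported_distance < successor.feasible_distance
]

print("Successor:", successor.neighbor)
print("Feasible successors:", [p.neighbor for p in feasible_successors])
```

In this example R4 is an alternate path but not a feasible successor, because its RD (200) is not lower than the successor’s FD (150).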

Corporate to Consumer Devolution

15th June 2016 By Greg Ferro Filed Under: Analysis, Blog

The process of selling products to Enterprise IT is slowly beginning to change. Instead of selling “top-down” to the CIO and a small team of experts, it’s about selling to the consumer with a “bottom-up” strategy. I’ve put together a few examples.

DevOps. Technology platform decisions have been devolved to developers following the rise of open source. Open source is key because no costs are incurred up front and thus no management supervision is required.

Cisco and Apple. Cisco is investing in an Apple partnership. Apple gets a path to Enterprise customers, Cisco gets its name in front of the people who use its products. (Consumers barely know that Cisco exists.)

Microsoft and LinkedIn. Microsoft is losing touch with Enterprise IT as open source dominates the future (Linux, KVM, Docker, OpenStack, Cloud Foundry etc.), and this is a battle that cannot be won. Buying LinkedIn allows Microsoft to reach consumers directly. LinkedIn has personal data about a key market: people who work 9 to 5 in corporate offices. LinkedIn will be integrated into Dynamics CRM so that salespeople know more, and into MS Outlook.

The strategy is that influencing consumers removes Corporate IT from the decision-making path, or at least gives users a much larger say in which products are used.

Public Cloud. The early years of public cloud were led by individuals and small teams. Public cloud wasn’t sold to Enterprise, it was targeted at individuals. Today, public cloud is trying to reach Enterprise/Corporate sales, with some difficulty.

The EtherealMind View

This trend confirms my perspective that:

  1. Enterprise IT continues to lose relevance in the sales cycle.
  2. Consumer markets have a growing impact on new technology. Enterprise IT doesn’t drive new technology, it consumes what’s available, e.g. Cisco using the Apple iPhone instead of producing its own hardware/software devices.
  3. Social media will penetrate the corporate technology stack regardless of need, want or practicality. There is too much competitive opportunity and money on the table for deep tech companies like Facebook, Amazon, Microsoft & Google in terms of tertiary data exploitation. They will force it to happen.
  4. Networking is about “over the top”. These technologies do not rely on, need or want functions in the network. It’s all over the top.

