It's “best practice” to assume that IT vendor products are faulty, have serious bugs, and will fail in normal operation at any time.
This article first appeared in an issue of the Human Infrastructure Magazine at Packet Pushers. You can sign up to receive the magazine by email, for free, by subscribing here.
You need a POC to prove it works.
- We accept this on the argument that networking is complex technology and cannot be bug-free. (Why not?)
- And that every customer is unique (even though every customer uses exactly the same products).
- If I am buying top-quality equipment, why should I have to prove it works as it's supposed to before I use the purchased product?
POCs don’t prove anything
- POCs can test, at best, 50% of a real deployment.
- Completing a vendor POC project provides zero guarantees that the product will work in real life.
- There are NO circumstances in which the vendor will accept legal or financial liability for their products in normal operation.
So why are we told that Proof of Concept testing is necessary for large projects?
POC Value
Conducting a POC requires large amounts of OpEx to define and execute the testing. Typically this is in the order of 400–800 hours of preparation by the customer for a medium-sized engagement (not including travel expenses).
Are there any other benefits from conducting a POC?
- Training on the product and forced education on the technology through actual hands-on work.
- Offers insights into troubleshooting and operating the technology after the deployment.
- This can reduce the impact of the learning curve after deployment.
- A (false) sense of confidence that the solution works since there are no actual vendor guarantees.
- The vendor gets a lot of benefits: bug testing, product validation, user feedback, training for their POC engineers and professional services revenue.
The Value of Vendor “Warranty”
If you are paying a vendor for their “high quality, reliable and market proven” technology, then a POC/test/validation should never be required.
Conclusion: Vendors have big profit margins and those profits aren’t put back into products
Conclusion: Vendors have no incentive to produce high quality products because all responsibility and risk are accepted by the customer.
Conclusion
- High priced products should be high quality and POCs should not be necessary.
- Proof of Concept testing is a sales exercise that provides an illusion of risk management or risk mitigation.
- Vendors profit from recurrent services revenue in the form of maintenance contracts.
- Vendors profit from creating an environment of fear, uncertainty and doubt to encourage customers to buy maintenance contracts.
- The same fear drives Proof of Concept testing because no other option exists.
- Companies offering “solution validation” without guarantees or financial liability are tacitly admitting that their products are sub-standard and faulty.
There is something wrong with our industry that produces unreliable technology and customers that buy it on that basis.



I think you have missed one aspect – IT people do not always have what it takes to validate the solution. Users do. For example, admins don't necessarily understand which BYOD onboarding process is easy to use and intuitive for users. And users are typically not able to say in advance what they want; they need to see it first and give a thumbs up or down.
This is where PoC might have great value.
Disagree. A POC is a full custom build of a proposed solution prior to deployment. I would take the view that you are discussing user validation / user experience and that certainly should not require a full pre-deployment to do that testing.
If you do this, you are certainly doing it wrong. But then, nothing like Enterprise IT to spend money like drunken sailors.
Not sure that I agree with your definition of what a POC is here, Greg. A PoC is a high-level Proof of Concept. It doesn't need to be a full replication; it is the first stage needed to see if a solution has legs before moving into more detailed integration testing and pilot phases. Well, that is my experience anyhow, and we all see things differently depending on the environments we have been exposed to.
I see that you are happy for your employer to accept the risk for integration and product function. This means that the vendor is not responsible for making a quality product, nor are they motivated to fix bugs/problems, because they already have your money.
And when something goes wrong, they are not obligated to issue a refund when the product is faulty. Why not?
Not at all, the vendor doesn't get the money or the orders until the product is selected. A product is not selected until it is productised (certified for production). A product is not productised until it is fully integration tested. A product doesn't make integration testing if the basic concept fails in a PoC situation. A PoC may not occur until the product is on a shortlist determined through an RFI/RFP process. Even after all of that, the vendor may not get the orders due to other commercial factors. Do I trust a vendor to tell me the truth about the quality and performance of their product or its conformance to my company's standards? The answer is a resounding no, based on real world experiences.
I would highlight that product selection is very different from product operation. You can select the best car ever and still have a bad product.
…also, a refund on a box would be little peace of mind in an environment like mine. If there is some kind of major outage costing millions in downtime or reputational damage, the money back on a single failed box is little comfort. We test for ourselves for a greater assurance and comfort level. It also reveals the actuals in terms of performance, where vendors typically massage the truth (often by what they don't say, or in the way that they present their measurements). For the vendor it is in their interest to present only positives, as negatives can result in fewer sales. As a customer my motivations are quite different, and I want to get to the realities.
OK, so add liquidated damages to the contract. In a car accident, you have insurance to pay for the first- and third-party damage. We don't have this in IT because the quality of products is so low that it's uninsurable.
Got it, we differ on definition of word PoC. What you describe I consider Pilot (when solution is integrated deeply with company stack). I consider PoC a limited demo, but with ability for real users to participate and provide feedback. For example get one AP + AAA in VM and let users see (and use) different captive portal processes (self register, SMS-based, social login, sponsored,…). Little more than demo.
From what you describe as PoC I agree it is not worth the time and money.
Pretty much. I always wondered why my 6-person network team would have to validate that (for example) BFD works as advertised on a new router or a new switch can talk MSTP to an old switch of a different vendor. Where are the interoperability labs where I can go and pay $100 for a report on vendor switch X?
FWIW: we (Packet Pushers) looked into building an interoperability lab and you simply cannot make it pay. The vendors hate them, so you would have to pay full price for assets. Then the headcount, space and power needed to operate it is much larger than you can imagine. Basically it's like building an entire Enterprise IT team without the revenue to pay for it.
That era is over. Move to open source and you won’t need an interoperability lab.
Moving to open source perhaps just shifts the issues of interoperability to the open source version, due to open source not being required to follow a set standard for protocols/formats/templates, or even backward compatibility. I like your POC chart, it has some good points, but regardless of open vs. closed, it is always good to POC when possible and pilot when possible. As we all know, there are always uncertainties, i.e. bugs, when deploying new gear, even with the “rigorous” testing vendors do prior to releasing products.
You are missing the point. “There are always uncertainties, i.e. bugs, when deploying new gear” should never happen. If you are paying top prices for a product, then faults should not be accepted as normal.
You wouldn’t accept this behaviour for a car or a smartphone, you would demand a replacement or a refund.
At least with open source, your loss is limited to the time spent.
It's a shame the vendors hate interoperability labs; they could definitely benefit from performing some cross-vendor interop testing themselves before releasing product into the ether.
In a similar vein, STACResearch performs independent testing of infrastructure in finance-sector-focused stacks. What they don't do, though, is benchmark one product against another as a bake-off.
If you haven’t come across them I recommend taking a look.
I believe they are funded partially via their members who are typically banks and financial institutions.
There definitely is merit in independent testing if the testing company can demonstrate there is no bias.
Alternate viewpoint: isn't it a shame that you have to rely on third-party testing to validate that a product actually works as represented by the vendor?
If you consider a PoC as “test an individual technology to make sure it works” then I agree with this article.
However, there is value in running a test that is designed to answer a different kind of question, like: “would automated collection and visualisation of state and performance data improve our ability to resolve incidents more quickly and allow more time for strategic projects?”. In that case, it may not even matter what technology you use for the PoC – it’s the business value you’re testing. Run a PoC so you only spend money buying licenses and implementing the technology if the business PoC is successful.
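That kind of business-value PoC can be surprisingly small. As a minimal sketch (all names here are hypothetical, and `poll_device` is a stub standing in for a real collector such as SNMP or gNMI), the idea of "automated collection of state data that speeds up triage" can be demonstrated with a few lines that compare two snapshots of interface error counters and flag the ones that are growing:

```python
# Minimal sketch of a business-value PoC: compare two snapshots of device
# state and surface the interfaces whose error counters are growing, so the
# team can judge whether automated collection actually speeds up triage.
# poll_device() is a stub; a real PoC would swap in SNMP/gNMI/API polling.

def poll_device(device, snapshot):
    """Return {interface: error_count} for a device at a snapshot; stubbed data."""
    samples = {
        ("core1", 0): {"eth0": 10, "eth1": 0},
        ("core1", 1): {"eth0": 250, "eth1": 0},
    }
    return samples[(device, snapshot)]

def growing_errors(device, before, after, threshold=100):
    """Flag interfaces whose error count grew by at least `threshold` between snapshots."""
    old = poll_device(device, before)
    new = poll_device(device, after)
    return [name for name in new if new[name] - old.get(name, 0) >= threshold]

if __name__ == "__main__":
    # eth0 grew by 240 errors between the two snapshots, eth1 did not move
    print(growing_errors("core1", 0, 1))
```

The point of keeping the collector a stub is exactly the comment's argument: the PoC is testing whether the workflow is valuable, not which vendor's telemetry stack implements it.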