- A nightmare of inconsistent and unpredictable free UNIX tools that became the only off-line debugging available to network engineers, because it was easy and cheap for vendors to implement. There was no innovation or planning; it happened because developers needed some debugging. (Another example of profit and laziness coming before customer need.)
- “Logging” comes from an old practice of throwing a piece of wood off a ship, then counting the rate at which knots on a rope tied to it fell into the water. In other words, they were doing it wrong from the very beginning, by using logging instead of time-series for metrics. (Source: https://twitter.com/Ben_Pfaff/status/1085979153273253888)
- Note that logging is not monitoring, analytics or observability. These are superset capabilities of which logging may be a part.
From a networking angle, syslog was the only logging tool we had for years. It was unreliable, and vendor implementations were unpredictable. No one really knew what content was going to be in a message, when or why it would be logged, or what it meant even when something was there. Vendors did not create or adhere to any pattern or consistent guideline for what to put in logs, making them an unreliable source of data.
Back in the old days of the 1990s, someone decided that logging messages weren't that important, so syslog used UDP and messages could readily be lost. Sure, it saved two cents' worth of memory and CPU on a device and millions in software development, but who cares about the customer anyway? Stuff them and their need to know what was happening in their networks.
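The fire-and-forget nature of syslog over UDP can be sketched in a few lines. This is an illustrative example, not any vendor's implementation; the RFC 3164-style message framing and the `send_syslog` helper name are assumptions, and 514 is the standard syslog port.

```python
import socket

def send_syslog(message: str, host: str = "127.0.0.1", port: int = 514) -> None:
    """Send one RFC 3164-style syslog datagram.

    PRI 134 = facility local0 (16 * 8) + severity informational (6).
    """
    payload = f"<134>{message}".encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        # sendto() returns as soon as the datagram is handed to the kernel.
        # There is no acknowledgement, no retry, no error if the collector
        # is down or the packet is dropped in transit: the message is
        # simply gone, which is exactly the complaint above.
        sock.sendto(payload, (host, port))
```

Note that the sender cannot even tell whether a collector exists: a successful `sendto()` proves nothing about delivery, which is why syslog-over-UDP is lossy by design.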
Slow forward to 2019: everyone despises syslog and would much rather that Unix had managed to change in the last 30 years and take logging seriously. So now we use gRPC and YANG models that are so complicated it takes a year of training to build up the expertise needed to comprehend what the hell is actually going on. Yay for XML.
Logging in a networking context is a sign that, surely, the gods hate network engineers.