The Case For Using Lines of Code In Metrics
Are you nuts?
Maybe, but let’s dive a bit further and see where we end up!
The History of Lines of Code and Goodhart’s Law
Early in their career, nearly every programmer hears some variation of the Bill Atkinson story from Apple. In 1982 he scored a major win for QuickDraw by removing roughly 2,000 lines of code from the module, yet he was still required to report how many lines he had written on his productivity form, so he entered “-2000.” If legend is to be believed, that entry was the first middle finger to Lines of Code as a metric.
The removal of that code made the system much better, but by that measure, Atkinson was doing a poor job.
Goodhart’s Law states:
“When a measure becomes a target, it ceases to be a good measure.”
Managers at that time lacked creativity and zazz and went with one of the few measurable things they could see. In their minds, lines of code = productivity = good employee. I see what those early managers were trying to get from the metric, but there are some glaring problems.
Lines of Code as a Measurement
At the heart of this measurement, lines of code is “the number of lines of code written to perform the task at hand.” On the surface, that's pretty straightforward, yes?
Alone, lines of code is a bad metric. It does not scratch the surface of effectiveness, quality, or efficiency. If I wanted to, I could write my entire program in one line of code. That doesn't mean it's good or that it works, and think of the engineers who would mutiny over maintaining it.
On the other end of the spectrum, I could break my program into as many lines as I want just to hit your dumb measurement.
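To make that concrete, here's a minimal TypeScript sketch (the function names and the toy logic are mine, purely for illustration): the same behavior written as a single line, then padded out to inflate the count.

```typescript
// Hypothetical example: the same "sum the even numbers" logic, counted two ways.

// One line: works fine, but a lines-of-code metric would call this lazy.
const sumEvensOneLine = (nums: number[]): number => nums.filter(n => n % 2 === 0).reduce((total, n) => total + n, 0);

// Padded version: identical behavior, many more "lines of productivity."
function sumEvensPadded(nums: number[]): number {
  let total = 0;
  for (const n of nums) {
    const isEven = n % 2 === 0;
    if (isEven) {
      total = total + n;
    }
  }
  return total;
}

console.log(sumEvensOneLine([1, 2, 3, 4])); // 6
console.log(sumEvensPadded([1, 2, 3, 4]));  // 6
```

Same output, same behavior; only the line count changes, which is exactly why the raw number tells you so little.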
When I started as a front-end engineer, you would have thought me a regular Zuckerberg by the number of lines of CSS I could crank out. In reality, my CSS was gigantic, gross, and impossible to maintain.