Quality is a recurrent theme of corporate identity. "Quality is our goal" they will say, or "quality moves us". I don't mean that this is actually true. It isn't true. In actual reality, market share is our corporate goal, and status is what motivates us individually. Never mind; quality is a desideratum of the paying public, and therefore quality is a tool of interest to the corporations, and therefore it remains a topic of recurrent concern.
One thing I've noticed in my sojourn as a corporate lackey is that the heat and noise given to the topic of quality doesn't seem to be proportionate to changes in actual quality. This has appeared to be true at least in my own area of speciality, computer programming.
I should add that speaking of "actual quality" in this context is a bit misleading, simply because it has been difficult to identify any observable quantity that reliably correlates with any agreed-upon quality of computer software. In other words, in actual practice we can't measure quality. This problem is somewhat less acute in manufacturing because there are measurable attributes of manufactured products (such as weight, content, or flexural strength) which do in fact correlate, at least in part, with generally accepted concepts of quality.
Generally, in the computer programming area, quality has been addressed (with limited impact) by trying to get the company's programmers (i) to use the same coding style as some particular programmers at the same company, or at the consulting company hired to "fix" the problem, or at some other company with whom the consulting company has previously worked, and (ii) to put all the resulting code through various kinds of reviews, in which some second person (who has their own work to do) might possibly spot a flaw in the programmer's code.
I've pointed out in the past (with limited impact) that the biggest advantage of performing reviews is that it forces the original programmer to organize the work more carefully and to think it through well enough that it can be shown to someone else. The second biggest advantage of the reviews can be a sharing of knowledge and experience.
The biggest actual impact of most reviews is to take people away from useful work to sit in meetings; the second largest impact is often to discourage innovation, creativity, and any technique that isn't immediately obvious. This may have the benefit of reducing wildly unexpected failures, but at the cost of precluding dramatic successes. The typical result is uniformly low-quality programs.
Now the question is why this should be the case. Why don't these measures actually increase quality? There are many reasons, but I want to comment on only the one that I finally recognized this week.
Increasing quality is a social process. This was already implicit in my earlier observations, but it wasn't explicitly conscious in my statement of them. Having your work visible to other people automatically makes the entire development process a social process. It is this social aspect that drives you to do a better job, so that these other people will approve of your work. This is why I would say that it isn't what happens in the review itself that is so important, but rather the fact that the review will happen.
If we can go beyond that to initiate a real conversation among the developers, that very sharing of knowledge and experience which I mentioned in my earlier observations, then we have advanced the sociality of quality even more. In the first level, there was observation and approval (or, of course, the danger of condemnation). In the next level, there is a back-and-forth exchange of ideas. Such a conversation can be continued and expanded, at least in principle, beyond the immediate group of colleagues to encompass sharing among larger groups.
This seldom happens. Why? In large part, I think, because the people charged with quality improvement have no special competence in the area of social interactions. They don't know how to initiate or maintain such wide-ranging conversations. Instead, the corporate managers assign quality improvement to highly technical people who seldom engage in broad conversation. Or the task is assigned to outsiders, either consultants or recent MS grads, who don't have a good grasp of the existing social structures. In either case, the actual reality is that social networks are damped rather than cultivated.
But increasing quality is a social process, and if the social networks are diminished, it follows that quality is very likely to diminish as well. From this point of view, it may be only the remarkable resilience of workplace social relationships that prevents the total collapse of software quality in the face of a new corporate quality initiative.