Defining Quality
Since I've been writing a bit about what it truly means to go fast—and emphasizing that speed without quality is failure, not success—I thought I'd share some thoughts on what quality actually is.
The temptation is often to define a single, global standard for quality across the entire organization. Phrases like "It has to be bug-free" or "It must fully meet requirements" sound reasonable on the surface. They're not wrong, exactly, but they also won't guarantee success.
Consider two very different features: a login system and an invoice report. Engineering certainly aims to deliver both according to their respective requirements and without critical bugs (setting aside, for the moment, the obvious challenges of achieving perfectly bug-free software).
Yet declaring that something "meets requirements" can still feel vague. Can the login feature be measured by the same expectations as the invoice report? Absolutely not. The login system demands high security, near-perfect reliability, and seamless performance under load. The invoice report prioritizes accuracy of data, clarity of presentation, and perhaps export functionality. The bar for each is fundamentally different.
So is "meets requirements" truly measurable across the board? In practice, no—not without deeper context.
For the department or organization to consistently deliver quality work, success must be defined specifically for each piece of work undertaken. Technically, that's what requirements are supposed to do. But the key is making the measure of success concrete: tying it directly to observable, verifiable metrics rather than abstract ideals.
It is these concrete data points—clear, agreed-upon metrics—that ultimately ensure engineering delivers quality, on time, and in a way that drives real business success. And these data points have to be clear on a per-feature basis, not buried in some broad, overarching statement.
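To make this concrete, here's a minimal sketch of what per-feature quality gates might look like in code. Every metric name, threshold, and measured value below is an illustrative assumption, not a prescription—the point is simply that each feature gets its own verifiable checklist rather than one global standard.

```python
# Hypothetical per-feature quality gates: each feature defines its own
# concrete, verifiable metrics instead of a single global "bug-free" bar.
# All metric names and thresholds are illustrative assumptions.

QUALITY_GATES = {
    "login": {
        "p99_latency_ms": ("<=", 300),       # seamless performance under load
        "auth_failure_rate": ("<=", 0.001),  # near-perfect reliability
        "critical_vulns": ("==", 0),         # high security
    },
    "invoice_report": {
        "data_accuracy_pct": (">=", 99.9),   # accuracy of data
        "export_formats": (">=", 2),         # export functionality
    },
}

OPS = {
    "<=": lambda a, b: a <= b,
    ">=": lambda a, b: a >= b,
    "==": lambda a, b: a == b,
}

def meets_gates(feature: str, measurements: dict) -> bool:
    """Return True only if every agreed-upon metric for the feature passes."""
    return all(
        OPS[op](measurements[name], threshold)
        for name, (op, threshold) in QUALITY_GATES[feature].items()
    )

# Illustrative measured values from a hypothetical test run.
login_run = {"p99_latency_ms": 240, "auth_failure_rate": 0.0004, "critical_vulns": 0}
print(meets_gates("login", login_run))  # → True
```

The data structure itself is beside the point; what matters is that "success" for each feature is a short list of checks everyone agreed to in advance, and that a run either passes them or doesn't.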