To analyze and break down what quality in software engineering is and how we should think about it, we would need to cover the industry standards and best practices for designing and writing code, evaluating complex architectural trade-offs, testing, reliability, security, product management and many other disciplines.
Because that would easily exceed the scope of this article, we will leave the tactics and details of assuring quality for future, focused articles, similar to our previous article on “Managing Complexity” or our process for peer review, both of which are directly related to engineering quality.
In this article we will instead describe the general philosophy and responsibilities that Lemma teams follow to deliver quality.
The golden rule of quality
We are responsible for the quality of our work.
We may find ourselves working on projects that have a dedicated QA team, a UAT (User Acceptance Testing) team, or even dedicated QA specialists for areas like reliability, UX usability and security audits. We could also work on small projects with none of those. In all cases, we are still first and foremost responsible for the quality of our work in all of its aspects.
For us, quality is not a static score or something to reflect upon at the end of our work. Quality begins before anything is actually built, and it’s a direct result of not just how we write code, but also how we communicate, how we gather requirements, how we collaborate as a team, how we design and implement our solutions, how engineers experience maintaining and changing them over time, and how real users feel about the solution, both individually and as part of the bigger whole.
Quality Assurance is not a rubber stamp of approval at the end of an implementation; quality is the result of a chain of techniques, best practices, discipline, collaboration and genuinely caring and believing in what’s being built, from idea to rollout.
At Lemma we expect each engineer to think holistically about the things they build, and to leverage the tools and support available to set quality standards, without falling into the pitfall of over-relying on them or neglecting our own role and responsibility.
Gold plating, bike shedding & premature optimizations
The tricky thing about quality is that there can be such a thing as “too much quality”, at which point it becomes detrimental to the overall success of the project.
Gold plating means designing or implementing a solution past the point of diminishing returns. It is an incredibly common pitfall when you work with talented engineers who are passionate about what they do.
The more educated and experienced an engineer is, the more likely they are to implement robust and thorough solutions. Most of the time, this is a good, desirable thing. It is not, however, when you are participating in a hackathon, or overengineering a quick prototype or proof of concept meant to elicit feedback.
It’s important to realize that engineering quality is not inherently good by itself; it is a means to driving business outcomes. Past a certain point, there are legitimate cases where pragmatism has a higher business impact than robustness.
An insightful real-world example of this is the case of RethinkDB; you can read their own article titled “RethinkDB: Why we failed”, which we highly recommend as an excellent, lucid analysis from one of its founders. In a nutshell: RethinkDB competed with MongoDB early on. RethinkDB had brilliant minds and higher quality standards, but MongoDB’s more pragmatic approach optimized for speed to market at the expense of initial quality. After gaining user adoption and becoming one of the world’s most popular databases, MongoDB had the time (and a healthy, active community) to polish the rough edges, while RethinkDB never broke through.
Quality is also not just a question of good vs bad, but rather a series of more nuanced trade-offs. Something could be more robust at the cost of being more expensive or taking longer, which may mean other features don’t get built. A search engine could be more powerful at a cost to performance. And so on.
Even in cases where quality is indeed a question of good vs bad, sometimes bad is better. A shopping cart at a supermarket could have the suspension system of an expensive mountain bike, but that probably wouldn’t be a good idea. This may be a hyperbolic example and you may think it wouldn’t happen to you, but remember that your “shopping cart” team is likely made of suspension experts who are dying for an intellectually rewarding challenge, and this may just scratch that itch. There is a strong natural bias for craftspeople to optimize, and most of us spend much more time learning how to optimize than learning to decide if and when we should.
So, how do you guide the team to hit the right balance of quality and business pragmatism? It’s easy to compare good vs bad, but what about good vs great? Or great vs really great? How do you know what is good enough to meet your business objectives?
We could keep going into this quality rabbit hole, but we’ll stop here so we can arrive at our point:
Quality is not a static thing. You cannot write a general “quality tutorial”, and real-world professional engineering quality is not synonymous with your unit test coverage percentage or any other single metric.
The way we overcome these biases and achieve strategic quality is through collaboration and careful planning. We define requirements as a team, we explicitly document “quality acceptance criteria” on our User Stories or other considerations as needed, we peer review our work each step of the way, we incorporate user feedback as early and frequently as possible and we put a strong emphasis on simplicity and modularity as a north star for software design.
Before a feature is ready for QA validation, the engineering team specifies the necessary steps and use cases for trying out the new feature locally. A gif or video of the feature in action is included when possible to help the QA process. The instructions include the steps needed to update configuration files, download dependencies or do anything else required to run the new code locally, as well as recommended test data one can use to experience the new functionality. For example, if the new feature is a list of things, the instructions contain an example list of items that can be used to populate it.
The team not only evaluates the “happy path” of a new feature, but also considers all relevant edge cases and acceptance criteria. The QA is exploratory beyond what’s documented and seeks to find the breaking points.
Engineers review the resulting product changes, not only the technical implementation. The team analyzes the quality of a feature when considering the big picture:
- Is the quality of the product as a whole better, for having added this feature?
- Is this feature cohesive with the rest of the product?
- Is this feature missing anything vital to maximize its value?
- What about tangential requirements like accompanying documentation, tests, infrastructure or security considerations? Is anything missing?
- Is this feature doing too much? Could it be simpler or more intuitive?
- Is the feature highly performant? Is it affecting the performance of the application in any way?
- Will other features need to change in any way after this change is added?
The QA process and dedicated QA specialists are not a replacement for any of the items listed here, nor an appropriate alternative to having a robust suite of tests and continuous integration. A CI pipeline can run a gamut of automated quality verifications: typing, tests, linting, formatting, security and domain-specific checks.
The role of QA Engineers
In cases where we have the opportunity to work with dedicated QA Engineers, these are some basic guidelines for how that collaboration usually goes:
QA Engineers support both Product Managers and other Engineers throughout the development cycle in 4 core areas, described below.
User Story Definition
QA Engineers play a key role during backlog refinement ceremonies. They collaborate with Engineers and Product Managers on:
- Acceptance Criteria for each User Story
- These include business and engineering considerations about completeness, quality, edge cases and so on.
- Creation of new tasks or User Stories for the implementation of new automated functional tests or modification of existing ones.
Defect Triage
In collaboration with Product Managers, QA Engineers review and validate all newly reported defects.
This input from QA Engineers enables Product Managers to make a judgment call on each report, categorizing it as an enhancement or a defect, and ultimately leads to the prioritization and coordination of any resulting new work.
Manual Testing
QA Engineers can also assist with manual testing, often in dedicated environments (such as a “QA environment”).
Different teams have different approaches to why, when and how to do manual testing, but in our experience it is a technique best used sparingly and strategically.
We think manual testing is valuable when it is exploratory, trying to uncover any integration points, flows or edge cases that may have been missed during implementation or not explicitly documented in the acceptance criteria.
This manual testing mostly focuses on the brand-new functionality added to the QA environment every day, not on older functionality that is already covered by an automated regression test suite.
In addition to exploratory testing of new functionality, manual testing can also be a strategic option for things that would be impossible or impractical to cover with automated tests (e.g. the look and feel of CSS animations in a specific browser, advanced flows that require waiting for actions to happen in the background for a long time, etc.).
It’s important to keep in mind that QA Engineers are not the only ones doing manual testing: Product Managers, Product Owners and Engineers participate as well, as part of the peer review (or, in some cases, dog-fooding) process.
Automated Functional Tests
Lastly, QA Engineers also collaborate on writing automated functional tests. Depending on the QA team or Engineer, this can extend to designing and setting up the testing framework and tooling, setting up relevant CI/CD steps and more.
The role of Engineers in QA
Each engineer is responsible for the quality of their work.
Engineers accomplish this by collaborating with their peers, QA specialists and Product Managers to ensure that all product and technical requirements are taken into account.
As part of the implementation for any task or User Story, each engineer is also responsible for writing unit tests, integration tests and / or any other appropriate type of test in accordance with our testing quality standards.
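To make this concrete, a unit test suite written alongside the implementation covers the happy path, the boundaries and the failure modes of the code, not just a single example. A minimal sketch follows; the `apply_discount` function is hypothetical and exists purely for illustration:

```python
def apply_discount(price_cents: int, percent: int) -> int:
    """Return the discounted price in cents, rounding down."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

# The tests cover the happy path, the boundaries and the error case.
def test_apply_discount_happy_path():
    assert apply_discount(1000, 25) == 750

def test_apply_discount_boundaries():
    assert apply_discount(1000, 0) == 1000
    assert apply_discount(1000, 100) == 0

def test_apply_discount_rounds_down():
    assert apply_discount(999, 10) == 899

def test_apply_discount_rejects_invalid_percent():
    try:
        apply_discount(1000, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

The exact layering (unit vs integration vs other test types) depends on the task; the point is that the tests ship in the same change as the implementation, not as a later follow-up.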
When the implementation is complete, engineers submit a pull request which is then reviewed by other engineers, also following our guidelines and quality standards for pull requests.
Depending on the team and circumstances, Engineers are also responsible for writing functional end-to-end tests.
The role of Product Managers in QA
Product Managers are responsible for the quality of the product overall, ensuring that technical implementations live up to the product expectations and that all aspects and details of the product intent are accurately represented in the final result.
To accomplish this, the Product Manager collaborates with QA Specialists and Engineers to define the acceptance criteria of each feature and to convey the product intent beyond what’s explicitly documented, filling any gaps in knowledge there may be or clearing up misunderstandings.
Product Managers also lead defect triage sessions, acting as the interface and coordinator between all reported feedback or problems and the engineering team.
Product Managers also participate in manual testing of new features in the QA environment for the products or features they own.
With these responsibilities, the Product Manager also becomes empowered to truly take ownership of the product, evaluate product quality first-hand, and sign off on production releases.