Tuesday, May 17, 2016

On Quality Engineering and Testing and Defect Prevention

Some time ago, I wrote a response to a post I read extolling the virtues of "Quality Engineering" over mere testing. (You can read my response here.) Since then I have received some emails and been in some conversations on the topic. I've also seen a variety of threads on Twitter related to the discussions I've had with others.

This, then, is some more of my thinking around the topic, based on what people have said to me - mostly trying to convince me of the error of my thinking.

It was explained to me that Quality Engineering is, at its heart, the prevention of bugs and problems in software. Thus, a Quality Engineer is not looking for bugs; instead, a Quality Engineer focuses on bug prevention - keeping bugs from being created in the first place.

"A good QE works to avoid bugs in software."

That was precisely what I was told by a very nice young lady. It struck me a bit like "A good AO (Automobile Operator) works to avoid potholes in the road." Apparently I was not amusing to her (maybe she lived in a city, as I do, where there are myriad potholes on nearly every road).

There were several examples presented.

One involved a QE finding a problem in a planned change to a DB table. The QE prevented a problem by identifying the flaw in the development group's intended change. Their work flow consists of proposing DB changes, reviewing them with the development team, then with the full Scrum team, then presenting them to the DBAs for review. It was in the Scrum team review that the QE identified the problem.

Another involved a QE identifying a problem in the design of some changes to an application. Again, the QE spoke up and raised an issue during review of the design with the Scrum team.

The third example was a QE speaking out over requirements that seemed contradictory. The reason was simple: the requirements had not been understood and had been noted down incorrectly.

Each of these was presented to me as an example of what a good Quality Engineer does. They prevented bugs from being created.


My response was that, in each of these cases, the QE found a problem or inconsistency and raised the issue. They did not so much prevent a bug as find a problem (a bug) somewhere other than the working code. They found the problem earlier in the course of software development.

This, to me, is part of the role of testing and why testers need to be involved in the early discussions.

Taking the next logical step, including a tester who is familiar with the application in the initial discussions could benefit the entire process by helping other participants think critically about what the story/change/new feature is about.

By engaging in these discussions and exploring the intent and nuances around the request, the recorded notes, and the conversations on the work, a tester might be able to head off issues while they are still in the "bounce ideas around" mode - while discussions are happening around what terms or concepts mean.

In an Agile team (whatever flavour your group uses) if people are engaged in working toward better quality software, the role of a critical thinker is necessary - whatever you call it.

Some folks tend to get rather, emmm, pedantic over how words get used. Here's what I mean...

Each person in a team is trained to do something. Usually, they are better at that than at the other activities that need to be done. Ideally, each person can contribute to each task that needs to be done - but their expertise in certain areas is needed to support and lead the team when it comes to doing the tasks and activities they are particularly trained in.

Some people are trained, and very good, at eliciting and discovering requirements. Some are trained in building a usable design. Some are trained in developing production code. Some are trained in database design. Some people are trained in assembling components together into a working, functioning build and/or release.

Testers have a role in each of these tasks.

Testers can help requirements be defined better.
Testers can help the design be better.
Testers can help the person writing production code write better code and execute unit tests better.
Testers can help with DB work (this may shock some people).
Testers can help verify and validate the builds are as good as they can be.

Testers can test each of these things. It is what we do.

Getting to a position where testers are trusted, welcome and encouraged to participate fully in each of these tasks takes time, effort and gaining the trust of others on the team.

People tell me that testers only test code.

Those people have no idea what testing can be in their organization.

What some people are calling Quality Engineering tasks are, from what I have been told (very patiently, in some cases), testing functions.



Saturday, May 14, 2016

On Releases and Making Decisions

I've gotten some interesting feedback in conversation and in email on this blog post.

It generally consisted of "Pete, that's fine for a small team or small organization. My team/department/organization is way too big for that to possibly work. We have very set processes documented and we rely on them to make sure each team with projects going in has met the objectives so we have a quality release."

To begin, I'm not suggesting you have no criteria around making decisions about what is in a release or if the release is ready to be distributed to customers. Instead, what if we reconsidered what it means to be "ready" to be distributed to customers?

In most organizations doing some form of "Agile" development, there is a product owner acting on behalf of the customers, looking after their needs, desires and expectations to the best of their ability. They are acting as the proxy for the customers themselves.

If they are involved in the regular discussions around progress of the development work, testing and results from the testing, and if they are weighing in on the significance of bugs found, is it not appropriate to have them meet and discuss the state of all the projects (stories) each team is working on for a given release?

Rather than IT representatives demanding certain measures be met, what if we were to have the representatives of our customers meet and discuss their criteria, their measures that need to be met for that release?

If each team is working on the most important items for their customers first, then does it matter if less important items are not included in the release, and are moved to the next? Does it matter if a team, working with the product owner, decides to spend more time on a given task than originally scheduled, as new information is discovered while working on it?

As we approach the scheduled release date, as the product owners from the various teams meet to discuss progress being made, is it really the place of IT to impose its own measures over the measures of the customers and their representatives?

I would suggest that doing so is a throw-back to the time when IT controlled everything, and customers got what they got and had to be content with it - or they would never get any other work done... ever.

I might gently suggest that whether your customers are internal or external, we, the people who are involved in making software, should give the decision on readiness to the customers and their representatives - the Product Owners. We can offer guidance. We can cajole and entreat. We should not demand.

Who is it, after all, that we are making software for?