Saturday, June 30, 2012

On Value, Part 4: Testers Rising to the Challenge

This is the fourth post resulting from a simple question my lady-wife asked at a local tester meeting recently.

The first post described problems with value in testing.  The second post looked squarely at perceptions of testers as unskilled, non-knowledge workers, and at the lack of response from a large number of testers.  The third post looked more closely at the source of such perceptions of testers as unskilled non-knowledge workers.  This post is a discussion of what testers can do in response to such perceptions.

How many people do we know who collect a paycheck for "testing software" who cannot be bothered to do any more than their collective bosses tell them?

Story 1

I know one manager, active in the community, who went so far as to offer to pay for his team to go to any conference or training event they wanted to attend.  He offered to pay for AST's BBST courses (which absolutely rock, by the way).  He offered to send them individually wherever they wanted to go to learn, confer, whatever.  Not one person expressed any interest.

That is a problem.

Story 2 - one of my own

I remember one team I worked with.  I asked them what defects were being reported by customers of the software.  They responded with a long list of "Well, there's this and this and this and that and then this weird thing and..."  I asked if maybe we needed to reconsider how we tested.  UNIVERSAL RESPONSE:  "Why would we do that?  They are using the software wrong.  If they used the software right, they would not have those problems."

Story 3 - another one of my own

I remember another team I worked with.  That team, on the first day I was there, not just the boss, but the TEAM said "We have a problem with our testing.  We're doing the best we know to do and we catch a lot of stuff but there are still defects getting through and being found by customers.  If you have any ideas on how we can test better, let's talk about them."

Guess which team was a lot more fun to work with?  Guess which team saw broader product examination and evaluation?

These teams demonstrate precisely what I see as the problem with the majority of people in software testing today - and for as long as I have been involved in software.  One group wants to learn more; even without knowing where to begin, they dive in headfirst to learn.

The other group is perfectly justified, in their minds, that they need to change nothing.  The boss they had three bosses ago said "Just do it like this."  They do.  Now, several years on, they have not changed.  They have added more people to do more of the same.

A Problem

In the stories above, the folks in story 1 demonstrated precisely one aspect of the problem.  The folks in story 2 demonstrated another.  The third group was the antithesis of both - and a hopeful sign.

While many people in many lines of work expect nothing more than they are given - in training, understanding, pay or ambition - others expect to be told precisely where they fit.  Anyone looking to develop them as craftsmen is a threat; upsetting the status quo is a, well, problem.

People want to walk in 2 minutes before starting time, do their thing and go home.  Ideally, they don't want to think about anything work related until it's time to walk in the office door the next morning.

Fine.  I can really empathize with that.

I'm not talking about thinking about your job; I'm talking about your profession - your chosen trade and craft.  Alas, it seems some folks don't think that way.

Upshot of That Problem

If you or people you know are in that camp, I fear you might want to look for another line of work.  I might suggest a line of work that does not require outside study, on your own time, at your own expense.  Off the top of my head, I suggest you not look at these lines of work, all of which require precisely that if you wish to become good:

  • Teaching (anything at any level);
  • Law (anything related to law);
  • Computer Software (that one is kind of obvious);
  • Project Management (more than just software projects, lots of things need project managers);
  • Accounting/Accountancy;
  • Show Production (Music, Stage, Theater, Television, Motion Picture);
  • Advertising;
  • Distribution Center Management (that's how to run a warehouse);
  • Commercial Truck Driving (or lorry driver);
  • Commercial (other) Driver (bus, taxi, car-for-hire);
  • Restaurant Manager/Owner (that one is really harder than most folks think);
  • Chef (not a cook, but a proper Chef);
  • Carpenter/Joiner;
  • Construction Contractor/supervisor;
You get the idea.

Jobs that require no development on your own are jobs that will go away - at least from you.  They may be sent to some other country where labor rates are lower - low enough to offset the cost of transporting the components or finished product back to end-market areas.  Or they will go away because a machine will do the work instead of a human.

We're big on buggy-whip analogies when we want to make a point about people not keeping up with changing times.  Well folks, Swiss watches fell into the same boat as buggy whips.  Not anachronistic, just too expensive - a function that can be performed by a commodity item instead of the product of an artisan.

As I See It...

You can make all the cases you want about what value testing brings to a software organization.  You can talk about how your testing improves the quality of the software product.  You can talk about all this stuff.

It does not matter.

If someone or something can do the same thing you are doing, with about the same results, and it costs the organization less money, they will replace you.

Ask the thousands of manufacturing workers who lost their assembly-line jobs in the US starting, well, in the 1970's.  Ask the shipyard workers in Belfast and Glasgow in the 1950's and 60's.  Ask any of the miners anywhere, mining for anything, who have been displaced by machinery.

And that is the problem.

We, as a profession, have failed miserably in demonstrating that the Tayloristic management theory does not apply to software development - which includes testing.  

Why is this?

We have failed to address the solidly formed and closely held management belief that repeated practices will ensure quality products.  Concepts developed for assembly-line workers, when applied to knowledge work, fail.  Full stop.

Why is that?

Because no human being thinks in a linear manner.  We simply are not built that way.  Knowledge work, done well, requires cognitive insight, and practical experience to draw on to inform that insight.

Having everyone think about a problem in the same, precise, linear way is only possible if everyone involved has the same experiences, understanding, thought processes and biases.

Introduce one variable that is different and the wheels fall off.  The model fails.

Addressing the Problem

After great thought, many conversations, and now at least four blog posts mapping my consideration of my lady-wife's question, what has been determined?

OK - I've edited this 4 times.  This is take #6.  And it's also the most polite.

The simple fact is, we're pretty clueless when it comes to what testing is, how it is done and why we do it.  A fundamental failure.  The vast majority of people involved in software simply don't get it.  This is true of many people who are supposedly professional testers, their bosses, their bosses' bosses, project managers, subject matter experts, developers - anyone who works with them.

Testers MUST educate themselves.  If they rely on the company they work for, that will be a mistake.  They don't get it either.  They THINK they do, and they usually don't. 

I can give my definition for testing.  Some of you may say "Ah! He has clearly been influenced by the work of ... a bunch of people."  Some of you may say "Pete, you're wrong and here's why..."  Others might just say "Meh, whatever." 

That's fine.  But talk about it.  You don't need to agree with everything I say or what someone else says.  You do need to think.  Then you need to communicate.  Then you need to think some more.

Find many, many sources of information - then share them with others.   Talk about them - agree, disagree, whatever.  Share ideas and learn.

When we as testers limit what we do by some narrow definition or purpose, every project, every time, what we are really doing is boxing testing into a corner.

When we do that, we limit what testing is.

When we do THAT we fail to be true knowledge workers.  We fail to think fully, and we walk right into the self-fulfilling prophecy that started this series.  Testing is treated as a commodity.  Many testers have been participants in that disservice.

When we fail to broaden ourselves, we limit ourselves.

Broadening Testing

We must broaden ourselves, our profession, our views, our colleagues' views, our employers' views.  We must engage directly in combating the "just test this" mindset.  We must confront the "anyone can do this" attitude that is so pervasive in so many circles.

If you are reading this, thank you.  I believe that this is a start.  Reading blogs, papers, articles and anything where people are sharing ideas and thoughts around software testing is a start.

Then write your own.  Spread the word.  Don't worry if you are not an expert.  I'm not an expert.  Most folks I know are not experts.  Some people THINK those folks are experts or authorities; they simply don't consider themselves experts.

They do care about what they do.

You can too.

That makes you an expert. 

If you have a blog on testing, please leave a comment below with how we can find it. Thanks.

Tuesday, June 26, 2012

On Value, Part 3: The Failure of Management

This is the third post resulting from a simple question my lady-wife asked at a local tester meeting recently.

The first post described problems with value in testing.  The second post looked squarely at perceptions of testers as unskilled, non-knowledge workers and the lack of response from a large number of testers.  This post looks at the source of such perceptions.

I'm going to tell you a story.  Don't be scared, it has a happy ending and everyone will be happy at the end.

Once upon a time...

...there was a group of people trying to figure out how to make stuff quickly and inexpensively.  They were making all kinds of stuff: tools, pens, desks, chairs, beds, clothes, books, eyeglasses, guns, railroad cars and train engines.  Lots of stuff.

One bright fellow said "This stuff is really hard to make and it costs so much to make it that no one can afford to buy it.  If they can't afford to buy it only the really wealthy will be able to afford it.  If only the really wealthy can afford it, they won't buy enough for all of us to make a lot of money and stay in business so we can get really wealthy too!"

Another bright fellow said "This stuff takes too long to make and costs too much.  If we can figure out a way to make it less expensively we can make MORE of it for the same cost and sell each one for a little more than it costs us and LOADS of people will be able to buy this stuff.  Then we'll make LOADS of MONEY!"

A third bright fellow said, "I have an idea.  Let's look at what it takes to make each of those things.  What does it take to make a desk?  What does it take to make a chair?  What does it take to make a table?  If we figure THAT out, maybe we can find a way to make them less expensively."

The first bright fellow blinked and said, "I think that's a fantastic idea!  I can sack all those expensive cabinet makers and joiners I have making my furniture and get some riff-raff off the street, teach one to work a lathe, teach another to use a plane, another to use a saw and a fourth to use wood glue and a hammer to assemble the pieces.  Brilliant!  I can charge the same amount of money, pay a fraction of the wages and I'll make a PILE OF MONEY!  I just need to get someone to divide the steps up and I'm good to go!"

The second bright fellow said, "I had not thought about it, but you're right!  I can build railroad cars the same way!  But instead of having all these specialists and experts who know how to work with steel and iron and rubber and wood and how to build engines, I can get a BUNCH of people who don't know anything, teach them each ONE PART of assembling a car, and that will be that!  I'll make a PILE OF MONEY!"

The third bright fellow said, "Ummm, bright fellows?  Do any of you see any possible downsides to this?"

The other two laughed and said "Downsides?  Don't be daft!  We're going to make a PILE OF MONEY!"

The Downside

Where craftsmen had used years and years of their own experience, and had the experience of others they could turn to for reference, the approach these bright fellows were considering would result in removing the human element from the process.

Using untrained masses to replace skilled craftsmen would certainly reduce costs - you could pay them a LOT less.  Would the end result be the same?  Well - maybe.  Not at first while the new process is being sorted.  Of course, errors would not be the fault of the bright fellows, or their immediate subordinates, but the fault of the untrained masses "not knowing their job."

The thing is, the bright fellows knew they could pile no end of abuse on the unskilled, untrained masses, and if those masses did not like it, they could leave and there would be MORE unskilled masses willing to take their place - and a pay packet at the end of the week.

Because the unskilled masses were not un-smart, and could see their way through a wall (as they say in Bree), they knew this was exactly why the bright fellows were doing what they were, well, doing.

Just because they could does not make it right.


We're not talking about making physical things, we're talking about making software.  Software that runs on computers that human beings will be working with.  Software that people work with every day.

The odd thing is, when these conversations, and subsequent actions, happened around people making things, the result was turmoil.  Societal upheaval was not in it.  Social revolution in some places, literal revolution in others.  Violence - physical, emotional and mental - was the result.  The political and physical fights from that time continue to color politics in the US, Canada, Britain, France, Russia, Poland, India... the list goes on.

When you treat people like unthinking machines, expect a reaction.  It may take a while, it may not be immediate.  These days, it may not be the violence of the 1890's, or 1910's or 1920's or 1930's or... yeah.  Look at the history of the labor movement.  Not taking sides - not blaming anyone.

But it happened.

When it comes to software, most managers, directors, whatever, who have people developing software reporting to them share a common background - most were software developers at one point in their career.

Most remember the skills they had to learn, at college, university, somewhere, and how hard it was to gain those skills.  Then they realize that other things are needed - things they may not understand.

Customers are complaining about software and bugs and problems and ... stuff.

Management consultants come in and consult.  Except they don't understand this stuff, really, either.  So they look in their case study books for examples that look like this same basic thing - where people are making stuff.

And they come up with a model that works great on assembly lines.  Processes need to be repeatable.  Define steps and follow them the same way, every time, and you will have a "positive result."  Make sure you are following these, and other, best practices.

Sounds great, right?  And then the world looks like a Dilbert cartoon.  You know, the one (well, several) where the pointy-haired boss says something about anything he does not understand must be easy.

Many of these same bosses know that software design does not fit neatly into repeatable processes.  They understand this.  They'll talk about following the process and using "intelligent cheating" or some such.

The Fault

... lies in the belief that by breaking things into small enough pieces, they can be done by anyone - or a machine.

That might work for things where thought is not required.  Things that can be done as well, more efficiently and at less cost by a machine.  This is patently true on the modern robotic assembly lines - which are nothing more than Henry Ford's assembly line taken to its logical conclusion.

This same logic is often applied to testing.  I believe it is at the heart of the "automate everything" and "reduce manual testing by X% per year" goals.  On some levels, it makes sense to do things as efficiently as possible.

What happens when it is not possible to reduce manual testing by whatever that magical percentage is?  Many of those same companies will tie performance measures to these other goals.

Did you reduce the testing by the magic percent?  No?  Too bad.

Did you automate all testing?  No?  There are no excuses for "that can't be automated."  Of course it can.  You did not try hard enough.

This proves to the people doing this work that no matter how well they do their work, empty meaningless slogans trump the reality of what the work is. 

If you treat your people as machines, as unthinking automatons, I pity you.

You are wasting the potential to soar - both yours and theirs.

One more thing.

If you treat your people as machines, as unthinking automatons, I will not work for or with you.

Oh.  The happy ending?  There isn't one, at least not yet.

Monday, June 25, 2012

On Value, Part 2: The Failure of Testers

This is the second post resulting from a simple question my lady-wife asked at a local tester meeting recently.

The first post drew a fair number of visits, tweets, retweets and other measures that people often use to gauge the popularity or "quality" of a post.

The comments had some interesting observations.  I agree with some of them, can appreciate the ideas expressed in others.  Some, I'm not so sure about.

Observations on Value

For example, Jim wrote "Yes, it all comes down to how well we "sell" ourselves and our services. How well we "sell" testing to the people who matter, and get their buy-in."

Generally, I can agree with this.  We as testers have often failed to do just that - sell ourselves and what we do, and the value of that.

Aleksis wrote "I really don't think there are shortcuts in this. Our value comes through our work. In order to be recognized as a catalyst for the product, it requires countless hours of succeeding in different projects. So, the more we educate us (not school) and try to find better ways to practice our craft, the more people involved in projects will see our value."

Right.  There are no shortcuts.  I'm not so certain that our value comes through our work.  If there are people who can deliver the same results for less pay (i.e., lower cost) then what does this do to our value?  I wonder if the issue is what that work is?  More on that later, back to comments.

Aleksis also wrote "A lot of people come to computer industry from universities and lower level education. They just don't know well enough testing because it's not teach to them (I think there was 1 course in our university). This is probably one of the reasons why software testing is not that well known."

I think there's something to this as well.  Alas, many managers and directors and other boss-types testers deal with, and work with and for, come from backgrounds other than software testing.  Most were developers - or "programmers," as the job was called when I did it.  Relatively few did more than minimal testing, or unit testing, or some form of functional testing.  To them, when they were doing their testing, it was a side activity to their "real work."  Their goal was to show they had done their development work right and that was that.

Now, that is all well and good, except that no one is infallible in matters of software.  Everyone makes mistakes, and many deceive themselves about software behavior that does not quite match their expectations.

Jesper chimed in with "It's important that all testing people start considering how they add value for their salary. If they don't their job is on the line in the next offshoring or staff redux." 

That seems related to Jim's comment.  If people, meaning boss-types, don't see the point of your work, you will have "issues" to sort out - like finding your next gig.

The Problem: The View of Testing

Taken together, these views, and the ones expressed in the original blog post, can be summarized as this:  Convincing people (bosses) that there is value in what you do as a tester is hard.

The greater problem I see is not convincing one set of company bosses or another that you "add value."  The greater problem is what I see rampant in the world of software development:

Testers are not seen as knowledge workers by a significant portion of technical and corporate management.

I know - that is a huge sweeping statement.  It has been gnawing at me how to express it.  There are many ideas bouncing around that eventually led me to this conclusion.  For example, consider these statements (goals) I have heard and read in the last several weeks, presented as being highly desirable:
  • Reduce time spent executing manual test cases by X%;
  • Reduce the number of manual test cases executed by Y%;
  • Automate everything (then reduce tester headcount);
There seems to be a pervasive belief that has not been shaken or broken, no matter the logic or arguments presented against it: anyone can do testing if the instructions (test steps) are detailed enough.

The core tenet is that the skilled work is done by a "senior" tester writing the detailed test case instructions.  Then, the unskilled laborers (the testers) follow the scripts as written and report if their results match the documented, "expected" results.

The First Failure of Testers

The galling thing is that people working in these environments do not cry out against this - either by debating the wisdom of such practices, or by arguing that defects found in production could NOT have been found by following the documented steps they were required to follow.

Some folks may mumble and generally ask questions, but don't do more.  I know, the idea of questioning bosses when the economy is lousy is a frightening prospect.  You might be reprimanded.  You may get "written up."  You may get fired.

If you do not resist this position with every bit of your professional soul and spirit, you are contributing to the problem.

You can resist actively, as I do and as do others whom I respect.  In doing so, you confront people with alternatives.  You present logical arguments, politely, on how the model is flawed.  You engage in conversation, learning as you go how to communicate to each person you are dealing with.

Alternatively, you can resist passively, as some people I know advocate you do.  I find that to be more obstructionist than anything else.  Instead of presenting alternatives and putting yourself forward to steadfastly explain your beliefs, you simply say "No."  Or you don't say it, you just don't comply, obey, whatever.

One of the fairly common gripes that come up every few months on various forums, including LinkedIn, is the whinge-fest over how it's not fair that developers are paid "so much more" than testers are.

If you...

If you are one of the people complaining about lack of PAY or RESPECT or ANYTHING ELSE in your chosen line of work, and you do nothing to improve yourself, you have no one to blame but yourself.

If you work in an environment where bosses clearly have a commodity-view of testers, and you do nothing to convince them otherwise, you have no one to blame but yourself.

If you do something that a machine could do just as well, and you wonder why no one respects you, you have no one to blame but yourself.

If you are content to do Validation & Verification "testing" and never consider branching beyond that, you are contributing to the greater problem and have no one to blame but yourself.

I am not blaming the victims.  I am blaming people who are content to do whatever they are told as being a "best practice" and will accept everything at face value.

I am blaming people who have no interest in the greater community of software testers.  I am blaming people who have no vision beyond what they are told "good testers" do.

I am blaming the Lemmings that wrongfully call themselves Testers.

If you are in any of those descriptions above, the failure is yours.

The opportunity to correct it is likewise yours.

Tuesday, June 19, 2012

Testers, 1812, or What If Your Beliefs Are Out of Date?

I'm writing this the evening of 17 June, 2012.  It is a Sunday.  Father's Day actually.

I am going through some notes I will need for a project meeting.  I am struck by something in them that makes me think that the world is as cyclic as we really sometimes would wish it was not.

Knowledge, Belief and Assumptions

When a project is starting off we always have certain assumptions.  Well, we may have a bunch of things that we write down and put in notes as being "truths".  Or something.

We KNOW certain things.  They may be about the project or they may be about the application or they may be about what the business intent is or the market forces or reasons behind the project.

We also have a way of dealing with things.  A process that is used to get done what we need to get done, right?  We can move forward with some level of confidence that we will be able to do what we need to do.

We have decisions to make.  We have to get moving and get going so we can show progress and get things done.

The issue, of course, is that we have based our decisions on things we knew some time ago.  They are not wrong, but are they current?  If we based our initial estimates on the last project or the one before that, does it have any real relevance to this one?

Now, you're probably saying "Pete, we don't just take the last estimate and run with that.  That would be foolish."  I've said that too.  Except, even if we introduce variance or some "outside analysis" then we can safely make some assumptions based on previous experience, right?

What if our recollection of that experience does not exactly line up with the documented experience?  What if our estimate is based on the previous project's estimate and not on the actual time spent on that project?  What if things have changed and no one thought fit to pass those "little details" along, because of course you know everything that is going on.  Right? 

We always are operating on current, up-to-date information, and the assumptions for the project always reflect reality.  Every project is this way.  If that describes your organization, I'd be very interested in observing how you work, because I have not seen that to be true on a consistent basis.

June 18, 1812

So, if you went to school in the United States and took American History like I did, you learned about the War of 1812.  You learned about how HMS Leopard had fired on, and forced the surrender of, USS Chesapeake.  You learned about how the British Navy was stopping American ships and impressing sailors from them to serve in the British Navy in their war against the French.  You learned how the British supplied and generally encouraged the "Indians" to attack American settlements.  

You then learned about the USS Constitution defeating HMS Guerriere and HMS Java.  You also learned about the Burning of Washington DC, the defense of Fort McHenry and the writing of the Star Spangled Banner.  You may have learned of the American victory at the Battle of Lake Erie and the American surrender of Detroit to the British.

You probably were not told the Chesapeake surrendered to the Leopard on June 22, 1807.  You also were probably not told that American ships going to French ports were being turned away as part of a declared blockade of France.  You also probably were not told that Britain did not recognize the "right" of British subjects to become American citizens, and those fleeing criminal prosecution, for example, for desertion from the navy, could be seized anywhere they were found.  You probably were also not told that Lord Liverpool, the Prime Minister, ordered the repeal of the Orders in Council, which covered pressing British-born sailors into British service from American ships.

Thus, the War of 1812 was actually fought after all the official reasons, save one, had been addressed.

That one reason, inciting Indian attacks against American settlements, was also true of Spain and France, until France sold the Louisiana Territory to the US in 1803.  The simple fact is, the Americans themselves were breaking the treaties with the various Indians, or First Nations, who responded in kind.

Now, some Canadian versions will tell of how the US wanted to expand into Canada.  The American version was that if the US were to invade Canada, they would have a strong bargaining point with Britain. The Northern part of the US was predominantly the stronghold of Federalist Party supporters.  They were generally opposed to the war, and opposed to the idea of invading Canada in particular.

A funny thing happened though.

The Farther Removed From the Situation You Are, the Easier it Appears.

The farther south you went, the stronger the support for the war could be found.  The Democratic-Republican Party was generally pro-war.  Some politicians spoke of how American troops would be welcomed as liberators by the people of Canada and how they would be greeted with cheers and flowers.

Except that a large portion of "Upper Canada" was inhabited by "United Empire Loyalists": the people who moved to Canada from the United States after the Revolution.  These were the "Tories" you learned about in school and were cast in the role of "the bad guys."  They had no notion of welcoming Americans as liberators or anything else.

People who were farther removed from the reality of the situation did not comprehend how difficult that situation was.  How many times have we seen software projects like that?

So invading Canada seemed like a better idea the farther away from Canada you were.  And when American militia with a handful of regulars crossed the Niagara, they ran into a mix of British regulars and Canadian militia who gave them a solid thrashing at Queenston Heights.  Several hundred captured, many dead, a complete rout as the American invaders fled across the Niagara River and back to New York.

The hero of the day was General Brock, who was killed in the action.  He had just come east to deal with this invasion threat after soundly defeating another invasion threat to the west - at Detroit - by capturing the town and forcing the surrender of the would-be invaders.

Know What the Situation Really Is Before You Make Commitments

On how many software projects have boss-types said "Oh, this can be delivered by this date," thus setting the date in stone?  Then you find out what needs to be done.

No one ever runs into that, right?

Projects where the rules are mandated and the "customer needs" are all "captured" in a three-ring binder three inches thick and these are "simplified" to a seven bullet-point list and if you have questions they need to go through the "single point of contact" who will relay those questions to the appropriate expert who will get in touch with the person who may have had some input into the bullet list however.... ummm, yeah.

Translated, you have no real way of confirming or getting clarifying answers around questions you may have and no real way of finding what needs to be done or what is involved or what is going on or...  what you are in for until you are in it.

The myths told to American students mask the reality of what the country encountered when the decision was made to go to war with Great Britain.  No one knew what they were going to encounter, but they had definite ideas, plans and goals.  And in the summer of 1812, first near Detroit, Michigan, then near Queenston, across from Lewiston, New York, those plans were quickly shredded.

They walked into a situation where they had no concept of what was involved.  The handful who did and spoke out were accused of disloyalty, and not being committed to "the cause."  Whatever that meant.

Software projects run by fiat can encounter the same problem.

Fortunately for the Americans, the British forces were extremely limited the first two years of the war.  British land forces were committed to fighting Napoleon's forces in Europe... until April, 1814 when Napoleon abdicated.  Then the full weight of the British Empire would come to bear.

Making Use of Found Opportunities

The American military learned from the initial, disastrous battles, and those that followed.  American naval forces, meanwhile, could hold their own.  Single-ship and small-squadron actions helped buy time for both sides.  Privateers wrought havoc on both sides, but the American ones tended to disrupt British shipping to and from the Caribbean.  The Battle of Lake Erie (September 1813) isolated British forces north of Detroit by cutting off water routes to the British positions.

In the meantime, the American army trained and trained and trained.  Regulars and militia trained.  A small cadre of young officers drove the training, based partly on their experience facing the well-trained British forces.  At places like Chippawa and Lundy's Lane the training proved its worth - the American forces did not run away.  Really - I'm not making that up.  Until those battles, they tended to do that when facing British regulars, or even militia.

As testers, when we're in the middle of something we did not expect, we have a variety of options.  We can "inform management" and "let stakeholders know" that we are encountering difficulties and wait for instruction.  Or, we can look for ways to overcome those difficulties and maybe learn something new.

In late 1813, a bright young officer of artillery named Winfield Scott saw a collection of challenges and boiled them down to a few common ideas.  He saw symptoms of a single large problem.  He then proposed a solution.  His opportunity was unplanned.  He knew the army could not yet take effective action, and his opponents were taking none.  He used that time to teach his soldiers to be soldiers.  In the middle of a war - one his government went looking for - he turned the organization on its ear.

As testers, our war is often not of our choosing.  The engagements we are in are not ones we typically go looking for.  As leaders, we need to look for opportunities to improve.
I know that being deep into a project that is floundering is not the best time to learn a tool or new technique.  It might be a good time to do some research on your own - to dig into questions around what is blocking you.

We need to determine what the problem is we are facing.  Now, it may not lead you to a resolution.  It may do something else - like allow you to step away from your immediate block so when you return to the problem, you are looking at it with fresh eyes.

What is the PROBLEM?

It may help you think about your problem differently.  You may be attempting to address symptoms instead of the actual problem.  As you find a solution to one "problem," three others appear.  Are you fighting isolated, individual problems, or are these actually aspects of one greater problem?

Problem:  Troops are not firing as rapidly as their British counterparts.
Problem:  Troops do not execute field maneuvers properly, let alone quickly.
Problem:  Troop morale is low.
Problem:  When facing organized opposition, even in inferior strength, troops withdraw in confusion.

Are these four problems? 

Winfield Scott said they were not.  He said the problems described above were actually one problem:
American Regular and Militia troops do not have the proper training to be able to fight against a modern, European army.  

Scott was promptly promoted to Brigadier General and told to "fix" the problem. He did not intend to teach an army how to teach itself, but that is precisely what he did. 

His solution:  "Camps of Instruction" where for 10 hours a day, every day, he trained troops.  They in turn trained others.

He then saw that officers needed training as well.  Those who were incapable of leading troops, he replaced - sometimes with non-commissioned officers he promoted on the spot.

He then saw that to maintain this level of activity, he needed to make sure his men had good, healthy food - and made sure they got that.

As morale improved, he noted something else - each sub-unit (sometimes companies within a single regiment) had a different "uniform" than the other troops.  He ordered enough uniforms for his entire command to be turned out in what looked, well, uniform.

Learning to differentiate symptoms from problems is really, really hard.  Ask anyone who has tried to do that.  When you're deep into it, taking a deep breath and stopping to think is hard to do - it is also the one thing that sets great testers apart from the "anyone can test" type of testers. 

And so...

The nature of projects has remained the same for... ever.  We find ourselves in situations we did not intend to be in.

Think clearly, then act precisely.

How do you learn to think?  Some suggestions:

Attend CAST - this July in San Jose, California ;
Attend Let's Test - a conference new this year that was a smashing success from all I have seen (next year's should be good too!) ;
Attend PSL - Problem Solving Leadership - come on - just search for it, you'll find it ;
Find a Meetup of testers near you - and go REGULARLY;
No Meetup near you?  Start one.

In general, hang with smart people and talk with them.  Don't be the smartest person in the room - ask about things you don't know or are looking for insight on.

Then act on what you have learned.   You may not achieve the "great things" the spin-meisters would have you achieve - or would tell people you have achieved.  Sometimes status quo ante bellum is the best you can hope for.

For that, you must know the costs of what you are doing and the risks around stopping or continuing.

That is a topic for another day.

Thursday, June 14, 2012

You Call That Testing? Really? What is the value in THAT?

The local tester meetup was earlier this week.  As there was no formal presentation planned it was an extended round table discussion with calamari and pasta and wine and cannoli and the odd coffee.

"What is this testing stuff anyway?"

That was the official topic.

The result was folks sitting around describing testing at companies where they worked or had worked.  This was everything from definitions to war-stories to a bit of conjecture.  I was taking notes and tried hard to not let my views dominate the conversation - mostly because I wanted to hear what the others had to say.

The definitions ranged from "Testing is a bi-weekly paycheck" (yes, that was tongue-in-cheek, I think) to the more philosophical "Testing is an attempt to identify and quantify risk."  I kinda like that one.

James Bach's definition was also quoted: "Testing is an infinite process of comparing the invisible to the ambiguous in order to avoid the unthinkable happening to the anonymous."

What was interesting to me was how the focus of the discussion was experiential.  There were statements that "We only do really detailed, scripted testing.  I'm trying to get away from that, but the boss doesn't get it.  But, we do some 'exploratory' work to create the scripts.  I want to expand that but the boss says 'No.'" 

That led to an interesting branch in the discussion, prompted by a comment from the lady-wife who was listening in and having some pasta.

She asked "How do you change that?  How do you get people to see the value that you can bring to the company, so you are seen as an asset and not a liability or an expense?"

Yeah, that is kind of the question a lot of us are wrestling with.

How do you quantify quality?  Is what we do related to quality at all?  Really?

When we test we... 

We exercise software, based on some model.  We may not agree with the model, or charter or purpose or ... whatever.  There it is.  

If our stated mission is to "validate the explicit requirements have been implemented as described" then that is what we do, right?  

If our stated mission is to "evaluate the software product's suitability to the business purpose of the customer" then that is what we do, right?

When we exercise software to validate that the requirements we received have been fulfilled, have we done anything to exercise the suitability to purpose?  Well, maybe.  I suspect it depends on how far out of the lines we go.

When we exercise software to evaluate the suitability to purpose, are we, by definition, exercising the requirements?  Well, maybe.  My first question is, do we have any idea at all how to judge the suitability to purpose?  At some shops, well, maybe - yes.  Others?  I think a fair number of people don't understand enough to understand that they don't understand.

So, the conversation swirled on around testing and good and bad points.

How do we do better testing?

I know reasonably few people who don't care about what kind of a job they do.  Most folks I know want to do the best work they can do.

The problem comes when we are following the instructions, mandate, orders, model, whatever, that we are told to follow, and defects are reported in production.  Sometimes by customers, sometimes by angry customers.  Sometimes by customers saying words like "withhold payment" or "cancel the contract" or "legal action" - that tends to get the attention of certain people.

Alas, sometimes it does not matter what we as testers say.  The customers can say scary words like that and get the attention of the people who define the models we lowly testers work within.  Sometimes the result is we "get in trouble" for testing within the model we are told to test within.  Of course, when we go outside the model we may get in trouble for that as well.  Maybe that never happened to you?  Ah well.

Most people want to do good work - I kinda said that earlier.  We (at least I and many people I respect) want to do the absolute best we can.  We will make mistakes.  Bugs will get out into the wild.  Customers will report problems (or not and just grumble about them until they run into someone at the user conference and they compare notes - then watch the firestorm start!)

Part of the problem is many (most) businesses look at testing and testers as expenses.  Plain and simple.  It does not seem to matter if the testers are exercising software to be used internally or commercial software to be used by paying customers.  We are an expense in their minds.

If we do stuff they do not see as "needed" then testing "takes too long" and "costs too much."  What is the cost of testing?  What is the cost of NOT testing?

I don't know.  I need to think on that.  For one of the companies I worked for, once upon a time, the cost was bankruptcy.  For others it was less dramatic, but avoiding the national nightly news was adequate incentive for one organization I worked for.

One of the participants in the meeting compared testing to some form of insurance - you buy it, don't like paying the bill, but when something happens you are usually glad you did.  Of course, if nothing bad happens, then people wonder why they "spent so much" on something they "did not need."

I don't have an answer to that one.  I need to think on that, too.

So, when people know they have an issue - like a credibility gap or perceived value gap - how do you move forward?

I don't know that either - at least not for everyone.  No two shops I've been in have followed the same path to understanding, either.  Not the "All QA does is slow things down and get in the way" shop nor the "You guys are just going through the motions and not really doing anything" shop.  Nor any of the other groups I've worked with.

Making the Change

In each of these instances, it was nothing we as testers (or QA Engineers or QA Analysts or whatever) did to convince people we had value and what we did had value.  It was a Manager catching on that we were finding things their staff would not have found.  It was a Director realizing we were working with his business staff and learning from them while we were teaching them the ins and outs of the new system so they could test it adequately.  

They went to others and mentioned the work we were doing.  They SAW what was going on and realized it was helping them - The development bosses saw the work we did as, at its essence, making them and their teams look good.  The user's bosses realized we were training people and helping them get comfortable with the system so they could explain it to others, while we were learning about their jobs - which meant we could do better testing before they got their hands on it.

It was nothing we did, except our jobs - the day-in and day-out things that we did anyway - that got managers and directors and vice-presidents and all the other layers of bosses at the various companies - to see that we were onto something.

That something cost a lot of money in the short-term, to get going.  As time went on, they saw a change in the work going on - slowly.  They began talking about it and other residents of the mahogany row began talking about it.  Then word filtered down through the various channels that something good was going on.  

The people who refused to play along before began to wander in and "check it out" and "look around for themselves."  Some looked for a way to turn it to their advantage - any small error or bug would be pounced on as "SEE!  They screwed up!"  Of course, before we came along, any small errors found in production would be swept under the rug as something pending a future enhancement (that never came, of course).

We proved the value by doing what we did, and humbly, diplomatically going about our work.  In those shops that worked wonders.

And so...

We return then to the question above.  How do we change people's perspectives about what we do? 

Can we change entire industries?  Maybe.  But what do we mean by "industries?"  Can we at least get all the developers in the world to recognize we can add value and help them?  How about their bosses? 

How about we start with the people we all work with, and go from there?  I don't know how to do that in advance.  I hope someone can figure that out and help me understand.

I'll be waiting excitedly to hear back from you.