Monday, July 23, 2012

CAST 2012, or CASTing Through the Computer Screen

CAST 2012 recently wrapped up.  The annual Conference of the Association for Software Testing was held in San Jose, California, Monday and Tuesday, July 16 and 17, 2012.  Wednesday the 18th saw tutorials.  Thursday was the Board of Directors meeting.

First, even though I had helped plan the Emerging Topics track, conflicts arose that kept me away (physically).  I was a tad disappointed that, after all the work that went in, I would not be drinking in the goodness that is the CAST experience.

Why is that a big deal? 

It's a conference, right?

CAST is unlike any conference I have ever attended.  It makes a point of being a participatory, engaged event - not merely sitting and listening to someone drone on while reading PowerPoint slides.  There is required time in each session for questions and discussion around the presentation.  Some of those discussions can be quite brutally honest.

This becomes an issue for some people.

When one engages with people who think, rote-learned answers do not hold up well.  Watching the interplay as people, including the presenters, learn is, in itself, a top-flight education in testing.

And I could not be there.  Bummer.

I chose the next best thing - I joined the video stream as much as the day job and other responsibilities allowed.  I listened to Emerging Topics presentations, keynotes, a panel discussion on the results of Test Coach Camp, and CASTLive - the evening series of interviews with speakers and participants - with online chat and... stuff.

While I could not be there in person, this was a pretty decent substitute.

Other cool things

Keynotes by Tripp Babbitt and Elisabeth Hendrickson.  Great panel discussions on what people learned at Test Coach Camp, and other cool stuff.

Simply put, there are recordings to be viewed and listened to here.

Other things happened as well, like announcing the election results for AST Board of Directors.

I was elected to the Board of Directors to serve a single year, to fill a position left vacant by a Board Member who could not finish his term.

I am deeply honored to have been selected to serve in this way.

I am humbled, and looking forward to this new chapter in testing adventure.

Saturday, July 7, 2012

In the Beginning, on Testing and User Experience

Behold, an Angel of the Lord appeared before them and said "Lift up your faces, oh wandering ones, and hear the words of the One who is Most High. 

Thus says the Lord, the Keeper of the Way and Holder of Truth, 'Do not let your minds be troubled for I send a messenger unto you with great tidings.  The Path to Software Quality shall this messenger bring.  Heed the words and keep them whole. 

Worry not your hearts over the content of software releases.  Behold, one shall come bearing Requirements, functional and non-functional. And they shall be Good.  Study them and take them under your roof that you may know them wholly.

If, in your frail-mindedness, such Requirements are beyond the comprehension of lesser beings - Designers and Developers and Analysts, yea, unto the lowly Testers - who are unable to comprehend such lofty concepts, fear not to come and humbly ask what these Requirements mean.  Lo, it shall be revealed unto you all that you need to know.  On asking, that which is given in answer shall be all that is revealed unto the lesser ones, even unto the lowly Testers.

Ask not more than once, for doing so will try the patience of the Great Ones who are given the task of sharing the Revelation of the Requirements.  To try the patience of these Great Ones can end with a comment in your permanent file, unto impacting your performance review and any pay raise you may long for now, and in the future.

Developers and Designers and lowly Testers shall then go forth from the place to the place of their cubicles and prepare such documents as is their task to prepare.'"

Then the Angel of the Lord spoke to them this warning with a voice that shook the Earth and the Hills and the Valleys and caused the Rivers to change from their courses.  "Seek not the counsel of those in the Help Desk nor those in Customer Support nor those in Services.  Their path will lead you astray for their knowledge is limited and perception flawed.  Avoid them in these tasks before you as they are not worthy to hear or read the words handed down to you from the Requirements Bearer.  Thus says the One who is Most High."

1st Book of Development Methodology, 2:7-45

I've seen project methodologies adapted at companies that look and read painfully close to this.  None have gone this far, perhaps - at least not in language and phrasing.  Alas, a painful number have the spirit and feeling of the above.

Rubbish.

As you sow, so shall you reap.

It does not matter what methodology your organization uses to make software.  It does not matter what approach you have to determining what the requirements are.  They shall be revealed in their own time.

If you are making software that people outside of your company will use - maybe they will pay money to use it.  Maybe that is how your company stays in business.  Maybe that is where the money coming into the company comes from.

If that is the case, I wonder where the "Requirements" are coming from.  At companies I have worked for in this situation, the "requirements" come from Sales folk.  Now, don't get me wrong, many of these Sales folk are nice enough.  I'm just not sure they all understand certain aspects of technology and what can be done with software these days.

Frankly, I'm not sure if some of them have any idea what the software they are selling does.

That's a pity.  The bigger pity is that many times the people they are working with to "get the deal" have no real idea what is needed.

They can have a grand, overall view of the needs.  They can have a decent understanding of what the company wants, or at least what their bosses say the company wants, from the new or improved software.  They may know the names of some of the managers of the people using the software every day.

This does not include the people who glance at the dashboard and say things like, "Hmmm. There seems to be a divergence between widget delivery and thing-a-ma-bob capacity.  Look at these charts.  Something is not right."

That's a good start.  Do they have any idea where the data for those charts comes from?  Do they have any idea how to drill down a bit and see what might be going on?  In some cases, they might.  I really hope that is true in the majority of cases.  From what I have seen, though, it isn't.

The Average User

Ah yes.  What is an "average user"?  Well, some folks seem to have an idea and talk about what an "average user" of the system would do.  When they are sales folk who sell software (maybe such folk exist), I am not certain what they mean.

Do they mean an "everyman" kind of person?  Do they picture their mother trying to figure out the Internet and email and search engines?  I don't know. 

Do they mean someone who follows the script, the instructions they are given on "how to do this function" - probably copied from the user manual - for 2.5 hours (then a coffee break), then 2 hours (lunch!), then 2 hours (then an afternoon break), then 1.5 hours (then home)?  Maybe.

Have any of you reading this seen that actually happen?  I have not.

So, folks tasked with designing a system to meet requirements - requirements derived from conversations with people who may or may not have contact with the system or process in general, and who may or may not understand the way the system is actually used at their company (and when you multiply this across the "collected enhancement requests" model many companies follow) - will then attempt to build a narrative that addresses all of these needs, some of them competing.

This generally describes the process at four companies I know.

The designers may or may not create "user stories" to walk through scenarios to aid in the design.  They will look at aspects of the system and say, "Yup, that covers all the requirements.  Good job, team!"

What has not happened in that model?   Anyone?

No one has actually talked with someone who is in contact with (or is) an "average user."

When I raised the point at one company that this was an opportunity for miscommunication, I was told "the users are having problems because they are using it wrong."

Really?  Users are using the system wrong?  

REALLY?

Or are they using it in a manner our model did not anticipate? 

Are they using it in a manner they need to in order to get their job done, and our application does not support them as they need it to?  Are they using it as they always needed to, working around the foibles of our software, and their boss' boss' boss - the guy talking with the sales folks - had no clue about?

Why?

Is it because the Customer Support, Help Desk, Professional Services - whatever they are called at your company - may know more about how customers actually use the software than, say, the "product experts"?  Is it because of the difference between how people expect the software to be used and how those annoying carbon-based units actually use it?

As testers, is it reasonable to hold to testing strictly what one model of system use tells us is correct?   When we are to test strictly the "documented requirements" and adhere to the path as designed and intended by the design team, are we confirming anything other than their bias?

Are we limiting how we discover system behavior?  Are we testing?

I know I need to think on this some more. 

Thursday, July 5, 2012

On Best Practices, Or, What Marketing Taught Me About Buzzwords

An eon or two ago, whilst working toward my Bachelor's Degree, I had a way interesting professor.  Perfectly deadpan, and he recognized people's faces and names - but never the two together.  Crazy smart, though.

He had multiple PhDs.  Before deciding he would enter academia (PhD #2), he worked as a geologist searching out potential oil deposits (PhD #1) for a large multinational oil company.  It was when he got tired of "making a sinful amount of money" (his words) that he decided he should try something else.

He had an interesting lecture series on words and terms and what they mean.  One subset came back to me recently.  I was cleaning out my home office - going through stuff that at one point I thought I might need.  In one corner, in a box, was a collection of stuff that included the textbook and lecture notes from that class.  I started flipping through them, smiling at the long-ago youthful optimism I had recorded there.

One thing jumped out at me as I perused the notes of a young man less than half my current age - a two-lecture discourse on the word "best" when used in advertising.

Some of this I remembered after all these years.  Some came roaring back to me.  Some made me think "What?" 

X is the Best money can buy.

Now, I've noticed a decline in TV advertising that makes claims like this: "Our product is the best product on the market.  Why pay more when you can have the best?"  Granted, I don't watch a whole lot of TV anymore.  Not that I did then either - no time for it.  (Now, I have no patience for most of it.)

Those ads could be running on shows I don't watch.  This is entirely possible.  Perhaps some of you have seen these types of ads. 

Anyway, part of one lecture was a discussion on how so many competing products in the same market segment could possibly all claim to be the best: toothpaste, fabric softener, laundry detergent, dish detergent, soft drink, coffee, tea... whatever.  All of them were "the best."

The way this professor worked his lectures was kind of fun.  He'd get people talking, debating ideas, throwing things out and ripping the ideas apart to show why the logic was flawed.  He'd start with a general statement on the topic, then put up a couple of straw men to get things going.  (I try to use the same general approach, when I can, when presenting.  It informs everyone, including the presenter.)

The debate would rage on until he reeled everyone in and gave a summary of what was expressed.  He'd comment on what he thought was good and not so good.  Then he'd present his view and let the debate rage again.

I smiled as I read through the notes I made - and the comments he gave on the points raised.

Here, in short, is what I learned about the word "Best" in advertising: Best is a statement of equivalence.  If a product performs the function it was intended to do, and all other products do the same, one and all can claim to be "the best."

However, if a product had claims that it was "better" than the competition, they needed to be able to provide the proof they were better or withdraw the ad.

So Best Practices?

Does the same apply to that blessed, sanctified and revered phrase "Best Practice?"   

Ah, there be dragons! 

The proponents of said practices will defend them with things like, "These practices, as collected, are the best available.  So, they are Best Practices."  Others might say things like, "There are a variety of best practices.  You must use the best practice that fits the context."

What?  What are you saying?  What does that mean?

I've thought about this off and on for some time.  Then, I came across the notes from that class.

Ah-HA!  Eureka!  Zounds!  

Zounds?  Really? Yes, Zounds!  Aside from being the second archaic word I've used in the post, it does rather fit.  (I'll wait while you look up the definition if you like.)

OK, so now that we're back, consider this:  The only way for this to make any sense is to forget that these words look and sound like perfectly ordinary words in the English language.  They do not mean what one might think they mean.

X toothpaste and Y toothpaste cannot both be the best - how can you have TWO best items?  Unless they mean "best" as a statement of equivalence, not superiority.

Then it makes sense.  Then I can understand what the meaning is.

The meaning is simple: Best Practices are Practices that may, or may not work in a given situation.

Therefore, they are merely practices.  Stuff that is done.

Fine.

Now find something better.