Automated Driving Vs. Testing


The National Highway Traffic Safety Administration has wanted to ban self-driving cars; Elon Musk says the technology could lead to a ban on human driving. Does this debate sound familiar? Testers have struggled for years to come to terms with the role of automation within their own companies and across the software industry as a whole, and they will likely continue to do so far into the future.

Napping Behind The Wheel

Maybe it's not napping; maybe it's reading, working, or simply not getting lost. Whatever you would rather be doing than driving, that is pretty much the heart of the appeal of self-driving cars.

This same appeal is what has drawn so many people to automated testing. Testing is useful, but just like your morning commute, it is also generally categorized as mundane and tedious. It's an obvious temptation for organizations to want to replace this activity with something they feel is a better use of their time and resources.

When Tech Goes Wrong

A big fear around self-driving cars is the potential for complicated situations, technical failures, glitches, and bugs related to the car's autonomy. Yet people regularly accept similar risks when they take the wheel: tire blowouts, hazardous conditions, failing parts. The difference is that we are comfortable with how to reconcile these problems; for individual issues we go to mechanics, and in larger cases there are recalls. We know the pattern and trust the process and its results. We aren't sure what that process will be with self-driving cars, so there is some healthy apprehension.

That same logic gets applied to software every day. When there is a bug in software, everyone on the team knows the process for getting it fixed. I bet that, just like with the mechanic metaphor, most organizations hand over the bug, wait for a fix, and as long as it seems to work, they don't question the developers' internal process unless there are obvious and repeated failures. Automated tests fall right into this pattern: as long as there aren't glaring problems, many are inclined to trust that everything is working well until they are forced to intervene.

Why is it so easy to see that autonomous driving could be problematic while automated testing is treated as a sure-fire solution? Is it a question of liability? When a human is testing, a person is to blame for any problems or bugs that are not uncovered, and it's an uncomfortable conversation if that person isn't performing as desired. When automation fails, it's nameless and faceless; it's just another bug in code. There's also the question of cost: it's easier for people to quantify the cost of a tester's salary than the cost of writing and maintaining automated test suites.

What If I Want To Drive?

What's interesting is that in most movies featuring self-driving cars, if the main character gets in the driver's seat, at some point (if not the whole time) they will want to take manual control. When the time comes for harrowing action, the protagonist just can't count on the car to drive itself; it isn't enough.

The Venn diagram of human driving and self-driving cars doesn't look like two completely overlapping circles, and I doubt many people would question that. There are scenarios where human involvement is preferable, if not necessary. We accept this without issue from Hollywood, so why is it controversial in testing? It doesn't mean that your team is doing something wrong; it's just that sometimes the situation warrants a different approach, and there is value in recognizing that.

There's No Baggage in the Trunk

People have chosen sides on the battlefield of automated testing. The Test-Mobile's trunk is already full of emotional baggage from years of debate over this topic, so the next time it comes up, maybe take this metaphor out for a test drive.