Tag: abstraction

Why Electronic Intelligence?

Electronic Intelligence, or ELINT, is the gathering of intelligence by using sensors to detect electromagnetic emissions from a system, for use in locating, identifying and classifying the source.  I think this description is a good abstraction for what we do as software testers.  The sensors we use are automation, exploratory methods, error logs and our judgement to locate and isolate faults.  Once a defect is found and documented we can then begin the tasks of identifying its taxonomy, figuring out how it was introduced and adding it to the regression list.  Gathering intelligence and classifying threats within the system is just one part of the equation.  The tests we write have to be targeted to create the most harmful or frequent failure conditions, often with a limited amount of time and manpower.  Compounding the problem, our production software exists in a dynamic environment full of different types of users: friendly, hostile and somewhere in between.  These users have different expectations and ways of interacting with the system that can have subtle yet malicious effects over time.

There are many sources of intelligence about what you are testing.  The primary, and often most elusive, is the developers themselves.  I would have to say that most of the empathy and user-advocacy skills a tester can develop could most effectively be directed towards the developer.  Software developers have the most difficult and abstract interaction with the code and the system as a whole, and often the narrowest.  This is where the tester's ability to use research, problem solving and heuristics comes into play.  Empathy with the end user should be the default state of mind for the tester, as they are the user advocate in the development and testing process.  Testers can be thought of as spies, wearing the disguise of the user to trick the system under test into showing its weaknesses.  Once a weakness is understood, you can bring the main force of testing assets to bear on the vulnerability, mitigating it with a pre-integration automated check or using it as a piece of intel to find other weaknesses in the design.

This is a loose and fun association that we're playing with, but it also has a purpose: to use one context to help understand another.  Please leave a comment if you can expand on this abstraction or have thoughts on cross-context explanation.