Don’t Shift Left; Expand Outwards and Accelerate Failure

My boss sent round an article from TechTarget, written in September 2017, about embracing shift-left testing as “the latest theory”. He sent it around not because we should look into adopting it, but because we’ve been doing it – and I checked – since 2008. “Sounds familiar”, he quipped.

“Shift left”? Bleurgh!

Firstly, let’s address the term “shift left”. I’ve used it for a while to explain how we test, but recent Twitter noise has made it clear that the term is limited.

It suggests movement in one direction. Put like that, I don’t think that’s what we really mean. We’re not suggesting that, in order to get involved earlier in the lifecycle, we stop doing testing at the end, are we? What we’re really doing is expanding the range of activities that testers get involved in.

“Left” leads to “right”, which suggests a one-dimensional reality in which your options are severely limited. Modern organisations are multi-dimensional, so why limit ourselves to a single direction in a single dimension? Are the activities that occur notionally to the left of testing in the software development lifecycle the only things we can usefully get involved with?

“Shift left” is about getting testing involved earlier in the software lifecycle so we can try and stop bugs even being written. Broadly, it’s about adding value sooner rather than later. But, again, why limit ourselves to adding value?  Why not look in the other direction – towards operations – and see if we can extract value to make our testing better in the future?

But wait, I hear you cry. If we’re expanding, surely we’re spreading ourselves more thinly, and doing less testing at the end? That may be the case, but the idea is that, by getting involved sooner and identifying problems earlier, an hour spent early in the release should save you many times that late in the day. It’s an investment that you hope pays off as fewer bugs and regressions.

[Image: Dan Ashby’s Continuous Testing in DevOps]

As I typed this, Dan Ashby’s surely-famous-by-now image popped into my brain. You can test everywhere, not just a bit more to the left.

So instead of “shift left” I’m going to call it “expand outwards”. It doesn’t suggest that you have to give up anything, and it’s direction-agnostic. You can expand in any direction that lets you add or extract value.

What does “expanding outwards” look like?

We’ve been expanding testing outwards ever since we adopted agile-like behaviours. As a company, we embraced Agile methodologies as an R&D operation. Importantly, I had the freedom to come up with a test process from scratch that would allow testers to get involved from the earliest possible point in the release.

Initially, we did just expand in one direction; “left”, i.e. into development, design, and requirements. We got involved in design and estimation meetings, we reviewed requirements and user stories, and we got developers to review our test ideas.

Over time, we looked to expand in other directions; firstly to the “right”, towards Customer Support. As the people who work closely with the customer and support the products in live usage, theirs is the role that knows the most about real-world customer usage of the products. They are a vital resource to get involved if you can. We “shifted them left” into testing so that they could add value to us.

We also get involved with testing documentation, and tech authors share our test environments where possible.

Those are just some examples. We didn’t stop there, and neither should you. Remove your filters and think about everyone even loosely associated with the product, which probably means everyone in the business, and includes your customers. Think about what value you can give to, or extract from, each / any / all of them.

Let’s step back; what are we trying to do?

When we first looked to adopt Agile, we spoke to a consultant who probably said lots of words, but two have stuck with me ever since;

accelerate failure

Put another way, if you know you’re going to make mistakes – which you will, because you’re a human being – then do everything you can to make them, or identify them, as early as possible so it’s as cheap as possible to fix them.

We need to accelerate not just the failure, but the remediation loop as well. The goal has to be that errors spend as little time in the system as possible, whether they be requirements ambiguities, or code defects, or whatever. The longer an error remains in the system, the more it becomes cemented in, compounded upon, worked around, and expensive to remove.

“You mustn’t be afraid to dream a little bigger”

What we’re really trying to achieve is the acceleration of failure across the entire software development lifecycle, not just testing. We can expand testing outwards to try and find problems sooner, but if the rest of the operation doesn’t follow suit, or inhibits your ability to do testing in different places, then you will encounter a glass bubble beyond which you cannot add or extract value.

Adoption of accelerated failure needs to be systemic, not piecemeal, affecting only a single role. In order to realise all the value of accelerated failure, everyone needs to do it, or at the very least accommodate it.

The hardest part of this whole acceleration of failure business is overcoming any cultural momentum around who does what and when. An agile (in the small, nimble sense) company, with fewer organisational and physical boundaries between roles / projects, will be able to change direction more easily than a larger, distributed, older organisation whose culture is probably more rigid.

Even in a small organisation, it’s not straightforward. From experience, the hardest part can be getting other roles to allow testing to be part of the conversation. This is probably due to perception issues around what testers do more than anything else. Again, from experience, testers working closely with developers quickly erodes those concerns as the value testers add becomes more evident.

How do I Accelerate Failure?

Think about the roles you interact with regularly. They are likely people involved in the software development process. Learn as much as you can about how they operate and what they do. Pair with them, read their process docs if they have them. Figure out what info they have that you can use, and vice versa.

Encourage them to reciprocate and to remove any barriers. There should be no reason for barriers to sharing of information that ultimately benefits the business and the customer. Everyone in the business is there to provide, or support the provision of, products to the customers.

Then expand your focus to other roles, and repeat. Ignore organisational hierarchies and chains of command. Knock on doors, cold-email people. Walk about and have coffee.

You’re a tester; a sponge for knowledge. Absorb it wherever you can find it, and let it moisten the brains of as many others as you can.

Don’t limit that value-add to the person in the seat to your left.

PS. Communicate!

If I come across as smug about having taken this approach for nearly ten years, it’s because I am (a little). However, the important takeaway for us all here is to be aware of your approach and compare it to the approaches that others are using. You might be ahead of the curve. You might be putting a new slant on an old idea. You might be using an old idea whose time has come back around.

Don’t assume that what you’re doing, or how you’re doing it, is normal, nothing special, or that everyone else is doing it (better perhaps). Consider that, just maybe, how you’re doing it is innovative, unique, The Next Big Thing.

Setting up an ELK stack with Docker

Recently I’ve been working in an environment comprised of nearly 60 Docker containers. When things go wrong, as they sometimes do, you can make educated guesses about which logs to look at but sometimes you’ll guess wrong. Or you know which ones to look at but want to watch several at once without having 8 PuTTY windows open.

You need something more fit for purpose. Enter ELK. ELK stands for Elasticsearch, Logstash, and Kibana. Rather than explain them in the order they appear in the acronym, I’ll do it in the order in which they are used. And rather than attempt the explanations myself and screw them up, the following summaries are from this helpful O’Reilly article.

Logstash “is responsible for giving structure to your data (like parsing unstructured logs) and sending it to Elasticsearch.”

Elasticsearch “is the search and analysis system. It is the place where your data is finally stored, from where it is fetched, and is responsible for providing all the search and analysis results.”

Kibana “allows you to build pretty graphs and dashboards to help understand the data so you don’t have to work with the raw data Elasticsearch returns.”

What am I building here?

OK, so what does all that look like? It’s pretty simple. You configure your logs to write to Logstash. Logstash then filters the log messages, turning the unstructured messages into structured data. That structured data is output to Elastic, where it is indexed. Kibana then allows you to query that indexed structured data to discover useful information and visualise it.

Getting Started

This is the easy part. Follow the instructions here to clone the ELK stack to your machine and start it up. This basically gets you to pull the three containers and fire them up, with some configuration taken care of. Nothing in this life, however, is for free, so there is some work to be done.

[I must offer all glory to Anthony Lapella for producing such a beautifully wrapped ELK package]
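In case that link doesn’t survive, the commands look roughly like this – I’m assuming the stack is the docker-elk repository credited just above, so double-check the repo’s own README for the authoritative steps;

# clone the pre-wired ELK stack (repository location assumed - see the link above)
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk

# pull the Elasticsearch, Logstash and Kibana images and start them in the background
docker-compose up -d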

Wiring

I’ve observed before that modern software development is mostly plumbing. Happily, the ELK stack is pre-configured so that you don’t need to wire its components together yourself. However, you do need to;

  1. Configure your Docker containers to log to a location.
  2. Configure Logstash to pick the logs up from that location.
  3. Configure Logstash to filter the data.
  4. Configure Logstash to send the data to Elasticsearch.

Configuring container log location

Because I might want to run my containers without sending their logs to ELK, I need a way to turn it on and off without too much messing around.

The way I did this was by using a second docker-compose.yml file. This secondary file – which I called “elk-logging.yml” – contains a section for each of your containers. Each section contains the following;

my_service:
    log_driver: syslog
    log_opt: 
        syslog-address: "tcp://localhost:5000"

What this does is tell Docker to use the syslog logging driver for that container, and to send its logs over TCP to port 5000, which is where Logstash will be listening.

So what you need to do is create your secondary YAML file – elk-logging.yml – with as many of the above sections as you have containers you want to log to ELK, with the “my_service” parts replaced with the names of all your containers.
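As a sketch, an elk-logging.yml for a stack with two containers called web_app and worker (hypothetical names – use your own) would look something like this;

# elk-logging.yml - logging overrides only; service names must match docker-compose.yml
web_app:
    log_driver: syslog
    log_opt:
        syslog-address: "tcp://localhost:5000"

worker:
    log_driver: syslog
    log_opt:
        syslog-address: "tcp://localhost:5000"

One caveat: the log_driver / log_opt keys shown here belong to version 1 of the compose file format. If your docker-compose.yml declares version 2 or later, the same settings live under a logging: key, as driver: and options:.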

Configuring Logstash input

The next step is to configure Logstash’s input to listen on that port. Fortunately, the ELK stack you cloned earlier already has this configured. The configuration file is in docker-elk/logstash/pipeline/logstash.conf.

Looking at this file, you’ll see an input section that listens on TCP port 5000.

input { tcp { port => 5000 } }

So you don’t need to do anything; this is just for information.

Grokking in Fullness: configuring Logstash filters

This is the most fiddly part of the exercise, because you need to mess about with grok regexes. When you’re dealing with this many containers, the chances of them all using the same log output syntax are remote. There are a couple of approaches you could take;

  1. Specify multiple filters, and fall through each until a working filter is found (sketched below).
  2. Specify a single filter that is powerful enough to recognise optional components as and when they appear.
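For reference, the first approach looks roughly like this – grok’s match setting accepts a list of patterns and, by default, stops at the first one that matches. The patterns themselves are just placeholders here;

filter {
    grok {
        # break_on_match defaults to true, so grok stops at the first pattern that matches
        match => {
            "message" => [
                "<grok pattern for container type A>",
                "<grok pattern for container type B>"
            ]
        }
    }
}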

I tried both of these approaches, but struggled to get the syntax in the logstash.conf file right, so I eventually settled on the One Grok To Rule Them All. And this is pretty much what it looks like;

<%{NUMBER:priority}>%{SYSLOGTIMESTAMP:syslogtimestamp}\s%{GREEDYDATA:container}\[%{POSINT:containerprocessid}\]\:\s*((?<springtimestamp>%{YEAR}[\-\/]%{MONTHNUM2}[\-\/]%{MONTHDAY}\s%{TIME}))?\s*((\[)?%{LOGLEVEL:loglevel}(\])?)?\s*(%{POSINT:processid})?\s*(---)?\s*(\[\s*(?<thread>[A-Za-z0-9\-]*)\])?\s*((?<loggingfunction>[A-Za-z0-9\.\$\[\]\/]*)\s*\:)?\s*%{GREEDYDATA:logmessage}

Impressive looking, right? I won’t attempt to explain every component, but will try to summarise. Firstly, my log messages seem to broadly be “Syslog format followed by Spring Boot format”, e.g.

<30>Sep 11 10:18:58 my_container_name[1234]: 2017-09-11 14:18:58.328 WARN 14 --- [      thread_name] a.b.c.d.CallingFunction : Oops, something went a little strange, but nothing to worry about.

Everything up to the colon after the container process id is syslog; everything after it is Spring Boot. Here’s a map of each bit of the message to the part of the grok that extracts it;

<%{NUMBER:priority}>                <30>
%{SYSLOGTIMESTAMP:syslogtimestamp}  Sep 11 10:18:58
%{GREEDYDATA:container}             my_container_name
\[%{POSINT:containerprocessid}\]    [1234]
((?<springtimestamp>%{YEAR}[\-\/]%{MONTHNUM2}[\-\/]%{MONTHDAY}\s%{TIME}))?         2017-09-11 14:18:58.328
((\[)?%{LOGLEVEL:loglevel}(\])?)?   WARN
(%{POSINT:processid})?              14
(\[\s*(?<thread>[A-Za-z0-9\-]*)\])? [     thread_name]
((?<loggingfunction>[A-Za-z0-9\.\$\[\]\/]*)\s*\:)?         a.b.c.d.CallingFunction
%{GREEDYDATA:logmessage}            Oops, something went a little strange, but nothing to worry about.

The lower-case names – priority, container, loglevel, thread, logmessage and so on – are the fields we’re extracting into. The upper-case tokens – NUMBER, SYSLOGTIMESTAMP, GREEDYDATA and friends – are built-in grok patterns that we’re using; you can find them all here. Everything else is hardcore grok!

Everything after containerprocessid is wrapped inside “()?”, which indicates that everything inside the brackets is optional. This is because, from container to container and message to message, the various components of the Spring Boot log message weren’t always present, so I needed to tell the grok that.

Like it or not, you’re going to have to figure this bit out for yourself. It involves trial and error, and learning each aspect of the regex / grok syntax as you go. When I started this process, I had used regexes a bit over the years, but was by no means a guru. Once you understand the syntax, it’s quite quick to get something powerful working, plus it looks super hard and you can impress all your coworkers who don’t grok 🙂

My advice is; start small, and figure out how to extract each part of your log message one bit at a time. The good news is that there are lots of online grok checkers you can use. Kibana has one built in, and this one was really useful. A list of grok patterns is also available here.
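For example, against the sample message above, you might start with just the syslog prefix and dump everything else into one bucket;

<%{NUMBER:priority}>%{SYSLOGTIMESTAMP:syslogtimestamp}\s%{GREEDYDATA:rest}

…check that matches in one of those grok checkers, then peel the container name and process id off the front of the bucket;

<%{NUMBER:priority}>%{SYSLOGTIMESTAMP:syslogtimestamp}\s%{GREEDYDATA:container}\[%{POSINT:containerprocessid}\]\:\s*%{GREEDYDATA:logmessage}

…and so on, one field at a time, until you’ve built up something like the monster above.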

Configuring Logstash output

Finally, you need to configure Logstash to send the data to Elastic. As with the input section, this is already pre-configured as part of the ELK stack you cloned at the start.

output { elasticsearch { hosts => "elasticsearch:9200" }}

This shows that Logstash is sending the logs out to elasticsearch on TCP:9200.

OK, how do I get all this working together?

It’s pretty easy. There are only 3 steps to it.

1. Update the logstash.conf with your filter grok

Your final logstash.conf file should look something like;

input {
    tcp { 
        port => 5000
    }
}

filter {
    grok {
        match => {"message" => "<<your grok here>>"}
    }
}

output { 
    elasticsearch { 
        hosts => "elasticsearch:9200" 
    }
}

You just need to copy your logstash file over the top of the existing one in /docker-elk/logstash/pipeline. You can copy the original aside if you want, but move it out of that folder as logstash can get confused if it finds two possible config files.
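Assuming you’ve written your version as my-logstash.conf (a made-up name) in your home directory, that’s something like;

# keep the original somewhere outside the pipeline folder
mv docker-elk/logstash/pipeline/logstash.conf ~/logstash.conf.orig

# drop your version in its place
cp ~/my-logstash.conf docker-elk/logstash/pipeline/logstash.conf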

If you need more help on the format of the logstash.conf file, this page is useful.

2. Restart logstash to pick up the changes to the config file

Restarting the logstash container is simple. Make sure you’re in the /docker-elk folder, then simply;

docker-compose restart logstash

Because it’s easy to make a formatting or syntax error in the file – especially in the grok – make sure you check that your logstash container is running, and that it isn’t complaining about syntax errors. To check whether the container is running;

docker ps | grep logstash

If your logstash container is there, that’s good. Then, to check the logstash log to see if any errors are visible;

docker logs -f dockerelk_logstash_1

At the end of the log, you should see a message showing that Logstash started successfully. If the container wasn’t running when you ran “docker ps”, there’s probably an error in the log showing where the syntax problem is.

3. Restart your application containers with the ELK override file

Finally, to restart your application containers and get them to log to logstash, navigate to where your application docker-compose.yml and your elk-logging.yml files live, and;

docker-compose stop

docker-compose -f docker-compose.yml -f elk-logging.yml up -d

This tells Docker Compose to start all the containers detailed in docker-compose.yml, with the additional logging parameters detailed in elk-logging.yml, creating them if necessary, and doing it in detached mode (so you can disconnect and the containers will keep running).

Configuring Kibana

We’re into the final stretch. The last piece of configuration we need to do is tell Kibana the name of the Elastic index that we want to search.

Again, the pre-configuration sorts most of this for you. All you need to do is navigate to http://<your server>:5601. Kibana will load and show a page that asks you to select an Index pattern and a time field.

Now, a new Elastic index is created every day, with names following the convention “logstash-yyyy.mm.dd” (it’s Logstash’s elasticsearch output that does this; its default index setting is “logstash-%{+YYYY.MM.dd}”).

The Kibana page will be pre-populated with an Index pattern of “logstash-*”, which means it will pick up every logstash-related index in Elastic.
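If you want to check that those indices really are appearing before you accept the pattern, you can ask Elasticsearch directly (add credentials with -u if your stack has security enabled);

curl 'http://<your server>:9200/_cat/indices?v'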

It will also be populated with a default time field. You could override this with one of the timestamps your grok extracted, but you may as well keep the default.

All you then do is accept these settings. Kibana will then connect to Elastic and return a list of fields stored in the indexes that match the “logstash-*” pattern, which should include all the fields your grok extracted (the field names from earlier on).

Can I look at my data now?

Yes! Go to Kibana’s Discover page and your data should be there. You can explore to your heart’s content and start messing around with filters and visualisations to extract all the hidden value that, without ELK, you would find very difficult to extract.

Enjoy!


If you find any errors in my instructions, or any part of the above is unclear, or you want / need more information, comment below and I promise I will update it.

Helpful pages

https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

https://discuss.elastic.co/t/grok-multiple-match-logstash/27870

https://github.com/hpcugent/logstash-patterns/blob/master/files/grok-patterns

http://knes1.github.io/blog/2015/2015-08-16-manage-spring-boot-logs-with-elasticsearch-kibana-and-logstash.html

The Sucky Answer Heuristic

We all know that there’s no such thing as a stupid question, but there are certainly stupid answers. All this recent talk of heuristics caused my brain to come up with what I’m calling The Sucky Answer Heuristic.

The Sucky Answer Heuristic is a specific instance of a more general emotional heuristic; if you feel an emotion, something is causing you to feel that emotion. If you’re testing something and have a response then – assuming you’re emotionally stable at the time – that response was likely triggered by something about the thing you are testing.

In this case, if you ask a question and the answer causes you to exclaim some version of “That sucks!”, then it’s worth examining further, and you should keep asking questions until the answers stop sucking – or until sucking is all they do. I’ll explain more about when to stop asking questions later.

What’s the point of focusing on Sucky Answers?

A sucky answer to a question is likely an indication of a lack of knowledge. If you’re asking a question of some oracle – which, as a tester, is quite likely – then a sucky response means your oracle isn’t a good oracle, or an oracle at all.

In either case, that’s a risk, and it’s our place to highlight all risks that threaten the value the product is meant to deliver.

Over time, as the product scope and architecture stabilises, the answers to the questions should improve as your oracles have a much better idea of what they’re producing. If the answers are getting worse, you’ve got problems.

In the worst case, really sucky answers could mean that you should stop asking questions, or perhaps even look for another role.

What does a sucky answer feel like?

How do you know what a sucky answer feels like? Well, it feels sucky. You get the answer and are left feeling unsatisfied, or more worried, or frustrated, or any number of negative emotions. It’s a subconscious, subjective judgement, but that’s fine.

Quality in all things is subjective, so the reaction to a poor quality response will also be subjective. However, here are some possible indicators of Sucky Answers, in ascending order of suckiness;

  • An answer that contradicts what you already “know” (this is “good-sucky”;  it suggests some clarification is required.)
  • An answer that takes a while to arrive (though there could be reasons other than the suckiness of the answer for the delay)
  • An answer that answers the question very vaguely (either in delivery, or content)
  • An answer that doesn’t answer the question, or answers a different question
  • An answer that never even appears
  • A response that doesn’t acknowledge the validity of the question; “That’s not a sensible question.”
  • A response that doesn’t acknowledge the validity of the questioner; “Why are you asking that question?”

“The answer was sucky because it wasn’t the answer I wanted!”

If you get an answer that wasn’t what you wanted, it’s not necessarily sucky. If a decision is made that you don’t agree with, but the reasoning is explained and is sound and logical, then that’s not sucky. Those sorts of answers you have to accept.

Continuing to ask questions because you don’t like that answer will, in this situation, not improve quality and will just label you as a pain in the ass who always wants things their way.

OK, so how do I work this thing?

You already know how to work this thing. You do this already. It’s not a new idea, I’ve just given it a stupid name. It probably already has a sensible name, of which, in my ignorance, I am unaware.

That said, when I was thinking about this, a picture was appearing in my brain, so I drew it. It has some interesting bits on it that are worth explaining, which might help you decide how to use this feeling of suckiness more effectively.

Broadly, this is a graph of Answer Quality against Time. The Time axis can be a duration of your choosing. I chose the length of a release because, as I covered above, I expect answer quality to gradually increase over the course of a release until all questions are satisfactorily answered before release.

Each time a question is posed and an answer received, we rate the quality of the response, based on the criteria I listed above. The wobbly line is a rolling average of all answer quality scores. A poor answer drags the average down, a good one improves it.

Product Problems

Let’s focus above the X axis first. In this region, the answers are coming, but are delayed, or vague, or confused. I term this the Product Problem zone, because sucky answers point to likely issues with the product. It’s where people don’t yet know everything about the product – which is a risk – so their answers are a bit sucky.

Over time, as people learn more about the product and the scope is clarified, answers should suck less and the answer score average should improve.

When some major aspect of the release changes – the scope changes, a new platform is introduced, a feature is dropped / refactored – the level of uncertainty increases, answers get suckier, and the average falls.

Hopefully, at some point in the release, the answers you get will be readily available and pretty non-sucky. This is your personal “good enough” level. You want your answer to continue to be at least this good to feel confident about how things are going. Not that you should be asked if you feel the product is “good enough” to be released but, if you were, this would be one gut-check you could use.

If the answers are “good enough”, can I stop testing? As long as the product has stopped changing, you could. But as long as things are changing, and new things to be known are being created, you need to keep asking questions and sense-checking the answers. If you declare victory too early and take your eye off the ball, you could fail to ask a question that would have uncovered a major issue. So, regardless of how good the answers are, keep testing until you have to stop.

If you run out of questions, that’s a different problem. If you’ve run out of testing time, it’s probably not a problem. However, if you do have time left, you need to figure out how to ask more questions.

All this is well and good, and gives you a sense of how things are going, but that’s only useful if you communicate your findings to the right people. You are going to have to translate “My answer suckiness rating is 4” into something that those people understand. Risks are good. Managers understand risks.

Organisational Problems

Let’s now dip below the X axis. Down here are more fundamental problems. Down here your questions aren’t answered. The questions themselves are questioned. You are questioned for asking questions.

Down here, you are no longer able to add value to the product because elements of your organisation are preventing you from doing so. Your place as a valuable member of the organisation is in question.

Any questions you ask are seen as a negative and are interpreted as being destructive. Asking more only makes things worse.

You need to flag these sorts of responses up ASAP to anyone up to and including HR. If it’s only certain people / oracles who give these responses, you can possibly work around it, but if the problem is more systemic, then it’s probably time to look for a new company.

Summary

Pay attention to the responses you get when you ask questions. If you don’t like them, or how you got them, ask yourself why. Ask more questions, and flag up any risks that you identify.

Things will always suck a bit, but if you call attention to the suckiness you can use it as a way to help the team focus on areas of risk.

AACID, a Test Mnemonic

I recently attended the local incarnation of the Software Testing Clinic (all glory to Dan, Mark, Del and Tracey) where the topic was Heuristics and Oracles. To my shame, a vast array of test-related heuristics and heuristic-related mnemonics were presented that I’d never heard of.

At home, chastened, I did some more digging and uncovered even more. One thing that struck me was that, while there were a few pages that listed the testing mnemonics and clarified the words each mnemonic should help you remember, I didn’t find much explanation of what the words should mean to you, or how they should help you do better testing.

[Kinda like the International Phonetic Alphabet; you’ll see IPA glyphs in pronunciation help when you look up words. The obvious problem is that if you don’t know which sounds correspond to things like “ʊ” or “ə”, then “haɪ poʊ kənˈdrɪ ə” is next to useless in explaining how to pronounce “hypochondria”. But I digress…]

[Heurist|Mnemonic].*

So this led me to throw together the [Heurist-o-matic | Heuristicator | Heuristonomicon | Mnemonicomatic] (haven’t decided yet, but I think ‘Mnemonicomatic’ is winning);

  1. to bring together all the test mnemonics on a pretty webpage,
  2. to make it easy for people stuck for ideas to have some thrown at them without any effort,
  3. Most Importantly, to expand upon what they actually mean.

Coding stuff aside – which didn’t really take all that long; I reused some old stuff – the bit that took longest was collating the information: trying to determine who each author was and, mostly, working out what each bit of each mnemonic was actually trying to convey.

And as I was doing this, and reading the names of the authors, I felt that I should come up with something to add to my list, so I could be cool like them 🙂 And then I remembered; I already had!

Apologies for the 80s flashbacks, buuut……..AACID!

Some time ago I was trying to summarise my / our approach to testing, and to distill it down so I could communicate it to others. As a bit of a logophile – someone who loves words – I wanted to use similar word forms – in this case all ending in -ate – as another mechanism to aid recollection, in addition to the mnemonic.

The original mnemonic was ACIAD, because that’s broadly the order in which the things that each letter represents occur, but who’s going to remember ACIAD? So I messed it about, and boom. AACID. In reality, you will likely do aspects of each of these notional “stages” in different orders, or even at the same time, so it’s not too much of a cheat.

I wasn’t quite sure how to categorise this mnemonic, which might not be a big deal, but all the other mnemonics have one and I don’t want mine to be picked on. So I think these are general testing principles to act as your touchstones, your centre, the principles to which you can return if things get frustrating and confusing.

1. Appreciate

The Appreciate stage is about learning as much as you can and building up your understanding of everything related to the project or product. Find and consume as many sources of any kind of information you can.

2. Communicate

The Communicate stage is about building up your support and information network. Talk to other roles and build up mutually beneficial relationships. If you uncover conflicting information, check your understanding and ask questions. Be constructive and support others.

3. Investigate

The Investigate stage is the exploratory, learning, testing phase where you are examining the product. Go slowly and pay attention. Take pictures. If you feel something, find out why. Look everywhere. Trust your instincts. Keep your mouth shut and your eyes open.

4. Assimilate

The Assimilate stage is about allowing what you learn to alter what and how you test. Don’t forget to apply what you have learned. Adapt your expectations, but don’t abandon your values. Add their distinctiveness to our own. Resistance is futile.

5. Deviate

The Deviate stage is about doing all the things “they” say aren’t important or will never happen. Nothing ever happened until the first time. Mess about and get it “wrong”. Try different stuff, weird stuff. Find the edge and go beyond it. Get distracted and follow your nose.

There will doubtless be other guiding testing principles which I’d be happy to add to the list.

So remember; when in doubt, AACID.