TSQA 2020

Posted by Ryan Himmelwright on Wed, Apr 22, 2020
Tags dev, testing, events
TSQA 2020 - Durham Convention Center, Durham NC

Several weeks ago I attended TSQA 2020, a conference presented every two years by the Triangle Software Quality Association (TSQA). Despite being hosted by my local software testing group, the speakers and attendees came from all over the country. While only a single-day conference, it was packed with solid advice and ideas. Here are a few.

Just to clarify: TSQA occurred in February, right before COVID-19 really started spreading in the US. I’m just very late in writing this post.


Getting There

TSQA was held at the Durham Convention Center, in the middle of downtown Durham (NC, USA). Being so close to our office, I managed to convince (or remind) several of my co-workers to register last minute. On the morning of the event, I drove the one mile to the office parking garage, then hustled coat-less through the cold morning wind to the convention center.

As I walked inside, I immediately saw my manager at the entrance handing out badges, and knew I was in the right place (he was a TSQA volunteer). After saying hello and checking in, I made my way to the main ballroom to grab some food. After a while, my co-workers started to trickle in, so we found a table near the front and gathered for the opening statements.

Keynote

The Jetsons

The keynote, presented by Angie Jones, was an entertaining look at the history and future of technology. She took technologies shown in The Jetsons and compared them to how close (or far) we have come to realizing them. She then used this as an example of how we should be planning to test future technology, because like it or not, it’s coming. She concluded with some examples of what testing in the future could look like.

Talks

For the rest of the day, I attended several talks, interspersed with a lunch and a snack break. I mainly focused on talks about automation, but also went to a few about developing test cases and UI testing tools. Similar to my All Things Open 2019 post, instead of narrating each talk I went to, I picked out a few lessons I learned and will share them below.

After the conference ended, my co-workers and I made our way back to the office to share experiences and debate our thoughts over a drink. It’s always great to hash out ideas with others after a conference, while they are still fresh. Also… it’s fun :).

Lessons Learned/Strengthened

Now to summarize a few of the many lessons I picked up at TSQA 2020. I’ve heard many of these suggestions before, but the speakers presented them so well that I really want to ensure I start implementing them at work. Let’s get started.

No failing tests

It is all too easy to let a backlog of failing tests build up. This may be due to a test being out of date, a low-priority issue, or worst of all… just a flaky test. Regardless of the reason, failing tests should be dealt with immediately, for several reasons.

Leaving failing tests to run continuously causes failure fatigue. This normalizes the failures and causes a team to ignore other failing tests in the future. Basically, it decreases confidence in the test suite, and thus in the QE team as a whole.

Failing tests meme
This is supposed to be a joke, but it's actually the point I'm sharing. If tests are failing, disable or get rid of them, *AFTER* filing an issue!

So, if a project has continuously failing tests, try to fix them as quickly as possible. If there isn’t currently time to fix a test (we’ve all been there), that’s fine, but file an issue as a reminder to fix it later. Then disable it. This makes a test suite more meaningful, because each failure means something important.
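As a minimal sketch of what that could look like with pytest (the helper function and issue number here are made up for illustration):

```python
import pytest

# Hypothetical application code, standing in for whatever is actually broken.
def generate_report(month: str):
    raise NotImplementedError("pretend this is the current bug")

# Skipping keeps the suite green while preserving the test for later.
# The reason string points anyone reading the test output at the issue.
@pytest.mark.skip(reason="Report generation broken; see issue #1234")
def test_generate_report_returns_data():
    assert generate_report("2020-02") is not None
```

The skipped test still shows up in the run summary, so it never silently disappears the way a deleted test would.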

One last statement here: No flaky tests either. If we can’t depend on them, they aren’t worth having. Either figure out how to fix them so they are consistent, or remove them.

Write tests to pass for known issues

Now, let’s expand on that suggestion of disabling failing tests, and write tests that pass when a known defect occurs. While it sounds counter-intuitive at first, this idea makes more sense if you accept the fact that tests don’t actually find defects. Rather, automated tests depict the state of a system, and fail when something has changed. A passed or failed test simply acts as a data point. When a quality engineer investigates the test results, they determine if there is a defect based on the data.

For example, let’s assume I have a test that asserts an API call returns a 200 status. However, due to a known issue, it is currently returning 404s. Since I know that in its current state the system returns a 404 for that API call, I can change (or add) a test to assert that it is, indeed, returning 404s. This technique removes a failing test (as suggested in the previous section), while allowing us to keep that data point. The test will pass while the call continues to return 404s, but will fail and notify us of any related state changes.

Those changes could be from the developers merging a bug fix… or a new defect popping up, resulting in the API call now returning 500 errors. Either way, we know that something has changed again, and we should investigate. By comparison, an always-failing or fully removed test would not have demanded our attention so easily.
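Here is a sketch of that flipped assertion, using the requests library (the endpoint is a placeholder; the issue number comes from the running example):

```python
import requests

BASE_URL = "https://api.example.com"  # placeholder for the real service


def test_get_widget_returns_404_while_issue_23483_is_open():
    # Known defect: this call currently 404s (tracked in issue #23483).
    # Asserting the *current* behavior keeps the suite green, but the
    # test fails the moment anything changes: the fix landing (200),
    # or a new defect appearing (e.g. 500).
    response = requests.get(f"{BASE_URL}/widgets/42")
    assert response.status_code == 404
```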

Better Documentation

So… if we have our tests set up to pass for known defects, let’s make one thing clear… we need to make sure we have amazing documentation. This is imperative.

Documentation Meme

Any test designed to pass on a known defect should contain a few pieces of information in its documentation:

  • Why the test is there: If a test is checking for undesired behavior, it is a good idea to quickly document something along the lines of ‘This test is checking for behavior we don’t want. It is just here until an issue is fixed’.

  • The issue number: If a test exists until a known issue is resolved… please include the issue/bug number in the documentation. This will make it much easier for others (or you!) to find more information, or to check if the issue is already closed once the test starts failing (due to a fix).

  • What we expect from the test: Try to document why the test is currently passing, and what we expect when it starts failing again.

Continuing with our example from above:

“This test has been altered until issue #23483 is resolved. Currently, this api call returns a 404 status. If the issue is resolved, it should return a 200 status, at which point this test needs to be updated.”

Lastly, remember to periodically clean up the documentation. When filing an issue, state where the failing test can be found if it is being updated to match the current behavior. This way, when the issue is closed, it will be easier to go update the test. Temporary test states can also be marked with a tag, such as TODO, to make them easier to search for. Honestly, whatever works for your team. The goal is to create enough context that someone else can know what is going on and fix it without having to find you.
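Putting the pieces together, here is a sketch of what that might look like as a test docstring, folding the note above into the same hypothetical test from earlier:

```python
import requests

BASE_URL = "https://api.example.com"  # placeholder for the real service


def test_get_widget_status_code():
    """TODO: This test has been altered until issue #23483 is resolved.

    Currently, this API call returns a 404 status. If the issue is
    resolved, it should return a 200 status, at which point this test
    needs to be updated.
    """
    response = requests.get(f"{BASE_URL}/widgets/42")
    assert response.status_code == 404
```

The TODO tag makes these temporary tests easy to grep for, and the issue number gives the next person a direct path to the context they need.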

Conclusion

Overall, TSQA 2020 turned out to be a wonderful conference, and a comfortable size: not over-crowded, but also not so small that it felt awkward. It had a diverse mix of people from all over the country, and the highest percentage of women I have ever seen at a tech conference. I had a great time and definitely plan to attend the next one… even if I have to travel more than a mile next time.
