What an amazing time we are living in. I find it hugely exciting and am thrilled by the opportunities that are opening up for us in software development. I've just seen Gartner's hype cycle, where GenAI is sitting right at its peak - i.e. just before it slides into the trough of disillusionment (or, as I like to call it, the valley of tears). But what comes next is the interesting part: where will AI be used sustainably and productively - beyond SEO optimization and text generators?
We would do well not to rush into action for its own sake - but, on the other hand, not to bury our heads in the sand when it comes to AI. It's a balancing act! I have also noticed that my personal opinion on AI keeps changing. Nevertheless - or perhaps precisely because of that - I would like to share my current state of ignorance here ;-)
Software testing with AI
As in every industry, AI is a big topic in testing right now. There is of course a lot of potential here: test case creation, test data generation, test execution, error analysis - a whole playing field for AI tools. However, it feels like every tool now has "AI" tacked onto its name - even if not much has actually changed under the hood. That's why we should take a close look at what's really behind it.
And we should always ask ourselves: what problem do we actually want to solve with AI? It's tempting to be dazzled by the latest technology, but the key to success lies in using it in a targeted way to overcome specific challenges in our projects. When I look at my customers, it's less about even more efficient test execution and more about things like test data management and the classic: how do I bridge the gap between the requirements written by the business and the technology that has to implement and test them?
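To make that a little more tangible, here is a minimal sketch of one possible way to narrow that gap: letting a language model translate a plain-language requirement into Gherkin scenarios that business people and testers can both read. It assumes the OpenAI Python SDK; the requirement text, model name and prompt are purely illustrative, not a tool recommendation.

```python
# Sketch: turn a plain-language requirement into Gherkin scenarios.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in OPENAI_API_KEY.
# The requirement text and model name are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()

requirement = (
    "A customer can reset their password via e-mail. "
    "The reset link expires after 24 hours."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder - any capable chat model will do
    messages=[
        {"role": "system",
         "content": "You are a test analyst. Answer only with Gherkin scenarios."},
        {"role": "user",
         "content": f"Derive test scenarios for this requirement:\n{requirement}"},
    ],
)

print(response.choices[0].message.content)  # review before use - the model may miss cases
```

The point is not that the generated scenarios are perfect - they rarely are - but that both sides can read and challenge them in the same language.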
But how do you get started with AI in testing? Quite simply: start. Experiment. Try things out. This is pioneering work; much of it is not yet finished, not yet mature, not yet fully thought through. The first websites weren't built with a CMS either - but with Notepad.exe.
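If you want a second hands-on experiment in that spirit (again just a sketch, under the same assumptions as above), try letting a model propose boundary-value test data and run it straight against a small function. Everything here - the validate_age() rule, the prompt, the model name - is made up for illustration.

```python
# Sketch: a "just try it" experiment - ask a model for boundary-value test data as JSON.
# Same assumptions as above: OpenAI Python SDK installed, API key in OPENAI_API_KEY.
# validate_age() is a hypothetical function under test, not from a real project.
import json
from openai import OpenAI

client = OpenAI()

def validate_age(age: int) -> bool:
    """Hypothetical rule under test: ages 18 to 120 are accepted."""
    return 18 <= age <= 120

answer = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": ("Return only a JSON list of objects with the fields 'age' and "
                    "'expected' (true/false), covering the boundary values of a rule "
                    "that accepts ages from 18 to 120."),
    }],
)

cases = json.loads(answer.choices[0].message.content)  # real responses may need cleanup
for case in cases:
    assert validate_age(case["age"]) == case["expected"], case
print(f"{len(cases)} generated test cases checked")
```

Whether the generated cases are any good is exactly what the experiment teaches you: you still have to review them - and that lesson alone is worth the half hour.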
When we talk about testing AI itself, we are thrown back to the question we tend to avoid in software development: what does quality actually mean to us? When I join new projects and companies, that is always my first question: what does quality mean to you? Oh. Blank looks. Um. A reference to ISO 25010. And specifically? Silence.
Traditional quality criteria such as functionality need to be rethought for AI systems. Other criteria - accuracy, learning ability, adaptability, data quality and statistical measures - become much more important here.
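To make "statistical measures" a bit more concrete: instead of asserting one exact output, an acceptance test for an AI component can assert an accuracy threshold over a frozen evaluation set. The sketch below is deliberately toy-sized; the data, the placeholder model and the 0.9 threshold are assumptions for illustration - in a real project they would come from a quality requirement.

```python
# Sketch: testing an AI component against a statistical criterion instead of exact outputs.
# Requires pytest and scikit-learn; the "model", data and threshold are placeholders.
from sklearn.metrics import accuracy_score

# Frozen, versioned evaluation set (hypothetical).
EVAL_INPUTS = [2, 7, 11, 15, 23, 42, 56, 61, 77, 90]
EVAL_LABELS = ["even", "odd", "odd", "odd", "odd", "even", "even", "odd", "odd", "even"]

def predict(value: int) -> str:
    """Placeholder for the model under test - here just a trivial rule."""
    return "even" if value % 2 == 0 else "odd"

def test_model_meets_accuracy_threshold():
    predictions = [predict(x) for x in EVAL_INPUTS]
    accuracy = accuracy_score(EVAL_LABELS, predictions)
    # Statistical acceptance criterion: individual misclassifications are tolerated,
    # but overall accuracy on the frozen set must not fall below the agreed bar.
    assert accuracy >= 0.9, f"accuracy {accuracy:.2f} is below the agreed threshold"
```

The shift in mindset is the point: we no longer ask "is this one answer correct?" but "is the system good enough, measured over a defined sample, against an agreed bar?"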
As testers, test managers and quality engineers, we need to rethink in both directions: when using AI for testing, and when testing AI itself. We have to let go of things we have grown fond of - and we get to learn new things. That is not always comfortable, but it is necessary.
And there is help available. The well-known names are a good place to start using AI: ChatGPT, Midjourney, Gemini, Copilot and so on. Just give them a try :-) And for testers? The ISTQB Certified Tester AI Testing training and certification is a good option here. Incidentally, this certification has been around since 2021, and its predecessors even longer. After all, AI hasn't only existed since ChatGPT. And somehow it's just software ;-)