
EXPERT TALK: Moving to test predictions

Posted by Tom van de Ven & Rik Marselis on Nov 5, 2020 2:19:04 PM

By Tom van de Ven, senior test consultant digital manufacturing, and Rik Marselis, principal quality consultant at Sogeti in the Netherlands

Moving to test predictions

Artificial Intelligence makes the difference

The digital age is all around us, and it directly influences test craftsmanship. The number of test activities rises rapidly with the number of digital products around us. Not only are there more products, but the number of combinations of digital products grows exponentially. A good product is more important than ever. Ten years ago, applications were mostly back-office focused. Nowadays, a new app or website has a direct effect on company results. For example, a new payment method for a bank needs to be tested thoroughly before introduction. With products such as medical devices (for example X-ray machines) or autonomous cars, testing can be a matter of life and death. In short, we want to be very sure of the quality of a product before it enters the market, now even more than before. Endless testing is not an option; it is directly constrained by time and money. Some 30% of IT managers now say that the costs of quality and testing have risen too high. A change is needed. We need to add something to traditional test execution. Test monitoring and test prediction extend the test craftsmanship with new mechanisms that cope with these challenges.

Traditional test execution can be described as follows. A new product is designed, and test cases are derived from the domain knowledge of that product (for example idea descriptions, manuals, drawings, models or user wishes). A test case contains at least the following elements (a minimal sketch in code follows the list):

  • a starting condition
  • steps to go through
  • an expected result
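
In an automated setting such a test case can be captured as a simple data structure. A minimal sketch in Python; the field names and the payment example are our own illustration, not a standard:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    """The three minimal elements of a test case, as listed above."""
    starting_condition: str                          # state the system must be in beforehand
    steps: List[str] = field(default_factory=list)   # actions to perform, in order
    expected_result: str = ""                        # the outcome that counts as a pass

# A hypothetical test for the bank payment example mentioned earlier.
payment_test = TestCase(
    starting_condition="user is logged in with a funded account",
    steps=[
        "open the payment screen",
        "select the new payment method",
        "confirm a transfer of EUR 10",
    ],
    expected_result="the transfer is booked and a confirmation is shown",
)
```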

Execution of this test case is only possible when the system under test is available. If you want to do a crash test with a car, you need the car (or a prototype) before the test case can be run. A fault found after executing the crash test comes quite late in the product life cycle. Aside from a very costly prototype that has been crashed, a new design now needs to be made and the entire process of creating a new prototype has to be repeated. This is very time consuming. In this day and age, time-to-market is crucial, and delaying market introduction because of these types of faults is very costly. It can mean missing a whole user group and may ruin the chances of a product line entering the market successfully (you could go as far as to say that this could mean bankruptcy for some companies).

In the digital age we collect a lot of data. Many activities are digital and data storage is relatively cheap. We can collect a significant amount of data from products that are used on a daily basis. Let's take the car as an example again. New cars are driving databases: the many sensors in a car can collect gigabytes of data on a daily basis. This data can be used to focus the test activities. We then know what to test and what not to test, which mitigates the problem of having to test every situation or scenario. Monitoring can be used to check the status of a product. The usage of an app on a mobile phone, for instance, can be monitored; in such a case monitoring can reveal the occurrence of a failure with a very specific combination of keystrokes. With test monitoring we get closer to the end user and end-user situations, and we gain insight into what to test and what not to test. On the other hand, we still see a fault only when it occurs. Ideally you want to go to the next step: prevent the fault from occurring in the first place.
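
As a minimal sketch of how such monitoring data could steer testing (the log format here is our own illustration): count the short event sequences that immediately precede logged failures, and turn the frequent ones into focused test cases.

```python
from collections import Counter

def risky_sequences(sessions, window=3):
    """Count the short event sequences that immediately precede a failure.

    `sessions` is a list of event lists, each ending in "ok" or "failure"
    (an invented log format). The most frequent sequences are candidates
    for new, focused test cases.
    """
    counts = Counter()
    for session in sessions:
        if session and session[-1] == "failure":
            counts[tuple(session[-window - 1:-1])] += 1
    return counts.most_common()

logs = [
    ["open", "type", "swipe", "type", "failure"],
    ["open", "swipe", "ok"],
    ["swipe", "type", "swipe", "type", "failure"],
]
# The ("type", "swipe", "type") pattern precedes both failures.
print(risky_sequences(logs))
```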

The real improvement lies in predicting quality. We can distinguish the prediction of possible defects, of risks, or of outright failure of a device. Before we delve deeper into the possibilities here, let us look at the word prediction itself. Weather predictions and football match predictions are heavily based on historical data. The company SciSports collects huge amounts of data from games and football players, and uses it to predict which team is likely to win a game and by how many goals. Such predictions do not take anomalies into account: a freak change in the surroundings can cause a completely different result. This is where artificial intelligence can make a difference. Smart algorithms can enrich the historical data with the subtle changes that we may not be able to see ourselves, greatly improving predictions based on history alone.

Adding monitoring and predictions to test execution really brings testing into the digital age. It uses all the possibilities that we have at hand. More importantly, it adds these elements because we need to cope with an exponential rise in data density, a near-infinite number of product combinations and environments, and an ever quicker time-to-market.

Applying the prediction mechanism in testing

With the right data it is possible to predict the set of test cases that is needed for a new version of a product. You can also use smart algorithms to determine the impact of a software change. A telecommunications company in the Netherlands uses this mechanism to determine the impact of a software change with respect to quality attributes such as safety and performance.
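
We do not know that company's actual solution, but the idea can be sketched as a classifier that maps features of a code change to the quality attribute most at risk. The features, labels and history below are purely illustrative.

```python
from sklearn.ensemble import RandomForestClassifier

# Invented history: each change is described by (lines changed, files
# touched, safety-critical module touched) and labelled with the quality
# attribute that regressed after the change shipped.
X_history = [
    [450, 12, 1],
    [ 20,  1, 0],
    [300,  8, 1],
    [ 15,  2, 0],
    [120,  5, 0],
]
y_history = ["safety", "none", "performance", "none", "performance"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_history, y_history)

# Predict which quality attribute a new change most likely affects,
# and steer the test effort towards it.
print(model.predict([[280, 9, 1]]))
```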

Quality predictions do not only give insight into the right test approach for a specific scenario. Eventually you want to predict how a product will act in the crazy world out there. Predictions are no absolute guarantee, though. To build a prediction that is as accurate as possible, it may help to build a "digital twin". Simulating a product during the many stages of the product life cycle gives insight into how the product acts in its environment. A digital twin can be a costly exercise, however, and is not a feasible approach in every situation.

The digital twin of a sensor-aided car can be used to check over 10 million situations in which the car has to react correctly. It is not only car manufacturers that can use this for product development, but also a country's vehicle authority, which can use it to predict whether autonomous vehicles will still behave correctly after an over-the-air update.

Predicting quality can be done in a variety of ways:

  • Analyzing the impact of changes using AI tools. Based on software changes, these tools give insight into the impact on quality attributes such as safety, performance and correctness. This can serve as the steering wheel for the development process and the corresponding test activities, thus predicting the next steps in the test process.
  • Analyzing product usage data using AI tools. With monitoring in place, a lot of data can be gathered for review. Smart algorithms can sift through the data and surface insights that we would have trouble finding ourselves. These are predictions about the future usage of a product and its likely responses, including its likely error situations.
  • Analyzing test data. Test activities produce a lot of test-related data: test scripts, test results from all the test runs, simulation data, and so on. The prediction mechanism can be used to go through all the test results and produce a smart dashboard that helps you predict future defects, future usage, or which test scripts need to be run next time (a minimal sketch of this last idea follows below).
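
As a minimal sketch of that last idea, assuming an invented test-run history: rank test scripts by their historical failure rate, a crude but useful predictor of where the next defects will surface and hence what to run first.

```python
import pandas as pd

# Invented test-run history: one row per executed script per run.
runs = pd.DataFrame({
    "script": ["login", "payment", "login", "payment", "search", "search"],
    "result": ["pass",  "fail",    "pass",  "fail",    "pass",   "fail"],
})

# Rank scripts by historical failure rate.
failure_rate = (
    runs.assign(failed=runs["result"].eq("fail"))
        .groupby("script")["failed"]
        .mean()
        .sort_values(ascending=False)
)
print(failure_rate)  # payment 1.0, search 0.5, login 0.0
```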

How to learn to predict

Let us take a look at the elements we need to predict quality in the digital age. Firstly, a set of historical data helps. With a new product this is difficult, but think of the possibilities of using data from older (and similar) products, or of equipping existing products with features that log data so you can start collecting (with the Internet of Things we can unlock virtually any value in the field). Do not forget that series of automated test runs can also provide data useful for predictions. Simply doing measurements (physical or digital) provides a source of data as well (maybe not as extensive as a year of logged product use, but valuable nonetheless).

The next step is to use this data with models, or to extract models from the data. A model really helps to structure the data and to filter out the relevant parts. It also serves as a good check on both the accuracy of the model and the validity of the data: data that does not fit the model (or vice versa) triggers an investigation into whether the model or the data is at fault.
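
As a minimal sketch of that check, assuming a simple linear model and invented sensor readings: fit the model, then flag the data points whose residuals are unusually large; each flagged point is a prompt to investigate model and data.

```python
import numpy as np

# Invented sensor readings that should follow a straight line.
x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([0.1, 1.9, 4.2, 5.8, 30.0, 10.1])  # one suspicious reading

# Fit the model, then flag points whose residual is far outside the norm.
slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (slope * x + intercept)
threshold = 2 * residuals.std()

for xi, ri in zip(x, residuals):
    if abs(ri) > threshold:
        print(f"x={xi}: residual {ri:.1f} -> check whether model or data is wrong")
```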

The final element for starting with quality predictions is knowledge of artificial intelligence. A great source of learning is an online environment such as Google TensorFlow. With this online Python environment you get hands-on examples of predictive algorithms such as linear regression. The great thing is that a quick trial can be done without setting up complex environments, and your first steps in building a neural network for recognizing cat pictures are but a day away.
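
For example, the kind of linear-regression exercise found in the TensorFlow tutorials boils down to something like the following minimal sketch (the toy data and training settings are our own illustration):

```python
import numpy as np
import tensorflow as tf

# Toy data following y = 2x - 1, with a little noise added.
x = np.linspace(-1.0, 1.0, 200).astype("float32").reshape(-1, 1)
y = (2.0 * x - 1.0 + np.random.normal(0.0, 0.05, size=x.shape)).astype("float32")

# A single neuron suffices for linear regression: it learns slope and intercept.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=100, verbose=0)

# Should print a value close to 2 * 0.5 - 1 = 0.
print(model.predict(np.array([[0.5]], dtype="float32"), verbose=0))
```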

Now we have data, models and the validity of both, together with knowledge of artificial intelligence algorithms: the ideal mix to start building your own predictive solutions in the world of testing and quality.

Build your own defect predictor

Recognizing pictures with a neural network may teach you something about neural networks, but it is not yet an application in the test domain. Let us take a defect database instead of pictures as input for the neural network. A defect database contains a lot of data: for example the severity of each defect, its probability of occurrence, its description, and links with other findings. There is a lot of information to be had here. A good first exercise for applying a neural network in a test environment is to build a defect predictor.

We choose one attribute of a defect to predict: its severity. Let's keep it simple and only classify a defect as "blocking" or "cosmetic". Starting from a defect database in which these values are correctly entered, a neural network can learn to classify a defect. You can also test how good the defect predictor is by offering a test set of defects to the algorithm. As with the learning set, this set contains defects whose classification is already known, but these defects are specifically not used for learning, only for testing. We therefore distinguish between the training set for the neural network and the test set. After training and testing we have our own defect predictor and we also know how good it is. And of course the predictor can get better as more training data is offered.
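
As a sketch of what this could look like in practice: the snippet below trains a small neural network (scikit-learn's MLPClassifier) on invented defect descriptions and measures it on a held-out test set. The data, labels and network size are purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Invented defect records: a description plus the severity entered by testers.
descriptions = [
    "app crashes on startup", "payment screen freezes completely",
    "typo in welcome message", "button colour slightly off",
    "data loss after saving a form", "logo misaligned on login page",
    "cannot log in at all", "font too small in the footer",
]
severity = ["blocking", "blocking", "cosmetic", "cosmetic",
            "blocking", "cosmetic", "blocking", "cosmetic"]

# Keep part of the data aside: the test set is not used for learning,
# only to measure how good the predictor is.
X_train, X_test, y_train, y_test = train_test_split(
    descriptions, severity, test_size=0.25, random_state=0)

predictor = make_pipeline(
    TfidfVectorizer(),  # turn free-text descriptions into numeric features
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
predictor.fit(X_train, y_train)

print("accuracy on unseen defects:", predictor.score(X_test, y_test))
print(predictor.predict(["screen freezes when paying"]))
```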

The neural network can learn much more. It can learn to recognize code changes in areas where the risk of errors is high, or learn from test cases and test data that have already been used. It could also gradually learn at which times defects occur more often, or what the effect of many complex code changes is, so that you can anticipate them. The defect predictor is a nice starting point for using AI in the test domain and letting it grow further. Always keep an eye on how well the neural network has been trained, using test data: without knowing how good your predictions are, the predictions themselves are not worth much.

The figure gives a schematic overview of building, training and testing the defect predictor.

Summary

The design of a neural network is an important first step towards so-called predictive quality. The first predictions will usually apply to the short term and provide answers to questions about what will happen within seconds or minutes. The more knowledge we build, the further ahead we can look. That way you can really fix mistakes before they occur at all, which will save a lot of the time and money that is desperately needed to bring large quantities of digital products to the market.

About the Authors

Tom van de Ven is a senior test consultant digital manufacturing at Sogeti in the Netherlands. Testing in a high-tech environment gives the bonus of working with physical products rather than with software only. Tom's specialties include digital twins, AI, adding smartness, testing embedded software, test management, test consultancy, automotive, quick tech testing, and IoT.

Rik Marselis is a principal quality consultant at Sogeti in the Netherlands. He is a well-appreciated presenter, trainer, author, consultant and coach who has supported many organizations, teams and people in improving their testing practice by providing useful knowledge, tools and checklists, practical support, and in-depth discussions.

Tom and Rik are the authors of the book Testing in the digital age: AI makes the difference.

References

  • World Quality Report 2018-19, retrieved from https://www.capgemini.com/service/world-quality-report-2018-19/
  • SciSports, science supporting football through sports data, retrieved from https://www.scisports.com
  • Van de Ven, T., Marselis, R., & Shaukat, H. (2018). Testing in the digital age: AI makes the difference. Vianen, The Netherlands: Kleine Uil, Uitgeverij.
  • PreScan, computer-aided driving digital twin, retrieved from https://tass.plm.automation.siemens.com/prescan-overview
  • Learn and use machine learning with Google TensorFlow, retrieved from https://www.tensorflow.org/tutorials/keras/
  • Van de Ven, T., Bloem, J., & Duniau, J. P. (2016). IoTMap: Testing in an IoT environment. Vianen, The Netherlands: Kleine Uil, Uitgeverij.
