Test Case Design for AI-Based Tests

August 15, 2023

Learn test case design tips and techniques, including test design skills for AI-based tests and best practices for writing test cases.

What is Test Case Design?

A test case is a structured, sequential list of actions designed to verify a specific feature; test case design is the practice of producing these cases. At the heart of a test case is a sequence of steps describing the actions to be performed, the test data to be used, and the expected response to each action.

You should write tests to cover the business, functional, and technical requirements. For adequate test coverage, refer to the requirements artifacts, whether they’re written as user stories or technical design documents. The test case template and the level of detail required will vary depending on the organization, the type of software delivery project, and the test management tool used. In this guide, we’ll walk you through the basics of writing test cases and how to write test cases for AI-based software testing tools.

The Objective of Writing Test Cases

  • Tests should assess the specific functionality of a software application.
  • Tests should document a sequence of steps to be executed, which can be referenced when a bug is found in the application.
  • Tests should identify issues in user experience and gaps in design at an early stage.

Standard Test Case Format

The test case template that you use in your organization may vary slightly, but here are the basic components of a standard test case (a code sketch of these fields follows the list):

  • Test Case ID - usually auto-generated by the test management system
  • Description of the test scenario
  • Prerequisites
  • Test steps (including expected results, description, and test data)
  • Actual results for each test step
  • Overall status - either pass or fail
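
To make these components concrete, here is a minimal sketch of a test case represented as Python dataclasses. The class and field names are illustrative; they don’t correspond to any particular test management tool.

```python
# A sketch of the standard test case fields as Python dataclasses.
# Names are illustrative and don't match any particular test management tool.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class TestStep:
    description: str            # the action to perform
    test_data: str              # data used in this step
    expected_result: str
    actual_result: Optional[str] = None  # filled in during execution


@dataclass
class TestCase:
    test_case_id: str           # usually auto-generated by the test management system
    description: str            # the test scenario
    prerequisites: list[str]
    steps: list[TestStep] = field(default_factory=list)

    @property
    def passed(self) -> bool:
        # Overall status: pass only if every step produced its expected result.
        return all(s.actual_result == s.expected_result for s in self.steps)
```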

Software Testing Techniques

Exploratory vs Scripted Testing

There are two main approaches to test case design: exploratory and scripted. In a scripted approach, test cases are designed in advance; in an exploratory approach, they are designed on the fly. Exploratory testing lets testers bring creativity to their work, while scripted tests are easier to replicate and are good candidates for automation. The idea is that automating scripted tests frees up testers to perform exploratory testing manually.

Example of the Test Steps from a Scripted Test (an automated version of the first several steps is sketched after the list):

  • Test without entering a username or password.
  • Test with a username only.
  • Test with a password only.
  • Test with the right username and a wrong password.
  • Test with the right password and a wrong username.
  • Test with the right username and the right password.
  • Cancel after entering a username and password.
  • Enter a username and password that exceed the set character limit.
  • Try copy/paste in the password text box.
  • After a successful sign-out, use the browser’s “Back” button and check whether it returns you to the signed-in page.
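
Here is a hedged sketch of how the first six steps above might be automated with pytest. The login() helper and the VALID_USER/VALID_PASS credentials are hypothetical stand-ins for your application’s real login flow.

```python
# A sketch of automating the first six scripted steps above with pytest.
# login() and the credentials are hypothetical stand-ins for the real app.
import pytest

VALID_USER, VALID_PASS = "alice", "s3cret"


def login(username: str, password: str) -> bool:
    # Placeholder for the system under test.
    return username == VALID_USER and password == VALID_PASS


@pytest.mark.parametrize(
    "username, password, expected",
    [
        ("", "", False),                  # no username, no password
        (VALID_USER, "", False),          # username only
        ("", VALID_PASS, False),          # password only
        (VALID_USER, "wrong", False),     # right username, wrong password
        ("bob", VALID_PASS, False),       # wrong username, right password
        (VALID_USER, VALID_PASS, True),   # right username, right password
    ],
)
def test_login(username, password, expected):
    assert login(username, password) is expected
```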

Black-box vs White-box Testing

The distinction between black-box and white-box testing is based on how much of the application’s internal details are considered when designing test cases. Black-box testing, also known as specification-based or responsibility-based testing, disregards how the software was architected and designs test cases based on the system’s specified external behavior, or how it is perceived by the user. White-box testing, also known as glass-box, structural, or implementation-based testing, designs test cases based on what we know about the system’s implementation, i.e., the code.

Knowing some important information about the implementation can actually help in black-box testing; this kind of testing is sometimes called gray-box testing. For example, if the implementation of a sort operation uses one algorithm for lists shorter than 1000 items and another for longer lists, we can add more meaningful test cases to verify the correctness of both algorithms.
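
As a sketch, a gray-box test for this sort example could pin input lengths to the algorithm boundary. Here my_sort() is a placeholder for the system under test, Python’s built-in sorted() serves as the oracle, and the 1000-item threshold comes from the example above.

```python
# Gray-box boundary testing: input lengths chosen around the (assumed)
# 1000-item switch between sorting algorithms.
import random

import pytest


def my_sort(items: list[int]) -> list[int]:
    # Placeholder: imagine this picks one algorithm for lists shorter than
    # 1000 items and a different one for longer lists.
    return sorted(items)


@pytest.mark.parametrize("length", [0, 1, 999, 1000, 1001, 5000])
def test_sort_around_algorithm_boundary(length):
    data = [random.randint(-10_000, 10_000) for _ in range(length)]
    assert my_sort(data) == sorted(data)
```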

Use Case Testing

Use case testing is straightforward in principle: we base our test cases on the use cases. It is used for system testing (i.e., testing the system as a whole). For example, the main success scenario can be one test case, while each variation (due to extensions) can form another. However, note that use cases do not specify the exact data entered into the system; instead, a use case might say something like “user enters his personal data into the system”. The tester therefore has to choose data by considering equivalence partitions and boundary values, and combining these often results in one use case producing many test cases.
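
For instance, a vague step like “user enters his personal data” might be narrowed down with equivalence partitions and boundary values. In this sketch, the 18-120 valid age range is a hypothetical requirement chosen purely for illustration.

```python
# Turning a vague use-case step into concrete test data via equivalence
# partitions and boundary values. The 18-120 range is a hypothetical rule.
import pytest


def is_valid_age(age: int) -> bool:
    # Placeholder validation logic in the system under test.
    return 18 <= age <= 120


@pytest.mark.parametrize(
    "age, expected",
    [
        (17, False),   # just below the valid partition (boundary value)
        (18, True),    # lower boundary of the valid partition
        (50, True),    # representative value inside the valid partition
        (120, True),   # upper boundary of the valid partition
        (121, False),  # just above the valid partition (boundary value)
    ],
)
def test_age_partitions(age, expected):
    assert is_valid_age(age) is expected
```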

Since exhaustive testing is impossible, your test plan needs to be efficient and focus on higher-priority use cases. One way to do this is to script your high-priority test cases while using exploratory testing for lower-priority areas of concern that emerge during testing.

How to Write Test Cases

Tips for Writing Tests

Identification and classification:

  • Each test case should have an ID and title so that it’s easily referenced in the future
  • Indicate the system, subsystem or module being tested so that it can be categorized for reporting and analysis purposes.

Instructions:

  • Tell the tester exactly what to do, step by step.
  • Testers should have all the information needed to run the test and should only need to refer to the test case documentation provided.

Expected result:

  • State the expected results so the tester knows what to compare against.
  • Report a test failure if the expected results are not produced, clearly comparing expected versus actual results.

Cleanup (when needed):

  • Tell the tester how to restart or recover in the event of a failure.

Best Practices for Writing Test Cases

  1. Align with requirements:
    First, understand the requirements, and do not assume any requirements on your own while writing test cases. If a requirement is unclear, misleading, or incomplete, raise the question with your business analyst or client rather than with the developers. Before designing the test cases, identify all features of the application and ensure your test cases cover all functionality mentioned in the requirements document. Use a traceability matrix to make sure that no requirements are left untested (a sketch of such a check follows this list).
  2. Avoid redundancies:
    Plan your scope of testing in advance to avoid duplicate test cases. Generic test cases should be collected and combined in a shared test suite, which minimizes the effort of writing standard, common test cases each time and allows them to be reused over the project life cycle.
  3. Prioritize tests:
    Assign a priority to each test case based on the business impact of the feature, component, or product. When planning test executions, run high-priority test cases first, then medium, and lastly low-priority test cases.
  4. Group tests properly for reporting:
    End users and clients are always interested in reports, so group test cases properly (by phase, module, sprint, or user story if you follow an Agile methodology). This lets stakeholders gauge the quality of the product from test case execution results, such as the number of test cases passed and failed.
  5. Write clearly and succinctly:
    Your test cases should be simple and easy to understand. Avoid writing essay-like explanations; keep to the point. Be mindful of the input data your tests use, since your test cases should validate the full range of input data. Also check how the system behaves under both normal and abnormal conditions.
  6. Be practical:
    Concentrate on real-life scenarios that end users are likely to face. To make sure that defects are verified, log bugs appropriately and provide evidence when they’re fixed. Remember, a test case may or may not have a defect linked to it, but every defect should have a test case linked to it.
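
As referenced in the first practice above, here is a minimal sketch of a traceability check, assuming each test case is tagged with the requirement IDs it covers. The IDs and the mapping are illustrative.

```python
# A minimal traceability-matrix check: flag any requirement that no test
# case claims to cover. Requirement and test case IDs are illustrative.
requirements = {"REQ-1", "REQ-2", "REQ-3"}

test_coverage = {
    "TC-001": {"REQ-1"},
    "TC-002": {"REQ-1", "REQ-2"},
}

covered = set().union(*test_coverage.values())
untested = requirements - covered
if untested:
    print(f"Requirements with no test case: {sorted(untested)}")  # REQ-3
```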

Test Case Design for AI-Based Tests

According to the World Quality Report 2021-2022, organizations have an increasing demand for test automation techniques that use AI/ML. To implement AI-based testing, test strategy and test design skills (32%) and understanding of the implications of AI on business processes (36%) are cited among the areas where skills are most lacking.

Extent to which artificial intelligence changes the skills needed from QA and test professionals - World Quality Report 2021-2022

One of the most important skills in designing AI-based tests is knowing how and when to incorporate verifications.

Importance of Verifications 

A verification is a step inside a test that contains a boolean expression checking whether the application is working as expected; it is the piece of logic that determines whether a bug exists. A test without verifications is merely a set of steps, so the presence of a verification is what allows the test to be considered passed or failed. For AI-based tests that keep themselves up to date dynamically, verifications are even more crucial, since non-verification steps can change. The presence of verifications ensures that the test stays true to its original intent while machine learning autonomously heals the other steps.
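
As a minimal sketch, the difference between a bare step and a verification looks like this in code. Here submit_order() and order_confirmed() are hypothetical stand-ins for steps against your application.

```python
# A step alone exercises the app; the verification (the assert) is what
# lets the test pass or fail. Both helpers are hypothetical placeholders.
def submit_order() -> None:
    ...  # non-verification step: performs an action but checks nothing


def order_confirmed() -> bool:
    return True  # placeholder query against the application's state


def test_order_submission():
    submit_order()            # without the line below, this is just a script
    assert order_confirmed()  # the verification: a boolean check of behavior
```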

When to add verifications to your test

Make sure to add a verification whenever you run into these scenarios (a combined sketch follows the list):

  • Page Load: Does a step involve redirection to a new page? If so, add a verification for it.
  • Element Update: Does a test step change an element on the page? For example, after logging into an app, a welcome banner is displayed; add a verification to ensure the banner is displayed.
  • Application Logic: Does the application have internal logic that can be triggered? For example, the application validates whether a required field is filled out properly; add a verification to make sure that validation isn’t triggered by any other action.
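
Here is a hedged, Selenium-flavored sketch that exercises all three verification points above. The BASE_URL, element locators, and credentials are hypothetical; adapt them to your application.

```python
# A Selenium-flavored sketch covering the three verification points above.
# BASE_URL, locators, and credentials are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://example.test"  # hypothetical app under test


def test_login_verifications():
    driver = webdriver.Chrome()
    try:
        driver.get(f"{BASE_URL}/login")
        driver.find_element(By.ID, "username").send_keys("alice")
        driver.find_element(By.ID, "password").send_keys("s3cret")
        driver.find_element(By.ID, "submit").click()

        # Page Load: verify the redirect to a new page happened.
        assert driver.current_url.endswith("/dashboard")

        # Element Update: verify the welcome banner is displayed after login.
        assert driver.find_element(By.ID, "welcome-banner").is_displayed()

        # Application Logic: verify the required-field validation was not
        # triggered by the successful submission.
        assert not driver.find_elements(By.CLASS_NAME, "field-error")
    finally:
        driver.quit()
```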

About the author

Tamas Cser

FOUNDER & CTO

Tamas Cser is the founder, CTO, and Chief Evangelist at Functionize, the leading provider of AI-powered test automation. With over 15 years in the software industry, he launched Functionize after experiencing the painstaking bottlenecks with software testing at his previous consulting company. Tamas is a former child violin prodigy turned AI-powered software testing guru. He grew up under a communist regime in Hungary, and after studying the violin at the University for Music and Performing Arts in Vienna, toured the world playing violin. He was bitten by the tech bug and decided to shift his talents to coding, eventually starting a consulting company before Functionize. Tamas and his family live in the San Francisco Bay Area.
