AI Test Generation

TMS ONE uses AI to automatically generate test cases from your acceptance criteria. It is especially useful for Product Owners and Business Analysts who want to turn user stories and requirements into ready-to-use test cases without writing them manually.

How it works

You provide a description of the feature or user story, and the AI generates:
  • A Gherkin style scenario (Given / When / Then format) for human readability
  • Executable step definition code in your chosen programming language (JavaScript, Python, Java, or C#)
Both outputs can be reviewed, edited, and saved directly into your project.

Step-by-step guide

1. Navigate to AI Test Generation

From the left sidebar, click AI Test Generation under your project.

2. Fill in the generation form

Test Case Title (text, required)
A name for the test case you want to generate.
Example: “User login with valid credentials”

Test Description & Acceptance Criteria (text, required)
Describe the feature or scenario in plain language. The more detail you provide, the better the AI output will be. Example:
Users should be able to log in using a valid email and password.
On successful login they should be redirected to the dashboard.
On failure, an error message should be displayed.

Module (select, required)
Select which module this test case belongs to. This gives the AI context about the area of the application being tested.

Priority (select, required)
Set the priority level for the generated test case.

Tags (multiselect, optional)
Add one or more tags to categorise the test case (e.g. regression, smoke, ui).
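Taken together, the form fields above amount to a small structured request. The sketch below shows one plausible shape for that request; the key names, values, and validation are illustrative assumptions, not TMS ONE's actual API.

```python
# Hypothetical shape of a generation request built from the form fields above.
# Key names are illustrative only; TMS ONE's real field names may differ.
request = {
    "title": "User login with valid credentials",        # text, required
    "description": (                                     # text, required
        "Users should be able to log in using a valid email and password. "
        "On successful login they should be redirected to the dashboard. "
        "On failure, an error message should be displayed."
    ),
    "module": "Login",                 # select, required: context for the AI
    "priority": "High",                # select, required
    "tags": ["regression", "smoke"],   # multiselect, optional
}

# The required fields, per the form description above.
REQUIRED = ["title", "description", "module", "priority"]

def missing_fields(req: dict) -> list:
    """Return the names of required fields that are empty or absent."""
    return [f for f in REQUIRED if not req.get(f)]

print(missing_fields(request))  # a complete form has no missing fields
```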

3. Generate the test case

Click Generate Test Case. The AI will process your input and return results in the right panel.
If the first result isn’t quite right, click Regenerate to try again with the same inputs.

Understanding the output

Once generated, you’ll see two tabs on the right:

Gherkin Preview tab

Displays the test case in Gherkin format: a plain-English, structured format using Given / When / Then syntax.

Feature: User Login

  Scenario: Successful login with valid credentials
    Given the user is on the login page
    When the user enters a valid email and password
    And clicks the Login button
    Then the user should be redirected to the dashboard

This format is readable by both technical and non-technical stakeholders, and can be used directly in BDD frameworks like Cucumber or SpecFlow.

Test Code tab

Displays executable step definition code in your chosen language. You can switch languages at any time — the code will regenerate automatically while keeping the Gherkin preview unchanged.
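To make the Gherkin-to-code relationship concrete, here is a minimal sketch of what step definition code for the login scenario above could look like. This is not TMS ONE's actual output: real generated code targets a BDD framework in your chosen language, and the `login` helper here is a hypothetical stub standing in for the application under test.

```python
# Illustrative sketch only: a stubbed "application" plus a test that mirrors
# the Given / When / Then steps of the login scenario. Real generated code
# would use a BDD framework (e.g. Cucumber, SpecFlow, pytest-bdd) instead.

def login(email: str, password: str) -> str:
    """Hypothetical stub for the app under test: returns the next page shown."""
    if email == "user@example.com" and password == "s3cret":
        return "dashboard"
    return "error"

def test_successful_login_with_valid_credentials():
    # Given the user is on the login page (setup is implicit in this stub)
    # When the user enters a valid email and password and clicks Login
    result = login("user@example.com", "s3cret")
    # Then the user should be redirected to the dashboard
    assert result == "dashboard"

test_successful_login_with_valid_credentials()
```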

Editing the output

Before saving, you can refine the generated test case.

To edit the Gherkin preview:
  1. Click the Edit button in the top right of the results panel
  2. Modify the text directly in the editor
  3. Click Save Edit to apply your changes
To change the language:
  • Use the language selector in the Test Code tab — the code regenerates automatically
To start over:
  • Click Reset on the left form panel to clear everything and start fresh
If you navigate away without saving, your generated test case will be lost. Make sure to save before leaving the page.

Saving to your project

When you’re happy with the output, click Save Test Case. The test case will be saved to the selected module in your project and will appear in your test cases list. You can also:
  • Copy the current tab content to clipboard
  • Download the test code as a file (.js, .py, .java, or .cs)
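The download option pairs the selected language with one of the file extensions listed above. A minimal sketch of that mapping follows; the filename scheme (lowercased title with underscores) is an assumption for illustration, not TMS ONE's documented behaviour.

```python
# Language-to-extension mapping taken from the docs (.js, .py, .java, .cs).
EXTENSIONS = {
    "JavaScript": ".js",
    "Python": ".py",
    "Java": ".java",
    "C#": ".cs",
}

def filename_for(title: str, language: str) -> str:
    """Build a download filename from the test case title.
    The underscore naming scheme is hypothetical."""
    stem = "_".join(title.lower().split())
    return stem + EXTENSIONS[language]

print(filename_for("User login with valid credentials", "Python"))
# user_login_with_valid_credentials.py
```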

Tips for better AI output

Vague inputs produce vague outputs. Include specific details about what the user does, what the system should do, and what the expected result is. Think of it as a user story with clear acceptance criteria.
Mention both happy path and failure scenarios in your description. For example: “valid credentials should redirect to dashboard; invalid credentials should show an error message”. The AI will generate separate scenarios for each.
The module provides context to the AI. Selecting the correct module (e.g. Login, Checkout, Dashboard) helps the AI generate more relevant test cases.
AI outputs can vary. If the first result isn’t quite right, click Regenerate to get a fresh attempt with the same inputs. You can also tweak your acceptance criteria and regenerate.

Permissions

Access to AI test generation is role-based:
  • Generate test cases: users with Generate permission
  • Regenerate: users with Regenerate permission
  • Edit the Gherkin preview: Admins and users with Edit permission
  • Save to project: users with Save permission
Contact your Admin if you don’t have access to this feature.
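The role-based gate above can be read as a simple action-to-permission lookup. The snippet below just encodes that table for illustration; TMS ONE enforces these checks server-side, and the permission names here mirror the docs rather than any real API.

```python
# Encoding of the permissions table above. Illustrative only: the real
# permission model lives on the server, not in client code like this.
PERMISSIONS = {
    "generate": {"Generate"},
    "regenerate": {"Regenerate"},
    "edit_preview": {"Admin", "Edit"},
    "save": {"Save"},
}

def can(user_perms: set, action: str) -> bool:
    """True if the user holds any permission that allows the action."""
    return bool(PERMISSIONS[action] & user_perms)

print(can({"Generate", "Save"}, "edit_preview"))  # False
print(can({"Admin"}, "edit_preview"))             # True
```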

What’s next?

Running Tests

Execute your generated test cases in a test run

Writing Test Cases Manually

Prefer to write test cases yourself? Here’s how