Automation workflows - Specification-first

The specification-first automation workflow

Specification-first is an approach where test case design happens before the automated test code is implemented. In this workflow, teams first outline and design their test cases in TestRail, giving them an opportunity to write test cases in multiple formats and to review, categorize, prioritize, and select them for automation. This approach typically suits teams who are automating test cases they already have documented in TestRail, work in a context that requires more thorough test case documentation, or want to use TestRail to help prioritize which test cases to automate and when.


If you are writing test cases directly in your code base and don't have any of those test cases in TestRail, you might want to read more about the code-first automation approach.

Using the TestRail CLI, you can use the --case-matcher option with the value name or property, depending on how you add the IDs of your TestRail test cases in your test automation framework.


Pros:

  •   Test case mappings are kept when structural code changes occur
  •   Reduces chances of duplicating test cases through automatic creation
  •   Enables thorough test case design and planning
  •   Suits teams who require good test documentation

Cons:

  •   Test cases without a case ID in the code will not be created, and their results will not be submitted
  •   Mapping requires manual work which can be laborious and error-prone

Mapping test cases

There are two ways you can add your TestRail test case ID in your test automation code:

  • If you use the --case-matcher "name" option, then add the case ID from your test case in the name of the test. This will ensure that the case ID appears the JUnit report as part of the test case name. You can add the ID following multiple patterns, such as: [C123] test_case_1, test_case_1 [C123], C123 test_case_1, C123_test_case_1. test_case_1 C123, test_case_1_C123
    <testcase classname="tests.LoginTests" name="C123_test_case_1" time="650">
    </testcase>
  • If you use the --case-matcher "property" option, then you have to set the JUnit property named test_id to be attached to your test cases in the JUnit report
    <testcase classname="tests.LoginTests" name="test_case_1" time="650">
    <properties>
    <property name="test_id" value="C123"/>
    </properties>
    </testcase>
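
For example, if your test automation framework is pytest (as suggested by the JUnit samples on this page), both mapping styles could look roughly like the sketch below. The test names, the case IDs C123 and C124, and the assertions are placeholders; record_property is pytest's built-in fixture for attaching properties to a test case in the JUnit report.

# Minimal pytest sketch (assumed framework); test names and case IDs C123/C124 are placeholders.
# Generate the JUnit report with: pytest --junitxml=results.xml


class TestLoginTests:

    # --case-matcher "name": embed the case ID in the test name so it ends up in the
    # "name" attribute of the <testcase> element (here following the test_case_1_C123 pattern).
    def test_valid_login_C123(self):
        assert True  # replace with real login assertions

    # --case-matcher "property": use pytest's record_property fixture to write a
    # <property name="test_id" value="C124"/> element into the <testcase> element.
    def test_invalid_login(self, record_property):
        record_property("test_id", "C124")
        assert True  # replace with real login assertions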

The test case ID you should use is the one displayed on the Test Cases page, prefixed with C.


Using the TestRail CLI to upload test automation results

The TestRail CLI is designed to be simple and efficient. Using the specification-first approach, once the CLI has been installed and your TestRail instance is properly configured, a JUnit results file can be passed through the command line to quickly create a run and add test results for the matching test cases in your test automation project.

Below, you can see a sample JUnit XML with the test case IDs in the test name. In this case, we will use the --case-matcher "name" option to match the test cases in TestRail, but if you want to use the --case-matcher "property" option, the remaining steps are the same. You can execute a simple command such as the one just below it to send the test results to TestRail.


We recommend using the -n option to skip automatic test creation if you are using a specification-first approach. Nevertheless, you can use the -y option if you want tests that do not have case IDs in your code to be created, but be aware that if you do not map the IDs in your code afterwards, those test cases will be duplicated the next time the CLI runs.

<testsuites name="test suites root">
  <testsuite failures="0" errors="0" skipped="1" tests="1" time="0.05" name="tests.LoginTests">
    <properties>
      <property name="setting1" value="True"/>
    </properties>
    <testcase classname="tests.LoginTests" name="C2647_test_case_1" time="159">
      <skipped type="pytest.skip" message="Please skip">
        skipped by user
      </skipped>
    </testcase>
    <testcase classname="tests.LoginTests" name="C2645_test_case_2" time="650">
    </testcase>
    <testcase classname="tests.LoginTests" name="C2648_test_case_3" time="159">
      <failure type="pytest.failure" message="Fail due to...">
        failed due to…
      </failure>
    </testcase>
  </testsuite>
</testsuites>
$ trcli -n \
>    -h https://INSTANCE-NAME.testrail.io \
>    --project "TRCLI Test" \
>    --username user@domain.com \
>    --password passwordORapikey \
>    parse_junit \
> --case-matcher "name" \ > --title "Automated Tests Run" \ > -f results.xml
Parsing JUnit report.
Processed 3 test cases in 1 sections.
Checking project. Done.
Creating test run. Run created: https://INSTANCE-NAME.testrail.io/index.php?/runs/view/123
Adding results: 3/3, Done.
Submitted 3 test results in 5.5 secs.

Once the process is complete, you can go to the Test Runs and Results page where you will see a new run titled Automated Tests Run within the TRCLI Test project. Also notice that the TestRail CLI outputs a direct link to the Test Run in TestRail.

Test Run

By opening the test run, you can see the results for each test case. You can then drill further into a failed test case and check the error message that was imported directly from the JUnit report. This gives you a quick overview of what went wrong during the test.

Test results

