I gave a proprietary tool a fair chance - and was not disappointed
The views expressed in this post are my own and do not represent the position of my employer. This post is not a commercial endorsement of QASE or any of its products.
I very seldom write about proprietary solutions and software. Not because I have any problem with them, but simply because I usually prefer open source solutions, and during my daily work and hobby projects I use almost exclusively open source software.
But this time I made an exception. My attention was drawn to the QASE tool by my colleagues and I was asked to explore it.
QASE is a cloud based test management tool that was developed by Nikita Fedorov back in 2017. In his own words, he “got frustrated with a testing tool and decided to fix it.” I do not know what testing tool upset him that badly, but the fix is actually pretty good.
It is true that in the open source scene there are no well known, broadly used test plan, test suite and test case management tools. At least I am not aware of anything like that.
My own personal explanation is that in most community driven, open source development projects the testing strategy is very different from that of the commercial software industry. Most open source projects do not have master test plans, documented test suites or dedicated quality engineering teams. Community driven open source projects usually crowdsource the quality engineering.
They typically stage their releases into production, development and experimental channels, and quality assurance is done organically by the community members who are adventurous enough to run the experimental or development release of the software. So by the time the software reaches the production stage, most bugs have been caught and fixed in the development branches. This model can be observed with SUSE Linux Enterprise Server, where openSUSE Leap and Tumbleweed serve as the development and experimental stages. Those who know the Debian release model can see the same pattern with the stable, testing and unstable (Sid) stages. In such models, projects do not really need master test plans, documented test suites and test cases.
Naturally, depending on the style and taste of the developers, most open source projects have good unit test or even functional test coverage. But they seldom bother too much about documenting and formalizing it. They tend to follow the old and, in my opinion, rather annoying discipline that a well written piece of software needs no documentation - the code is the documentation. In that same spirit, a well designed test takes care of its own documentation in the code.
But in the professional, corporate culture driven software development industry this is very different. When you cannot outsource quality engineering to the community, you need dedicated quality engineers who develop and maintain test plans, test suites and test cases, and who keep a proper paper trail of what has been tested and which tests passed or failed. And here I totally understand Nikita’s frustration, as I know from experience that most companies and QE teams make do with a testing workflow cobbled together from spreadsheet editors and issue tracker applications. It is not uncommon to see heavily customized Jenkins instances and homegrown web applications alongside Google Sheets, Confluence pages and Excel files serving this need.
QASE truly does address this problem well.
In that sense QASE is an extremely simple application. And this is neither praise nor criticism. Anybody who considers themselves a software developer knows the genre of inventory management software. I do not think there is a developer out there who has never written such an application, even a simple one. With my friends we used to call it “workwear inventory management software.” I have done my fair share of that genre. I was young, I needed the money.
So basically QASE is an application like that. One can create test plans, test suites and test cases in a nested tree structure - literally just like a directory structure of an inventory. And because this structure and model is basically an industry commonplace, most users find it extremely easy to use. The project - test suites - test cases hierarchy, with suites and cases organized into test plans, is intuitive and simple. QASE also provides really nice looking dashboards where QE engineers can create all kinds of graphs, gauges and pie charts to keep upper management happy and satisfy the most demanding customers with genuinely good looking information visualization.
But the true strength of QASE is not the extremely easy to use UI or the very professional interaction and UX design. What impressed me is the API it provides. And here I must give credit to Nikita. I could sense that he is a coder and developer who envisioned a test management tool that does not drive quality engineers crazy. Because let’s be honest, no engineer enjoys clicking through web apps and feeding test results into web forms, and every engineer passionately dislikes administrative work. QASE is really friendly to test developers and engineers. Basically every relevant task that can be done through the web UI can be done from bash, Python or Perl scripts just as easily.
For demonstration purposes I created a quick and dirty Python script that shows how simple it is to work with the QASE API: https://github.com/bzoltan1/qase-manage
The API in practice
The script requires only the third-party requests library and a QASE API token. Authentication is handled by a single HTTP header, which means every call is just a GET or POST to a clean REST endpoint. The four examples below give a good feel for how little code is actually needed.
1. Listing test suites
The most basic operation - see what structure already exists in a project. Under the hood this is a single GET request to /v1/suite/{project}.
import requests

BASE_URL = "https://api.qase.io/v1"
TOKEN = "your_api_token"
PROJECT = "XXX"

response = requests.get(
    f"{BASE_URL}/suite/{PROJECT}",
    headers={"Token": TOKEN, "Accept": "application/json"},
    params={"limit": 100},
)
response.raise_for_status()

for suite in response.json()["result"]["entities"]:
    parent = suite.get("parent_id") or "-"
    print(f"{suite['id']:<8} {str(parent):<12} {suite['title']}")
Output:
5        -            Tests
6        5            Unit Tests
7        5            Integration Tests
2. Creating a test suite
Creating a nested suite hierarchy is a matter of two POST calls. The first creates the parent, the second creates a child by passing parent_id.
import requests

BASE_URL = "https://api.qase.io/v1"
HEADERS = {"Token": "your_api_token", "Content-Type": "application/json"}
PROJECT = "XXX"

# Create a top-level suite
r = requests.post(f"{BASE_URL}/suite/{PROJECT}",
                  json={"title": "Systemd MCP Tests"},
                  headers=HEADERS)
r.raise_for_status()
parent_id = r.json()["result"]["id"]
print(f"Created parent suite with ID {parent_id}")

# Create a child suite nested under it
r = requests.post(f"{BASE_URL}/suite/{PROJECT}",
                  json={"title": "Unit Tests", "parent_id": parent_id},
                  headers=HEADERS)
r.raise_for_status()
child_id = r.json()["result"]["id"]
print(f"Created child suite with ID {child_id}")
3. Creating a test case with full metadata
Test cases accept a rich set of optional fields - priority, severity, preconditions, expected result and step-by-step instructions. All of them are just keys in a JSON payload.
import requests

BASE_URL = "https://api.qase.io/v1"
HEADERS = {"Token": "your_api_token", "Content-Type": "application/json"}
PROJECT = "XXX"
SUITE_ID = 6  # ID returned from the previous step

payload = {
    "title": "Unit test suite passes",
    "suite_id": SUITE_ID,
    "priority": 3,  # 1=low 2=medium 3=high
    "severity": 2,  # 1=blocker 2=critical 3=major ...
    "description": "Run unit-test.bats and verify all tests pass",
    "preconditions": "systemd-mcp is installed and bats is available",
    "expected_result": "All bats tests exit with code 0",
    "steps": [
        {
            "action": "Run bats unit-test.bats",
            "expected_result": "TAP output shows all tests passing",
        }
    ],
}

r = requests.post(f"{BASE_URL}/case/{PROJECT}", json=payload, headers=HEADERS)
r.raise_for_status()
case_id = r.json()["result"]["id"]
print(f"Created test case with ID {case_id}")
4. Reporting a test result
Reporting a result involves three API calls that naturally chain together: create a run, post the result, complete the run. The entire thing fits in a small helper function that you can drop into any CI script.
import requests

BASE_URL = "https://api.qase.io/v1"
HEADERS = {"Token": "your_api_token", "Content-Type": "application/json"}
PROJECT = "XXX"

def report_result(case_id, status, comment=None):
    # 1. Create a run scoped to this single case
    r = requests.post(f"{BASE_URL}/run/{PROJECT}",
                      json={"title": f"Result for case #{case_id}", "cases": [case_id]},
                      headers=HEADERS)
    r.raise_for_status()
    run_id = r.json()["result"]["id"]

    # 2. Post the result
    payload = {"case_id": case_id, "status": status}
    if comment:
        payload["comment"] = comment
    r = requests.post(f"{BASE_URL}/result/{PROJECT}/{run_id}",
                      json=payload, headers=HEADERS)
    r.raise_for_status()

    # 3. Close the run
    requests.post(f"{BASE_URL}/run/{PROJECT}/{run_id}/complete",
                  json={}, headers=HEADERS).raise_for_status()

    print(f"Case #{case_id} marked as {status.upper()}, run #{run_id} completed")

# Usage
report_result(case_id=42, status="passed")
report_result(case_id=43, status="failed", comment="Service did not start within timeout")
As these examples show, the QASE API is genuinely pleasant to work with. There is no complicated authentication flow, no deeply nested request schema and no surprise in the response format. If you can write a for loop and call requests.post, you can automate your entire test reporting pipeline. Which, when you think about it, is exactly the point.
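To make that for loop concrete, here is a minimal sketch of the reporting step in a CI script. The case IDs and exit codes are hypothetical, and the actual network call (the report_result helper from example 4) is left commented out so the skeleton stands on its own:

```python
# Hypothetical mapping of QASE case IDs to the exit codes of their test commands.
exit_codes = {42: 0, 43: 1}

reported = []
for case_id, code in exit_codes.items():
    # Translate a process exit code into a QASE result status.
    status = "passed" if code == 0 else "failed"
    # In a real pipeline this line would call the helper from example 4:
    #   report_result(case_id, status)
    reported.append((case_id, status))
    print(f"case #{case_id}: {status}")
```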
Would this make sense for teams with existing test automation?
Someone will ask, so let me address it directly.
Most engineering teams that take testing seriously already have a test automation platform in place. openSUSE QE runs openQA. Many other teams run their test suites through Jenkins pipelines, GitHub Actions workflows or GitLab CI. The test definitions already exist - written as YAML, Groovy, shell scripts or framework-specific files - and the results already land somewhere, be it a Jenkins build page, a GitHub Actions summary or a purpose built dashboard.
Wiring any of these into QASE is technically straightforward. The existing test structure maps naturally onto QASE suites and cases, and seeding the catalog from what is already there is a one-time scripting job. The live result reporting is three API calls per test run, as shown above, and can be dropped into a post-build step or a workflow action without touching the test logic itself. Jenkins has a post-build hook, GitHub Actions has always() steps, and most other CI systems have equivalent mechanisms. The integration work is not the hard part.
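As a sketch of that one-time seeding job, the mapping step can be done before any network traffic: walk the list of test files, turn directories into suite payloads and files into case payloads. The tests/ layout below is hypothetical, and the actual POST calls (as in examples 2 and 3) are only described in comments:

```python
from pathlib import PurePosixPath

def seed_payloads(test_paths):
    """Map a flat list of test file paths to QASE-style suite and case payloads."""
    suites = {}  # directory path -> suite payload, with its parent directory recorded
    cases = []   # case payloads, each tagged with the directory of its suite
    for p in map(PurePosixPath, test_paths):
        # Walk the ancestor directories outermost-first, skipping the "." root.
        for d in reversed(list(p.parents)[:-1]):
            if str(d) not in suites:
                parent = str(d.parent) if str(d.parent) != "." else None
                suites[str(d)] = {"title": d.name, "parent_dir": parent}
        cases.append({"title": p.stem, "suite_dir": str(p.parent)})
    return suites, cases

suites, cases = seed_payloads([
    "tests/unit/test_parser.py",
    "tests/unit/test_config.py",
    "tests/integration/test_api.py",
])

# A seeding script would now POST each suite to /v1/suite/{project}
# (resolving parent_dir to the parent_id returned by the API) and each
# case to /v1/case/{project} with the matching suite_id.
for d, payload in suites.items():
    print(d, payload)
print(len(cases), "cases")
```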
The real question is not feasibility but value. For engineers, the native interfaces of these platforms already provide everything that matters - build history, per-test pass/fail trends, log output, failure diffs and flakiness tracking. Nobody who works in a Jenkins instance every day is going to switch to a separate web app to see the same results rendered as a pie chart. The signal does not improve and a synchronization problem is introduced where none existed before.
Where QASE earns its place is at the boundary between the engineering team and the rest of the organization. Customers, auditors, product managers and management layers often need a structured, readable view of what is tested and what is not, without having to understand how Jenkins or GitHub Actions or openQA represents that information internally. QASE is well suited to producing exactly that artifact. Keeping it current is cheap once the initial integration is in place.
So the honest answer is: the integration is easy, the automation is trivial, but the benefit depends entirely on who needs to read the output. If the audience is engineers, they are probably already looking at the right tool. If the audience is anyone else, QASE earns its place.