Generally our tasks are scored on the basis of test cases.
- We provide candidates with a task description and ask them to find a solution to the stated problem.
- They have an example test case (which is just an example and doesn't count towards their overall score), and upon submission, we use test cases of a similar nature to grade their solution.
- Their score is based on how many test cases their solution passes.
Every task contains at least 6 test cases, often more. For example, if a candidate's solution fails 4 out of 10 assessed test cases, the score for this task is 60%, because 6 of the 10 test cases were passed successfully.
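The proportional scoring above can be sketched as follows. This is an illustrative example, not our actual grading code, and the function name and input format are assumptions:

```python
def score_task(results):
    """Score a standard task as the percentage of passed test cases.

    `results` is a list of booleans, one per assessed test case
    (True = passed). Hypothetical sketch, not the real grading API.
    """
    if not results:
        raise ValueError("a task must have at least one test case")
    return 100 * sum(results) / len(results)

# A solution passing 6 of 10 assessed test cases scores 60%.
print(score_task([True] * 6 + [False] * 4))  # 60.0
```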
When reviewing a candidate report, you will see an analysis summary detailing which test cases the solution passed, and which returned a wrong value or timed out.
Tasks that are an exception to this rule:
The nature of QA tasks is different from the rest of the tasks in our library, so we apply a different principle. Instead of presenting candidates with a problem to solve, we provide them with an example of a page that works as it should, along with acceptance criteria for this page, and ask them to write the test cases themselves. Their test cases, which are assessed together as a suite, are required to cover all acceptance criteria.
If we were to score such tasks by our usual principle, candidates could earn a certain score simply by submitting a test suite that returns a FAIL result on every dataset containing incorrect information. The first test case that we score is therefore the most important one. It consists of a perfect page in a perfect environment with all the correct data. The candidate's test suite must pass this test case. If it does, we score it against the rest of the test cases, which include pages where some of the acceptance criteria are missing. If it doesn't, the task automatically receives a score of 0%.
In summary, every test case includes an actual page on which the test suite is executed. However, if a candidate's suite doesn't pass the first test case (the one with the perfect page), it isn't scored against any other test cases and receives a score of 0%.
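The QA gating rule described above can be sketched like this. Again, the function and its inputs are hypothetical illustrations, not our internal implementation:

```python
def score_qa_task(passes_perfect_page, other_results):
    """Score a QA task with the gating rule: fail the perfect-page
    test case and the score is 0%; pass it and the score is the
    pass rate over the remaining test cases.

    `passes_perfect_page` is a boolean for the first test case;
    `other_results` is a list of booleans for the rest.
    Hypothetical sketch, not the real grading code.
    """
    if not passes_perfect_page:
        return 0.0  # not scored against any other test cases
    return 100 * sum(other_results) / len(other_results)

# A suite that fails the perfect page scores 0% regardless of the rest.
print(score_qa_task(False, [True, True, True]))  # 0.0
# A suite that passes it is scored proportionally on the remainder.
print(score_qa_task(True, [True, True, False, False]))  # 50.0
```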
If you have any questions about how we score automated tasks, please contact us at firstname.lastname@example.org and we'll be happy to help.