Custom test cases can be used to accomplish almost anything that you can't do with the other test case types. Inside these test cases, we allow you to upload additional files in a zip (optional) and write a BASH script that controls whether the test fails or succeeds.

Scoring - Pass/Fail

The only thing you need to supply for this test is a file named 'OUTPUT' containing a single value (0-100, or true/false) when the script concludes.

If you write a value (0-100) to the OUTPUT file, it will be compared against the success threshold you set for the test case. If the output value is greater than the success threshold, the test case passes with full credit based on the test case's weight/points; otherwise it fails with 0 credit.

Here are some examples:

A basic script that always returns true:

echo true > OUTPUT

A test that checks whether test.java exists:

if [ -e test.java ] ; then    # -e tests for existence (preferred over the deprecated -a)
    echo true > OUTPUT
else
    echo false > OUTPUT
fi

You can even pull in projects written in a different language and compile those, using any language that Mimir Classroom supports.

g++ test.c > DEBUG 2>&1     # capture compiler messages (stdout and stderr) in DEBUG
./a.out > OUT1              # run the compiled program
echo "Hello World" > OUT2   # echo appends the trailing newline itself
if diff OUT1 OUT2 >/dev/null ; then
    echo true > OUTPUT
else
    echo false > OUTPUT
fi
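
If you'd rather let the threshold decide pass/fail, write a numeric score instead of true/false. Here is a sketch that assumes the submission was already compiled to ./a.out as above, and that you uploaded input1.txt..input4.txt and expected1.txt..expected4.txt in the test's zip:

passed=0
total=4
for i in 1 2 3 4 ; do
    ./a.out < "input$i.txt" > "got$i.txt"
    if diff "got$i.txt" "expected$i.txt" >/dev/null ; then
        passed=$((passed + 1))
    fi
done
# Write a 0-100 score; it is compared against the test case's threshold
echo $((passed * 100 / total)) > OUTPUT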

Scoring - Partial Credit

You can also allow partial credit to be earned on your custom tests by toggling on the ALLOW PARTIAL CREDIT option. This removes the option for setting a threshold; instead, the value you write to OUTPUT is treated as the percentage of the test case's weight to award to the student.

With this option turned on, the instructor's solution code will pass so long as it does not error out, encounter a segmentation fault, or time out.

Students will receive the portion of the test case's weight associated with their score as a percentage out of 100.
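
For example, here is a minimal sketch of a milestone-based partial credit script (student.c is a hypothetical file name; adjust it to your project):

score=0
# 50% of the weight for compiling cleanly
if g++ student.c -o student > DEBUG 2>&1 ; then
    score=$((score + 50))
    # the remaining 50% for producing the expected output
    if [ "$(./student)" = "Hello World" ] ; then
        score=$((score + 50))
    fi
fi
echo $score > OUTPUT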

Magic Files

There are a few special files that you can use in custom tests to help grade and debug output. These files must always be written at the root of the run directory (if you cd into another directory, make sure you write back up to these files, as shown in the sketch after the DEBUG description, or they won't be read correctly!). Here are the current files you can write to:

OUTPUT
------
Format: true/false/{0..100}
Use:
Grades the actual submission.
You can write true or false for pass/fail,
or a number from 0 to 100 indicating a score
that will be compared against the threshold set on the test case.
The strings true and false are case insensitive.


DEBUG
-----
Format: text
Use:
Show custom debug output that you can see in the debug menu of test cases.
This info is also available to the students
(generated from the run of their submission)
if you write to it.

Notice above how we redirected both the output and error messages from the
compile command 'g++ test.c' into DEBUG.
This makes it so students see compiler/run errors
if their submission fails to execute properly.

Remember to add 2>&1 at the end of the command
if you want to capture stderr (fd=2)!
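
As mentioned above, if your script changes directories, write back up to the magic files at the run root. A minimal sketch (the project/ subdirectory and main.cpp are hypothetical):

cd project
g++ main.cpp > ../DEBUG 2>&1   # one level down, so write up to the run root
if ./a.out | grep -q "expected text" ; then
    echo true > ../OUTPUT
else
    echo false > ../OUTPUT
fi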

Nuances

There are a couple of other cool (although potentially un-intuitive) things you can do with custom tests:

Python Versions

Custom tests actually have access to all four Mimir-supported versions of Python; you just need to make sure you're using the correct binary in your bash script:

python         -> python 2.7.17
python2        -> python 2.7.17
python3        -> python 3.8.2
python3.6      -> python 3.6.9
python3.7      -> python 3.7.7
python3.8      -> python 3.8.2

This also applies to pip if you're using it!
pip
pip2
pip3
pip3.6
pip3.7
pip3.8
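
For example, a sketch grading a Python 3.8 submission (solution.py and input.txt are hypothetical names you'd match to your project):

python3.8 solution.py < input.txt > OUT1 2> DEBUG   # stderr goes to DEBUG for students
echo "42" > OUT2                                    # assumed expected output
if diff OUT1 OUT2 >/dev/null ; then
    echo true > OUTPUT
else
    echo false > OUTPUT
fi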

Magic Environment Variables

There are a few Mimir-related environment variables available for use in your custom tests:

__M_TEST        -> a unique identifier for the test case
__M_PROJECT     -> a unique identifier for the project
__M_SUBMISSION  -> a unique identifier for the submission
__M_USER        -> a unique identifier for the user
__M_EMAIL       -> the user's email
__M_SUBTIME     -> the time of the submission, in seconds

While you're in the test case create/edit view,
these variables will appear slightly different
(but for student submissions they will be filled in correctly!):

__M_SUBMISSION  -> 'INSTRUCTOR_SUBMISSION'
__M_USER        -> 'INSTRUCTOR_USER'
__M_EMAIL       -> 'INSTRUCTOR_EMAIL'
__M_SUBTIME     -> the time that the project was created
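
These are ordinary environment variables, so you can use them anywhere in your script. For example, a sketch that stamps the debug output:

echo "grading test $__M_TEST for $__M_EMAIL" >> DEBUG
echo "submission $__M_SUBMISSION received at $__M_SUBTIME" >> DEBUG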

Helper Functions
There are also a few helper functions that serve as shorthand for grading and writing debug text:

success      -> sets the score to 100
fail         -> sets the score to 0
score        -> sets the score to the first argument passed to it
debug        -> sends the given args to the DEBUG file

Example:

result=$(echo "2+2" | bc)
if [ "$result" -eq 4 ]; then
  debug Correct! 2+2 is 4!
  success
else
  debug How did you get here?
  fail
fi

The above example will print 'Correct! 2+2 is 4!' to the DEBUG file, and the test will pass with a score of 100.

Automatic Conversions

Another thing to be aware of is our automatic conversion of files in certain cases. These conversions are done to prevent environment-specific issues, and to reduce the cognitive load of having to worry about cross-platform standards for encodings and line endings.

These conversions include:

  • File encodings (most other encodings -> UTF-8)

  • Windows-to-Unix line endings (CRLF -> LF, '\r\n' -> '\n')

This all happens in the background by default for all files submitted to Mimir Classroom, but it is functionality you can disable on custom test cases.
To disable it, include the following magic flags on the first line of your custom test:

#!DISABLE_UTF8_CONVERSION #!DISABLE_LINE_ENDING_CONVERSION 

You can use either option, or both together - the only stipulation is that they must appear on line 1. Keep in mind that if you use these options, you must ensure that your students submit files with the correct encodings and line endings - certain languages will have issues handling files with mismatched encodings or line ending types.
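
For example, a custom test that needs to verify the student's original CRLF line endings might start like this (submission.txt is a hypothetical file name):

#!DISABLE_LINE_ENDING_CONVERSION
# check that the submitted file still contains carriage returns
if grep -q $'\r' submission.txt ; then
    echo true > OUTPUT
else
    echo false > OUTPUT
fi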
