IO test cases are the most popular type of test in Mimir Classroom. They provide a standardized way to automatically grade student submissions, as long as those submissions can read standard input, process that data, and print to standard out. For that reason, this test case type is supported in every language that Mimir Classroom supports.
Getting Started
Basics
You'll simply need to specify your main file to run, provide your input, and click save. Mimir Classroom will set up the rest of the test case, provided your perfect code compiles and runs without error.
Make sure that your code accepts input on standard in and prints its results to standard out. What your functions return or pass back is not graded; only what is printed to standard out is.
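As a sketch of the shape an IO-graded program takes (the summing task here is just an illustration, not a Mimir-provided example), the work happens in a function, but the graded part is the print to standard out:

```python
import io

def solve(stream):
    """Read whitespace-separated integers from the stream and return their sum.
    The return value by itself is NOT graded; only printed output is."""
    return sum(int(tok) for tok in stream.read().split())

# In a real submission you would read standard in directly:
#     import sys
#     print(solve(sys.stdin))
# Here the input is simulated so the sketch is self-contained.
print(solve(io.StringIO("3 1 4 1 5")))  # prints 14
```

The key point is the final `print`: without it, the program computes the right answer but produces no output for the autograder to compare.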
Advanced
There are more advanced options for niche use cases, including the ability to include additional files for the test case, setting command line arguments for compilation or the run itself, specifying Makefiles, and even templated output matching.
Automatically Generate Output
You can automatically generate the output for each IO test case that you create. Simply set up a project and go to the test cases step. Create a new IO test case, fill out the required fields, and make sure you have "Automatically Generate Output" switched on. When you save that test case, Mimir Classroom will take your input, run it against the perfect code you have uploaded, store the correct output, and mark the test case as working and ready to go!
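Conceptually, the generation step amounts to running your perfect code with the test input on standard in and capturing whatever it prints to standard out. A rough sketch of that idea (the command and doubling program below are hypothetical stand-ins, not Mimir internals):

```python
import subprocess
import sys

def generate_expected_output(command, test_input):
    """Run a solution with the test input on stdin and capture
    whatever it prints to stdout as the expected output."""
    result = subprocess.run(
        command,
        input=test_input,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# Hypothetical perfect solution: doubles the integer it reads.
expected = generate_expected_output(
    [sys.executable, "-c", "print(2 * int(input()))"],
    "21\n",
)
print(expected, end="")  # prints 42
```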
Makefile
If you want to use a Makefile in your test cases, you only need to make sure that you have one in your perfect code zip, or that you upload one in additional files. At that point, toggle on the Use Makefile option, and the main file select box will change to a text box called executable name. That is where you will specify the executable that will be created by your Makefile, and ultimately what will be tested.
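As an illustration (the file and target names here are hypothetical), a minimal Makefile that builds an executable called `solution`, which is what you would then enter in the executable name box:

```make
# Builds an executable named "solution" from main.c.
# "solution" is what you would enter as the executable name.
CC = gcc
CFLAGS = -Wall

solution: main.c
	$(CC) $(CFLAGS) -o solution main.c
```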
Additional Options
Weight/Points
You can change the points (weight) that the test case will award if successful, as well as the success threshold. The success threshold is a percentage that determines how close the student's output must be to the perfect output in order to receive points. The default success threshold is 100, meaning the student's submission must output exactly what your perfect code did in order to receive the number of points specified. As of right now, there is no partial credit option for IO test cases. We believe that test cases should be atomic, so each one tests a small subset that is either pass or fail.
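The pass/fail decision can be pictured as comparing a similarity percentage against the threshold. The sketch below uses a generic diff-style ratio purely as an illustration; the exact metric Mimir uses internally is not documented here and is an assumption:

```python
import difflib

def passes(student_output, expected_output, threshold=100):
    """Award points only if the similarity percentage meets the
    threshold. The ratio metric here is illustrative only; it is
    not necessarily the one Mimir uses internally."""
    ratio = difflib.SequenceMatcher(
        None, student_output, expected_output
    ).ratio() * 100
    return ratio >= threshold

print(passes("42\n", "42\n"))       # prints True
print(passes("42\n", "43\n"))       # prints False
```

With the default threshold of 100, any deviation at all fails the test, which matches the exact-match behavior described above.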
Command Line Arguments (run-time and compile-time)
We also allow you to optionally specify command line arguments for both the run step and the compile step (if you're using a compiled language). These arguments are passed directly to the program being tested or to the compiler, respectively.
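For example, a run-time argument like `--verbose` (a hypothetical flag, not a Mimir one) would be delivered straight to the tested program, where it can be read alongside standard in:

```python
import io

def run(args, stream):
    """Sum the integers on standard in. The run-time command line
    arguments arrive in args exactly as entered in the test case;
    --verbose is a made-up flag for this sketch."""
    total = sum(int(tok) for tok in stream.read().split())
    if "--verbose" in args:
        return "sum = {}".format(total)
    return str(total)

# In a real submission: print(run(sys.argv[1:], sys.stdin))
print(run(["--verbose"], io.StringIO("1 2 3")))  # prints sum = 6
print(run([], io.StringIO("1 2 3")))             # prints 6
```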
Other Flags
There are a few other autograder flags available inside of Run Command Line Arguments:
--only-first-line checks your expected output against only the first line of the student output.
--only-last-line checks your expected output against only the last line of the student output.
--echo-input (Python only!) makes the output of CLI programs look as if someone had interacted with them via the REPL, or interactively. It prints the prompts from input() calls, and also prints what is passed in via stdin, to give the appearance of having been run via the CLI.
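The effect of --echo-input can be approximated as follows: without it, only the program's prints appear in the output; with it, the input() prompts and the stdin lines are echoed too. The helper below imitates that behavior locally (the exact formatting Mimir produces may differ):

```python
import io
import sys

def echoing_input(prompt=""):
    """Approximate --echo-input: print the prompt and echo the line
    read from standard in, so the captured output reads like a live
    interactive session. Mimir's exact formatting may differ."""
    line = sys.stdin.readline().rstrip("\n")
    print(prompt + line)
    return line

# A small interactive-style program, with its stdin simulated.
sys.stdin = io.StringIO("Ada\n")
name = echoing_input("Name? ")  # prints Name? Ada
print("Hello, " + name)         # prints Hello, Ada
```

Without the echo, a student piping in the same stdin would see only "Hello, Ada"; the prompt text from input() would never appear in the captured output.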
These flags are used only by the autograder and are removed from the set of arguments passed to the program run itself.
Note: Currently the pretty diff functionality does not recognize --only-first-line and --only-last-line, so the diffs may be confusing to students if these options are left on: the first-line and last-line checks will still show the other lines as entirely missing.
IMPORTANT!
Visibility
The toggle options at the bottom of the IO test case dialog box directly affect how your test case operates. If you toggle on any of the 'Show' options, your test case input is at risk of extraction when students run their code. For this reason, we encourage instructors to make multiple test cases: some completely hidden (meaning all 'Show' options are toggled off), and some others that give feedback to students.
The setup of IO test cases can be difficult to understand at first. We recommend checking out some of our examples, and as always, if you have any questions at all, make sure to contact us through the chat button in the bottom right corner on the platform.