Custom Python Test Cases are just as powerful and flexible as Custom Test Cases, but with the added benefit of writing in Python instead of bash. You can use this test case type to write more advanced tests in Python 3.7 and access Mimir Classroom metadata through our mimir module!

Scoring

Grading a submission is as simple as calling mimir.set_score() with a bool, float, or int:

  • If you use a bool, it will be interpreted as pass/fail (True/False).

  • If you set a score between 0 and 100 (float or int), the value will be compared against your success threshold for the test case. If the value is greater than the success threshold, the test case will pass with full credit based on its weight/points; otherwise it will fail with 0 credit.

If you have Allow Partial Credit enabled, the value will be used to give partial credit on the Test Case.

Here are some examples:

mimir.set_score(True)  # pass
mimir.set_score(100)   # pass
mimir.set_score(33.0)  # 33/100
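
With Allow Partial Credit enabled, you might compute the score as the percentage of individual checks that passed. Here is a minimal sketch with hypothetical check results:

import mimir  # the injected Mimir module

# Hypothetical outcomes of individual checks
checks = [True, True, False, True]

# 3 of 4 checks passed -> sets the score to 75.0
mimir.set_score(sum(checks) / len(checks) * 100)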

The Mimir Module

We inject a Python module named mimir to give instructors an easy way to perform common tasks and write cleaner, more maintainable test cases.

Metadata

We provide access to Mimir Classroom metadata variables for use in your Python custom tests:

mimir.FILES          # list of files in the root directory
mimir.DIRECTORIES    # list of directories in the root directory

mimir.COURSE         # course namespace object
    .id              # id of the course

mimir.SUBMISSION     # submission namespace object
    .id              # id of the submission
    .created_at      # timestamp (int) of when the submission was created

mimir.TEST_CASE      # test case namespace object
    .id              # id of the test case
    .name            # name of the test case

mimir.USER           # user namespace object
    .id              # id of the user
    .email           # email of the user
    .first_name      # first name of the user
    .last_name       # last name of the user
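
For example, you can use these variables to log grading context to the debug file (see the functions below) or to check that a required file was submitted. Here is a minimal sketch; the file name main.py is hypothetical:

import mimir  # the injected Mimir module

# Log some grading context to the debug file
mimir.debug(f"Grading submission {mimir.SUBMISSION.id} for {mimir.USER.email}")

# Fail early if a required file is missing (main.py is hypothetical)
if "main.py" not in mimir.FILES:
    mimir.fail("Required file main.py was not submitted")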

Functions

There are also a few helper functions for shorthand grading, debug messages, and other common operations:

# sets the score on the test case run
mimir.set_score(score: Union[bool, int, float])


# write a message to the debug file
mimir.debug(text: str)


# write a message to the debug file and set score to 0
mimir.fail(text: str)


# returns the contents of a file at a given path
contents = mimir.read_file(path: str)
contents  # str


# run a given command (optionally passing in stdin)
# returns an object with: stdout, stderr, and the exit_code
response = mimir.sh(command: str, stdin: str ="")
response.stdout     # str
response.stderr     # str
response.exit_code  # int
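
Putting these together, a typical test might run the submission with mimir.sh, compare its output against an expected value, and grade accordingly. Here is a minimal sketch; the file names main.py and expected_output.txt are hypothetical:

import mimir  # the injected Mimir module

# Hypothetical expected output stored alongside the test case
expected = mimir.read_file("expected_output.txt").strip()

# Run the student's program (main.py is hypothetical), passing input on stdin
response = mimir.sh("python3 main.py", stdin="1 2 3\n")

if response.exit_code != 0:
    # Log the error to the debug file and set the score to 0
    mimir.fail(f"Program exited with code {response.exit_code}:\n{response.stderr}")
else:
    # Pass/fail based on an exact output match
    mimir.set_score(response.stdout.strip() == expected)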
