Logisim Evolution Autograder

Teaser image showing the tools I used: the GitHub repository reds-heig/logisim-evolution, Docker, and Gradescope.

Background

In Winter Quarter 2020, I was teaching ECS 154A in UC Davis’s Department of Computer Science. ECS 154A is an introductory computer architecture class, focusing on digital design and architectural building blocks like the memory hierarchy. Given the focus on digital design, I designed the class’s lab assignments around Logisim Evolution. Logisim Evolution is “an educational tool for designing and simulating digital logic circuits,” based on the original Logisim program created by Carl Burch. You can find my lab assignments and other open-sourced class materials on the course’s homepage on GitHub.

That quarter, I was teaching almost 200 students—with the waitlist at its maximum, it was somewhere around 220. Even with the significant TA help I was provided, that’s a lot of students. If I had my TAs grade each submission manually (as I did in my previous version of the course, with 60 students), it would have consumed most of their time commitment and left them little time to work on anything else. There had to be a better way, hence my desire to develop autogradable assignments.

Project

Originally, I was going to use the original Logisim again, copying what I had done in my previous version of the course. However, my prior efforts to autograde Logisim assignments hadn’t been successful, so I wasn’t keen on bashing my head against it further.

Logisim Evolution

I had heard about Logisim Evolution from another TA in my department, so I decided to look into it. It turned out to have many more features and an improved UI compared to the original Logisim. In addition, it was much easier to generate output from a student’s submission via the command line. With some tinkering, I successfully made a test assignment, which convinced me that developing these autograded assignments was worth the effort.

Gradescope and Autograders

I had already been using Gradescope, a platform that was originally built to help speed up grading paper-based homework and examinations. I highly recommend it to educators on that feature alone—it’s saved me so much time grading quizzes and exams, both as an instructor and a TA. Gradescope also supports programming assignments, a feature that I hadn’t tried prior to that quarter. Since I was investing time in developing an autograder, I figured that I might as well go all the way and see if Gradescope could run the autograder instead of doing it on my own machine.

For Gradescope’s autograder platform, you create an autograder setup that runs inside a standard Ubuntu container. Your setup does the following:

  1. downloads necessary dependencies
  2. runs a student’s submission
  3. tests their output
  4. generates a grade based on criteria that you provide (see the sketch after this list)

Gradescope provides a general output-checking autograder example built on Python, which was perfect for my use case. Using that as my base, my setup looked like this:

  1. download OpenJDK to run Logisim Evolution and Python to run the autograder script
  2. run the student’s submission in Logisim Evolution
  3. compare the student’s output against my expected output using Python (sketched below)
  4. generate their final grade and submit it to Gradescope

Much less work than grading it manually!

Docker Containers and Autograder Deployment

Once you’re done with your autograding script, you zip it up and give it to Gradescope. Gradescope’s autograder harness takes your script and packages it into a Docker container for use with your assignment. Every time a student submits their work, a container is spun up with their submission. The container runs and generates the student’s score, with all the internals hidden from the student. What they see is “submit files, get grade.” Another advantage is the near-instantaneous feedback, which lets students pinpoint and fix errors quickly.

I successfully used this for all four of my lab assignments that quarter, including the computer architecture rite of passage of designing a CPU. Some modifications were needed to support sequential circuitry, but they were relatively minor. It saved me and my TAs so much time and allowed us to focus on other aspects of the class. Hats off to Gradescope.

If you’re an educator interested in the autograder, feel free to contact me about it. I’m no longer teaching, so the autograder hasn’t been updated in a while, but I’m happy to send you what I have.

Acknowledgments

Thanks to the following for their help creating the autograder:

  • Gradescope, for providing the platform and documentation that let me do this
  • my helpful team of TAs for that quarter: Matthew Farrer, Minqiang Hu, and Oleg Igouchkine
