
I graded for a class that took a similar approach to this - the students had a small set of tests available that let them iterate on the requirements of the assignment, but we had a more exhaustive suite that we ran automatically. In addition to checking test completeness, we also looked at the source code itself and graded its quality. The automated runner handled a lot of the hard work for us, but it definitely wasn't a case of seeing some failed unit tests and then failing the student outright.

The system used to pull this all off is pretty interesting, I think. It's all open source: https://github.com/redkyn
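For a rough idea of the shape of that split, here is a minimal sketch of a grading wrapper where students iterate against a public test directory and the grader also runs a hidden one. The directory names, the pytest layout, and the pass/fail reporting are assumptions for illustration, not how the redkyn tools actually do it:

    #!/usr/bin/env bash
    # Hypothetical grading wrapper: students iterate against tests/public/,
    # while the exhaustive tests/hidden/ suite only runs on the grading side.
    # Directory names and the pytest layout are assumptions, not redkyn's.
    set -uo pipefail

    repo_dir="$1"          # path to the student's cloned repository
    cd "$repo_dir" || exit 1

    python -m pytest tests/public && public_ok=1 || public_ok=0
    python -m pytest tests/hidden && hidden_ok=1 || hidden_ok=0

    echo "public suite passed: $public_ok"
    echo "hidden suite passed: $hidden_ok"
    # Pass/fail signals only; source quality is still reviewed by a human.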




Ha, I did something very similar, also with GitLab and runners. I was always amazed at the new and creative ways students managed to break the autograders we wrote -- particularly for our Bash scripting assignments.

I was also never brave enough to have my autograders submit grades directly to Canvas again after the first time I tried it. During the first week of the first year I ran them, I confidently had the autograder submit grades directly. Half the class forgot the shebang on their shell script, and it happily gave them a 0.
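A cheap guard against exactly that failure mode is to check for the shebang before grading and flag the submission for manual review instead of recording a 0. A rough sketch, where the submission file name is made up:

    #!/usr/bin/env bash
    # Hypothetical pre-grading check: a missing shebang becomes a flag for
    # manual review rather than an automatic 0. File name is illustrative.
    submission="submission.sh"

    if [ ! -f "$submission" ]; then
        echo "FLAG: no submission found" >&2
        exit 2
    fi

    # A valid script starts with the two bytes '#!'
    if [ "$(head -c 2 "$submission")" != "#!" ]; then
        echo "FLAG: $submission is missing a shebang -- hold for manual grading" >&2
        exit 2
    fi

    echo "shebang present; safe to hand off to the autograder"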

For bonus fun times: GitLab's runner timeout is pretty much nonexistent if you're using the shell runner. It just won't kill jobs that run too long, so I had to write our own management system for the runners. Silly university IT wouldn't support containers :(
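For what it's worth, one way to fake a timeout on a shell runner is to wrap each submission in coreutils timeout from the job itself. A minimal sketch, where the 60-second limit and the script name are made up and this is not what the original management system did:

    #!/usr/bin/env bash
    # Hypothetical per-submission watchdog for a shell runner that won't
    # enforce job timeouts itself. Limit and script name are illustrative.
    limit=60

    # Send TERM at the limit, then KILL ten seconds later if TERM is ignored.
    timeout --kill-after=10 "$limit" ./student_script.sh
    status=$?

    case "$status" in
        124) echo "submission timed out after ${limit}s" >&2 ;;
        137) echo "submission ignored TERM and was killed" >&2 ;;
    esac
    exit "$status"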



