An Online Programming Judge System

From Theory Wiki

This page chronicles the design of an online programming judge system. The system consists of a web interface that allows users to view problems, submit solution programs, and view reports on how the programs performed. This document also includes a specification of the grading process and of the programming problem format.

Candidate Tools

  • Ruby as glue.
  • Ruby on Rails for the web interface.

Programming Problem Specification

A programming problem is specified as a collection of files contained inside a directory. The structure of the directory is as follows.

<problem_id>/
    public_html/                --- All the files that the user can see and download via the Internet.
        index.html              --- The problem statement.
    test_cases/                 --- Contains all test data.
        all_test.cfg            --- A configuration file for all the test data.
        <test_number>/          --- Contains files related to a particular test case.
            test.cfg            --- A configuration file specific to this test data.
            <other files #1>
            <other files #2>
                 .
                 .
                 .
    script/
        compile                 --- A shell or Ruby script that compiles the submitted source code.
        run                     --- A script that runs and grades the solution program.
        grade                   --- A script that collects information from the various runs and tallies the points.

The Scripts

Running the Scripts

It is guaranteed that when the three scripts above are run, the environment variable PROBLEM_HOME will be set to the problem's directory.

Compile Script

The compile script receives one input: the language that the solution is written in. So, it can be invoked as follows:

$PROBLEM_HOME/script/compile <language>

For example,

$PROBLEM_HOME/script/compile c
$PROBLEM_HOME/script/compile c++
$PROBLEM_HOME/script/compile pascal
$PROBLEM_HOME/script/compile java

Before the script runs, the current directory will contain the file source, which holds the source code submitted by the user.

After the compile script finishes running, it must create the file compile_result in the current directory. The file is empty if the compilation is successful, and contains the compiler's error message otherwise.

If the compilation is successful, the result of the compilation should be contained in the file a.out, which, again, should reside in the current directory.
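As an illustration, a compile script for C and C++ might be sketched in Ruby as follows. The compiler commands and flags are assumptions, and the sketch omits the Pascal and Java cases from the examples above.

```ruby
#!/usr/bin/env ruby
# Sketch of a compile script handling C and C++ only. The compiler
# commands and flags are assumptions; Pascal and Java would need
# their own entries.
require 'fileutils'

# Each language maps to the file name the compiler expects and the
# command that produces ./a.out.
COMPILERS = {
  'c'   => ['source.c',   'gcc -O2 -o a.out source.c'],
  'c++' => ['source.cpp', 'g++ -O2 -o a.out source.cpp'],
}

def compile(language)
  file_name, command = COMPILERS[language]
  abort "unknown language: #{language}" unless command
  FileUtils.cp('source', file_name)   # the judge provides ./source
  messages = `#{command} 2>&1`
  # An empty compile_result signals success; otherwise it holds the
  # compiler's error messages.
  File.write('compile_result', $?.success? ? '' : messages)
end

compile(ARGV[0]) unless ARGV.empty?
```

Invoked as `$PROBLEM_HOME/script/compile c`, this leaves compile_result (and, on success, a.out) in the current directory, as the specification requires.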

Run Script

The run script receives two inputs: the language the solution was written in and the test case number. It can be invoked as follows:

$PROBLEM_HOME/script/run <language> <test case number>

For example,

$PROBLEM_HOME/script/run c++ 1
$PROBLEM_HOME/script/run java 10

Before the script runs, the current directory will contain the file a.out created by the compile script.

After the run script finishes running, it should leave two files in the current directory.

  • result: The file must have the following format.
status "<a string>"
points <the points gained through this test data>
elapsed_time <the time used by the solution program in milliseconds>

For example,

status "Correct"
points 10
elapsed_time 950

or

status "Time limit exceeded"
points 0
elapsed_time 2000
  • comment: Any diagnostic messages the script wishes to output, e.g., the differences between the correct output and the submitted program's output.
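A run script along these lines could be sketched in Ruby as below. The test data file names (input.txt, output.txt) and the hard-coded limits and score are assumptions; a real script would read them from the configuration files and would also enforce the memory limit.

```ruby
#!/usr/bin/env ruby
# Sketch of a run script. The test data file names and the hard-coded
# time limit and score are assumptions; a real script would take them
# from all_test.cfg / test.cfg and also enforce the memory limit.

TIME_LIMIT_MS = 2000

# Emit the result file in the "status/points/elapsed_time" format.
def write_result(status, points, elapsed_ms)
  File.write('result',
             "status \"#{status}\"\npoints #{points}\nelapsed_time #{elapsed_ms}\n")
end

def run_test(test_num)
  test_dir = File.join(ENV['PROBLEM_HOME'], 'test_cases', test_num.to_s)
  input    = File.join(test_dir, 'input.txt')    # assumed name
  expected = File.join(test_dir, 'output.txt')   # assumed name

  started = Time.now
  system("./a.out < #{input} > output.actual")
  elapsed_ms = ((Time.now - started) * 1000).round

  if elapsed_ms > TIME_LIMIT_MS
    write_result('Time limit exceeded', 0, elapsed_ms)
  elsif File.read('output.actual') == File.read(expected)
    write_result('Correct', 10, elapsed_ms)
  else
    write_result('Incorrect', 0, elapsed_ms)
  end
  File.write('comment', "ran test case #{test_num}\n")
end

run_test(ARGV[1]) if ARGV.size >= 2
```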

Grade Script

The grade script receives no input. It can be invoked as follows:

$PROBLEM_HOME/script/grade

Before the script runs, the current directory has the following structure:

<pwd>/
    1/
        result          
        comment
    2/
        result
        comment
    3/
        result
        comment
        .
        .
        .

The result and comment files are those produced by the run script for the corresponding test case.

After the script runs, it should leave the following files in the current directory.

  • result: Must have the following format.
point <the total points earned by the solution program for the problem>
  • comment: Any message the grade script wants to output. Useful for messages like "You suck!" or "Kanat would have done it 10 times faster than you did!"

For each test run, the grade script should also produce two files:

  • result-#, where # is the number of the run: Must have the following format.
point <the points earned by the program for this test run>
  • comment-#, where # is the number of the run: Any message the grade script wants to output. Useful for messages like "Can't finish this test run? What a n00b."

For example, if a problem has 3 test cases and 2 test runs, the directory structure afterwards may look like the following:

<pwd>/
    result
    result-1
    result-2
    comment
    comment-1
    comment-2
    1/
        result          
        comment
    2/
        result
        comment
    3/
        result
        comment
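A simplified grade script could produce the files above along the following lines. It treats every numbered directory as its own one-test run, sidestepping the run/test-case grouping that all_test.cfg defines, and its comment texts are placeholders.

```ruby
#!/usr/bin/env ruby
# Sketch of a grade script. Simplification: every numbered directory
# counts as its own one-test run, ignoring the run/test-case grouping
# from all_test.cfg. The comment texts are placeholders.

# Parse a "key value" file such as the run script's result file into a
# hash, e.g. {"status" => "\"Correct\"", "points" => "10", ...}.
def parse_result(path)
  File.readlines(path).each_with_object({}) do |line, h|
    key, value = line.strip.split(' ', 2)
    h[key] = value
  end
end

def grade(test_dirs)
  total = 0
  test_dirs.each do |dir|
    points = parse_result(File.join(dir, 'result'))['points'].to_i
    total += points
    File.write("result-#{dir}", "point #{points}\n")
    File.write("comment-#{dir}", "Run #{dir}: #{points} point(s).\n")
  end
  File.write('result', "point #{total}\n")
  File.write('comment', "Graded #{test_dirs.size} runs, #{total} point(s) total.\n")
  total
end

# The judge would invoke it over the numbered directories, e.g.:
#   grade(Dir.glob('[0-9]*').select { |d| File.directory?(d) }.sort)
```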

Test Data Configuration Files

all_test.cfg

The file should look something like this:

 problem do
   full_score 120
   time_limit 1.s
   mem_limit 1.M

   run 1 do
     test_cases 1, 2, 3, 4
     scores 10, 10, 5, 15
     time_limit 1.s, 1.s, 2.s, 1.s       # This line is optional.
     mem_limit 1.M, 2.M, 1.M, 32.M       # This line is also optional.
   end

   run 2 do
     test_cases 5, 6, 7, 8, 9, 10
     score_each 10
     time_limit_each 1.s                 # This line is optional.
     mem_limit_each 64.k                 # This line is also optional.
   end

   run 3 do
     test_cases 11, 12
     score_each 10
   end
 end
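The 1.s / 1.M literals and the block structure above suggest the file is plain Ruby evaluated against a small DSL. One way such a DSL could be implemented — assuming milliseconds and bytes as the internal units, which is not fixed by the spec — is:

```ruby
# Sketch of a Ruby DSL that could evaluate all_test.cfg. The internal
# units (milliseconds for time, bytes for memory) are assumptions.

class Numeric
  def s; self * 1000 end          # 1.s  -> 1000 (milliseconds)
  def k; self * 1024 end          # 64.k -> 65536 (bytes)
  def M; self * 1024 * 1024 end   # 1.M  -> 1048576 (bytes)
end

# Collects the directives that may appear inside a `run ... do` block.
class RunConfig
  attr_reader :opts
  def initialize; @opts = {} end
  def test_cases(*nums);   @opts[:test_cases] = nums end
  def scores(*pts);        @opts[:scores] = pts end
  def score_each(pts);     @opts[:score_each] = pts end
  def time_limit(*ms);     @opts[:time_limit] = ms end
  def time_limit_each(ms); @opts[:time_limit_each] = ms end
  def mem_limit(*b);       @opts[:mem_limit] = b end
  def mem_limit_each(b);   @opts[:mem_limit_each] = b end
end

# Adds the problem-level directives and the nested runs.
class ProblemConfig < RunConfig
  attr_reader :runs
  def initialize; super; @runs = {} end
  def full_score(n); @opts[:full_score] = n end
  def run(num, &block)
    (@runs[num] = RunConfig.new).instance_eval(&block)
  end
end

def problem(&block)
  config = ProblemConfig.new
  config.instance_eval(&block)
  config
end
```

Given these definitions, the grader could load the configuration with eval(File.read('all_test.cfg')) and inspect the returned ProblemConfig object.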

test.cfg

This file may or may not exist. Settings in this file override those in all_test.cfg. The file should look something like this:

test 10 do
  score 10
  time_limit 1.s
  mem_limit 32.M
end

Database Structure

users

  • name
  • hashed_password
  • salt
  • alias

problems

  • name (string)
  • full_score (integer)
  • date_added (datetime)

test_runs

  • problem_id
  • test_run_num

test_cases

  • problem_id
  • test_case_num

test_run_test_cases

  • test_run_id
  • test_case_id

languages

  • name (string)
  • pretty_name (string)

submissions

  • user_id (integer)
  • problem_id (integer)
  • language_id (integer)
  • submitted_at (datetime)
  • compiled_at (datetime)
  • graded_at (datetime)
  • points (integer)
  • comment (text)

test_case_results

  • submission_id (integer)
  • test_run_test_case_id (integer)
  • status (string)
  • points (integer)
  • time_elapsed (integer)
  • graded_at (datetime)
  • comment (text)

test_run_results

  • submission_id (integer)
  • test_run_id (integer)
  • points (integer)
  • comment (text)

sources

  • submission_id (integer)
  • source (text)

binaries

  • submission_id (integer)
  • binary (blob)