Abstract
Programming contests are generally managed, monitored, and judged manually, which makes them time-consuming to run and complex to evaluate fairly. The proposed project develops a system in which participants log in to an online platform and upload their code, and an automated judge evaluates their programs. The system has two main views: the judge aspect and the participant aspect. Students can work on a problem, compile their solutions, and submit them to the online system; they then receive feedback indicating whether the code is accepted or, if not, which specific errors were found. Judges can view the current status of submissions and the corresponding rankings, which they can use to evaluate the work. The Automated Assessment System for Source Code (AASSC) aims to provide fair evaluation of source code, whether in a programming contest or in online practical examinations held as part of a university curriculum. Unlike competing products on the market, which rank contestants based only on submission time and number of submissions, the proposed system applies benchmarking metrics to analyze the code itself, yielding a fairer outcome. Plagiarism by students is a major emerging threat that the proposed system also tackles: by identifying plagiarized source code, it supports fairer evaluation and helps improve the quality of students' programming.