// ExamGrader.ai
AI-powered academic evaluation SaaS serving 5,000+ educators. It automates grading of subjective, handwritten, and MCQ assessments using a Vision LLM, going beyond keyword matching to evaluate the student's reasoning and logic.
The Challenge
Educators spend roughly 40% of their time grading subjective assessments by hand. Existing tools handle only MCQs, missing the nuance of free-response answers and handwritten work.
The Solution
Built a context-aware neural grading engine achieving 95%+ accuracy on subjective and free-response questions. Engineered a high-speed OCR pipeline to process scanned PDFs and handwritten answer sheets, and designed a parallel-processing backend using Bull queues and worker threads, so bulk institutional uploads are enqueued in milliseconds and graded concurrently.
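The core idea behind the queue-based backend can be sketched in isolation. Bull itself persists jobs in Redis, so the snippet below is a simplified, self-contained illustration of bounded-concurrency job draining; the names (`GradingJob`, `processBatch`) are hypothetical, not the production API.

```typescript
// Illustrative sketch: process jobs with at most `concurrency` handlers
// running at once, mirroring Bull's `queue.process(concurrency, handler)`
// contract without the Redis-backed persistence.

interface GradingJob {
  sheetId: string;
}

async function processBatch<T, R>(
  jobs: T[],
  concurrency: number,
  handler: (job: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(jobs.length);
  let next = 0;

  async function worker(): Promise<void> {
    while (next < jobs.length) {
      const i = next++; // claiming the index is synchronous, so it is safe
      results[i] = await handler(jobs[i]);
    }
  }

  // Spawn `concurrency` workers that cooperatively drain the job list.
  const workers = Array.from(
    { length: Math.min(concurrency, jobs.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}
```

In production, Bull plays this role and adds retries, persistence, and rate limiting on top of the same bounded-concurrency pattern.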
// Key Impact Metrics
// Tech Stack
Key Learnings
Optimizing LLM prompts for academic context requires domain-specific fine-tuning of evaluation criteria rather than generic prompting.
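One way to picture "domain-specific evaluation criteria rather than generic prompting" is a rubric-driven prompt builder. Everything below (the `RubricCriterion` shape, the wording) is a hypothetical sketch, not the production prompt.

```typescript
// Hypothetical sketch: build a grading prompt from an explicit rubric
// instead of generically asking the model to "grade this answer".

interface RubricCriterion {
  name: string;
  maxPoints: number;
  guidance: string; // what evidence of reasoning earns the points
}

function buildGradingPrompt(
  question: string,
  answer: string,
  rubric: RubricCriterion[]
): string {
  const criteria = rubric
    .map((c, i) => `${i + 1}. ${c.name} (0-${c.maxPoints} pts): ${c.guidance}`)
    .join("\n");
  return [
    "You are grading a student answer. Score each criterion separately",
    "and justify each score using the student's own reasoning.",
    `Question: ${question}`,
    `Student answer: ${answer}`,
    "Rubric:",
    criteria,
  ].join("\n");
}
```

Pinning the model to per-criterion scores keeps its evaluation anchored in the academic context instead of surface keyword overlap.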
Queue-based architectures handle variable institutional upload patterns far more gracefully than synchronous request-response cycles.
OCR accuracy depends heavily on preprocessing steps that normalise handwriting variations before the image reaches the vision model.