MithilaStack · 2024–Present · Lead Full-Stack & AI Engineer

// ExamGrader.ai

AI-powered academic evaluation SaaS serving 5,000+ educators — automates grading of subjective, handwritten, and MCQ assessments using a Vision LLM, going beyond keyword matching to evaluate student reasoning and logic.

The Challenge

Educators spend 40% of their time grading subjective assessments manually. Existing tools only handle MCQs, missing the nuance of free-response answers and handwritten work.

The Solution

Built a context-aware neural grading engine that achieves 95%+ accuracy on subjective and free-response questions. Engineered a high-speed OCR pipeline to process scanned PDFs and handwritten answer sheets, and designed a parallel-processing backend on Bull queues and worker threads that dispatches bulk institutional uploads with millisecond-level job latency.
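A minimal sketch of how the three stages could be composed. The stage functions here (`extractText`, `normalize`, `grade`) are hypothetical stand-ins for the OCR service and Vision LLM calls, not the production implementations:

```typescript
type Page = { imageId: string };
type GradedAnswer = { imageId: string; score: number };

// Stage 1: OCR — stubbed; production would call the OCR pipeline.
function extractText(page: Page): string {
  return `raw text for ${page.imageId}`;
}

// Stage 2: text cleanup before the Vision LLM sees the answer.
function normalize(raw: string): string {
  return raw.trim().replace(/\s+/g, " ");
}

// Stage 3: grading — stubbed; production would prompt the Vision LLM
// with the rubric and return a rubric-weighted score.
function grade(text: string): number {
  return text.length > 0 ? 1 : 0;
}

// One answer sheet flows through all three stages in order.
function gradePage(page: Page): GradedAnswer {
  const text = normalize(extractText(page));
  return { imageId: page.imageId, score: grade(text) };
}
```

In production each stage would run as a separate queue job so a slow OCR step never blocks grading throughput.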

// Key Impact Metrics

5,000+
Educators Served

95%+
Grading Accuracy

Time Saved

// Tech Stack

Next.js · Node.js · Vision LLM · OCR Pipeline · Bull Queues · Worker Threads · MongoDB

Key Learnings

01.

Optimizing LLM prompts for academic context requires domain-specific fine-tuning of evaluation criteria rather than generic prompting.
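As an illustration, a rubric-driven prompt builder along these lines bakes the evaluation criteria into every grading call. The `Criterion` shape, names, and weights below are assumptions for the sketch, not the product's actual rubric format:

```typescript
// Hypothetical rubric shape: each criterion carries a point weight and
// grading guidance so the LLM scores reasoning, not keywords.
interface Criterion {
  name: string;
  weight: number;
  guidance: string;
}

function buildGradingPrompt(
  question: string,
  answer: string,
  rubric: Criterion[],
): string {
  const criteria = rubric
    .map((c) => `- ${c.name} (${c.weight} pts): ${c.guidance}`)
    .join("\n");
  return [
    "You are grading a student's free-response answer.",
    `Question: ${question}`,
    `Student answer: ${answer}`,
    "Score each criterion separately and justify each score:",
    criteria,
  ].join("\n");
}

// Illustrative rubric for one physics question.
const prompt = buildGradingPrompt(
  "Explain why the sky is blue.",
  "Because of Rayleigh scattering of shorter wavelengths.",
  [
    { name: "Correct mechanism", weight: 3, guidance: "Names Rayleigh scattering." },
    { name: "Reasoning quality", weight: 2, guidance: "Links wavelength to scattering strength." },
  ],
);
```

Keeping the rubric in data rather than hard-coded prose is what makes per-domain tuning of the criteria practical.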

02.

Queue-based architectures handle variable institutional upload patterns far more gracefully than synchronous request-response cycles.
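A toy model of why this holds: a burst of uploads is enqueued instantly, while a fixed worker pool drains the backlog at a steady rate, so the request path never has to absorb the spike. The burst size and worker count are illustrative numbers, not production figures:

```typescript
// Simulate queue depth per tick: jobs arrive in bursts, a fixed pool
// of workers drains `workers` jobs per tick.
function queueDepthOverTime(burstSizes: number[], workers: number): number[] {
  const depths: number[] = [];
  let depth = 0;
  for (const arriving of burstSizes) {
    depth += arriving;                    // enqueue this tick's arrivals
    depth = Math.max(0, depth - workers); // workers drain at a steady rate
    depths.push(depth);
  }
  return depths;
}

// A 100-job institutional burst followed by quiet ticks: the backlog
// shrinks steadily instead of overwhelming the request path.
const depths = queueDepthOverTime([100, 0, 0, 0, 0], 25);
// depths → [75, 50, 25, 0, 0]
```

A synchronous request-response cycle would instead have to hold 100 connections open (or reject them) for the duration of the burst.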

03.

OCR accuracy depends heavily on preprocessing steps that normalise handwriting variations before the pages reach the vision model.
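A minimal sketch of one such normalisation step, assuming pages arrive as grayscale pixel arrays: a min–max contrast stretch followed by thresholding, so faint strokes binarise consistently regardless of scan brightness. Real preprocessing (deskew, denoise, etc.) is more involved; this shows only the idea:

```typescript
// Stretch pixel values to [0, 1], then threshold: 1 = background
// (bright paper), 0 = ink stroke. `threshold` is an illustrative default.
function binarize(pixels: number[], threshold = 0.5): number[] {
  const min = Math.min(...pixels);
  const max = Math.max(...pixels);
  const range = max - min || 1; // avoid divide-by-zero on flat images
  return pixels.map((p) => ((p - min) / range >= threshold ? 1 : 0));
}

// Faint pencil strokes (60–90) on grey paper (125–140) come out as a
// clean ink/background mask whatever the scanner's brightness was.
const page = [130, 60, 125, 90, 140, 70];
const bw = binarize(page);
// bw → [1, 0, 1, 0, 1, 0]
```

Because the stretch is relative to each page's own min and max, a dark scan and a bright scan of the same sheet produce the same mask.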