Case Study
CampusContext
CampusContext embeds an AI sidebar directly into Canvas LMS so students get context-aware help without leaving their coursework.
Founder & Engineer
2026
Problem
Students are constantly copy-pasting assignment prompts, quiz questions, and lecture PDFs into ChatGPT in another tab, losing context and breaking flow. The answers they get back know nothing about their course, their deadlines, or the materials their professor actually uploaded.
Solution
I built a Chrome extension (Manifest V3) that injects an AI sidebar into every Canvas page, indexes each course's materials into a local knowledge base, and streams answers from Google Gemini grounded in the student's actual coursework.
Impact
Beta v0.4.0 in active use, with streaming Gemini chat, Canvas knowledge-base indexing across PDFs, DOCX, and Canvas Pages, and generative UI widgets rendered inline in chat responses.
The Problem
Every student already uses AI for coursework. The workflow is ugly: open a new tab, copy the assignment prompt, paste it into ChatGPT, paste back the answer. The model has no idea what course this is, what the professor emphasized, what the syllabus says, or when the assignment is due. Every question starts from zero context.
The product gap is not "students need AI." They already have it. The gap is that generic chatbots don't know anything about the class the student is actively sitting in.
What I Built
CampusContext is a Chrome extension (Manifest V3) that injects an AI sidebar directly into Canvas LMS. The sidebar is there on every Canvas page — dashboard, assignment view, quiz, grades, discussion — with awareness of what the student is currently looking at.
Course agents are auto-detected from Canvas's dashboard API and cached for 24 hours. When a student opens a course, the extension indexes the relevant materials into a local knowledge base: PDFs through pdf.js, DOCX through mammoth on a small Express proxy, and Canvas Pages and Discussion threads via the Canvas API, reusing the student's existing session cookies. No separate auth flow is required.
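The 24-hour course cache comes down to a freshness check before hitting the dashboard API again. A minimal sketch; the storage shape and helper name here are illustrative assumptions, not the shipped code:

```typescript
// Illustrative shape for the cached course list; CampusContext's actual
// storage schema may differ.
interface CourseCache {
  courses: { id: number; name: string }[];
  fetchedAt: number; // epoch ms when the dashboard API was last queried
}

const COURSE_CACHE_TTL_MS = 24 * 60 * 60 * 1000;

// Pure staleness check: refetch from the dashboard API only when the cache
// is missing or older than 24 hours.
function isCacheFresh(cache: CourseCache | undefined, now: number): boolean {
  return cache !== undefined && now - cache.fetchedAt < COURSE_CACHE_TTL_MS;
}
```

In the extension this check would gate the dashboard fetch, with the result written back to chrome.storage.local alongside a fresh timestamp.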
Chat responses stream from Gemini 3.1 Flash over SSE, with two interaction modes: Direct, which answers outright, and Study and Learn, which uses Socratic-style tutoring instead of just handing over the solution. Responses render through marked.js with DOMPurify sanitization, and the system can emit generative UI widgets that render inline inside the chat: grade breakdowns, assignment reminders, study plans, not just text.
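One way to render widgets inline alongside markdown is to split the model's response into segments before handing each piece to the renderer. The tag-based wire format below is a hypothetical illustration, not CampusContext's actual protocol:

```typescript
// Hypothetical format: assume the model emits widgets as inline tags holding
// JSON, e.g. <widget>{"type":"grade-breakdown"}</widget>. The real wire
// format in CampusContext may differ.
type Segment =
  | { kind: "markdown"; text: string }
  | { kind: "widget"; spec: unknown };

function splitWidgets(response: string): Segment[] {
  const segments: Segment[] = [];
  const re = /<widget>([\s\S]*?)<\/widget>/g;
  let last = 0;
  for (let m = re.exec(response); m; m = re.exec(response)) {
    if (m.index > last) {
      segments.push({ kind: "markdown", text: response.slice(last, m.index) });
    }
    try {
      segments.push({ kind: "widget", spec: JSON.parse(m[1]) });
    } catch {
      // Malformed widget JSON degrades to plain text instead of breaking chat.
      segments.push({ kind: "markdown", text: m[0] });
    }
    last = re.lastIndex;
  }
  if (last < response.length) {
    segments.push({ kind: "markdown", text: response.slice(last) });
  }
  return segments;
}
```

Markdown segments would then go through marked.js plus DOMPurify as usual, while widget specs feed the inline component renderer.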
The Hard Parts
Manifest V3 service workers cannot hold long-lived connections and can be suspended at any time. SSE streaming therefore has to flow from server to background to content script, with careful lifecycle management so streams survive service worker suspension.
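A subtle part of relaying SSE through the background worker is that network chunks split mid-line, so the parser has to buffer until each event's terminating blank line arrives. A minimal sketch of that piece, with illustrative names:

```typescript
// Minimal SSE chunk parser: bytes arrive in arbitrary chunks, so partial
// lines are buffered until a blank line closes out each event.
class SseParser {
  private buffer = "";

  // Feed one network chunk; returns the complete `data:` payloads it closed.
  push(chunk: string): string[] {
    this.buffer += chunk;
    const events: string[] = [];
    let sep: number;
    while ((sep = this.buffer.indexOf("\n\n")) !== -1) {
      const raw = this.buffer.slice(0, sep);
      this.buffer = this.buffer.slice(sep + 2);
      const data = raw
        .split("\n")
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice(5).trimStart())
        .join("\n");
      if (data) events.push(data);
    }
    return events;
  }
}
```

In the extension, each completed payload would be forwarded over a chrome.runtime port to the content script, and the buffer is the state that has to be reconstructible if the worker is suspended mid-stream.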
The security surface is large for an extension that touches student data. SSRF validation on the extract-text endpoint restricts fetches to instructure.com and canvas.com domains only. A 13-agent swarm review surfaced roughly 160 findings across security, accessibility, performance, and resilience; the current build cycle is working through P0 items (widget sanitization, postMessage origin validation, storage partitioning) before a public Chrome Web Store release.
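The allowlist check can be as small as a hostname-suffix test, though a naive endsWith is exploitable (evilinstructure.com would slip through). A sketch under those constraints; the function name is an assumption:

```typescript
// SSRF allowlist sketch for the extract-text endpoint: the proxy should only
// ever fetch Canvas-hosted files.
const ALLOWED_SUFFIXES = ["instructure.com", "canvas.com"];

function isAllowedCanvasUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // unparseable input is rejected outright
  }
  if (url.protocol !== "https:") return false; // no http:, file:, etc.
  // Exact domain or a true subdomain; a bare endsWith("instructure.com")
  // would also pass evilinstructure.com.
  return ALLOWED_SUFFIXES.some(
    (d) => url.hostname === d || url.hostname.endsWith("." + d)
  );
}
```

On the Express side this check would run before any fetch the endpoint performs, rejecting the request with a 4xx when it fails.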
Current State
Beta v0.4.0. The core loop (sidebar injection, course agent auto-detection, knowledge-base indexing, streaming chat, generative widgets) runs reliably. Next priorities are security hardening, a storage architecture rework to stay under the 5MB chrome.storage.local quota, keyboard accessibility for the sidebar toggle, and a production deploy of the Express API.
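A chunked-write scheme is one way to keep any single stored value safely under the chrome.storage.local quota. The key scheme and byte budget below are assumptions for illustration, not the planned design:

```typescript
// Split an oversized knowledge-base blob across several storage keys so no
// single write blows the quota. Key scheme ("kb:0", "kb:1", ...) is illustrative.
const MAX_ITEM_BYTES = 4 * 1024 * 1024; // headroom under the 5MB quota

function chunkForStorage(
  key: string,
  value: string,
  maxBytes: number = MAX_ITEM_BYTES
): Record<string, string> {
  const enc = new TextEncoder();
  const items: Record<string, string> = {};
  let piece = "";
  let bytes = 0;
  let index = 0;
  for (const ch of value) { // iterates by code point, never splits surrogate pairs
    const chBytes = enc.encode(ch).length; // UTF-8 size of this character
    if (bytes + chBytes > maxBytes && piece) {
      items[`${key}:${index++}`] = piece;
      piece = "";
      bytes = 0;
    }
    piece += ch;
    bytes += chBytes;
  }
  items[`${key}:${index}`] = piece;
  return items;
}
```

The resulting record can be handed straight to chrome.storage.local.set, with reads reassembling chunks in key order.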
What I Learned
The value of a product like this is not the model. It is the context the model never had access to before. The engineering work is almost entirely about getting the right slice of Canvas data into the prompt at the right time, not about the LLM itself. Shipping a browser extension also means you cannot hide behind a server — security and performance issues are visible on the user's machine, and there is no graceful way to roll back a broken release.