Josh Cannon

Case Study

CampusNova

CampusNova turns static degree audits into clear, actionable advising workflows for students and advisors.

Founder & AI Engineer

2026

React · TypeScript · FastAPI · Python · Supabase · PyMuPDF · Gemini 2.0 Flash · Claude Sonnet

Problem

Degree audits are static PDFs disconnected from advising workflows, causing wasted meeting time and preventable graduation risk.

Solution

I built an end-to-end platform that parses DARS audits, structures requirement data, and generates personalized advising recommendations using a two-model AI pipeline.

Impact

Active development with a pending pilot in UTK's College of Emerging and Collaborative Studies; the core upload-to-personalized-advising loop runs reliably.

The Problem

At most universities, the degree audit is a static PDF. Advisors spend the first half of every meeting just figuring out where a student stands. Students find out they are missing a graduation requirement in week eight of their final semester. The document that contains everything about a student's academic standing is not connected to anything useful.

What I Built

CampusNova turns a DARS degree audit into a clear, actionable picture of where a student stands and what to do next.

A student uploads their PDF. PyMuPDF extracts the raw text, then Gemini 2.0 Flash classifies requirement sections dynamically (major core, VolCore, capstone, electives) with no hardcoded logic, so the system works for any major at any college.

From there the student sees exactly what they have completed, what is in progress, and what is still missing, organized into dashboards they can actually read: a requirements tracker broken down by category, a semester planner for mapping out the rest of their degree, and a degree path view showing how their remaining courses land between now and graduation.
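The extraction-and-classification step above can be sketched roughly as follows. This is an illustrative outline, not the actual CampusNova code: `extract_text`, `parse_sections`, and the JSON shape the model returns are all assumptions, though the PyMuPDF calls (`fitz.open`, `page.get_text`) are the library's real API.

```python
import json
from dataclasses import dataclass


@dataclass
class RequirementSection:
    category: str   # e.g. "major core", "VolCore", "capstone", "electives"
    status: str     # "complete" | "in_progress" | "missing"
    courses: list


def extract_text(pdf_path: str) -> str:
    """Pull raw audit text out of the PDF with PyMuPDF.
    (Imported inside the function so the rest of the sketch runs
    without the library installed.)"""
    import fitz  # PyMuPDF
    with fitz.open(pdf_path) as doc:
        return "\n".join(page.get_text() for page in doc)


def parse_sections(model_json: str) -> list[RequirementSection]:
    """Turn the classifier model's JSON output into structured records.
    The JSON shape here is an assumed example, not the real schema."""
    return [
        RequirementSection(s["category"], s["status"], s.get("courses", []))
        for s in json.loads(model_json)
    ]


# Example of what a classification response might look like:
raw = '[{"category": "major core", "status": "in_progress", "courses": ["CS 302"]}]'
sections = parse_sections(raw)
```

The key property the text describes is that the categories come from the model at runtime rather than from hardcoded section names, which is what lets the same parser handle any major's audit layout.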

The AI advising layer sits on top of that structured data. When a student has multiple courses that could satisfy a requirement, the system uses their previously stated career goals to recommend 3-5 options that fit where they are trying to go, not just what checks a box. The recommendation pipeline uses a two-model architecture: Gemini 2.0 Flash handles the data-heavy retrieval and course catalog matching against a Supabase database of available courses, then passes a structured context to Claude Sonnet, which generates the actual advising response. Gemini handles the data. Claude handles the communication. Each model does what it is best at.
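The two-model handoff can be sketched like this, with both model calls stubbed out. In the real pipeline the retrieval step is a Gemini 2.0 Flash call against the Supabase catalog and the generation step is a Claude Sonnet call; the function names, the `AdvisingContext` shape, and the in-memory catalog below are illustrative assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class AdvisingContext:
    """Structured context passed from the retrieval model to the writer model."""
    requirement: str
    career_goal: str
    candidate_courses: list = field(default_factory=list)


def retrieve_candidates(requirement: str, career_goal: str) -> AdvisingContext:
    """Stand-in for the Gemini step: match the requirement against the
    course catalog and return a structured shortlist."""
    catalog = {"upper-division AI elective": ["AI 302", "AI 410", "DS 325"]}
    return AdvisingContext(requirement, career_goal, catalog.get(requirement, []))


def generate_advice(ctx: AdvisingContext) -> str:
    """Stand-in for the Claude step: turn structured context into the
    student-facing advising response."""
    options = ", ".join(ctx.candidate_courses)
    return (f"For your goal of {ctx.career_goal}, these courses satisfy "
            f"{ctx.requirement}: {options}.")


ctx = retrieve_candidates("upper-division AI elective", "machine learning engineering")
advice = generate_advice(ctx)
```

The design point is the explicit `AdvisingContext` boundary: the retrieval model never writes prose and the writer model never queries the catalog, so each side can be swapped or evaluated independently.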

The platform also includes a course cart and plan submission flow and an advisor dashboard for reviewing and approving student plans. Deployed on Render and Vercel with Supabase handling auth and persistent student data.

The Hard Parts

Context and token management. Early in development I was hitting my Claude Pro rate limit, setting a timer, and picking back up when it reset. That is not a workflow, that is a ceiling. Learning to work within token constraints forced real discipline: structured shared markdown files for agent memory, explicit handoff documents between agents so nothing gets re-explained, and tight prompt design that front-loads the context that matters.

That process eventually became the development system itself. The platform is built and maintained by a multi-agent team: a lead planner agent (Claude Sonnet 4.6) for architecture decisions and prompt design, a lead architect agent (Claude Opus 4.6) for complex backend logic and systems design, and a frontend agent (OpenAI Codex) for UI implementation and token-heavy frontend changes. Each agent operates from a detailed handoff document with explicit task scope, file paths, acceptance criteria, and Playwright-verified validation. Getting the most out of AI agents at this level requires the same skills as managing engineers: clear specs, defined interfaces, and knowing when to push back on the output.

Current State

Active development with a pending pilot in UTK's College of Emerging and Collaborative Studies. The core loop (audit upload to personalized advising session) runs reliably. Next up: model evals to simulate real student usage patterns across different majors and edge cases, and an LLM-as-judge layer to independently verify AI course suggestions before they are surfaced to students. Canvas API integration is also on the roadmap for deeper real-time course context.
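The planned LLM-as-judge gate amounts to a filter between recommendation and display: a suggestion only reaches the student if an independent check confirms the course actually satisfies the requirement. A minimal sketch of that shape, with the judge stubbed as a catalog lookup (in practice it would be a second model call; all names here are assumptions):

```python
def judge_suggestion(course: str, requirement: str,
                     satisfies: dict[str, set[str]]) -> bool:
    """Return True only if the course verifiably satisfies the requirement.
    Stubbed as a ground-truth lookup; the real judge would be an
    independent LLM call with its own context."""
    return course in satisfies.get(requirement, set())


def filter_suggestions(courses: list, requirement: str,
                       satisfies: dict[str, set[str]]) -> list:
    """Drop any recommended course the judge cannot verify."""
    return [c for c in courses if judge_suggestion(c, requirement, satisfies)]


# Hypothetical example: the recommender proposed two courses, but only
# one is actually listed as satisfying the capstone requirement.
catalog_truth = {"capstone": {"AI 490"}}
surfaced = filter_suggestions(["AI 490", "AI 302"], "capstone", catalog_truth)
```

The fail-closed default matters: an unverifiable suggestion is withheld rather than shown, which is the behavior the trust argument in the next section depends on.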

What I Learned

Trust is the product. One wrong recommendation (a course that does not actually satisfy the requirement, or a prerequisite chain that was not checked) destroys confidence in the whole system. That is why every suggestion surfaces its reasoning: not just "take AI 302" but why it satisfies the requirement and fits the student's goals. The parsing has to be right before any of that matters. And getting an advisor to change their workflow is a harder problem than any of the engineering.