Summary
Renin OSCE Simulator is a full-stack training platform for medical interview practice.
It combines:
- AI-based patient conversation,
- structured scoring and feedback,
- progress history,
- and admin controls for users and case content.
Why This Project Matters
This project solves a real training gap: learners often understand clinical theory but struggle in live OSCE communication.
The product turns that gap into a repeatable system:
- guided practice,
- structured evaluation,
- persistent progress review,
- and operational control for education teams.
Snapshot
- Type: Full-stack web app
- Role: Solo builder (end-to-end implementation, with AI coding assistance)
- Stack: Vue 3, TypeScript, Pinia, Vue Router, Express 5, Drizzle ORM, Neon Postgres, Clerk, OpenAI
- Deployment: Netlify frontend + Netlify Functions backend
- Architecture: SPA client + serverless API + PostgreSQL persistence
- Auth model: Clerk identity + backend user provisioning (`user`/`admin` roles)
Problem and Goal
Medical learners need repeated OSCE communication practice, not one-time drills.
Goal: build a reliable practice loop that is simple for learners and manageable for tutoring teams.
What I Built
Learner flow
- Login with Clerk and open dashboard.
- Start case based on role/credit eligibility.
- Complete pre-exam device check.
- Run 3-phase simulation (anamnesis, supporting data, diagnosis/therapy).
- Receive score and feedback, then review saved history.
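The three-phase simulation in the flow above can be modeled as a small state machine. This is an illustrative sketch, not the actual implementation: the phase identifiers and the `advancePhase` helper are assumptions derived from the flow described here.

```typescript
// Illustrative sketch of the 3-phase exam flow as a state machine.
// Phase names follow the learner flow above; the transition table and
// helper function are assumptions, not the real implementation.
type ExamPhase = "anamnesis" | "supportingData" | "diagnosisTherapy" | "completed";

const NEXT_PHASE: Record<ExamPhase, ExamPhase> = {
  anamnesis: "supportingData",
  supportingData: "diagnosisTherapy",
  diagnosisTherapy: "completed",
  completed: "completed", // terminal state
};

function advancePhase(current: ExamPhase): ExamPhase {
  return NEXT_PHASE[current];
}
```

Keeping the transition table in one place makes it easy for both the Pinia store and the backend to agree on what "next phase" means.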
Admin flow
- Manage users (create, update role/credits, delete).
- Manage case bank (create, edit, delete).
Scope and Capability
- Role-aware access (`user`, `admin`) across route and API layers.
- Credit-based access control with server-side decrement on case start.
- 3-phase exam simulation with transcript and reasoning capture.
- History detail retrieval for reflective learning.
- In-product docs endpoint for API visibility.
Technical Highlights
- Route-level and API-level authorization with role-aware guards.
- Server-enforced credit deduction to prevent client-side bypass.
- Store-driven frontend state for predictable async behavior.
- Typed payload normalization between frontend and backend.
- OpenAPI docs published at `/docs` and `/docs.json`.
- Fallback-aware client behavior for partial backend unavailability.
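One of the highlights above is typed payload normalization between frontend and backend. As a hedged sketch: the function below accepts a loosely typed payload that may use either `credits` or a snake_case `credit_count` field and returns one canonical shape. The field names and the `normalizeUser` helper are hypothetical, chosen to illustrate the pattern.

```typescript
// Hypothetical normalization helper: the backend may return mixed
// payload formats, so the client maps them to one canonical shape
// before anything reaches the Pinia stores.
interface UserProfile {
  role: "user" | "admin";
  credits: number;
}

type RawUser = Record<string, unknown>;

function normalizeUser(raw: RawUser): UserProfile {
  // Anything that is not explicitly "admin" falls back to "user".
  const role = raw.role === "admin" ? "admin" : "user";
  // Accept both `credits` and `credit_count` (illustrative field names).
  const credits = Number(raw.credits ?? raw.credit_count ?? 0);
  return { role, credits: Number.isFinite(credits) ? credits : 0 };
}
```

Centralizing this in one function means a backend format change touches a single module instead of every view component.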
Architecture
- Frontend: Vue 3 + Pinia + Vue Router with auth and exam access guards.
- Backend: Express 5 + Clerk middleware + Drizzle ORM + Neon Postgres.
- AI layer: OpenAI for patient reply generation and scoring pipeline.
- Data model: `users` (role, credits), `cases` (scenario bank, prompt context, gold standard), `exam_sessions` (transcript, learner reasoning, score, feedback).
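The three tables can be sketched as plain TypeScript types. These are illustrative shapes only: the field names beyond those listed in the data model above (ids, foreign keys) are assumptions, and the real Drizzle schema may differ.

```typescript
// Illustrative row shapes for the three tables described above.
// Ids and foreign keys are assumptions, not the real Drizzle schema.
interface UserRow {
  id: string;
  role: "user" | "admin";
  credits: number; // learners start with 3 credits
}

interface CaseRow {
  id: string;
  scenario: string;      // scenario bank entry
  promptContext: string; // context fed to the AI patient
  goldStandard: string;  // reference answer used in scoring
}

interface ExamSessionRow {
  id: string;
  userId: string; // references UserRow.id
  caseId: string; // references CaseRow.id
  transcript: string;
  reasoning: string; // captured learner reasoning
  score: number;
  feedback: string;
}
```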
API Surface (High-Level)
- Identity/Profile: authenticated user sync and role/credit exposure.
- Learner: case listing, case start, history list, history detail.
- Simulation: chat interaction and evaluation submission.
- Admin: user CRUD and case CRUD.
Engineering Quality
- Separation of concerns via stores, service modules, and view components.
- Policy enforcement on backend for integrity-critical operations.
- Data normalization layer to handle mixed payload formats safely.
- Explicit pre-exam checks for microphone/speech capability.
- API documentation available for easier onboarding and handoff.
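The pre-exam microphone/speech check mentioned above amounts to feature detection before the exam is allowed to start. As a sketch under assumptions: instead of reading the global `window` directly, the check takes the environment as a parameter so it can be tested outside a browser; the property names are the standard and WebKit-prefixed Web Speech API globals.

```typescript
// Sketch of a pre-exam speech capability check. The environment is
// injected rather than read from the global `window` so the logic is
// testable outside a browser. Property names match the standard and
// WebKit-prefixed Web Speech API constructors.
interface SpeechEnv {
  SpeechRecognition?: unknown;
  webkitSpeechRecognition?: unknown;
}

function speechSupported(env: SpeechEnv): boolean {
  return Boolean(env.SpeechRecognition ?? env.webkitSpeechRecognition);
}
```

In the app, a guard would call something like `speechSupported(window as SpeechEnv)` and block entry to the simulation when it returns `false`.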
Outcomes
- Full learner journey implemented from login to persisted exam results.
- Admin operations integrated in the same app surface.
- Policy controls are enforceable (role + credits).
- Review quality improved through stored transcript and reasoning artifacts.
Evidence and Validation
- Supported roles: `user`, `admin`.
- Default learner credits: 3.
- Simulation stages: 3.
- Supporting tool cap per session: 5.
- API docs available at runtime: `/docs`, `/docs.json`.
Key Constraints
- Speech recognition varies by browser/device.
- AI output needs deterministic normalization to keep scoring stable.
Risks and Mitigations
- Inconsistent speech APIs across browsers => explicit device checks before the exam starts.
- Low-effort input distorting evaluation quality => strictness caps and normalization in backend scoring.
- Client-side manipulation of usage limits => credit logic enforced server-side.
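The last mitigation, server-enforced credit logic, can be sketched as a pure decision function. This illustrates only the rule (the names are hypothetical); the real implementation would check and decrement inside a single database transaction so concurrent requests cannot double-spend a credit.

```typescript
// Sketch of the server-side credit policy: the client never decides
// whether a case may start. This pure-function version shows the rule;
// the real code would run the check and decrement in one DB transaction.
interface CreditState {
  credits: number;
}

function tryStartCase(user: CreditState): { allowed: boolean; next: CreditState } {
  if (user.credits <= 0) {
    // No credits left: reject without mutating state.
    return { allowed: false, next: user };
  }
  // Deduct one credit for the new exam session.
  return { allowed: true, next: { credits: user.credits - 1 } };
}
```

Because the decision lives on the server, tampering with the client's displayed credit count has no effect on whether a case actually starts.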
Next Steps
- Expand the case bank with additional scenarios.
- Improve feedback quality with more detailed scoring rubrics.
- Add admin analytics on user performance and case difficulty.