Context: My portfolio used to be a pile of CodePen links and GitHub repos. Recruiters were confused, so I rebuilt everything around case studies and honesty logs.
AI assist: ChatGPT helped brainstorm section names/checklists; the content comes from actual analytics + recruiter feedback dated 2025-10-15.
Status: Still job-hunting. This is the system I actively maintain, not a retrospective on a finished product.

Reality snapshot

  • Portfolio surface = Gatsby/Netlify site + GitHub Pages template + PDF résumé.
  • Content lives in MDX case studies (content/pages/projects/*.mdx) so I can diff claims.
  • Analytics + recruiter feedback drive updates. If a case study stops performing, it gets rewritten or archived.
  • Honesty docs (honesty.md, honestplan.md) log every change with dates + rationale.
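A dated entry in honesty.md might look like this (the layout and fields here are illustrative — the source only specifies that each change gets a date plus rationale):

```markdown
## 2025-10-15
- Changed: Car-Match case study — moved the cold-start warning above the fold.
- Why: recruiter feedback said the demo "looked broken" before the Render
  backend woke up from its free-tier sleep.
```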

Inventory first

| Project             | Asset Type               | Primary Skill            | Outcome                                                |
|---------------------|--------------------------|--------------------------|--------------------------------------------------------|
| Car-Match           | GitHub repo + live demo  | React + Express practice | Documented GitHub Pages + Render backend demo          |
| Triangle Shader Lab | Static site + repo       | WebGPU study             | Adapted Hello Triangle/Textured Cube with explanations |
| CheeseMath          | GitHub repo + Pages demo | Testing + Next.js        | Calculator UI + Jest practice                          |
| CodePen experiments | CodePen embeds           | UI/UX + JS fundamentals  | Recreated in blog posts + templates                    |
  • A fuller version of this table lives in a Notion spreadsheet that highlights overlap and gaps. Anything I want to feature must come with context, constraints, and proof.

Narrative buckets I use

  1. Learning by Experimentation: CodePens + smaller demos (Garbage Collection, Sound Machine).
  2. Front-End/Product Work: Interactive Pokédex, CheeseMath, SPA résumés.
  3. Full-Stack/API: Car-Match, React + AWS CRUD, ProjectHub.
  4. Infrastructure/Automation: Docker Multilang, GitHub Actions, AWS internship capstone.

Each bucket links to detailed case studies and blog posts so recruiters can scan or deep-dive.

Case study template

## Reality snapshot
- Sentence about scope, hosting, limitations.
## Context & constraints
- Problem, users, deadlines, tooling.
## What I built
- Architecture diagram or bullet list.
- Screenshots / gifs / proof links.
## Observability & honesty
- Health checks, analytics, TODOs, known gaps.
## Evidence
- Repo link, live demo, prompt log, runbooks.
  • Every case study also starts with a callout: “Demo runs on free Render; expect 5-minute cold starts.” No surprises.
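Because every case study follows the same five headings, template drift is easy to catch mechanically. A sketch of what that check could look like — the heading list mirrors the template above, but this script is my illustration, not something the repo actually ships:

```javascript
// Check that an MDX case study source contains every heading
// from the case study template. Heading list mirrors the template;
// the script itself is an illustrative sketch.
const REQUIRED_HEADINGS = [
  "## Reality snapshot",
  "## Context & constraints",
  "## What I built",
  "## Observability & honesty",
  "## Evidence",
];

// Return the template headings absent from the given MDX source.
function missingSections(mdxSource) {
  return REQUIRED_HEADINGS.filter((h) => !mdxSource.includes(h));
}

// Example: a draft that forgot its honesty section.
const draft = [
  "## Reality snapshot",
  "## Context & constraints",
  "## What I built",
  "## Evidence",
].join("\n");
console.log(missingSections(draft)); // [ '## Observability & honesty' ]
```

A check like this could run in CI next to lint and build, so an incomplete case study never ships silently.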

Maintenance loop

  1. Measure: Netlify analytics (time on page, exits), recruiter feedback, personal retros.
  2. Decide: If a case study underperforms or becomes misleading, demote it, rewrite it, or archive it.
  3. Update: Edit MDX + honesty docs. Note the date + reason.
  4. Verify: Run `bun run lint` and `npm run build`, plus manual smoke tests.
  5. Communicate: Post the update on LinkedIn + in the honesty changelog so hiring teams see transparency.
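The "measure → decide" half of this loop can be partly automated. A sketch that flags stale case studies, assuming each MDX file's frontmatter carries a `lastVerified` date — that field name is my invention, not something the site is confirmed to use:

```javascript
// Flag case studies whose lastVerified date is older than a cutoff.
// The `lastVerified` frontmatter field is an assumed convention.
const STALE_AFTER_DAYS = 90;

function isStale(lastVerified, now = new Date()) {
  const ageMs = now - new Date(lastVerified); // Date subtraction yields ms
  return ageMs > STALE_AFTER_DAYS * 24 * 60 * 60 * 1000;
}

// Given an index of case studies, return the slugs due for review.
function staleEntries(entries, now = new Date()) {
  return entries
    .filter((e) => isStale(e.lastVerified, now))
    .map((e) => e.slug);
}

// Example run against a hypothetical content index.
const index = [
  { slug: "car-match", lastVerified: "2025-10-15" },
  { slug: "cheesemath", lastVerified: "2025-01-02" },
];
console.log(staleEntries(index, new Date("2025-11-01")));
// [ 'cheesemath' ]
```

The output would feed step 2 (decide): anything listed gets rewritten, demoted, or archived rather than left to quietly mislead.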

Results (qualitative but real)

  • Recruiters now comment on specific projects (“Saw your Car-Match honesty block…”) instead of saying “nice site.”
  • I spend less time explaining what’s real because the case studies already do it.
  • I can onboard mentors quickly: “Read /projects/caris-ai/, then we’ll pair.”

Next steps

  • Automate analytics exports (Netlify → Google Sheets) so I can spot stale content faster.
  • Produce short Loom walkthroughs for each case study to help visual learners.
  • Ship a template repo others can fork (MDX + honesty log scaffold).
