Context: These practices apply to my small projects (Car-Match, Interactive Pokédex, Triangle Shader Lab, CheeseMath). They run on GitHub Pages, Netlify, or Render—not on enterprise infra.
AI assist: ChatGPT helped reorganize my scattered checklists into readable sections; every item maps to real repos and docs.
Status: Student perspective. Think “personal SRE playbook” more than “platform engineering manifesto.”

Reality snapshot

  • Front ends: GitHub Pages (PWA/React/Vite builds) or Netlify (Gatsby). Build logs live in Actions + Netlify dashboards.
  • Back ends / APIs: Render free tier (Express + MongoDB Atlas) or small serverless functions. Cold starts are real, so every README states the wake-up delay.
  • Observability: DevTools, Netlify analytics, basic CloudWatch dashboards, and manual smoke tests. No PagerDuty, no multi-region failover.
  • Goal: Publish demos that anyone can run, show how I think about resilience, and be transparent about what’s missing.

My five-part pre-launch checklist

1. Accessibility

  • Semantic HTML (main, nav, button) before considering ARIA.
  • Keyboard walkthrough recorded before each deploy (tab through every view, check focus outlines, confirm skip links work).
  • Automated scans: axe CLI + Lighthouse (desktop + mobile). Failures block the release.
  • Manual screen reader pass (VoiceOver on Mac). I note rough edges in the repo if I can’t fix them immediately.
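The "failures block the release" rule can be sketched as a small gate over a Lighthouse JSON report (lighthouse --output=json). The report shape is Lighthouse's standard categories object; the 0.9 cutoff and the helper name are my assumptions, not the projects' actual config.

```javascript
// Hypothetical release gate: fail CI when any Lighthouse category
// scores below a cutoff (scores are 0..1 in the JSON report).
function gateLighthouse(report, minScore = 0.9) {
  const failures = Object.entries(report.categories)
    .filter(([, cat]) => cat.score < minScore)
    .map(([name, cat]) => `${name}: ${cat.score}`);
  return { pass: failures.length === 0, failures };
}

// Example: accessibility passes, performance does not.
const result = gateLighthouse({
  categories: {
    accessibility: { score: 0.96 },
    performance: { score: 0.72 },
  },
});
// result.pass is false; in CI, set process.exitCode = 1 to block the deploy.
```

In a workflow this runs right after the axe CLI step, so one failing category stops the release the same way a failing test would.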

2. Performance

  • Bundle budgets in Vite/Gatsby (performanceBudget file). If LCP > 2.5 s locally, I trim images/fonts before publishing.
  • Lazy-load heavy sections (Three.js demos, code examples) and respect prefers-reduced-motion.
  • Use Netlify deploy previews to test under throttled 3G; I log the actual numbers in the PR description.
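The lazy-load rule boils down to one predicate: boot a heavy section only when it scrolls into view and the visitor has not asked for reduced motion. Here it is as a pure function plus commented-out browser wiring; the element id and module path are made up for the sketch.

```javascript
// Pure decision: load the heavy demo only when visible AND motion is allowed.
function shouldBootHeavySection({ isIntersecting, prefersReducedMotion }) {
  return isIntersecting && !prefersReducedMotion;
}

// Browser wiring (sketch only; '#demo' and './three-demo.js' are hypothetical):
// const reduced = window.matchMedia('(prefers-reduced-motion: reduce)').matches;
// new IntersectionObserver((entries, obs) => {
//   if (entries.some((e) => shouldBootHeavySection({
//     isIntersecting: e.isIntersecting, prefersReducedMotion: reduced,
//   }))) {
//     obs.disconnect();
//     import('./three-demo.js'); // dynamic import keeps it out of the main bundle
//   }
// }).observe(document.querySelector('#demo'));
```

The dynamic import is what makes the bundle budget hold: the Three.js code never ships to visitors who don't scroll to it.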

3. Reliability

  • GitHub Actions handles lint/tests/builds. Netlify + Render webhooks notify Slack so I see failures quickly.
  • Each repo has a /healthz endpoint (backend) or /status.json file (frontend) describing what “healthy” means.
  • Feature flags live in JSON or env vars to hide in-progress sections while still deploying often.
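A /healthz response that "describes what healthy means" can be sketched framework-agnostically: in an Express app this object would back app.get('/healthz', ...). The dbOk check and the flags input stand in for a real MongoDB ping and the JSON/env feature flags; both are assumptions here.

```javascript
// Build a health payload that says WHY the service is healthy or degraded,
// instead of returning a bare 200.
function buildHealth({ dbOk, uptimeSeconds, flags = {} }) {
  return {
    status: dbOk ? 'healthy' : 'degraded',
    httpStatus: dbOk ? 200 : 503,     // what the route should respond with
    uptimeSeconds,
    checks: { database: dbOk },       // add more named checks as they exist
    featureFlags: flags,              // surfaced so hidden sections are auditable
  };
}
```

Returning 503 when a dependency is down lets Render's health checks and any uptime pinger distinguish "asleep" from "broken."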

4. Security

  • Secrets live in GitHub Actions secrets or Render dashboard. READMEs show exactly what to set (REACT_APP_API_BASE_URL, MONGODB_URI, etc.).
  • npm audit + OWASP ZAP baseline workflows run at least weekly. Issues get logged, even if the fix is “upgrade when the maintainer releases a patch.”
  • If a project uses third-party scripts, I document why and add subresource integrity hashes when possible.

5. Cost & honesty

  • Keep a tiny Notion table of monthly spend (Render free tier, Atlas, Netlify). Anything that risks a charge gets an alert.
  • README “Reality Check” section spells out cold starts, missing features, and TODOs so nobody mistakes a demo for production.
  • When something’s down (e.g., Render sleeping), the site tells you up front. No silent failures.
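The "no silent failures" rule reduces to mapping a health-probe result onto user-facing copy. The wording below is my assumption; the probe itself would be a fetch of /healthz with an AbortController timeout, so a timeout usually means a free-tier cold start rather than an outage.

```javascript
// Turn a probe result into banner text (null means show nothing).
function statusBanner({ ok, timedOut }) {
  if (ok) return null; // API is up: no banner
  if (timedOut) {
    return 'The backend is waking up from a free-tier cold start. Give it a minute.';
  }
  return 'The backend is unreachable right now. The UI still loads, but data calls will fail.';
}
```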

Case study: Interactive Pokédex (GitHub Pages)

| Area | What I did | Where it lives |
| --- | --- | --- |
| Accessibility | axe + manual screen reader pass; swap out decorative sprites for role="img" w/ descriptive labels | content/posts/pokedex/index.mdx + repo README |
| Performance | IndexedDB cache for most recent results, localStorage for user preferences, throttled tests recorded in PR | pokedex/README.md |
| Reliability | Netlify analytics for referring traffic + error rates; manual smoke test script in scripts/smoke.sh | repo scripts/ |
| Security | No secrets (public PokeAPI). README includes rate-limit handling tips. | README |
| Honesty | “Reality” block on the project page outlines missing offline mode and limited dataset. | /projects/interactive-pokedex/ |
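The caching split in the Performance row can be sketched as a testable policy: bulky API results go to IndexedDB, tiny preferences to localStorage, and every entry carries a savedAt timestamp so stale data triggers a refetch. The 24-hour TTL and the idbGet helper are assumptions for the sketch.

```javascript
const MAX_AGE_MS = 24 * 60 * 60 * 1000; // assumed 24h TTL

// True only for an entry that exists, has a timestamp, and is within the TTL.
function isFresh(entry, now = Date.now(), maxAge = MAX_AGE_MS) {
  return Boolean(
    entry && typeof entry.savedAt === 'number' && now - entry.savedAt < maxAge
  );
}

// Browser side (sketch): if (!isFresh(await idbGet('pokemon:25'))) refetch();
```

Keeping the policy pure means the smoke test can cover it without spinning up a browser or a fake IndexedDB.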

Other examples

  • Car-Match: Frontend on Pages, backend on Render/Atlas. README and /projects/car-match/ warn about 5-minute cold starts. Health checks + logs in Render dashboard show when the API is up.
  • Triangle Shader Lab: Static Next.js site on GitHub Pages. Observability is a simple log overlay + Netlify analytics—enough for a study project.
  • CheeseMath: Next.js demo + Jest practice repo. Accessibility checklist stored in docs/a11y.md, deploys automated via GitHub Actions → Pages.

What still needs work

  • End-to-end synthetic tests that run after each deployment (Playwright/Checkly). Right now smoke tests are manual.
  • Better logging for front-end demos (would love to send errors to Sentry or a lightweight endpoint).
  • Document cost impacts when I experiment with larger datasets or GPU workloads.
  • Expand the checklist into a real template repo so I stop copy/pasting between projects.
