Context: Early-career developer documenting the test strategy I actually run on my repos (Car-Match, CheeseMath, BasicServerSetup, AWS labs). No production on-call experience yet.
AI assist: ChatGPT helped me reorder notes; every tool listed below is in use today (or clearly labeled “pilot”).
Status: Snapshot, not perfection. Contract testing + accessibility automation still need work.

Reality snapshot

  • Unit/component tests: Jest/Vitest + Testing Library. Run locally (watch mode) and in CI on every PR.
  • Integration tests: Supertest + Dockerized Postgres/Mongo + LocalStack for AWS services. Run on PRs touching backend code.
  • End-to-end: Playwright smoke tests on Netlify deploy previews, Percy (pilot) for Gatsby visual diffs.
  • Contract tests: Pact/OpenAPI schema checks run manually before major refactors—automation coming soon.
  • Observability: Test runs push results to GitHub Checks + Slack notifications. Failures block merges.
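The PR gating above roughly maps to a GitHub Actions workflow like this (a sketch, not my actual file; the `npm run test:*` script names are placeholders, and in practice the integration job lives in a second workflow gated with an `on.pull_request.paths` filter so it only fires on backend changes):

```yaml
# .github/workflows/pr.yml — sketch; script names are placeholders
name: pr-checks
on:
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run test:unit       # Jest/Vitest
      - run: npm run test:component  # Testing Library + Storybook test runner
      # integration suite (Supertest + Dockerized DBs) runs from a separate
      # workflow file with an on.pull_request.paths filter on backend dirs
```

Failures surface as GitHub Checks, which is what blocks the merge.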

Pyramid breakdown

| Layer | Scope | Tools | Cadence | Status |
| --- | --- | --- | --- | --- |
| Unit | Pure functions, hooks | Jest, Vitest | Watch + PR | ✅ |
| Component | React UI, accessibility | Testing Library, Storybook test runner | PR + nightly | ✅ |
| Integration | API + DB, AWS mocks | Supertest, LocalStack, Docker Compose | PRs touching backend | ✅ |
| Contract | API request/response contracts | Pact, OpenAPI validators | Manual before breaking changes | 🧪 Pilot |
| End-to-end | User flows | Playwright, Cypress (legacy), Percy (visual) | Main merges + scheduled | ✅ / 🧪 |
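The Unit row covers pure helpers like the date math mentioned below; a minimal sketch of that kind of function and assertion (`addDays` is a hypothetical example, shown with bare assertions rather than a test runner):

```javascript
// addDays: the kind of pure, deterministic utility the Unit layer targets.
// UTC methods keep the result independent of the machine's timezone.
function addDays(isoDate, n) {
  const d = new Date(isoDate);
  d.setUTCDate(d.getUTCDate() + n); // Date handles month/year rollover
  return d.toISOString().slice(0, 10);
}
```

Month rollover (`'2024-01-30'` + 3 days) is exactly the edge case these tests pin down.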

Tooling details

  • Jest/Vitest: Cover utility modules (date math, data transforms), React hooks, and components. Mocks replaced with MSW where possible.
  • Testing Library: Queries by role/label to ensure accessibility. If a component is hard to test, it’s usually poorly structured.
  • Supertest + Docker Compose: Spins up Express + Postgres containers, seeds data, runs API tests, tears everything down.
  • LocalStack: Emulates S3/DynamoDB/SNS for AWS labs. Lets me test IaC templates without hitting real AWS (saves $$).
  • Playwright: Automates login → CRUD → logout. Runs on Netlify deploy previews so I can review failures before shipping.
  • Percy (pilot): Visual snapshots for this Gatsby site. Still deciding if the cost is worth it.
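The Supertest setup above is roughly this shape of Compose file (a sketch, not my exact config; service names and ports are placeholders). The healthcheck matters: the API tests should only start once Postgres actually accepts connections.

```yaml
# docker-compose.test.yml — sketch; names and ports are placeholders
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: app_test
    ports:
      - "5433:5432"   # non-default host port so it can't collide with a local Postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test"]
      interval: 2s
      retries: 10
  api:
    build: .
    environment:
      DATABASE_URL: postgres://test:test@db:5432/app_test
    depends_on:
      db:
        condition: service_healthy
```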

Reliability practices

  • Each test suite uses isolated data (per-worker DB, temporary DynamoDB tables).
  • Factories generate deterministic fixtures to avoid flaky assertions.
  • Only end-to-end tests have retries (max 2) to handle occasional network hiccups.
  • Monthly “test hygiene” session: delete redundant tests, update snapshots, document new commands.
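The deterministic-factory idea above can be sketched like this (hypothetical names; the point is that IDs come from a counter and timestamps are fixed, so nothing in a fixture depends on randomness or wall-clock time):

```javascript
// Deterministic fixture factory sketch — a per-suite counter instead of
// random IDs, and fixed dates instead of Date.now(), so assertions are stable.
function makeUserFactory() {
  let seq = 0;
  return function buildUser(overrides = {}) {
    seq += 1;
    return {
      id: seq,
      email: `user${seq}@example.test`,
      createdAt: new Date(Date.UTC(2024, 0, seq)).toISOString(), // fixed, never "now"
      ...overrides, // individual tests override only the fields they assert on
    };
  };
}

const buildUser = makeUserFactory();
const alice = buildUser();
const bob = buildUser({ email: 'custom@example.test' });
```

Each worker gets its own factory instance, which pairs with the per-worker DB isolation above.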

Workflow

  1. Pre-commit: Lint + unit tests via Husky.
  2. Pull request: Unit, component, integration suites run in GitHub Actions.
  3. Deploy preview (Netlify/Render): Playwright smoke suite + optional Percy run.
  4. Main merge: Deploy + regression checks. (Nightly contract suite is planned but not automated yet.)
  5. Weekly: Manual contract tests (until automated) + accessibility spot checks.
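The deploy-preview step's retry policy and target URL live in the Playwright config; a sketch of that shape (assuming the preview URL is exported as an env var by the Netlify/Render build — `DEPLOY_PREVIEW_URL` is a placeholder name, not necessarily what the platform provides):

```javascript
// playwright.config.js — sketch of the retry + preview-URL setup
const { defineConfig, devices } = require('@playwright/test');

module.exports = defineConfig({
  retries: 2, // only the e2e layer gets retries (network hiccups); unit tests get none
  use: { baseURL: process.env.DEPLOY_PREVIEW_URL },
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    // known gap: mobile viewports (e.g. devices['iPhone 13']) not wired up yet
  ],
});
```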

Known gaps

  • Contract tests aren’t automated yet. I run them before schema changes, but a nightly job is on the roadmap.
  • Accessibility checks only run manually; I want axe in CI for components/pages.
  • Mobile end-to-end tests are limited—need to add Playwright mobile viewports.
  • Observability for Playwright runs is basic (GitHub logs). Would like better dashboards.

Links

  • Testing templates: https://github.com/BradleyMatera/testing-templates
  • Example suites: Car-Match (tests/), BasicServerSetup (postman/), CheeseMath (__tests__/).
  • Prompt logs + retros: notes/testing-journal.md
