Context: Discord-hosted winter CTF focused on JWT chaining + gRPC transport. Everything happened on my laptop—no production systems touched.
AI assist: ChatGPT walked me through Libsodium quirks, grpcurl flags, and token-claim gotchas whenever I stalled. I stored each prompt/response in the repo for transparency.
Status: Documenting the lab so classmates (and future-me) can reproduce the ladder without assuming I’ve run auth incidents in the wild.
Reality snapshot
- The lab shipped starter artifacts (`PAGE_TOKEN`, `secretbox.md`, proto files). I supplied the glue code, TLS setup, and troubleshooting.
- Every token exchange was scripted; nothing was “click next.” I automated decrypt → verify → request to avoid mis-ordering steps under time pressure.
- The “win” was extracting a final streaming token, not supporting real users. Treat this as a study note, not evidence of production mastery.
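The decrypt → verify → request ordering mentioned above can be sketched as a small pipeline. The function names here are hypothetical stand-ins injected as callables, not the actual `ctf-tools` code:

```python
import time

def run_ladder(decrypt, verify, request, ciphertext):
    """Run one rung of the token ladder in a fixed order.

    decrypt/verify/request are injected callables (hypothetical names),
    so the ordering is enforced in one place instead of by hand.
    """
    steps = []  # timestamped hand-off log, mirroring the lab notes
    token = decrypt(ciphertext)
    steps.append(("decrypt", time.time()))
    claims = verify(token)
    steps.append(("verify", time.time()))
    response = request(token, claims)
    steps.append(("request", time.time()))
    return response, steps
```

Because each step consumes the previous step's output, you cannot fire a request with an unverified token by accident, which was the whole point of scripting it.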
What I set up
Files & scaffolding
- `ctf-tools/ctf_jwt_walkthrough.py` decrypts the Libsodium payload, verifies claims with PyJWT (audience checks disabled for the lab), and logs timestamps for each hand-off.
- `ctf-tools/grpc_client.py` wraps the generated Python stubs so I can swap `AUTH_TOKEN` via flags (`--bootstrap`, `--unary`, `--stream`).
- `notes/terminal-log.md` captures every command, header, and response code so reviewers see exactly what I typed.
- TLS certs from the challenge live in `certs/lab/`. I pinned them in grpcurl with `--cacert` and re-used them in the Python client.
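The `--bootstrap`/`--unary`/`--stream` flag handling in `ctf-tools/grpc_client.py` could look roughly like this argparse sketch. The flag-to-file mapping and function name are assumptions for illustration, not the repo's actual code:

```python
import argparse

# Hypothetical mapping from ladder stage to the token file for that stage.
TOKEN_FILES = {
    "bootstrap": "tokens/page_token.txt",
    "unary": "tokens/connect_unary_token.txt",
    "stream": "tokens/stream_token.txt",
}

def parse_stage(argv=None):
    """Pick exactly one ladder stage; the stage selects which token to load."""
    parser = argparse.ArgumentParser(description="CTF gRPC client (sketch)")
    group = parser.add_mutually_exclusive_group(required=True)
    for stage in TOKEN_FILES:
        group.add_argument(f"--{stage}", action="store_const",
                           const=stage, dest="stage")
    return parser.parse_args(argv).stage
```

Making the stages mutually exclusive means you can never run a call with two tokens in scope at once, which is exactly the mis-ordering the scripts were meant to prevent.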
Toolchain
| Tool | Why I needed it |
|---|---|
| Libsodium / PyNaCl | Replayed the XSalsa20-Poly1305 decrypt (`secretbox_open`) using the provided key + nonce. |
| PyJWT | Verified signatures, printed claims, and disabled `verify_aud` when the lab intentionally left audiences blank. |
| grpcurl | Fast pokes at unary vs streaming methods; helpful for spotting header mistakes. |
| Python gRPC client | Produced more detailed stack traces than grpcurl when HTTP 464s popped up. |
| Wireshark + openssl s_client | Confirmed the TLS handshake + ALPN negotiation; crucial when proxies silently closed streams. |
The ladder, step by step
- Bootstrap decode – jwt.io + PyJWT let me inspect the `PAGE_TOKEN`. Seeing the `kid` claim point at `secretbox.md` confirmed the symmetric-key route.
- Ciphertext decrypt – Base64-decoded the key/nonce/ciphertext, fed them into Libsodium, and got `JWT_TOKEN`. I scripted retries because mistyping Base64 once meant starting over.
- Token trading – Each call to `token.v1.TokenService/GetToken` issued a more narrowly scoped token (`CONNECT_UNARY_TOKEN`, `NA_CL_SECRET_TOKEN`, etc.). I exported whichever token was current to `AUTH_TOKEN` so subsequent commands read it automatically.
- Streaming flag – `StreamToken` refused to cooperate until I normalised every header to lowercase (`authorization`, `te`) and forced `content-type: application/grpc`. Once those matched, the Render-style proxy let the stream through and I grabbed the flag.
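The bootstrap-decode step (inspecting `PAGE_TOKEN` without verifying it) needs nothing beyond the standard library. This sketch mirrors what jwt.io shows, not the exact walkthrough script:

```python
import base64
import json

def inspect_claims(token: str) -> dict:
    """Decode a JWT payload for inspection only -- no signature check.

    JWT segments use unpadded base64url, so padding must be restored
    before decoding.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Running this on the bootstrap token is how the `kid` claim pointing at `secretbox.md` shows up before any decryption happens.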
Troubleshooting log
| Symptom | Root cause | How I fixed it |
|---|---|---|
| HTTP 464, zero server logs | Sent JSON or uppercase headers through the proxy | Forced lowercase metadata + `application/grpc` every time. |
| Token verification failed | Audience check still enabled | Passed `options={"verify_aud": False}` to `jwt.decode`. |
| gRPC metadata missing | Used `Authorization` instead of `authorization` | Lowercased the header; the ALB stripped the uppercase version. |
| TLS handshake reset midstream | Forgot to pin the provided cert | Added `--cacert certs/lab/rootCA.pem` (and the equivalent in Python) before retries. |
| Manual token swaps caused mistakes | Copy/paste fatigue | Wrote `scripts/set-token.sh <token_file>` to export the current value. |
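The lowercase-metadata fix from the table boils down to a tiny normaliser. This is a sketch of the idea, not the exact helper in the repo (real gRPC libraries set `content-type` themselves, but the principle is the same):

```python
def normalize_metadata(headers):
    """Lowercase header keys and pin the gRPC content type.

    HTTP/2 requires lowercase header names, and the proxy in this lab
    returned HTTP 464 when it saw anything else (or a JSON content type).
    """
    meta = [(k.lower(), v) for k, v in headers if k.lower() != "content-type"]
    meta.append(("content-type", "application/grpc"))
    return meta
```

Running every outbound call through one normaliser like this is what turned the intermittent 464s into a non-issue.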
Evidence that I actually did the work
- Scripts & notes: `ctf-tools/`, `notes/terminal-log.md`, and `notes/ai-prompts.md` live in the repo so reviewers can replay every command.
- Packet capture: `captures/streamtoken-success.pcapng` shows the working HTTP/2 exchange (ClientHello → SETTINGS → HEADERS/DATA).
- Gist snippet: https://gist.github.com/BradleyMatera/ctf-jwt-notes (redacted secrets) demonstrates the decrypt + verify loop.
- Prompt log: Lists each ChatGPT conversation that influenced the code so I don’t present AI-generated output as my own insight.
What’s still on the todo list
- Convert the markdown logs into a repeatable workshop (maybe a `README.md` with copy/paste commands).
- Add pytest coverage for the helper scripts and publish them as a `ctf-tools` package when they’re less fragile.
- Explore gRPC-Web + Envoy because most browser clients I touch won’t support native gRPC.
- Build a tiny dashboard showing which token you’re currently holding; right now it’s just environment variables and terminal echoes.
References & further reading