
AI & Automation Engineer

Practicing AI + automation with transparent limits

I haven’t launched AI copilots for paying customers. These prototypes document how I learn with ChatGPT, Copilot, and local LLMs.
Reality snapshot

Current focus

  • Local-first chat experiments (Convo-AI) that run FastAPI + Ollama on my laptop.
  • Website embeds (ProjectHub) that pull from documentation to answer basic questions.
  • Prompt libraries + README honesty logs that show how much AI wrote vs. what I edited.
  • No production integrations, no enterprise telemetry—just learning projects with clear TODOs.
Work samples

Proof on GitHub

Convo-AI

  • FastAPI backend + simple UI for local chat flows.
  • Uses Ollama models and environment variables documented in the repo.
  • Disclosure: AI wrote the first draft of most endpoints; I kept prompts + edits in the README.
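The local chat flow above can be sketched in a few lines. This is not the repo's code, just a minimal stdlib-only illustration of the pattern: a helper shapes the JSON body for Ollama's `/api/generate` endpoint, and a caller POSTs it to a locally running server. The model name `llama3`, the `OLLAMA_HOST` variable, and the function names are assumptions for the sketch, not taken from Convo-AI.

```python
# Minimal sketch of a local chat call in the spirit of Convo-AI.
# Assumptions (not from the repo): model "llama3" and an Ollama
# server reachable via the OLLAMA_HOST environment variable.
import json
import os
import urllib.request

OLLAMA_HOST = os.environ.get("OLLAMA_HOST", "http://localhost:11434")

def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Shape the JSON body Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str) -> str:
    """POST the prompt to the local Ollama server and return its reply."""
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize this page in one sentence."))
```

In the actual repo this sits behind a FastAPI route; keeping the payload builder as a pure function makes the prompt shape easy to test without a running model.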

ProjectHub Copilot

  • Express proxy + lightweight widget that surfaces answers from my own case studies.
  • Currently supports a single route and manual deployments.
  • Backlog items (usage analytics, guardrails) are documented as future work.
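The core idea behind the widget, pulling the best-matching documentation snippet for a question, can be sketched independently of the proxy. The actual repo is Express/Node; to keep one language on this page, here is the retrieval step in Python. The toy corpus, the word-overlap scoring, and every name here are illustrative assumptions, not code from ProjectHub.

```python
# Sketch of the "answer from my own docs" idea behind ProjectHub Copilot.
# Toy keyword-overlap retrieval: return the doc sharing the most words
# with the question. All names and the corpus are illustrative.

def best_snippet(question: str, docs: dict[str, str]) -> str:
    """Return the doc text with the largest word overlap with the question."""
    q_words = set(question.lower().split())

    def overlap(text: str) -> int:
        return len(q_words & set(text.lower().split()))

    return max(docs.values(), key=overlap)

docs = {
    "deploy.md": "Deployments are manual: push to main and run the deploy script.",
    "routes.md": "The widget calls a single proxy route that forwards questions.",
}

print(best_snippet("How do deployments work?", docs))
```

A real version would swap the word-overlap score for embeddings (e.g. a vector store), but the route's contract stays the same: question in, grounded snippet out.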
Tools

What I’m experimenting with

  • Python + FastAPI (learning)
  • Node.js / Express (comfortable for prototypes)
  • LangChain (exploring)
  • Ollama + local LLMs
  • OpenAI / Anthropic APIs
  • Supabase (learning for vector stores)
  • GitHub Actions for small deploys

Every repo labels what’s working vs. what’s aspirational so collaborators know the maturity level.

Help wanted

What I still need to learn

  • Responsible AI guardrails (policy checks, escalation paths) in production environments.
  • Measuring ROI beyond “this feels faster on my laptop.”
  • Scaling prompt orchestration with queues, storage, and audit requirements.
  • Security/privacy reviews for AI features before they reach real users.

If you mentor junior engineers on applied AI/automation, I’d appreciate pairing sessions.