Engineer transitioning from generic backend to AI work
Before
Worked with OpenAI's API to add chatbot features to our product.
After
Designed and shipped a 3-step agentic workflow on Claude Sonnet 4.6 serving ~280K user queries/month at p95 latency 1.6s; built an LLM-as-judge eval harness that gated 6 prompt-iteration deploys (caught 2 regressions before rollout).
Why this works: Names the foundation model and version, gives production scale (monthly query volume and p95 latency), names the eval discipline, and quantifies its effect (regressions caught before rollout).
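For readers unfamiliar with the eval discipline named above, here is a minimal sketch of an LLM-as-judge gate that blocks a prompt-iteration deploy on regression. Everything in it is illustrative: the dataclass, the 1-5 rubric, and the 90% win-rate threshold are assumptions, and the judge is a stub standing in for a real model API call.

```python
# Minimal sketch of an LLM-as-judge deploy gate (illustrative, not the
# author's actual harness). Score candidate prompt outputs against a
# baseline and allow the deploy only if the candidate holds its ground
# on enough eval cases.

from dataclasses import dataclass


@dataclass
class EvalCase:
    query: str
    baseline_output: str
    candidate_output: str


def judge(query: str, output: str) -> int:
    # Stub: in a real harness this is a model API call that applies a
    # grading rubric and returns a 1-5 score. Here, a toy heuristic.
    return 5 if "refund" in output else 3


def gate_deploy(cases: list[EvalCase], min_win_rate: float = 0.9) -> bool:
    """Allow deploy only if the candidate matches or beats the baseline
    on at least min_win_rate of eval cases (a hypothetical threshold)."""
    wins = sum(
        judge(c.query, c.candidate_output) >= judge(c.query, c.baseline_output)
        for c in cases
    )
    return wins / len(cases) >= min_win_rate
```

Running this gate in CI before each prompt change is what "gated N deploys" and "caught regressions before rollout" refer to in a bullet like the one above.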

