What I Learned Building a Multi-Agent Document Analysis System

This is the retrospective for the multi-agent document analysis project. The first posts covered why to use multiple agents, how the specialist agents work, and how the coordinator synthesizes findings. This one covers what worked, what broke, and what I would change. In short: the architecture worked, the coordinator was the most valuable part, and chunking caused the worst failure mode. What worked: The BaseAgent abstraction was enough. I did not need a framework. A simple base class handled the repeated LLM-call logic: model name, system prompt, max tokens, response cleaning, and JSON parsing. ...
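The shared LLM-call plumbing the excerpt describes could look roughly like this. A minimal sketch, not the post's actual code: the class and method names (`BaseAgent`, `run`, `clean`, `parse_json`) and the placeholder `_call_llm` hook are all assumptions, since the full implementation isn't shown here.

```python
import json


class BaseAgent:
    """Base class holding the repeated LLM-call logic; subclasses
    supply the model settings, system prompt, and the actual client call."""

    # Hypothetical defaults; the post does not name specific values.
    model = "model-name-here"
    system_prompt = ""
    max_tokens = 1024

    def _call_llm(self, prompt: str) -> str:
        # Subclasses wire this to a real LLM client.
        raise NotImplementedError

    def run(self, prompt: str) -> dict:
        raw = self._call_llm(prompt)
        return self.parse_json(self.clean(raw))

    @staticmethod
    def clean(text: str) -> str:
        # Strip markdown fences that models sometimes wrap around JSON output.
        text = text.strip()
        if text.startswith("```"):
            text = text.split("\n", 1)[1] if "\n" in text else ""
            text = text.rsplit("```", 1)[0]
        return text.strip()

    @staticmethod
    def parse_json(text: str) -> dict:
        return json.loads(text)
```

A specialist agent then only overrides `_call_llm` and the prompt/model attributes, which is the sense in which the base class was "enough" without a framework.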

April 24, 2026 · 7 min · Tyler

Building Ozark Ridge: Lessons Learned and What I'd Do Differently

This is the final post in the series. The first four covered what I built and how. This one covers what I learned, what I’d do differently, and why this architecture matters beyond the demo. What worked: Archetype-based catalog generation scaled cleanly. Writing 1180 product descriptions by hand would have been infeasible. Generating them one-by-one with Claude would have been slow and inconsistent. The archetype system with variation logic produced realistic, diverse products at scale with no manual writing and consistent quality across the catalog. ...

April 16, 2026 · 9 min · Tyler

What I Learned Building a LangGraph Agent From Scratch

I wanted to understand what it actually takes to build something that makes real decisions. So I built a job research agent using LangGraph: give it a company name, and it autonomously gathers information from multiple sources, evaluates whether it has enough to work with, and loops back if it doesn’t. This post is about what that process taught me about state, nodes, and conditional edges. The Problem With Linear Pipelines A typical “agent” pattern looks like this: ...
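The gather-evaluate-loop-back control flow the excerpt describes can be sketched without the library, to show the shape of the state machine. Everything here is hypothetical (the `ResearchState` fields, the thresholds, the node names); the post's actual LangGraph graph and schema are not shown in this excerpt.

```python
from dataclasses import dataclass, field


@dataclass
class ResearchState:
    """Hypothetical agent state: the company under research plus
    accumulated findings and a retry counter."""
    company: str
    findings: list = field(default_factory=list)
    attempts: int = 0


def gather(state: ResearchState) -> ResearchState:
    # Stand-in node for the real multi-source lookup.
    state.findings.append(f"fact {state.attempts} about {state.company}")
    state.attempts += 1
    return state


def enough_info(state: ResearchState) -> bool:
    # The conditional-edge decision: stop once coverage looks
    # sufficient, or when a retry cap is hit to avoid infinite loops.
    return len(state.findings) >= 3 or state.attempts >= 5


def run(company: str) -> ResearchState:
    state = ResearchState(company)
    while not enough_info(state):  # loop back when evaluation says "not enough"
        state = gather(state)
    return state
```

In LangGraph itself this decision function would typically be attached with `add_conditional_edges`, routing either back to the gather node or to the end of the graph; the plain loop above is the same logic.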

March 30, 2026 · 5 min · Tyler