Building Ozark Ridge: Lessons Learned and What I'd Do Differently

This is the final post in the series. The first four covered what I built and how. This one covers what I learned, what I’d do differently, and why this architecture matters beyond the demo.

What worked

Archetype-based catalog generation scaled cleanly. Writing 1180 product descriptions by hand would have been infeasible. Generating them one by one with Claude would have been slow and inconsistent. The archetype system with variation logic produced realistic, diverse products at scale, with no manual writing and consistent quality across the catalog. ...

April 16, 2026 · 9 min · Tyler

Keyword Search vs Semantic Search: Why Natural Language Queries Need Vector Embeddings

The previous posts covered architecture and data ingestion. This one is about the core value proposition: why semantic search matters and how to demonstrate it. The approach: build both keyword and AI search, run the same queries through each, and document where keyword search fails. The results make the case for semantic search more effectively than any architectural explanation could.

What keyword search actually does

Postgres full-text search works by tokenizing text into lexemes (normalized words), removing stop words, and matching query tokens against indexed documents. It’s fast, deterministic, and has been reliable for decades. ...

April 14, 2026 · 10 min · Tyler

Building AI Search for a Retail Website: The Stack and Why

I built Ozark Ridge, a mock outdoor gear retail site with AI-powered product search and a Rufus-style product assistant. The project exists to demonstrate RAG (Retrieval-Augmented Generation) in a realistic e-commerce context.

This is the first post in a series documenting the build. It covers the architecture and stack decisions; later posts cover the RAG pipeline, the keyword vs semantic search comparison, and building the AI assistant.

What it does

Two features: ...

April 12, 2026 · 7 min · Tyler