I wanted to understand what it actually takes to build something that makes real decisions. So I built a job research agent using LangGraph: give it a company name and it autonomously gathers information from multiple sources, evaluates whether it has enough to work with, and loops back if it doesn’t. This post is about what that process taught me about state, nodes, and conditional edges.
The Problem With Linear Pipelines
A typical “agent” pattern looks like this:
search() → summarize() → format_output()
Each step runs unconditionally. There’s no evaluation, no branching, no ability to decide “I don’t have enough yet, let me try again.” You can dress it up with a streaming UI and call it an agent, but the control flow is hardcoded. It does the same thing every time regardless of what it finds.
Real agentic behavior requires the system to evaluate its own results and decide what to do next. That’s the gap LangGraph is designed to fill.
State: Not Memory, A Typed Contract
The first concept LangGraph forces you to think about is state. Not memory in the vague AI sense. State is a concrete, typed data structure that every node in your graph reads from and writes back to.
Here’s the state definition for this agent:
from dataclasses import dataclass

@dataclass
class AgentState:
    company_name: str
    company_search_results: str = ""
    news_search_results: str = ""
    evaluation: str = ""
    search_attempts: int = 0
    final_briefing: str = ""
This is the baton that gets passed through the entire workflow. Every node receives this exact shape. Every node returns a dict of only the fields it changed. LangGraph merges those changes back into state automatically.
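Conceptually (an analogy, not LangGraph’s actual internals), the merge behaves like dataclasses.replace:

from dataclasses import replace

state = AgentState(company_name="Anthropic")
# A node that returns {"search_attempts": 1} is merged roughly like:
state = replace(state, search_attempts=1)
# Fields the node didn't mention keep their previous values.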
Note search_attempts. It’s a safety brake on the loop-back logic. Without it, an agent that keeps deciding its results are insufficient will run forever and burn through your API credits. Hopefully you set a $20 limit, like I did. Every loop-capable agent needs some kind of counter like this.
The discipline of defining state upfront forces a useful question: what does this agent actually need to know at each step? That question is harder to answer with plain functions, where data tends to get passed around implicitly.
Nodes: Pure Functions That Do One Thing
A node is just a Python function. It takes state as its only argument and returns a dict of updated fields. That’s it.
The search nodes use Tavily, a search API built for LLM agents. Unlike scraping Google or using a general-purpose search API, Tavily returns clean, pre-extracted text content from each result. No HTML parsing, no noise filtering. You get a list of results where each one has a content field ready to drop into a prompt. For an agent that needs to pass search results directly to an LLM, it’s ideal.
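A response looks roughly like this (a sketch from memory, so treat field names other than content as approximate):

{
    "query": "Anthropic company overview",
    "results": [
        {
            "title": "About Anthropic",
            "url": "https://example.com/about",
            "content": "Clean, pre-extracted page text ready for a prompt...",
            "score": 0.98
        }
    ]
}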
import os

from tavily import TavilyClient

tavily = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

def search_company(state: AgentState) -> dict:
    query = f"{state.company_name} company overview"
    results = tavily.search(query, max_results=5)
    # Join the pre-extracted text from each result into one prompt-ready blob
    content = "\n\n".join([r["content"] for r in results["results"]])
    return {
        "company_search_results": content,
        "search_attempts": state.search_attempts + 1
    }
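The search_news node mirrors this one almost exactly. A sketch (the query phrasing is my assumption):

def search_news(state: AgentState) -> dict:
    # Assumed query wording; anything recency-biased works
    query = f"{state.company_name} latest news"
    results = tavily.search(query, max_results=5)
    content = "\n\n".join([r["content"] for r in results["results"]])
    # No attempts increment here: search_company already counts each pass
    return {"news_search_results": content}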
The evaluate node uses the LLM as a decision-maker, not just an output generator:
from anthropic import Anthropic
from anthropic.types import TextBlock

anthropic = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def evaluate_results(state: AgentState) -> dict:
    prompt = f"""
Company Overview:
{state.company_search_results}

Latest News:
{state.news_search_results}

Confirm our data includes company overview, latest news, possible
salary range, technical signals, and interview topics.
Reply 'sufficient' if the results contain what we want.
Reply 'not sufficient' if they don't.
Briefly explain why you made this evaluation.
"""
    response = anthropic.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}]
    )
    # The response content is a list of blocks; grab the first text block
    text_block = next(
        (b for b in response.content if isinstance(b, TextBlock)), None
    )
    return {"evaluation": text_block.text if text_block else ""}
evaluation is a string, not a boolean, for a specific reason: the LLM doesn’t return True or False. It returns reasoning. Keeping the full text means you can debug why it decided what it decided.
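The final node, generate_briefing, is one more LLM call that writes to final_briefing. A sketch along the same lines (the prompt wording here is mine):

def generate_briefing(state: AgentState) -> dict:
    prompt = f"""
Using the research below, write an interview-prep briefing for
{state.company_name}: overview, recent news, technical signals,
and likely interview topics.

Company Overview:
{state.company_search_results}

Latest News:
{state.news_search_results}
"""
    response = anthropic.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}]
    )
    text_block = next(
        (b for b in response.content if isinstance(b, TextBlock)), None
    )
    return {"final_briefing": text_block.text if text_block else ""}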
Conditional Edges: Where the Agent Actually Decides Something
The graph assembly looks mechanical — add nodes, add edges — but the conditional edge is where the agent stops being a pipeline and starts being something that actually reasons about its own state.
def route_evaluation(state: AgentState) -> str:
    verdict = state.evaluation.lower()
    # Check the negative phrase: "not sufficient" contains "sufficient",
    # so a naive positive substring check would never loop back
    if "not sufficient" not in verdict:
        return "generate_briefing"
    elif state.search_attempts >= 3:
        return "generate_briefing"
    else:
        return "search_company"
This function reads state and returns a string: the name of the next node to run. LangGraph uses that string to route execution. Two details are deliberate. The check targets “not sufficient” rather than “sufficient”, because the substring “sufficient” also appears inside “not sufficient”, so a naive positive check would always route to the briefing. And the .lower() call matters because Claude might respond with “Sufficient” or “SUFFICIENT” depending on formatting, and a case-sensitive check would route incorrectly.
Wiring it into the graph:
graph.add_conditional_edges(
    "evaluate_results",
    route_evaluation,
    {
        "generate_briefing": "generate_briefing",
        "search_company": "search_company"
    }
)
The third argument is a mapping of every possible return value to a node name. The node names in the mapping are checked when the graph is compiled, and if route_evaluation ever returns a string that isn’t a key in the mapping, LangGraph raises an error at runtime rather than silently routing to the wrong place.
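Pulling the whole thing together (a sketch; recent LangGraph versions accept a dataclass as the state schema, though TypedDict is the more common choice):

from langgraph.graph import StateGraph, START, END

graph = StateGraph(AgentState)
graph.add_node("search_company", search_company)
graph.add_node("search_news", search_news)
graph.add_node("evaluate_results", evaluate_results)
graph.add_node("generate_briefing", generate_briefing)

graph.add_edge(START, "search_company")
graph.add_edge("search_company", "search_news")
graph.add_edge("search_news", "evaluate_results")
# ... the add_conditional_edges call from above goes here ...
graph.add_edge("generate_briefing", END)

app = graph.compile()
result = app.invoke({"company_name": "Anthropic"})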
The full graph looks like this:
search_company → search_news → evaluate_results
                                       │
                   ┌───────────────────┴───────────────────┐
                   │                                       │
              sufficient                            not sufficient
         (or attempts >= 3)                       (and attempts < 3)
                   │                                       │
                   ▼                                       ▼
          generate_briefing                         search_company
That loop is what makes this agentic. The agent searches, evaluates what it found, and decides whether to keep going or produce output. That decision is made at runtime based on actual results — not hardcoded into the control flow.
When LangGraph Is Overkill
Let’s be honest: for a 4-node agent, you could skip LangGraph entirely. A while loop with a counter does the same thing and has less overhead.
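To make that concrete, here’s a sketch of the hand-rolled equivalent, reusing the node functions from above (the merge helper is hypothetical, standing in for what LangGraph does automatically):

def merge(state: AgentState, updates: dict) -> None:
    # Hand-rolled version of LangGraph's state merge
    for key, value in updates.items():
        setattr(state, key, value)

def run_agent(company_name: str) -> str:
    state = AgentState(company_name=company_name)
    while True:
        merge(state, search_company(state))
        merge(state, search_news(state))
        merge(state, evaluate_results(state))
        if "not sufficient" not in state.evaluation.lower():
            break
        if state.search_attempts >= 3:
            break
    merge(state, generate_briefing(state))
    return state.final_briefing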
LangGraph earns its complexity when:
- You have many nodes with non-trivial branching logic
- You need parallel node execution (LangGraph supports this natively)
- You want built-in observability and the ability to replay or inspect runs
- You’re building human-in-the-loop workflows where a node needs to pause and wait for input (see the sketch below)
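That last case is worth a concrete picture. In LangGraph, pausing comes from compiling with a checkpointer plus an interrupt (a sketch; MemorySaver is the in-memory checkpointer that ships with LangGraph):

from langgraph.checkpoint.memory import MemorySaver

# A checkpointer persists state between steps, which is what lets a run
# pause at an interrupt and resume later
app = graph.compile(
    checkpointer=MemorySaver(),
    interrupt_before=["generate_briefing"]
)
# Execution now stops before generate_briefing; a human can inspect or
# edit the state, then resume the same thread to finish the run.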
For this demo, the value was mostly pedagogical. The graph model forces you to think explicitly about state shape, node responsibilities, and every possible transition, which are the right questions to ask when building any agent regardless of framework.