Understanding the Landscape: Agents vs. Agentic Systems
The phrase “AI agent” has become ubiquitous in today’s tech discourse, appearing in everything from product announcements to academic publications. Yet clarity around what it actually means – and how it differs from “Agentic AI” – remains elusive for many.
Think of an AI agent as a specialized digital professional. It operates within a defined role, processes inputs (such as task instructions), applies reasoning through an AI model, and delivers outputs—whether that’s a recommendation, summary, or executable action. Consider a research agent that distills recent publications into key insights, or a coding agent that suggests function implementations.
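In code, such an agent can be little more than a role-specific prompt wrapped around a model call. Here is a minimal, purely illustrative sketch, assuming a locally running Ollama server and the ollama Python client; the function name and prompt are hypothetical:
# A single "research agent" in a few lines (illustrative sketch; assumes a
# local Ollama server and the `ollama` Python client).
import ollama

def research_agent(topic: str) -> str:
    # Role (system prompt) + input (task) -> reasoning via the model -> output.
    response = ollama.chat(
        model="llama3.1:8b",
        messages=[
            {"role": "system", "content": "You are a research assistant. Distill the topic into key insights."},
            {"role": "user", "content": f"Summarize the key insights on: {topic}"},
        ],
    )
    return response["message"]["content"]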
Agentic AI represents something fundamentally different: an ecosystem of multiple agents working in concert, orchestrated to achieve outcomes beyond what any single agent could accomplish alone. Rather than one worker, picture a team of specialists coordinated by a skilled manager. One agent might handle task planning, another gathers intelligence, a third drafts content, while yet another refines the final output. When intelligently connected, these systems create emergent capabilities that transcend their individual components.
It’s important to recognize that multi-agent systems aren’t new – they’ve existed in computer science for decades. What has changed dramatically is the advent of sophisticated large language models (LLMs), which have transformed these systems from rigid, narrowly-scoped tools into flexible, adaptive platforms accessible to a much broader audience. With LLMs, agentic systems can process natural language tasks, dynamically integrate new capabilities, and collaborate in remarkably intuitive ways.
Why Businesses Should Care: The Strategic Advantages
The practical benefits of Agentic AI translate directly into business value across several dimensions.
- Autonomy means these systems can decompose high-level objectives into actionable subtasks and execute them without constant human supervision. This reduces the need for micromanagement and enables AI to deliver value continuously in the background, freeing your team to focus on strategic initiatives.
- Composability offers unprecedented flexibility. Once you’ve established a framework, you can plug in or swap out specialized agents based on evolving needs. Today’s “coffee industry analyst” agent becomes tomorrow’s “regulatory compliance checker” with minimal friction.
- Privacy-first execution becomes achievable when running models locally, as demonstrated in our proof of concept. Sensitive data never leaves your controlled environment, supporting both privacy requirements and regulatory compliance while reducing dependence on external API providers.
- Predictable economics emerge from using open-weight models locally, eliminating per-call API fees. For organizations processing high volumes of tasks, this architectural choice can yield substantial cost advantages over time.
To put it in visionary terms: imagine an automated market research assistant that takes a single question like “What are the key sustainability trends in packaging for 2025?” and then:
- Plans the research steps,
- Collects and summarizes articles,
- Drafts a report,
- And presents it in a structured, shareable format.
That’s the power of agentic systems – chaining together different capabilities to deliver results that a single agent could not.

Our Experimental Journey: Building the Proof of Concept
We developed our own Agentic AI proof of concept to translate these theoretical concepts into tangible workflows. The objective wasn’t to ship production software, but rather to create an experimental environment where we could test different orchestration paradigms.
A foundational architectural decision was running everything locally using Ollama with the llama3.1:8b model, ensuring data sovereignty and cost predictability.
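The code snippets below omit some shared scaffolding for brevity: the state object passed between agents and the local model client. The following is a minimal sketch of what that scaffolding could look like; the field names, prompts, and the ChatOllama wrapper are assumptions for illustration, not the exact PoC code.
# ---- Shared scaffolding (illustrative sketch, not the exact PoC code) ----
from typing import TypedDict
from langchain_ollama import ChatOllama

class GraphState(TypedDict, total=False):
    goal: str      # the user's high-level objective
    plan: str      # subtasks produced by the planner
    research: str  # findings produced by the researcher
    report: str    # final Markdown produced by the coder

# Local model: data stays on the machine and there are no per-call API fees.
llm = ChatOllama(model="llama3.1:8b", temperature=0)

def planner_node(state: GraphState) -> GraphState:
    # The researcher and coder nodes follow the same pattern with their own prompts.
    plan = llm.invoke(f"Break this goal into clear subtasks:\n{state['goal']}").content
    return {**state, "plan": plan}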
Version 1: Hardcoded Pipeline
Our initial implementation established a straightforward sequence: planner → researcher → coder.
- The planner breaks the goal into subtasks.
- The researcher writes short paragraphs addressing each subtask.
- The coder compiles everything into a nicely formatted Markdown report.
This minimal approach validated the core concept: chaining specialized agents can transform vague prompts into structured outputs.
# ---- Orchestration (simple imperative pipeline) ----
def run_pipeline(initial: GraphState) -> GraphState:
    state = planner_node(initial)
    state = researcher_node(state)
    state = coder_node(state)
    return state
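Running the pipeline is then a single function call. With the state fields sketched earlier, an invocation could look like this:
# Example invocation (uses the GraphState fields assumed in the sketch above).
final_state = run_pipeline({"goal": "What are the key sustainability trends in packaging for 2025?"})
print(final_state["report"])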
Version 2: StateGraph (Explicit Flow Architecture)
We then reconstructed the same sequence using LangGraph’s StateGraph framework (part of the LangChain ecosystem), which renders the workflow as an explicit graph structure. While the execution path remained linear, this foundation enabled more sophisticated branching and conditional logic in subsequent iterations.
This evolution resembles moving from a rigid assembly line to a modular workflow system where new processing steps can be easily introduced.
# ---- Build the LangGraph ----
from langgraph.graph import StateGraph, END

workflow = StateGraph(GraphState)
workflow.add_node("planner", planner_node)
workflow.add_node("researcher", researcher_node)
workflow.add_node("coder", coder_node)
workflow.set_entry_point("planner")
workflow.add_edge("planner", "researcher")
workflow.add_edge("researcher", "coder")
workflow.add_edge("coder", END)
app_graph = workflow.compile()
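Invoking the compiled graph is just as simple; the compiled LangGraph application exposes the standard invoke method:
# Example invocation of the compiled graph (same assumed state fields as before).
result = app_graph.invoke({"goal": "What are the key sustainability trends in packaging for 2025?"})
print(result["report"])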
Version 3: Conditional Logic and Contextual Expertise
The third iteration introduced context-aware branching: plan → research → [coffee expert] → code. Here, a specialized coffee industry agent activates only when the user’s objective references coffee-related topics.
This seemingly modest enhancement demonstrated how agentic systems can become contextually intelligent, dynamically engaging domain expertise only when relevant to the task at hand.
# ---- Build the LangGraph ----
workflow = StateGraph(GraphState)
workflow.add_node("planner", planner_node)
workflow.add_node("researcher", researcher_node)
workflow.add_node("coffee_expert", coffee_expert_node)
workflow.add_node("coder", coder_node)
workflow.set_entry_point("planner")
workflow.add_edge("planner", "researcher")
# Conditional branch after research
workflow.add_conditional_edges(
    "researcher",
    coffee_router_llm,
    {"coffee_expert": "coffee_expert", "skip_coffee": "coder"},
)
# Continue after optional agent
workflow.add_edge("coffee_expert", "coder")
workflow.add_edge("coder", END)
app_graph = workflow.compile()
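The coffee_router_llm function referenced above is where the branching decision lives: it inspects the state and returns one of the keys from the mapping (“coffee_expert” or “skip_coffee”). A simplified sketch of such a router, reusing the llm client assumed earlier (not the PoC’s exact prompt or logic):
# Sketch of an LLM-based router (illustrative; not the exact PoC implementation).
# It must return one of the keys declared in add_conditional_edges above.
def coffee_router_llm(state: GraphState) -> str:
    verdict = llm.invoke(
        "Answer strictly 'yes' or 'no': is the following goal related to the coffee industry?\n"
        f"{state['goal']}"
    ).content.strip().lower()
    return "coffee_expert" if verdict.startswith("yes") else "skip_coffee"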
Version 4: Supervisor-Driven Self-Orchestration
Our final iteration introduced a Supervisor agent that makes dynamic routing decisions. Rather than following predetermined paths, the Supervisor evaluates both the objective and current progress state, then routes work to the next appropriate specialist—or concludes the process when output quality meets requirements.
This approach delivers greater autonomy but introduces controlled variability: outcomes may differ across runs, and the Supervisor might route through agents multiple times. Nevertheless, it demonstrates the potential for self-adapting systems that respond to task characteristics rather than merely executing predetermined scripts.
# -----------------------------
# Build the graph
# -----------------------------
workflow = StateGraph(GraphState)
# Nodes: the four specialists + the supervisor node itself
workflow.add_node("planner", AGENTS["planner"])
workflow.add_node("researcher", AGENTS["researcher"])
workflow.add_node("coffee_expert", AGENTS["coffee_expert"])
workflow.add_node("coder", AGENTS["coder"])
# The supervisor node itself is a pass-through; the real routing happens
# in the conditional edges below.
def supervisor_node(state: GraphState) -> GraphState:
    # No mutation here; we route via conditional edges.
    return state

workflow.add_node("supervisor", supervisor_node)
# Entry point is the supervisor (it picks the very first agent)
workflow.set_entry_point("supervisor")
# From supervisor → one of the agents or finish
workflow.add_conditional_edges(
    "supervisor",
    supervisor_router,
    {
        "planner": "planner",
        "researcher": "researcher",
        "coffee_expert": "coffee_expert",
        "coder": "coder",
        "finish": END,
    },
)
# After ANY agent finishes, go back to supervisor to decide the next step
for name in ["planner", "researcher", "coffee_expert", "coder"]:
    workflow.add_edge(name, "supervisor")
app_graph = workflow.compile()
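The routing logic itself sits in supervisor_router, which inspects the goal and the current progress and returns the name of the next node, or “finish” to end the run. A hedged sketch of how such a router could be written, again reusing the assumed llm client (the prompt and termination rule are illustrative, not the PoC’s exact logic):
# Sketch of a supervisor router (illustrative; the prompt and termination rule
# are assumptions, not the exact PoC logic).
VALID_ROUTES = {"planner", "researcher", "coffee_expert", "coder", "finish"}

def supervisor_router(state: GraphState) -> str:
    decision = llm.invoke(
        "You supervise four agents: planner, researcher, coffee_expert, coder.\n"
        "Given the goal and progress below, reply with exactly one word: the next\n"
        "agent to run, or 'finish' if the report is complete and good enough.\n"
        f"Goal: {state.get('goal', '')}\n"
        f"Plan: {state.get('plan', '')}\n"
        f"Research: {state.get('research', '')}\n"
        f"Report: {state.get('report', '')}"
    ).content.strip().lower()
    return decision if decision in VALID_ROUTES else "finish"
Because the model makes the call, the path can differ from run to run – exactly the controlled variability described above.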
Explore the Code Yourself
The full source code for all four versions is available in our Bitbucket repository:
👉 https://bitbucket.org/n47/agentic-ai-poc
We invite you to experiment with the examples locally and observe how orchestration evolves from simple pipelines to increasingly autonomous configurations.
From Exploration to Application
Our Agentic AI PoC is not a finished product. It is an exploration — a way to understand how multiple AI agents can collaborate to deliver richer results than a single model call.
Even in these small examples, the business value is visible:
- Reports compiled from a single input prompt,
- Optional expertise added automatically when relevant,
- And orchestration that adapts without manual intervention.
Now, imagine applying this approach to real business challenges:
- Automating internal research and reporting,
- Building knowledge assistants that integrate with company data,
- Or creating domain-specific agent teams for tasks like compliance, marketing, or customer insights.
We’d love to hear your thoughts. 👉 Leave a comment below, or get in touch with us if you see a use case in your organization.
Together, we can explore how Agentic AI moves from experimentation into practical business impact.
The Path Forward: Model Context Protocol
While our proof of concept concentrated on agent orchestration mechanics, we intentionally maintained simplicity by not connecting agents to external tools or live data sources. Looking ahead, the emerging Model Context Protocol (MCP) represents a promising evolution. MCP aims to standardize how AI agents interface with APIs, databases, and enterprise systems – enhancing interoperability and extensibility. As this ecosystem matures, we view MCP as a logical next step that can make agentic AI not just more capable, but also more practical for genuine enterprise deployment scenarios.




