
Tool calling and agentic workflows
Agentic systems and workflows are a hot topic in the LLM field because they take automation with LLMs to the next level. One of the factors that makes a system or workflow agentic is the ability of the agent/LLM to use external tools, also known as function calling. Think of examples like connecting an agent to your smart home so you can interact with it using natural language (voice) prompts.
🤖 New to LLM agents?
Read our Introduction to LLM agents for software development.
Tool calling is the core building block because it lets the LLM or agent interact with any number of external systems (e.g. a calculator, a weather service, the current time, productivity apps). In other words, it makes the AI “actually do stuff” rather than just “talk about stuff”. Using tools is difficult for LLMs because of differing vendor-specific formats: there is no universal standard for formatting these function call instructions, so you need to hardwire the connection using prompts or rely on different vendor APIs. For more information on how to connect LLMs and tools, check out our tool calling introduction post. The essential concept behind tool calling is that the model can ask the Agent Scaffold (the system surrounding the LLM) to perform an external function call on its behalf. New standards like MCP then aim to abstract the tool call itself, so the scaffold can connect to any tool that supports MCP.
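As a sketch of that idea, here is a hypothetical scaffold that keeps a registry of plain Python functions and executes whichever one the model names. The tool names, registry, and request format are illustrative inventions, not any vendor's actual API:

```python
# Minimal sketch of an agent scaffold executing a tool call on the model's
# behalf. Tool names, registry, and request shape are hypothetical.

def get_time() -> str:
    """Pretend 'current time' tool."""
    return "12:00"

def add(a: float, b: float) -> float:
    """Pretend calculator tool."""
    return a + b

TOOLS = {"get_time": get_time, "add": add}

def execute_tool_call(call: dict):
    """The scaffold looks up the tool the model asked for and runs it."""
    func = TOOLS[call["name"]]
    return func(**call.get("arguments", {}))

# A model with tool-calling support would emit something like this:
result = execute_tool_call({"name": "add", "arguments": {"a": 2, "b": 3}})
print(result)  # 5
```

The scaffold, not the model, performs the side effect; the model only produces the structured request.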
What is the Model Context Protocol (MCP)?
To quote the Model Context Protocol documentation, MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications.
MCP is useful if you want to build complex agentic workflows using LLMs while retaining maximum flexibility over which external sources and tools the agent can access. It’s a way for LLM-powered applications to connect to data and tools. For example, through MCP, the LLM-powered system can query databases, write files, or call APIs. MCP provides a standardized, flexible, and scalable way of enabling interactions between LLM systems and external functions. The Model Context Protocol enables the automation of tasks using LLMs, and should help provide more accurate responses to user queries (e.g. instead of generating a text response to a mathematical problem, the LLM-powered application may decide to call a code interpreter function to produce an accurate result).
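To make the last example concrete, here is a toy stand-in for such a code interpreter tool: a small arithmetic evaluator built on Python's `ast` module instead of generating the answer as text. The evaluator is a hypothetical illustration, not part of MCP:

```python
# Toy stand-in for a 'code interpreter' tool: evaluate basic arithmetic
# exactly instead of letting the model guess the answer as text.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("12 * (3 + 4)"))  # 84
```

A tool like this always returns the exact result, which the LLM can then wrap in a natural-language answer.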
One useful thing about using MCP is that you can easily extend the list of available services (tools), and with dynamic tool discovery the LLM can gain access to the latest tools on the fly. And finally, with new MCP servers becoming available for more and more things, extending the capabilities of any AI agent has never been easier. Want an AI agent that can interact with music production software or your Minecraft server? All possible now thanks to MCP.
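Dynamic discovery works because a client can ask a server what it currently offers. The message shapes below follow the JSON-RPC `tools/list` exchange described in the MCP specification, though the concrete `get_weather` tool is an invented example:

```python
import json

# A client asks the server which tools it currently exposes
# (JSON-RPC 2.0; method name per the MCP specification).
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A server might answer with tool descriptions like this invented one:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Look up the current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

print(json.dumps(request))
```

Because the client can repeat this query at any time, newly added server tools become visible without redeploying the agent.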
The architecture of MCP
MCP defines a client-server architecture that enables the host application to connect to multiple servers:
- MCP Host: The environment (program, agent application, or other LLM-based system) that the LLM runs in. The MCP Host needs to access the data and tools provided by the MCP Server via the MCP Client.
- MCP Client: This client manages the communication between an MCP Server and the host application, translating model requests into standardized protocol calls.
- MCP Server: A program that exposes data sources and various capabilities (functions) through the standardized Model Context Protocol, making them accessible over a transport such as stdio or HTTP.
The MCP Server can access local files, databases, and services, or connect to remote (e.g. cloud) systems over the internet via APIs.
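Reduced to its core, a server's job is to map incoming `tools/call`-style requests onto whatever local or remote logic implements them. Here is a minimal in-process sketch of that dispatch; real servers would use the official SDKs, which handle transports and schema validation:

```python
# Minimal sketch of the server side: dispatch a 'tools/call' style request
# to a handler. The read_file tool is a stub standing in for real logic.

def read_file(path: str) -> str:
    """Example of a 'local resource' a server might expose (stubbed)."""
    return f"<contents of {path}>"

HANDLERS = {"read_file": read_file}

def handle_request(req: dict) -> dict:
    if req.get("method") != "tools/call":
        return {"error": "unsupported method"}
    params = req["params"]
    handler = HANDLERS.get(params["name"])
    if handler is None:
        return {"error": f"unknown tool: {params['name']}"}
    return {"result": handler(**params["arguments"])}

print(handle_request({
    "method": "tools/call",
    "params": {"name": "read_file", "arguments": {"path": "notes.txt"}},
}))
```

Swapping the stubbed handler for a database query or an outbound API call is what turns this into a "local" or "remote" server, respectively.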
Remember that connecting the MCP Client to the LLM is not handled by the protocol itself; translating the available resources into native function calls for the LLM is still necessary. MCP is marketed as the “USB-C” for AI agents, but you have to think of any LLM as the processor within a USB-C-enabled device: going from the USB-C port to the actual chip on the mainboard is again a custom solution.
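That missing glue often amounts to a small conversion step. As one illustration, the sketch below maps an MCP-style tool description (name / description / `inputSchema`) onto the OpenAI-style Chat Completions `tools` layout; other vendors need their own mapping, and the `get_weather` tool is invented:

```python
# Sketch of the glue the protocol does not cover: translating an MCP tool
# description into one vendor's native function-calling format (here, the
# OpenAI-style Chat Completions "tools" layout).

def mcp_tool_to_openai(tool: dict) -> dict:
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool.get("inputSchema", {"type": "object"}),
        },
    }

mcp_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city",
    "inputSchema": {"type": "object",
                    "properties": {"city": {"type": "string"}}},
}
print(mcp_tool_to_openai(mcp_tool)["function"]["name"])  # get_weather
```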
The MCP workflow
The workflow proposed by the Model Context Protocol, illustrated above, is fairly simple:
- The LLM is initialized with the tools available via the MCP Client
- It identifies the appropriate tool to call based on the user’s input (prompt)
- The LLM gathers all the parameters necessary for using that tool
- It provides all the information in a structured JSON format that will be required to call the external function
- The tool is called via MCP and does whatever logic is implemented
- The result of the tool call is returned to the LLM
- The LLM decides how to act further based on the result of the tool call
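Stripped of transport details, the steps above amount to a loop like the one below. The "model" is a scripted stub that first emits a structured tool call and then turns the result into a final answer; all names and message shapes are hypothetical glue code:

```python
# Toy version of the MCP workflow loop. The 'model' is a scripted stub:
# first turn emits a tool call, second turn summarizes the tool result.

def fake_model(messages):
    last = messages[-1]
    if last["role"] == "user":
        return {"tool_call": {"name": "add", "arguments": {"a": 19, "b": 23}}}
    return {"answer": f"The sum is {last['content']}."}

def call_tool(call):  # stands in for the MCP client/server round trip
    tools = {"add": lambda a, b: a + b}
    return tools[call["name"]](**call["arguments"])

def run(prompt):
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_model(messages)
        if "tool_call" in reply:                    # the model asks for a tool
            result = call_tool(reply["tool_call"])  # the tool runs its logic
            messages.append({"role": "tool", "content": result})  # result fed back
        else:
            return reply["answer"]                  # the model answers the user

print(run("What is 19 + 23?"))  # The sum is 42.
```

In a real system the loop is the same; only `fake_model` becomes an actual LLM call and `call_tool` goes through the MCP client to a server.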
Why use MCP?
Using MCP with already existing servers and clients is a no-brainer; there is simply no better alternative. Compared to the traditional approach of manually developing a custom integration for a more complex use case, MCP reduces the development overhead of connecting LLM-based systems to tools. It provides a clean, standardized interface, better code readability, and better maintainability.
MCP was designed to minimize latency in function calls and to use resources efficiently, making it a scalable solution. Using MCP also gives you flexibility in designing modular agents.
How to get started with MCP?
As an end-user:
- Pick a supported MCP Host / Client
- Connect to a supported MCP Server
- Enjoy!
As a developer:
- If you want your agent / LLM to be able to use the various tools provided by the MCP servers out there, you need to write an MCP client. Check out the docs on how to get started.
- If you want your tool / resource to be usable by the various agents / LLMs out there, you need to write an MCP server. Check out the docs on how to get started.
- Or check out the MCP roadmap to get a glimpse of what is to come.
Conclusion
While MCP is definitely a huge step in the right direction, it comes with its own flaws. Developers have already raised concerns about the lack of security, both in the absence of transport encryption and in the lack of explicit human-in-the-loop approval mechanisms. It will be interesting to see whether the MCP standard manages to address these criticisms in future revisions.