Tool use: giving the AI hands
An LLM by itself can only talk. Tool use is what lets it actually do things: check the weather, run real math, act on your behalf.
The problem with a pure LLM
An LLM, all by itself, is a brilliant talker locked in a soundproof room. It can write a sonnet. It can’t tell you today’s weather. It can’t multiply 47 × 312 reliably. It can’t open your email.
That’s because it has no way to reach anything. It just produces words.
Enter “tools”
The fix is simple. We hand the LLM a menu of little programs (called tools or functions) it can ask us to run. Each tool has:
- A name (like get_weather or calculator).
- A short description of what it does.
- A list of inputs it needs.
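Concretely, a tool definition is just structured data the model reads. Here's what a weather tool's "menu entry" might look like. The exact field names vary by provider (Claude, GPT, and Gemini each have slight variations); this sketch follows the common JSON Schema style:

```python
# A hypothetical tool definition in the common JSON Schema style.
# Field names are illustrative; real providers differ slightly.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "City name, e.g. 'Tokyo'",
            },
        },
        "required": ["city"],
    },
}
```

You send a list of these definitions along with the user's message; the model never sees your actual code, only these descriptions.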
When the model decides a tool would help, it doesn’t run it. It just asks, with a structured request like get_weather(city="Tokyo"). Your code runs the tool, sends the answer back, and the model continues with that new information.
This back-and-forth has two common names you’ll hear: tool use and function calling. Same thing.
See the loop
Here's one round trip, step by step: what the model decided, what the tool returned, and how the final answer was built.
- 1. You ask: "What's the weather in Tokyo right now?"
- 2. The model decides a tool would help and requests get_weather(city="Tokyo").
- 3. Your code runs the tool and sends back the result, say 18°C and clear.
- 4. The model reads that result and writes the final answer: "It's 18°C and clear in Tokyo."
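That loop can be sketched in a few lines of Python. Everything here is illustrative: call_model is a stand-in for whatever LLM API you'd actually use, and get_weather is a stub that returns canned data instead of hitting a real weather service.

```python
# Illustrative sketch of the tool-use loop. call_model stands in
# for a real LLM API; get_weather is a stub with canned data.

def get_weather(city):
    # A real tool would call a weather API.
    return {"city": city, "temp_c": 18, "conditions": "clear"}

TOOLS = {"get_weather": get_weather}

def call_model(messages):
    # Fake model: if it hasn't seen a tool result yet, ask for one;
    # otherwise, write the final answer from the result.
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if not tool_msgs:
        return {"tool_call": {"name": "get_weather",
                              "args": {"city": "Tokyo"}}}
    r = tool_msgs[0]["content"]
    return {"text": f"It's {r['temp_c']}°C and "
                    f"{r['conditions']} in {r['city']}."}

def run(user_question):
    messages = [{"role": "user", "content": user_question}]
    while True:
        reply = call_model(messages)
        if "tool_call" in reply:
            call = reply["tool_call"]
            # Key point: YOUR code runs the tool, not the model.
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["text"]

print(run("What's the weather in Tokyo right now?"))
```

The while loop is the whole trick: keep calling the model, run whatever tools it asks for, and stop when it produces plain text instead of a tool request.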
What changed in 2026
Tool use exploded once models got really good at picking the right tool with the right inputs. Three highlights:
- Parallel calls. Today’s Claude, GPT, and Gemini models can ask for several tools at once. If you say “what’s the weather in London and New York?”, the model fires two weather lookups side-by-side instead of one after the other.
- MCP, the “USB-C for AI.” Model Context Protocol is an open standard that lets any AI plug into any tool without custom code. Anthropic started it; OpenAI, Google, and many others now support it. There are already thousands of free MCP “plugins” for things like Google Drive, GitHub, and databases.
- Tool search. Some models can now search a giant catalog of tools to find the right one for the job, instead of being handed a tiny fixed menu.
Why it matters
Tool use is the bridge between “AI that talks” and “AI that gets things done.” Almost every flashy AI app you’ve seen in 2026, from coding assistants to research helpers to shopping bots, is really just an LLM with the right set of tools wired up.
Quick check
- 1. What does 'tool use' let an LLM do?
- 2. Who actually runs the tool?
- 3. What is MCP?