Comment by simonw
17 hours ago
I'm very excited about tool use for LLMs at the moment.
The trick isn't new - I first encountered it with the ReAct paper two years ago - https://til.simonwillison.net/llms/python-react-pattern - and it's since been used for ChatGPT plugins, and recently for MCP, and all of the models have been trained with tool use / function calls in mind.
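The core of the ReAct pattern is a simple loop: the model emits an "Action" line, a harness parses it, runs the matching function, and feeds the result back as an "Observation" for the next turn. Here's a toy sketch of that dispatch step - the tool name and `Action:` format are illustrative (loosely following the TIL post above), not any library's actual API:

```python
import re

def search_wikipedia(query: str) -> str:
    # Stub standing in for a real tool call.
    return f"Results for {query!r}"

# Registry mapping tool names the model can request to Python functions.
TOOLS = {"search_wikipedia": search_wikipedia}

def handle(model_output: str) -> str:
    # Look for an "Action: tool_name: argument" line in the model's output.
    match = re.search(r"Action: (\w+): (.+)", model_output)
    if not match:
        return model_output  # no tool requested - treat as the final answer
    tool, arg = match.groups()
    # Execute the tool and wrap the result as an Observation to feed back.
    return "Observation: " + TOOLS[tool](arg)

print(handle("Action: search_wikipedia: ReAct pattern"))
# Observation: Results for 'ReAct pattern'
```

In a real loop the Observation string would be appended to the conversation and the model called again, repeating until it stops requesting actions.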
What's interesting today is how GOOD the models have got at it. o3/o4-mini's amazing search performance is all down to tool calling. Even Qwen3 4B (2.6GB from Ollama, runs happily on my Mac) can do tool calling reasonably well now.
I gave a workshop at PyCon US yesterday about building software on top of LLMs - https://simonwillison.net/2025/May/15/building-on-llms/ - and used that as an excuse to finally add tool usage to an alpha version of my LLM command-line tool. Here's the section of the workshop that covered that:
https://building-with-llms-pycon-2025.readthedocs.io/en/late...
My LLM package can now reliably count the Rs in strawberry as a shell one-liner:
llm --functions '
def count_char_in_string(char: str, string: str) -> int:
    """Count the number of times a character appears in a string."""
    return string.lower().count(char.lower())
' 'Count the number of Rs in the word strawberry' --td
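For what it's worth, the function passed via --functions is ordinary Python - run standalone it gives the answer the model reports back:

```python
def count_char_in_string(char: str, string: str) -> int:
    """Count the number of times a character appears in a string."""
    return string.lower().count(char.lower())

print(count_char_in_string("r", "strawberry"))  # 3
```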
I love the odd combination of silliness and power in this.
> Was the workshop recorded?
No video or audio, just my handouts.