Getting language models to use external tools (a.k.a. function calls)
NOTE! This notebook has not yet been updated to use the new ``v0.1+`` pythonic syntax. We are still working through the notebooks (PRs are welcome).
Note that this notebook is a work in progress and is not yet in its final article-style form, but it is already a working introduction to tool use and function calling in guidance.
This notebook demonstrates how to get LLMs to call external tools when needed. Tool use can be implemented in many ways, because there are many possible ways to design prompts that produce outputs which can be parsed to trigger external tool calls. You can create and parse all this syntax yourself, but Guidance also has special support commands for this that align with both the way LLMs are actually executed and with popular APIs like OpenAI's. Using this syntax also helps ensure your prompts align with any fine-tuning the LLM may have undergone for tool use (assuming the corresponding LLM object in Guidance has support built in).
OpenAI Chat Models
Tool use in Guidance is designed to align with how the model actually processes the text you give it. This means you provide the actual function definition text the model sees, and you watch for the text the model generates when it wants to make a function call. While the OpenAI Chat API abstracts away all these details, Guidance re-exposes them so you can interact with OpenAI models in the same way you would interact with any other model.
So in the examples below you will see text going into the model's system prompt, and function calls coming out of the model, as though you were watching the raw model output inside the assistant role. But behind the scenes the guidance.llms.OpenAI class translates this text into the corresponding API calls. Note that the text Guidance puts into the system prompt follows the TypeScript format that ChatGPT claims to expect on the backend, so you are seeing what things look like to the LLM itself (we just asked ChatGPT what it expects in order to get this format).
Define the tool(s) we want to use
Here we use the same mock tool that is used in the OpenAI docs, a mock weather service function.
[1]:
# define a tool we would like the model to use
import json

def get_current_weather(location, unit="fahrenheit"):
    """ Get the current weather in a given location.

    Parameters
    ----------
    location : string
        The city and state, e.g. San Francisco, CA
    unit : "celsius" or "fahrenheit"
    """
    weather_info = {
        "location": location,
        "temperature": "71",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)
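Since the tool just returns a JSON string, we can sanity-check it directly before wiring it into a program:
[ ]:
# quick check of the mock tool's output
print(get_current_weather("New York City"))
# {"location": "New York City", "temperature": "71", "unit": "fahrenheit", "forecast": ["sunny", "windy"]}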
Define a guidance program that uses the tool(s)
To get the LLM to use tools when it needs them, you first specify which tools it can use, and then you watch for when the LLM wants to use a tool.
Function definition: There are many ways you can tell the LLM about the functions it can use, but in Guidance by convention we use the tool_def partial to write this definition. Each LLM object defines tool_def so that if your model was fine-tuned for tool use you know you will align with how the model was trained. The tool_def program is (normally) defined by the LLM running your program, and it converts a list of function definitions into a format that the model understands (where the function definitions are the same as the OpenAI API expects). For OpenAI models the text generated by tool_def is a set of TypeScript type definitions and belongs at the end of the system message. Note: Any variables not found in the current program scope fall back to the LLM object, so writing ``tool_def`` falls back to ``llm.tool_def`` unless you have explicitly defined a custom ``tool_def`` variable in the current program.
Call detection: Calls to functions by the LLM can be detected manually by setting the stop or stop_regex parameters of the gen command to something that signifies the LLM is making a function call. But a cleaner way is to use the function_call="auto" parameter. This gets passed directly to the LLM object so that it can set the appropriate stop_regex parameter or API parameter (to change how this works you can override the function_call_stop or function_call_stop_regex variables). There is also an extract_function_call variable that allows you to extract a callable object from the text returned by gen calls. Rather than calling this manually, you can also treat the returned text just like a function and Guidance will use the extract_function_call command in the background, so calling the string will result in calling the tool call embedded in that string. This makes it easy to work with tool call outputs in the same way you work with other outputs from the LLM.
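As a rough sketch (using the executed_program object produced by the full example further below), working with a gen result that may contain a tool call looks like this:
[ ]:
# sketch: inspect a gen result that may embed a tool call
answer = executed_program["answer"]
if callable(answer):   # True when extract_function_call finds a tool call
    result = answer()  # runs the embedded call (get_current_weather here)
else:
    print(answer)      # otherwise it is just the model's text answer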
In summary, there are four special variables and one gen argument used to implement tool use in Guidance. All of them have default implementations defined by the LLM object, but you can override them to change how tool use works:
tool_def: A guidance program that defines the tool(s) that the LLM can use; it looks for a functions variable that has function definitions in the OpenAI dictionary-style function definition syntax.
function_call: This parameter of the gen command is passed directly to the LLM object to tell it if it should generate function calls.
extract_function_call: A function that takes the text returned by the LLM and extracts a callable object from it.
function_call_stop: A string used to detect when the LLM is making a function call.
function_call_stop_regex: A regex used to detect when the LLM is making a function call.
Below is an example that puts all this together for the OpenAI Chat API:
[2]:
import guidance
# define the chat model we want to use (must be a recent model supporting function calls)
guidance.llm = guidance.llms.OpenAI("gpt-3.5-turbo-0613", caching=False)
# define a guidance program that uses tools
program = guidance("""
{{~#system~}}
You are a helpful assistant.
{{>tool_def functions=functions}}
{{~/system~}}
{{~#user~}}
Get the current weather in New York City.
{{~/user~}}
{{~#each range(10)~}}
{{~#assistant~}}
{{gen 'answer' max_tokens=50 function_call="auto"}}
{{~/assistant~}}
{{#if not callable(answer)}}{{break}}{{/if}}
{{~#function name=answer.__name__~}}
{{answer()}}
{{~/function~}}
{{~/each~}}""")
Note that in the program above we make a maximum of 10 consecutive function calls. Once the model generates text that does not include a function call, we break out of the loop and leave the text of that final answer in the answer variable.
Calling the guidance program
To call the program above we need to pass in a function definition and the actual function to call.
[3]:
# call the program, passing in the function definition we want to use as JSON
executed_program = program(functions=[
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        }
    }
], get_current_weather=get_current_weather)
system
You are a helpful assistant.

# Tools

## functions

namespace functions {

// Get the current weather in a given location
type get_current_weather = (_: {
// The city and state, e.g. San Francisco, CA
location: string,
unit?: "celsius" | "fahrenheit"
}) => any;

} // namespace functions

user
Get the current weather in New York City.

assistant
```typescript
functions.get_current_weather({
"location": "New York City"
})
```

function
{"location": "New York City", "temperature": "71", "unit": "fahrenheit", "forecast": ["sunny", "windy"]}

assistant
The current weather in New York City is 71°F and it is sunny and windy.
[4]:
executed_program["answer"]
[4]:
'The current weather in New York City is 71°F and it is sunny and windy.'
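All of these special variables have defaults supplied by the LLM object, but because program-scope variables shadow those defaults you can swap in your own implementations. Below is a hypothetical sketch of overriding extract_function_call (my_extract_function_call is our own stub, not part of Guidance):
[ ]:
# hypothetical sketch: shadow the LLM object's built-in extract_function_call
def my_extract_function_call(text):
    # parse `text` for your own call syntax here; return a callable or None
    return None

weather_function_def = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

# variables passed here take precedence over llm.extract_function_call
executed_custom = program(
    functions=[weather_function_def],
    extract_function_call=my_extract_function_call,
    get_current_weather=get_current_weather,
)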
Factoring out the loop calling code
There is a non-trivial amount of logic and syntax required to create a loop of function calls. We can make this easier by factoring that loop out into its own guidance program:
[5]:
# this is a reusable component for calling functions as intermediate steps in a generation
# (note that args[0] refers to the first positional argument passed to the program when it is included)
chat_tool_gen = guidance("""{{~#each range(max_calls)~}}
{{~#assistant~}}
{{gen 'func_inner' temperature=temperature max_tokens=max_tokens_per_chunk function_call=function_call~}}
{{~/assistant~}}
{{#if not callable(func_inner)}}{{break}}{{/if}}
{{~#function name=func_inner.__name__~}}
{{func_inner()}}
{{~/function~}}
{{~/each~}}{{set args[0] func_inner}}""", max_calls=20, function_call="auto", max_tokens_per_chunk=500, temperature=0.0)
# define a guidance program that uses chat_tool_gen
program2 = guidance("""
{{~#system~}}
You are a helpful assistant.
{{>tool_def functions=functions}}
{{~/system~}}
{{~#user~}}
Get the current weather in New York City.
{{~/user~}}
{{>chat_tool_gen 'answer'}}""", chat_tool_gen=chat_tool_gen)
# call the program
executed_program2 = program2(functions=[
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"]
        }
    }
], get_current_weather=get_current_weather)
system
You are a helpful assistant.

# Tools

## functions

namespace functions {

// Get the current weather in a given location
type get_current_weather = (_: {
// The city and state, e.g. San Francisco, CA
location: string,
unit?: "celsius" | "fahrenheit"
}) => any;

} // namespace functions

user
Get the current weather in New York City.

assistant
```typescript
functions.get_current_weather({
"location": "New York City"
})
```

function
{"location": "New York City", "temperature": "71", "unit": "fahrenheit", "forecast": ["sunny", "windy"]}

assistant
The current weather in New York City is 71°F and sunny with windy conditions.
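Since chat_tool_gen is just another guidance program, you can drop it into any prompt. A sketch reusing it with a different question (weather_function_def is from the earlier sketch above):
[ ]:
# sketch: reuse the factored-out tool-calling loop with a new question
program3 = guidance("""
{{~#system~}}
You are a helpful assistant.
{{>tool_def functions=functions}}
{{~/system~}}
{{~#user~}}
What is the weather like in Boston, MA right now?
{{~/user~}}
{{>chat_tool_gen 'answer'}}""", chat_tool_gen=chat_tool_gen)

executed_program3 = program3(functions=[weather_function_def], get_current_weather=get_current_weather)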
Calling the function outside of Guidance
In the example above the function call was made during the execution of the guidance program, but we can also pause the program’s execution whenever we want to make a function call, and then make that call outside of Guidance. This is useful if you don’t want Guidance to own the function calling part of your program logic.
[6]:
# define a guidance program that pauses when a function call is made
await_program = guidance("""
{{~#system~}}
You are a helpful assistant.
{{>tool_def functions=functions}}
{{~/system~}}
{{~#user~}}
Get the current weather in New York City.
{{~/user~}}
{{~#each range(10)~}}
{{~#assistant~}}
{{gen 'answer' temperature=1.0 max_tokens=50 function_call="auto"}}
{{~/assistant~}}
{{set 'function_call' extract_function_call(answer)}}
{{~#if not function_call}}{{break}}{{/if~}}
{{set 'answer' await('call_result')}}
{{~#function name=function_call.__name__~}}
{{answer}}
{{~/function~}}
{{~/each~}}""")
# call the program, passing in the function definition we want to use as JSON
executed_await_program = await_program(functions=[
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        }
    }
], get_current_weather=get_current_weather)
system
You are a helpful assistant.

# Tools

## functions

namespace functions {

// Get the current weather in a given location
type get_current_weather = (_: {
// The city and state, e.g. San Francisco, CA
location: string,
unit?: "celsius" | "fahrenheit"
}) => any;

} // namespace functions

user
Get the current weather in New York City.

{{set 'answer' await('call_result')}}
{{~#function name=function_call.__name__~}}
{{answer}}
{{~/function~}}
{{~#each range(10) start_index=1~}}
{{~#assistant~}}
{{gen 'answer' temperature=1.0 max_tokens=50 function_call="auto"}}
{{~/assistant~}}
{{set 'function_call' extract_function_call(answer)}}
{{~#if not function_call}}{{break}}{{/if~}}
{{set 'answer' await('call_result')}}
{{~#function name=function_call.__name__~}}
{{answer}}
{{~/function~}}
{{~/each~}}

assistant
```typescript
functions.get_current_weather({
"location": "New York City"
})
```
[7]:
# these are the details of the function call we need to make
executed_await_program["function_call"]
[7]:
CallableAnswer(__name__=get_current_weather, __kwdefaults__={'location': 'New York City'})
[8]:
# run the call
call = executed_await_program["function_call"]
if call.__name__ == "get_current_weather":
    weather = get_current_weather(**call.__kwdefaults__)
executed_await_program(call_result=weather)
system
You are a helpful assistant.

# Tools

## functions

namespace functions {

// Get the current weather in a given location
type get_current_weather = (_: {
// The city and state, e.g. San Francisco, CA
location: string,
unit?: "celsius" | "fahrenheit"
}) => any;

} // namespace functions

user
Get the current weather in New York City.

assistant
```typescript
functions.get_current_weather({
"location": "New York City"
})
```

function
{"location": "New York City", "temperature": "71", "unit": "fahrenheit", "forecast": ["sunny", "windy"]}

assistant
The current weather in New York City is 71°F and it is sunny and windy.
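If you have more than one tool, a small dispatch table keeps the host-side logic tidy. A sketch (the tools dict is our own bookkeeping, not a Guidance feature):
[ ]:
# sketch: dispatch paused function calls by name
tools = {"get_current_weather": get_current_weather}

call = executed_await_program["function_call"]
handler = tools.get(call.__name__)
if handler is not None:
    executed_await_program(call_result=handler(**call.__kwdefaults__))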
Open Source model [TODO]
Here we run the same examples as before, but with an open model instead. Note that the model does not have any special fine-tuned support for function calls, so we have to provide much more detail in the tool definition.
[ ]:
# TODO
Have an idea for more helpful examples? Pull requests that add to this documentation notebook are encouraged!