Comment by thecupisblue

4 days ago

You parse it. Invalid calls you revalidate with a model of your choice. Parsing isn't a hard problem to solve; it's easy, and you can parse whatever you want. I've been parsing responses from LLMs since the days of Ada and DaVinci, when they would just complete the text, and it really isn't that hard.

> Deal with invalid calls? Encode and tie results to the original call? Deal with error states? Is it custom work to bring in each new api or do you have common pieces dealing with, say, rest APIs or shelling out, etc?

Why would any LLM framework deal with that? That's basic architecture 101. I don't want to stack another architecture on top of an existing one.
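The "architecture 101" being gestured at here, e.g. tying results and errors back to the originating call, really is a small amount of plumbing. A hedged sketch, with invented names (`ToolCall`, `run_tool`, the result-envelope keys are all illustrative, not from any library):

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str
    arguments: dict
    # Every call gets an id so any result or error can be matched back to it.
    id: str = field(default_factory=lambda: uuid.uuid4().hex)

def run_tool(call: ToolCall, registry: dict) -> dict:
    """Execute a call against a name->function registry and return an
    envelope tied to the call id, for both success and error states."""
    fn = registry.get(call.name)
    if fn is None:
        return {"call_id": call.id, "ok": False, "error": f"unknown tool {call.name!r}"}
    try:
        return {"call_id": call.id, "ok": True, "result": fn(**call.arguments)}
    except Exception as e:  # surface tool failures instead of crashing the loop
        return {"call_id": call.id, "ok": False, "error": str(e)}

registry = {"add": lambda a, b: a + b}
ok_result = run_tool(ToolCall("add", {"a": 2, "b": 3}), registry)
err_result = run_tool(ToolCall("nope", {}), registry)
```

Whether that belongs in your code or a library's is exactly the disagreement in this thread.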

>If it’s reused, then is it that different from creating abstractions?

Because you have control over the abstractions. You have control over what goes into the context. You have control over updating those abstractions and prompts based on your context. You have control over choosing your models instead of depending on models supported by the library or the tool you're using.

>As an aside - models are getting explicitly trained to use tool calls rather than custom things.

That's great, but they're also great at generating code, and guess what the code does? Calls functions.

I’m not saying they’re hard; I’m saying they’re common problems that don’t need solving each time. I don’t re-solve title casing every time I need it.

> Because you have control over the abstractions.

And depending on what you’re using, you have that with other libraries/etc. too.

> That's great, but they're also great at generating code, and guess what the code does? Calls functions.

Yep, and a lot more, so it depends on how well you’re sandboxing that, I guess.