Comment by scarface_74

1 day ago

For my use case, definitely.

I have worked on Amazon Connect (cloud call center) and Amazon Lex (the backing NLP engine) projects.

Before LLMs, it was a tedious process of trying to figure out all of the different “utterances” people could say, across all the languages you had to support. With LLMs, it’s just prompting.

https://chatgpt.com/share/678bab08-f3a0-8010-82e0-32cff9c0b4...

I used something like this with Amazon Bedrock and a Lambda hook for Amazon Lex. Of course it wasn’t booking flights; it was another system.

The above is a simplified version. In the real world, I gave it a list of intents (book flights, reserve a room, rent a car) and the properties - “slots” - I needed for each intent (an illustrative sketch follows).
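Not the production prompt, but as an illustration, an intent/slot list embedded in a prompt could look something like this (the intent and slot names here are hypothetical):

```json
{
  "intents": [
    { "name": "BookFlight",  "slots": ["origin", "destination", "date", "class_of_service"] },
    { "name": "ReserveRoom", "slots": ["city", "check_in", "check_out", "room_type"] },
    { "name": "RentCar",     "slots": ["pickup_city", "pickup_date", "return_date", "car_class"] }
  ]
}
```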

Thank you for sharing an actual prompt thread. So much of the LLM debate is awash in bias, and concrete examples of outputs are very helpful.

  • The “cordele GA” example surprised me. I was expecting a value of “null” for the airport code, since I knew the city has a population of about 12K and no airport within its metropolitan statistical area. Instead, it returned a nearby airport.

    Having world knowledge is a godsend. I also just tried a prompt with “Alpharetta, GA”, a city north of Atlanta, and it returned ATL. A traditional NLP pipeline could never do that without a lot more work.

That’s a great example. I understand it was intentionally simple, but it highlights how LLMs need care in use. Not that this example is very related to NLP:

My prompt: `<<I want a flight from portland to cuba after easter>>`

The response:

```json
{
  "origin": ["PDX"],
  "destination": ["HAV"],
  "date": "2025-04-01",
  "departure_time": null,
  "preferences": null
}
```

Of course I meant Portland, Maine (PWM); there is more than one airport option in Cuba besides HAV; and it got the date wrong, since Easter is April 20 this year.
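Failures like these argue for a thin validation layer between the model and any downstream action. A minimal sketch, under my own assumptions (field names mirror the JSON above; the ambiguity rules are illustrative). Note it can only catch errors the output itself surfaces, not the model silently picking PDX over PWM:

```python
import json
from datetime import date

REQUIRED_KEYS = {"origin", "destination", "date", "departure_time", "preferences"}

def validate_booking(raw: str) -> dict:
    """Parse the model's JSON and flag anything needing human clarification."""
    parsed = json.loads(raw)  # raises ValueError on malformed output
    if not REQUIRED_KEYS <= parsed.keys():
        raise ValueError(f"missing keys: {REQUIRED_KEYS - parsed.keys()}")

    issues = []
    # Multi-airport cities (e.g. New York) should come back as a list;
    # anything other than exactly one code means we should ask the user.
    for field in ("origin", "destination"):
        codes = parsed[field] or []
        if len(codes) != 1:
            issues.append(f"ambiguous {field}: {codes}")

    # Relative dates ("after easter") are easy to get wrong, so at
    # minimum reject anything already in the past.
    if parsed["date"] and date.fromisoformat(parsed["date"]) < date.today():
        issues.append(f"date already passed: {parsed['date']}")

    parsed["needs_clarification"] = issues
    return parsed
```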

How about the costs?

  • We measure savings in terms of call deflections. Clients we work with say that each time a customer talks to an agent it costs $2-$5, and that’s not even taking call abandonments into account.

    • My baseline when advising people is that if anyone you pay needs to read the output, or you are directly replacing any kind of work, then even frontier-LLM inference costs are irrelevant. Of course you need to work out whether that’s truly the case, but people worry about cost in places where it just doesn’t matter. If it’s $2 every time a customer reaches an agent, each avoided case could pay for around a million words read or generated (rough arithmetic below). That’s expensive compared to most API calls, but irrelevant when counted against human costs.
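      To make that concrete, a back-of-the-envelope sketch. The per-token price and words-per-token ratio are assumptions; substitute your model’s actual rates:

      ```python
      # How much LLM output one deflected call pays for.
      cost_per_agent_call = 2.00        # low end of the $2-$5 quoted above
      price_per_million_tokens = 2.00   # assumed blended API price, $/1M tokens
      tokens_per_word = 1.33            # rough English average (~0.75 words/token)

      tokens_bought = cost_per_agent_call / price_per_million_tokens * 1_000_000
      words_bought = tokens_bought / tokens_per_word

      # ~750K words per $2 deflection at these rates -- the same ballpark
      # as "around a million words" above.
      print(f"One ${cost_per_agent_call:.2f} deflection buys ~{words_bought:,.0f} words")
      ```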

The link is a 404, sadly. What did it say before?

  • The link works for me even in incognito mode.

    The prompt:

    you are a chatbot that helps users book flights. Please extract the origin city, destination city, travel date, and any additional preferences (e.g., time of day, class of service). If any of the details are missing, make the value “null”. If the date is relative (e.g., "tomorrow", "next week"), convert it to a specific date.

    User Input: "<User's Query>"

    Output (JSON format):

    {
      "origin": <list of airport codes>,
      "destination": <list of airport codes>,
      "date": "<Extracted Date>",
      "departure_time": "<Extracted Departure Time (if applicable)>",
      "preferences": "<Any additional preferences like class of service (optional)>"
    }

    The users request will be surrounded by <<>>

    Always return JSON with any missing properties having a value of null. Always return English. Return a list of airport codes for the city. For instance New York has two airports give both

    Always return responses in English
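    For reference, a minimal sketch of how a prompt like this might be wired into a Lex code hook via Bedrock. The model ID, event shape, and response handling are my assumptions, not the commenter’s actual code:

    ```python
    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime")

    SYSTEM_PROMPT = (
        "You are a chatbot that helps users book flights. Extract the origin "
        "city, destination city, travel date, and any additional preferences. "
        "If any detail is missing, use null. Convert relative dates to "
        "specific dates. Always return JSON. The user's request is "
        "surrounded by <<>>."
    )

    def extract_slots(user_query: str) -> dict:
        """Send the utterance to Bedrock and parse the JSON slot values."""
        response = bedrock.converse(
            modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model
            system=[{"text": SYSTEM_PROMPT}],
            messages=[{"role": "user", "content": [{"text": f"<<{user_query}>>"}]}],
            inferenceConfig={"maxTokens": 256, "temperature": 0},
        )
        # Assumes the model returned bare JSON, per the prompt's instructions.
        return json.loads(response["output"]["message"]["content"][0]["text"])

    def lambda_handler(event, context):
        """Lex code hook. A real hook maps the result into Lex's
        sessionState/dialogAction response format; that part is elided."""
        return extract_slots(event["inputTranscript"])
    ```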