Don’t let LLMs make decisions or implement business logic: they suck at that. I build NPCs for an online game, and I get asked a lot “How did you get ChatGPT to do that?” The answer is invariably: “I didn’t, and also you shouldn’t”.
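To make that concrete, here's a minimal sketch of the split I mean (the names and classes are illustrative, not from my actual game): the game's own deterministic code decides what the NPC does, and the model is only asked to phrase the already-made decision as dialogue.

```python
from dataclasses import dataclass

@dataclass
class NPC:
    name: str
    stock: int

@dataclass
class Player:
    reputation: int

def decide_action(npc: NPC, player: Player) -> str:
    # Business logic lives in plain, deterministic, testable code.
    if player.reputation < 0:
        return "refuse to trade with the player"
    if npc.stock > 0:
        return "offer the player a trade"
    return "apologize for being out of stock"

def dialogue_prompt(npc: NPC, action: str) -> str:
    # The LLM only phrases a decision that has already been made;
    # its reply is flavor text and never mutates game state.
    return (
        f"You are {npc.name}, a merchant in an online game. "
        f"In one sentence of in-character dialogue, {action}."
    )

npc, player = NPC("Mira", stock=3), Player(reputation=5)
prompt = dialogue_prompt(npc, decide_action(npc, player))
# `prompt` is what gets sent to whatever text-generation API you use.
```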
I assumed people were asking how you got ChatGPT to code the NPC for you. Why would anyone ask how ChatGPT is powering the NPC? ChatGPT doesn't have an API; OpenAI has APIs. ChatGPT is just an interface to their models. How could ChatGPT power your NPCs in an online game? It made no sense.
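To be pedantic about it, "using ChatGPT" in a backend really means calling the OpenAI API with one of their models, e.g. via their official Python SDK (the model name below is just an example):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # you pick a model; "ChatGPT" is not a model name
    messages=[{"role": "user", "content": "Greet the player as a tavern keeper."}],
)
print(response.choices[0].message.content)
```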
That is much clearer.
The intro to your article is also very confusing.
I changed the word "implement" to "execute" on the blog post. Thank you for your feedback. As to:
> Why would people ask you how ChatGPT is powering the NPC?
Because they think LLM and ChatGPT are synonymous.
I think changing the word from "execute" to "inference" would be even clearer, to be honest. Though it's already much better than the original word choice.
I still find it weird. 99.9999% of NPCs in video games are not powered by LLMs. So why would people ask that question?