Comment by tom_0
5 days ago
Hey, Tommaso here, I'm one of the founders of the Robotopia studio. I didn't expect to see this here! Ask me anything :)
Do you have a per-player budget for cloud usage? What happens if people really like the game and play it so much that it starts getting expensive to keep running? I guess at $0.79/Mtok, Llama 70B is pretty affordable, but per-player opex seems hard to handle without a subscription model.
Our initial plan was simply to charge enough for the game that the price would cover the costs on average... but that means we're basically incentivized to have people play the game as little as possible? We're looking into some kind of subscription now. It sounds weird, but I do think it's the better incentive in this case. Plus we can actually ask for less upfront.
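For scale, a rough back-of-the-envelope sketch in Python. Only the $0.79/Mtok price comes from this thread; the playtime and token numbers are assumptions picked purely for illustration:

    # Back-of-the-envelope per-player cost. The $0.79/Mtok price is quoted
    # above; every usage number below is an illustrative assumption.
    PRICE_PER_MTOK = 0.79      # USD per million tokens (from the thread)
    TOKENS_PER_TURN = 2_000    # assumed: prompt + completion per NPC exchange
    TURNS_PER_HOUR = 60        # assumed: one exchange per minute of play
    HOURS_PLAYED = 40          # assumed: lifetime playtime of a keen player

    lifetime_tokens = TOKENS_PER_TURN * TURNS_PER_HOUR * HOURS_PLAYED
    lifetime_cost = lifetime_tokens / 1_000_000 * PRICE_PER_MTOK
    print(f"{lifetime_tokens:,} tokens -> ${lifetime_cost:.2f} per player")
    # 4,800,000 tokens -> $3.79 per player

Under these assumptions the lifetime inference cost is a few dollars per player, which is exactly why it's awkward to fold into a one-time purchase price but trivial under a subscription.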
Do you think there's a path where you can pregenerate popular dialogue paths to avoid LLM inference costs for every player? And possibly pair that with a lightweight local LLM to slightly adapt the responses, while still shelling out to a larger model when users go "off the rails"?
Not the founder, but having run conversational agents at decent scale, I don't think the cost actually matters much early on.
It's almost always better to pay more for the smarter model than to give players a potentially worse experience.
If they had 1M+ players there would certainly be room to optimize, but starting out you'd spend more trying to engineer the model switcher than you'd save in token costs.
I agree, trying to save on costs early on is basically betting against things getting better. Not only that but in almost every case people prefer the best model they can get!
Not only that, but I think our selling point is rewarding creativity with emergent behavior. Baked dialogue would turn into a traditional game with worse writing pretty quickly, and then you've got a problem. For example, this AI game [1] does multiple-choice dialogue with a local model, and people seem a bit lukewarm about it.
We could use it to cache popular Q&A, but in my experience humans are insane and nobody ever says even remotely similar things to robots :)
[1] https://store.steampowered.com/app/2828650/The_Oversight_Bur...
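A minimal sketch of the caching idea discussed above, assuming a hypothetical embedding function (unit-norm vectors) and a cosine-similarity threshold; as the founder notes, real players rarely repeat each other, so the hit rate may well be too low to bother:

    import numpy as np

    SIMILARITY_THRESHOLD = 0.95  # assumed: how close a line must be to reuse a reply

    class DialogueCache:
        """Reuse a canned reply when a player's line is near-identical to one
        answered before; otherwise pay for a call to the big cloud model."""

        def __init__(self, embed, big_model):
            self.embed = embed          # str -> np.ndarray, any embedding model
            self.big_model = big_model  # str -> str, the expensive cloud LLM
            self.keys, self.values = [], []

        def reply(self, player_line: str) -> str:
            q = self.embed(player_line)
            if self.keys:
                sims = np.array(self.keys) @ q  # cosine sim for unit-norm vectors
                best = int(sims.argmax())
                if sims[best] >= SIMILARITY_THRESHOLD:
                    return self.values[best]    # cache hit: no inference cost
            answer = self.big_model(player_line)
            self.keys.append(q)
            self.values.append(answer)
            return answer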
This is fantastic. I think the Substack post nails what was missing from a lot of these LLM-driven NPCs that didn't feel authentic. I have a couple of follow-up questions on specifics relating to analyzing behaviour with LLMs (I'm in game dev myself). Would it be possible to speak to you directly about them?
Thanks :) If you want, I'm on the Discord linked from our landing page. It's fun stuff to talk about!
Amazing! Thanks, will join.
Hey! Robotopia looks awesome, I'm excited to try it out when it launches. How do you convert the LLM output into actions? Are broad actions (i.e. creating any object, moving anything anywhere) exposed to the LLM, or is it more specific tools it can call?
Thanks :) It may sound insane, but we convert actions into Python functions, then ask the LLM to write a Python script that actually runs in IronPython inside the game. Then we have a visual Behavior Tree system that lets our designers define the actions. So yeah, the LLM gets a bunch of general actions like walk, talk, follow, interact, etc.
PS: I think MCP/tool calls are a boondoggle and LLMs yearn to just run code. It's crazy how much better this works than JSON schemas etc.
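A rough illustration of that pattern. All the function names here are hypothetical, and this uses plain CPython's exec rather than the embedded IronPython the game reportedly uses; the shape is the same:

    # Hypothetical sketch of the "LLM writes a script" pattern described above:
    # game actions exposed as Python functions, and the model's output executed
    # against a scope that contains only those functions.
    def walk_to(target: str): print(f"[npc walks to {target}]")
    def say(line: str): print(f'[npc says "{line}"]')
    def follow(target: str): print(f"[npc follows {target}]")

    ALLOWED_ACTIONS = {"walk_to": walk_to, "say": say, "follow": follow}

    def run_npc_script(llm_script: str):
        # Expose ONLY the whitelisted action functions, with no builtins.
        # (Stripping builtins alone is not a real sandbox; see the reply below.)
        scope = {"__builtins__": {}, **ALLOWED_ACTIONS}
        exec(llm_script, scope)

    # A script the LLM might emit in response to "come here and greet me":
    run_npc_script('walk_to("player")\nsay("Hello! You called?")\nfollow("player")')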
uhhh... you're running generated code on your customers' PCs? what kind of sandboxing do you have?
This has incredible potential for language learning. Do you plan to implement support for additional languages?
Yes, but every language is going to be a "port", not something contracted out like traditional localization. I haven't decided exactly how, but language conversion will land somewhere between these two extremes:

1. (expensive) Pick a suite of "native" models (e.g. models from China), TTS, ASR. Rewrite all the prompts in the target language. Revalidate all characters by hand.
2. (cheap) Slap a translation model around input and output and let the game run in English internally. My gut feeling is that this could have very poor results, though, and increase latency.
It's definitely a research project; this has never been done before.
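For concreteness, here is what the cheap option (2) amounts to. Both `translate` and `npc_brain` are hypothetical stand-ins, not real APIs:

    # Sketch of option 2 above: run the game internally in English and wrap
    # a translation model around the player-facing text.
    def localized_reply(player_line: str, lang: str, translate, npc_brain) -> str:
        english_in = translate(player_line, src=lang, dst="en")
        english_out = npc_brain(english_in)   # all game logic stays English-only
        return translate(english_out, src="en", dst=lang)

Note the two extra model calls per exchange; that round trip is where the latency (and quality) worry comes from.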
Are the LLMs run on-device, or does this use cloud compute?
(Off-topic AMA question: Did you see my voxel grid visibility post?)
The "big" one is Llama3.3-70b on the cloud, right now. On GroqCloud in fact, but we have a cloud router that gives us several backups if Groq abandoned us.
We use a ton of smaller models (embeddings, vibe checks, TTS, ASR, etc.), and once we have enough scale we'll try to run those locally for users with big enough GPUs.
(You mean the voxel grid visibility post from 2014?! I'm sure I did at the time... but I left MC in 2020, so I don't even remember my own algorithm right now)
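A hypothetical sketch of the "cloud router with backups" idea: try each provider in order and fall through on failure. The provider names and the `call_provider` signature are assumptions, not the studio's actual stack:

    PROVIDERS = ["groq", "backup_a", "backup_b"]

    def complete(prompt: str, call_provider) -> str:
        last_error = None
        for name in PROVIDERS:
            try:
                return call_provider(name, prompt)  # e.g. one HTTP client per provider
            except Exception as err:                # timeout, rate limit, outage...
                last_error = err                    # remember it, try the next one
        raise RuntimeError("all providers failed") from last_error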
Shipping GPU-accelerated ML models in games looks difficult. Are there any major examples other than vendor-locked upscaling like DLSS or FSR?
(Yep! https://cod.ifies.com/voxel-visibility/ )