We put Claude Code in Rollercoaster Tycoon

1 month ago (labs.ramp.com)

Related:

I’ve always found it crazy that my LLM has access to such terrible tools compared to mine.

It’s left with grepping for function signatures, sending diffs for patching, and running `cat` to read all the code at once.

I however, run an IDE and can run a simple refactoring tool to add a parameter to a function, I can “follow symbol” to see where something is defined, I can click and get all usages of a function shown at a glance, etc etc.

Is anyone working on making it so LLM’s get better tools for actually writing/refactoring code? Or is there some “bitter lesson”-like thing that says effort is always better spent just increasing the context size and slurping up all the code at once?

  • > Claude Code officially added native support for the Language Server Protocol (LSP) in version 2.0.74, released in December 2025.

    I think from training it's still biased towards simple tooling.

    But also, there is real power to simple tools, a small set of general purpose tools beats a bunch of narrow specific use case tools. It's easier for humans to use high level tools, but for LLM's they can instantly compose the low level tools for their use case and learn to generalize, it's like writing insane perl one liners is second nature for them compared to us.

    If you watch the tool calls you'll see they write a ton of one off small python programs to test, validate explore, etc...

    If you think about it any time you use a tool there is probably a 20 line python program that is more fit to your use case, it's just that it would take you too long to write it, but for an LLM that's 0.5 seconds

    • > but for LLM's they can instantly compose the low level tools for their use case and learn to generalize

      Hard disagree; this wastes enormous amounts of tokens, and massively pollutes the context window. In addition to being a waste of resources (compute, money, time), this also significantly decreases their output quality. Manually combining painfully rudimentary tools to achieve simple, obvious things -- over and over and over -- is *not* an effective use of a human mind or an expensive LLM.

      Just like humans, LLMs benefit from automating the things they need to do repeatedly so that they can reserve their computational capacity for much more interesting problems.

      I've written[1] custom MCP servers to provide narrowly focused API search and code indexing, build system wrappers that filter all spurious noise and present only the material warnings and errors, "edit file" hooks that speculatively trigger builds before the LLM even has to ask for it, and a litany of other similar tools.

      Due to LLM's annoying tendency to fall back on inefficient shell scripting, I also had to write a full bash syntax parser and shell script rewriting ruleset engine to allow me to silently and trivially rewrite their shell invocations to more optimal forms that use the other tools I've written, so that they don't have to do expensive, wasteful things like pipe build output through `head`/`tail`/`grep`/etc, which results in them invariably missing important information, and either wandering off into the weeds, or -- if they notice -- consuming a huge number of turns (and time) re-running the commands to get what they need.

      Instead, they call build systems directly with arbitrary options, | filters, etc, and magically the command gets rewritten to something that will produce the ideal output they actually need, without eating more context and unnecessary turns.

      LLMs benefit from an IDE just like humans do -- even if an "IDE" for them looks very different. The difference is night and day. They produce vastly better code, faster.

      [1] And by "I've written", I mean I had an LLM do it.

    • Note that the Claude code LSP integration was actually broken for a while after it was released, so make sure you have a very recent version if you want to try it out.

      However as parent comment said, it seems to always grep instead, unless explicitly said to use the LSP tool.

    • Correct. If you try to create a coding agent using the raw Codex or Claude code API and you build your own “write tool”, and don’t give the model their “native patch tool”, 70%+ of the time it’s write/ patch fails because it tries to do the operation using the write/ patch tool it was trained on.

      1 reply →

  • > I however, run an IDE and can run a simple refactoring tool to add a parameter to a function, I can “follow symbol” to see where something is defined, I can click and get all usages of a function shown at a glance, etc etc

    I am so surprised that all of the AI tooling mostly revolves around VSC or its forks and that JetBrains seem to not really have done anything revolutionary in the space.

    With how good their refactoring and code inspection tools are, you’d really think they’d pass of that context information to AI models and that they’d be leaps and bounds ahead.

    • Recently, all these agents can talk LSP (language server protocol) so it should get better soon. That said, yeah they don't seem to default to use `ripgrep` when that is clearly better than `grep`

      3 replies →

    • Are you? I'm not surprised at all, considering that the biggest investment juggernaut in AI is also the author of VSC. I wonder what the connection is? ;)

      3 replies →

    • Agreed - this seems like a no brainer, surely this is something that is being worked on.

    • Jetbrains is trying but I feel like they're very very behind in the space

    • Claude and other LLMs can be used through JetBrains, and the IDE provides a significantly better experience than VS Code in my opinion.

    • I haven't seen JetBrains as 'great'. I think they have a strong marketing team that gets into universities and potentially astroturfs on the internet, but I have always found better tools for every language. Although, I can't remember what I ended up choosing for PHP.

  • LLMs aren't like you or me. They can comprehend large quantities of code quickly and piece things together easily from scattered fragments. so go to reference etc become much less important. Of course though things change as the number of usages of a symbol becomes large but in most cases the LLM can just make perfect sense of things via grep.

    To provide it access to refactoring as a tool also risks confusing it via too many tools.

    It's the same reason that waffling for a few minutes via speech to text with tangents and corrections and chaos is just about as good as a carefully written prompt for coding agents.

  • If you can read fast enough, grepping is probably faster than waiting for a compiler to tell you anything.

    • Faster for worse results, though. Determining the source of a symbol is not as trivial as finding the same piece of text somewhere else, it should also reliably be able to differentiate among them. What better source for that then the compiler itself?

      2 replies →

  • Zed Editor gives the LLM tools that use the LSP as you'd expect as a normal IDE user, like "go to symbol definition" so it greps a lot less.

  • JetBrain IDEs come with an MCP server that supports some refactoring tools [1]:

    > Starting with version 2025.2, IntelliJ IDEA comes with an integrated MCP server, allowing external clients such as Claude Desktop, Cursor, Codex, VS Code, and others to access tools provided by the IDE. This provides users with the ability to control and interact with JetBrains IDEs without leaving their application of choice.

    [1] https://www.jetbrains.com/help/idea/mcp-server.html#supporte...

  • Tidewave.ai does exactly that. It’s made Claude code so much more functional. It provides mcp servers to

    - search all your code efficiently - search all documentation for libraries - access your database and get real data samples (not just abstract data types) - allows you to select design components from your figma project and implements them for you - allows Claude to see what is rendered in the browser

    It’s basically the ide for your LLM client. It really closes the loop and has made Claude and myself so much more productive. Highly recommended and cheap at $10/month

    Ps: my personal opinion. I have Zero affiliation with them

  • LLMs operate on text. They can take in text, and they can produce text. Yes, some LLMs can also read and even produce images, but at least as of today, they are clearly much better at using text[1].

    So cat, ripgrep, etc are the right tools for them. They need a command line, not a GUI.

    1: Maybe you'd argue that Nano Banana is pretty good. But would you say its prompt adherence is good enough to produce, say, a working Scratch program?

    • Inputs to functions are text, as in variables, or file names, directory names, symbol names with symbol searching. Outputs you get from these functions for things like symbol searching is text too, or at least easily reformatted to text. Like API calls are all just text input and output.

      1 reply →

  • You can give agents the ability to check VSCode Diagnostics, LSP servers and the like.

    But they constantly ignore them and use their base CLI tools instead, it drives me batty. No matter what I put in AGENTS.md or similar, they always just ignore the more advanced tooling IME.

    • Doesn't have to be a bad thing, not all languages have good LSP support. If the AI can optimize for simple cross-language tools it won't be as dependent on the LSP implementation.

      I used grep and simple ctags to program in vanilla vim for years. It can be more useful than you'd think. I do like the LSP in Neovim and use it a lot, but I don't need it.

      1 reply →

  • An LSP MCP?

    • Yeah, or something even smarter than that.

      If you are willing to go language-specific, the tooling can be incredibly rich if you go through the effort. I’ve written some rust compiler drivers for domain-specific use cases, and you can hook into phases of the compiler where you have amazingly detailed context about every symbol in the code. All manner of type metadata, locations where values are dropped, everything is annotated with spans of source locations too. It seems like a worthy effort to index all of it and make it available behind a standard query interface the LLM can use. You can even write code this way, I think rustfmt hooks into the same pipeline to produce formatted code.

      I’ve always wished there were richer tools available to do what my IDE already does, but without needing to use the UI. Make it a standard API or even just CLI, and free it from the dependency on my IDE. It’d be very worth looking into I think.

      2 replies →

  • Not coding agents but we do a lot of work trying to find the best tools, and the result is always that the simplest possible general tool that can get the job done always beats a suite of complicated tools and rules on how to use them.

    • Well, jump to definition isn't exactly complicated?

      And you can use whatever interface the language servers already use to expose that functionality to eg vscode?

      2 replies →

  • This isn’t completely the answer to what you want but skills do open a lot of doors here. Anything you can do on a command line can turn into a skill, after all.

  • I’ve been saying this for a while. CPU demand is about to go through the roof.

    I think about it, to get these tools to be most effective you have to be able to page things in and out of their context windows.

    What was once a couple of queries is now gonna be dozens or hundreds or even more from the LLM

    For code that means querying the AST and query it in a way that allows you to limit the results of the output

    I wonder which SAST vendor Anthropic will buy.

Author here - some bonus links!

Session transcript using Simon Willison's claude-code-transcripts

https://htmlpreview.github.io/?https://gist.githubuserconten...

Reddit post

https://www.reddit.com/r/ClaudeAI/comments/1q9fen5/claude_co...

OpenRCT2!!

https://github.com/jaysobel/OpenRCT2

Project repo

https://github.com/jaysobel/OpenRCT2

  • Did you eval using screenshots or some sort of rendered visualization instead of the CLI? I wonder if Claude has better visual intelligence when viewing images (lots of these in its training set) rather than ascii schematics (probably very few of these in the corpus).

    • Computer use and screenshots are context intensive. Text is not. The more context you give to an LLM, the dumber it gets. Some people think at 40% context utilization, the LLM starts to get into the dumb zone. That is where the limitations are as of today. This is why CLI based tools like Claude Code are so good. And any attempt at computer use has fallen by the wayside.

      There are some potential solutions to this problem that come to mind. Use subagents to isolate the interesting bits about a screenshot and only feed that to the main agent with a summary. This will all still have a significantly higher token usage compared to a text based interface, but something like this could potentially keep the LLM out of the dumb zone a little longer.

      2 replies →

    • I had tried the browser screenshotting feature for agents in Cursor and found it wasn't very reliable - screenshots eat a lot of context, and the agent didn't have a good sense for when to use them. I didn't try it in this project. I bet it would work in some specific cases.

    • Claude helped me immensely getting an image converter to work. Giving it screenshots of wrong output (lots of layers had an unpredictable offsets that was not supposed to be there) and output as I expected it helped Claude understand the problems and it fixed the bugs immediately.

  • > Claude is at a pretty steep visuo-spatial disadvantage,

    How hard would it be to use with OpenAI's offerings instead? Particularly, imo, OpenAI's better at "looking" at pictures than Claude.

> As a mirror to real-world agent design: the limiting factor for general-purpose agents is the legibility of their environments, and the strength of their interfaces. For this reason, we prefer to think of agents as automating diligence, rather than intelligence, for operational challenges.

> The only other notable setback was an accidental use of the word "revert" which Codex took literally, and ran git revert on a file where 1-2 hours of progress had been accumulating.

  • If I tell Claude to "revert that last change, it isn't right, try this instead" and Claude hasn't committed recently it will happily `git checkout ...` and blow away all recent changes instead of reverting the "last change".

    (Which, it's not wrong or anything -- I did say "revert that change" -- it's just annoying. And telling `CLAUDE.md` to commit more often doesn't work consistently, because Claude is a dummy sometimes).

  • Amazing that these tools don't maintain a replayable log of everything they've done.

    Although git revert is not a destructive operation, so it's surprising that it caused any loss of data. Maybe they meant git reset --hard or something like that. Wild if Codec would run that.

  • Does Codex not let you set command permissions?

    • Yea, it does so this would likely have been to be a `--yolo` (I don't care, let me `rm -rf /`). I've found even with the "workspace write" mode and no additional writable directories I can't do git operations without approval so it seems to exclude `.git` by default.

  • Yet another reason to use Jujutsu. And put a `jj status` wrapper in your PS1. ;-)

    • Start with env args like AGENT_ID for indicating which Merkle hash of which model(s) generated which code with which agent(s) and add those attributes to signed (-S) commit messages. For traceability; to find other faulty code generated by the same model and determine whether an agent or a human introduced the fault.

      Then, `git notes` is better for signature metadata because it doesn't change the commit hash to add signatures for the commit.

      And then, you'd need to run a local Rekor log to use Sigstore attestations on every commit.

      Sigstore.dev is SLSA.dev compliant.

      Sigstore grants short-lived release attestation signing keys for CI builds on a build farm to sign artifacts with.

      So, when jujutsu autocommits agent-generated code, what causes there to be an {{AGENT_ID}} in the commit message or git notes? And what stops a user from forging such attestations?

      1 reply →

I love the interview at the end of the video. The kubectl-inspired CLI, and the feedback for improvements from Claude, as well as the alerts/segmentation feedback.

You could take those, make the tools better, and repeat the experience, and I'd love to see how much better the run would go.

I keep thinking about that when it comes to things like this - the Pokemon thing as well. The quality of the tooling around the AI is only going to become more and more impactful as time goes on. The more you can deterministically figure out on behalf of the AI to provide it with accurate ways of seeing and doing things, the better.

Ditto for humans, of course, that's the great thing about optimizing for AI. It's really just "if a human was using this, what would they need"? Think about it: The whole thing with the paths not being properly connected, a human would have to sit down and really think about it, draw/sketch the layout to visualize and understand what coordinates to do things in. And if you couldn't do that, you too would probably struggle for a while. But if the tool provided you with enough context to understand that a path wasn't connected properly and why, you'd be fine.

  • I see this sentiment of using AI to improve itself a lot but it never seems to work well in practice. At best you end up with a very verbose context that covers all the random edge cases encountered during tasks.

    For this to work the way people expect you’d need to somehow feed this info back into fine tuning rather than just appending to context. Otherwise the model never actually “learns”, you’re just applying heavy handed fudge factors to existing weights through context.

    • I've been playing around with an AI generated knowledge base to grok our code base, I think you need good metrics on how the knowledge base is used. A few things is:

      1. Being systematic. Having a system for adding, improving and maintaining the knoweldge base 2. Having feedback for that system 3. Implementing the feedback into a better system

      I'm pretty happy I have an audit framework and documentation standards. I've refactored the whole knowledge base a few times. In the places where it's overly specific or too narrow in it's scope of use for the retained knowledge, you just have to prune it.

      Any garden has weeds when you lay down fertile soil.

      Sometimes they aren't weeds though, and that's where having a person in the driver's seat is a boon.

    • The features it asked for in this case were better tools, I thought they were really sensible. It said it wanted a —dry-run (like the CLIs the rct one was modelled on), it wanted to be able to segment guest feedback, and it wanted better feedback from its path tools. Those might not be actually possible in rct, but in a different context they’re pretty smart requests and not just verbose edge cases.

> We don't know any C++ at all, and we vibe-coded the entire project over a few weeks. The core pieces of the build are…

what a world!

  • I would’ve walked for days to a CompUSA and spent my life savings if there was anything remotely equivalent to this when I was learning C on my Macintosh 4400 in 1997

    People don’t appreciate what they have

    • Did you actually learn C? Be thankful nothing like this existed in 1997.

      A machine generating code you don't understand is not the way to learn a programming language. It's a way to create software without programming.

      These tools can be used as learning assistants, but the vast majority of people don't use them as such. This will lead to a collective degradation of knowledge and skills, and the proliferation of shoddily built software with more issues than anyone relying on these tools will know how to fix. At least people who can actually program will be in demand to fix this mess for years to come.

      31 replies →

  • Everyone should read that section. It was really interesting reading about their experiences/challenges getting it all working.

  • First time I am seeing realistic timelines from a vibe-coded project. Usually everyone who vibe codes just says they did in few hours, no matter the project.

    • Hmm. My experience with it is that a few hours of that will get you a sprint if you're lucky and the prompt hits the happy path. I had… I think two of those, over 5 weeks? I can believe plenty of random people stumble across happy-path examples.

      Exciting when it works, but I think a much more exciting result for people with less experience who may not know that the "works for me" demo is the dreaded "first 90%", and even fairly small projects aren't done until the fifth-to-tenth 90%.

      (That, and that vibe coding in the sense of "no code review" are prone to balls of mud, so you need to be above average at project management to avoid that after a few sprint-equivalents of output).

    • It’s possible to vibe code certain generic things in a few hours if you’re basically combining common, thoroughly documented, mature building blocks. It’s not going to be production ready or polished but you can get surprisingly far with some things.

      For real work, that phase is like starting from a template or a boilerplate repo. The real work begins after the basics are wired together.

Interesting article but it doesn’t actually discuss how well it performs at playing the game. There is in fact a 1.5 hour YouTube video but it woulda been nice for a bit of an outcome postmortem. It’s like “here’s the methods and set up section of a research paper but for the conclusion you need to watch this movie and make your own judgements!”

  • It does discuss that? Basically it has good grasp of finances and often knows what "should" be done, but it struggles with actually building anything beyond placing toilets and hotdog stalls. To be fair, its map interface is not exactly optimal, and a multimodal model might fare quite a bit better at understanding the 2D map (verticality would likely still be a problem).

  • I was told the important part of AI is the generation part, not the verification or quality.

> kept the context above the ~60% remaining level where coding models perform at their absolute best

Maybe this is obvious to Claude users but how do you know your remaining context level? There is UI for this?

> In this article we'll tell you why we decided to put Claude Code into RollerCoaster Tycoon, and what lessons it taught us about B2B SaaS.

What is this? A LinkedIn post?

I corroborate that spatial reasoning is a challenge still. In this case, it's the complexity of the game world, but anyone who has used Codex/Claude with complex UIs in CSS or a native UI library will recognize the shortcomings fairly quickly.

Can't wait for someone to let Claude control a runescape character from scratch

  • I've done this! Given the right interface I was surprised at how well it did. Prompted it "You're controlling a character in Old School RuneScape, come up with a goal for yourself, and don't stop working on it until you've achieved it". It decided to fish for and cook 100 lobsters, and it did it pretty much flawlessly!

    Biggest downside was it's inability to see (literally), getting lists of interact-able game objects, NPCs, etc was fine when it decided to do something that didn't require any real-time input. Sailing, or anything that required it to react to what's on screen was pretty much impossible without more tooling to manage the reacting part for it (e.g. tool to navigate automatically to some location).

    • RuneScape is packet based and there are tools for inspecting packets. I wonder if these tools can give some insight to Claude Code.

      The only thing is you would need a description of the worlmap on each tick (i.e. where npcs are, where objects are, where players are)

  • People have been botting on Runescape since the early 2000s. Obviously not quite at the Claude level :). The botting forums were a group of very active and welcoming communities. This is actually what led me to Java programming and computer science more broadly--I wrote custom scripts for my characters.

    I still have some parts of the old Rei-net forum archived on an external somewhere.

Claude Code in dwarf fortress would be wild

  • Given dwarf fortress has an ASCII interface it may actually be a lot easier to set up claude to work with it. Also, a lot of the challenges of dwarf fortress is just knowing all the different mechanics and how they work which is something claude should be good at.

    • And it’s (Claude) almost certainly accumulated a fair amount of knowledge about the game itself, given the number of tutorials, guides, and other resources that have been written about DF over the last two decades.

    • Unfortunately it's rendering ASCII characters as sprites using SDL, so it's not really a text interface.

This was an interesting application of AI, but I don't really think this is what LLMs excel at. Correct me if I'm wrong.

It was interesting that the poster vibe-coded (I'm assuming) the CTL from scratch; Claude was probably pretty good at doing that, and that task could likely have been completed in an afternoon.

Pairing the CTL with the CLI makes sense, as that's the only way to gain feedback from the game. Claude can't easily do spatial recognition (yet).

A project like this would entirely depend on the game being open source. I've seen some very impressive applications of AI online with closed-source games and entire algorithms dedicated to visual reasoning.

I'm still trying to figure out how this guy: https://www.youtube.com/watch?v=Doec5gxhT_U

Was able to have AI learn to play Mario Kart nearly perfectly. I find his work to be very impressive.

I guess because RCT2 is more data-driven than visually challenging, this solution works well, but having an LLM try to play a racing game sounds like it would be disastrous.

  • Not sure if you clocked this, but the Mario Kart AI is not an LLM. It's a randomized neural net that was trained with reinforcement learning. Apologies if I misread.

While this seems cool at first, it does not demonstrate superiority over a true custom built AI for rollercoaster tycoon.

It is a curiosity, good for headlines, but the takeaway is if you really need an actual good AI, you are still better off not using an LLM powered solution.

> We don't know any C++ at all, and we vibe-coded the entire project over a few weeks.

And these are the same people that put countless engineers through gauntlets of bizarre interview questions and exotic puzzles to hire engineers.

But when it comes to C++ just vibe it obviously.

  • Oh, I almost didn't realise this is done by a company. I was like this must have costed a lot, didn't realize its just an advertisement for ramp

This is a cool idea. I wanted to do something like this by adding a Lua API to OpenRCT2 that allows you to manipulate and inspect the game world. Then, you could either provide an LLM agent the ability to write and run scripts in the game, or program a more classic AI using the Lua API. This AI would probably perform much better than an LLM - but an interesting experiment nonetheless to see how a language model can fare in a task it was not trained to do.

The opening paragraph I thought was the agent prompt haha

> The park rating is climbing. Your flagship coaster is printing money. Guests are happy, for now. But you know what's coming: the inevitable cascade of breakdowns, the trash piling up by the exits, the queue times spiraling out of control.

the beauty of this game was that it was developed in Assembly Code and on top of that by majorly one person.

I've been trying to locate the dev of this game since a long time, so I can thank them for an amazing experience.

If anyone knows their social or anything, please do share, including OP.

Also, nice work on CC in this. May actually be interested in Claude Code now.

It's been several times that I see ASCII being used initially for these kinds of problems. I think it's because its counter-intuitive, in the sense that for us humans ASCII is text but we tend to forget spacial awareness.

I find this very interesting of us humans interacting with AIs.

Most interesting phrase: "Keeping all four agents busy took a lot of mental bandwidth."

Wonder how it would do with Myst.

  • Surely it must have digested plenty of walkthroughs for any game?

    A linear puzzle game like that I would just expect the ai to fly through first time, considering it has probably read 30 years of guides and walkthroughs.

Would a way to take screenshots help? It seems to work for browser testing.

  • I’ve been doing game development and it starts to hallucinate more rapidly when it doesn’t understand things like the direction it placing things or which way the camera is oriented

    Gemini models are a little bit better about spatial reasoning, but we’re still not there yet because these models were not designed to do spatial reasoning they were designed to process text

    In my development, I also use the ascii matrix technique.

    • Spatial awareness was also a huge limitation to Claude playing pokemon.

      It really seems to me that the first AI company getting to implement "spatial awareness" vector tokens and integrating them neatly with the other conventional text, image and sound tokens will be reaping huge rewards. Some are already partnering with robot companies, it's only a matter of time before one of those gets there.

      1 reply →

    • I disagree. With opus I'll screenshot an app and draw all over it like a child with me paint and paste it into the chat - it seems to reasonably understand what I'm asking with my chicken scratch and dimensions.

      As far as 3d I don't have experience however it could be quite awful at that

      1 reply →

But will Claude pick up complaining guests and put them in a tiny isolated section of the park that only has a bathroom that charges $10 to use?

Question: There is still a competitive AoE2 community. Will that be destroyed by AI?

  • Dota 2 is a real time strategy game with an arguably more complex micro game (but a far simpler macro game than AoE2, but that's far easier for an AI to master), and OpenAI Five completely destroyed the reigning champions. In 2019. Perfect coordination between units, superhuman mechanical skill, perfect consistency.

    I see no reason why AoE2 would be any different.

    Worth noting that openAI Five was mostly deep reinforcement learning and massive distributed training, it didn't use image to text and an LLM for reasoning about what it sees to make its "decisions". But that wouldn't be a good way to do an AI like that anyway.

    Oh, and humans still play Dota. It's still a highly competitive community. So that wasn't destroyed at all, most teams now use AI to study tactics and strategy.

  • I suspect the fun is playing against real people and the unexpected things they do. Just because the AI can beat you does not necessarily make it fun. People still play chess despite stock fish existing.

> also completely unfazed by the premise that it has been 'hacked into' a late-90's computer game. This was surprising, but fits with Claude's playful personality and flexible disposition.

When I read things like this, I wonder if it's just me not understanding this brave new world, or half of AI developers are delusional and really believe that they are dealing with a sentient being.

> "Where Claude excels:"

Am I reading a Claude generated summary here?

  • I thought it sounded more like an ad for Claude written by Anthropic:

    > "This was surprising, but fits with Claude's playful personality and flexible disposition."

    • This sounds as expected to me as a heavy user of Opus. Claude absolutely has a "personality" that is a lot less formal and more willing to "play along" with creative tasks than Codex. If you want an agent that's prepared to just jump in, that's a plus. If you want an agent that will be careful, considered, and plan things out meticulously, it's not always so great - when you want Claude to do repetitive, tedious tasks, you need to do more work to prevent it from getting "bored" and taking shortcuts or finding something else to do, for example.

      3 replies →

  • Yes I believe so. Also things like forcing a "key insight" summary after the excels vs struggles section.

    I would take any descriptions like "comprehensive", "sophisticated" etc with a massive grain of salt. But the nuts and bolts of how it was done should be accurate.

this is cute, but i imagined prompting the AI for a loop-di-loop roller coaster. If it could build a complex ride it would be a game changer.

  • yeah I was expecting it to... do something in the game? like build a ride

    not just make up bullshit about events

Honestly I thought the AI would do better than what is described. RCT is pretty simple when it comes to things like what to set ride prices to; I think the game has a straightforward formula for how guests respond to prices.

Interesting that this is on the ramp.com domain. I'm surprised that in this tech market they can pay devs to hack on Rollercoaster Tycoon. Maybe there's some crossover I'm missing, but it seems like a sweet gig honestly.

  • yeah really - ramp.com is a credit card/expense platform that surely loses money right now...

    pretty heavy/slow JavaScript, but pretty functional nonetheless...

    • Why would they be losing money? It’s what we use for tracking expenses and getting comped for travel, meals, software licenses etc - works great in my experience. I can click a few buttons and get a new business expense card spun up in less than a minute, use it to make a purchase, get approval and have the funds transferred. Boom easy.

      Do you not think they’re charging enough or something?

    • This is brilliant SEO work; I doubt they lose money with it. At 40 hours of work plus some extra for the landing page it might be expensive link bait, but definitely worth it. Kudos!

      If not for SEO, it's building quite a good reputation for this company - they have a lot of open positions.

      I’m a big fan of Transport Tycoon - I used to play it for hours as a kid - and with OpenTTD it also might have been a good choice, but maybe not B2C?

next up: Crusader Kings III

  • > You’re right, I did accidentally slaughter all the residents of Béziers. I won’t do that again. But I think that you’ll find God knows his own.

  • Crusader Kings is a franchise where I really could see LLMs shine. One of the main criticisms of the current game is that there's a lack of events, and that they often don't really feel relevant to your character.

    An LLM could potentially make events far more tailored to your character, and could actually respond to things happening in the world far more than the game currently does. It could create some really cool emergent gameplay.
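
    To make that concrete, here's a minimal sketch of the idea - every name in it (Character, build_event_prompt, the example duke and his traits) is invented for illustration, not from any real game or API. The point is just that character state can be serialized into a prompt an LLM could turn into a bespoke event:

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """Hypothetical slice of a ruler's state that an event generator cares about."""
    name: str
    traits: list = field(default_factory=list)
    recent_events: list = field(default_factory=list)

def build_event_prompt(c: Character) -> str:
    """Assemble a context-aware prompt that an LLM could turn into an in-game event."""
    traits = ", ".join(c.traits) or "unremarkable"
    history = "; ".join(c.recent_events) or "nothing of note"
    return (
        f"Write a short in-game event for {c.name}, who is {traits}. "
        f"Recently: {history}. "
        "Offer two response choices consistent with these traits."
    )

ruler = Character("Duke Vratislav", ["craven", "scholarly"],
                  ["lost the siege of Prague"])
print(build_event_prompt(ruler))
```

    The prompt would then go to whatever model the game ships with; the hard parts (validating the output, keeping it consistent with game rules) are left out of this sketch.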

    • In general you are right, I expect something like this to appear in the future and it would be cool.

      But isn't the criticism rather that there are too many (as you say, repetitive and irrelevant) events? It's not like there are cool stories emerging from the underlying game mechanics anymore ("grand strategy"); instead players have to click through these boring predetermined events again and again.

      2 replies →

"i vibe coded a thing to play video games for me"

i enjoy playing video games my own self. separately, i enjoy writing code for video games. i don't need ai for either of these things.

  • Yeah, but can you use your enjoyment of video games as marketing material to justify a $32B valuation?

  • That's fine. Tool-assisted speedruns long predate LLMs and they're boring as hell: https://youtu.be/W-MrhVPEqRo

    It's still a neat perspective on how to optimize for super-specific constraints.

    • > Tool-assisted speedruns long predate LLMs and they're boring as hell

      You and I have _very_ different definitions for the word boring. A lot of effort goes into TAS runs.

  • I actually think it would be pretty fun to code something to play video games for me, it has a lot of overlap with robotics. Separately, I learned about assembly from cheat engine when I was a kid.

  • That’s not the point of this. This was an exercise to measure the strengths and weaknesses of current LLMs in operating a company and managing operations, and the video game was just the simulation engine.

  • You do you. I find this exceedingly cool and I think it's a fun new thing to do.

    It's kind of like how people started watching Let's Plays and that turned into Twitch.

    One of the coolest things recently is VTubers in mocap suits using AI performers to do single person improv performances with. It's wild and cool as hell. A single performer creating a vast fantasy world full of characters.

    LLMs and agents playing Pokemon and StarCraft? Also a ton of fun.