Comment by CamperBob2

3 hours ago

You can't feed something like that to the free ChatGPT model and expect anything useful. Try these:

https://chatgpt.com/s/t_6929f00ff5508191b75f31e219609a35 (5.1 Pro Thinking)

https://claude.ai/share/7d9caa25-14f7-4233-b15c-d32b86e20e09 (Opus 4.5)

https://docs.google.com/document/d/1C0lSKbLSZOyMWnGgR0QhZh3Q... (Gemini 3 Pro Thinking)

All of them recognized the thrM exception path, although I didn't review their output for correctness.

That said, I imagine the major showstopper in real-world disassembly tasks would simply be the limited context size. As you suggest, a standard LLM isn't really the best tool for the job, at least not without assistance to split the task up logically.

The first two do indeed look correct (the third link is not public). Free ChatGPT is understandably not the best, but I gave it essentially the smallest function in my codebase that does something meaningful, rather than any of the genuinely non-trivial multi-kilobyte functions.