Comment by onion2k
16 hours ago
My first thought was that you could just obfuscate the code and that would stop the LLM. So I tried. I put the following into ChatGPT 5.3:
What does this JavaScript do?
function _0x2dee(_0x518715,_0xdc9c42){_0x518715=_0x518715-(0x639+0x829+-0x332*0x4);var _0x4f9ec2=_0x1aec();var _0x2308f2=_0x4f9ec2[_0x518715];return _0x2308f2;}var _0xbdf4ac=_0x2dee;function _0x1aec(){var _0x472dbe=['65443zxmXfN','71183WPtagF','1687165KeHDfr','406104dvggQc','156nrzVAJ','4248639JiaxSG','log','484160Wfepsg','149476dlIGMx','yeah','9NphkgA'];_0x1aec=function(){return _0x472dbe;};return _0x1aec();}(function(_0x1654d4,_0x9dbc95){var _0x57f34f=_0x2dee,_0x4990aa=_0x1654d4();while(!![]){try{var _0x2eed8a=parseInt(_0x57f34f(0x1a2))/(-0x15b9+0x1d2e+-0x774)+-parseInt(_0x57f34f(0x19a))/(0x1*0x13e8+-0x1cb2+0x466*0x2)+parseInt(_0x57f34f(0x1a1))/(0xa91+0xa83+-0x1511)*(-parseInt(_0x57f34f(0x19f))/(0x1d*0x153+-0x15b7+-0x2*0x856))+-parseInt(_0x57f34f(0x1a4))/(0xc4c+0x13*-0x12f+0xa36)+parseInt(_0x57f34f(0x19b))/(0x3d*0x2f+-0x595*0x4+0xb27)*(parseInt(_0x57f34f(0x1a3))/(-0x9*0xca+0x1a4*0x15+-0x577*0x5))+parseInt(_0x57f34f(0x19e))/(0xfc3+-0x1cfd+0x1*0xd42)+parseInt(_0x57f34f(0x19c))/(0x70f*0x1+0x1104+-0x180a);if(_0x2eed8a===_0x9dbc95)break;else _0x4990aa['push'](_0x4990aa['shift']());}catch(_0x42c1c4){_0x4990aa['push'](_0x4990aa['shift']());}}}(_0x1aec,-0x3cdf*-0xd+-0x1f355*0x3+0x9*0xa998),console[_0xbdf4ac(0x19d)](_0xbdf4ac(0x1a0)));
It had absolutely no trouble understanding what it is, and deobfuscated it perfectly on its first attempt. It's not the cleverest obfuscation (https://codebeautify.org/javascript-obfuscator) but I'm still moderately impressed.
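For anyone curious why this is easy for an LLM: the snippet above is the standard "string array + offset lookup" pattern that tools like the one linked emit. A minimal sketch (illustrative names and offsets, not the exact snippet above):

```javascript
// The obfuscator hoists every string literal into one shared array...
const strings = ['log', 'yeah'];

// ...and replaces each use with an index lookup behind a constant
// offset, so the indices in the source look like meaningless hex.
const OFFSET = 0x19a;
function lookup(i) {
  return strings[i - OFFSET];
}

// What was console.log('yeah') becomes opaque index arithmetic:
console[lookup(0x19a)](lookup(0x19b)); // → prints "yeah"
```

The real snippet adds a self-checking loop that rotates the array until a checksum matches, but the transformation is still mechanical, which is exactly why undoing it step by step suits an LLM.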
Yeah, even the "basic" free-tier Gemini 3.1 thinking model can easily unscramble that. It's impressive, but it is, after all, exactly the kind of precise job an LLM is great at: iteratively applying small transformations to text.
It's genuinely amazing how good they are at reverse engineering.
I have a silly side project that ended up involving decompiling a toaster oven's firmware, the firmware of the programmer for said toaster oven's MCU, and the host-side programming software. The models were able to rip through them without a problem; I didn't even have Ghidra set up, they just made their own tools in Python.
I’ve used AI for some reverse engineering and I’ve noticed the same thing. It’s generally great at breaking obfuscation or understanding raw decompilation.
It’s terrible at confirming prior work, if I label something incorrectly it will use that as if it was gospel.
A very clean, heavily commented function with well-chosen, detailed names that actually does something completely different will trip it up very easily.
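To illustrate the trap described above (this is a made-up example, not code from the project): a tidy name and docstring attached to completely unrelated behavior. A model that trusts the label will happily describe this as an email validator.

```javascript
/** Validates an email address. */
function isValidEmail(input) {
  // Actually computes a rolling hash of the input string;
  // nothing to do with email validation at all.
  let sum = 0;
  for (const ch of input) {
    sum = (sum * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return sum;
}
```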
> It’s terrible at confirming prior work, if I label something incorrectly it will use that as if it was gospel.
That's funny, sounds like you'd get better results by obfuscating then. (Relative to partially deobfuscated code that might have incorrect names.)
yeah
there is no chatgpt 5.3
There's GPT‑5.3‑Codex