Comment by monocasa
8 days ago
Someone unminified the JS, and it turned out that a bunch of the REST endpoints it knew about were just unauthenticated CRUD endpoints for the site.
https://archive.ph/2025.02.14-132833/https://www.404media.co...
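For anyone wondering what that class of bug looks like, here's a minimal sketch (hypothetical handler and type names, not the site's actual code) contrasting a CRUD handler that trusts any caller with one that checks a bearer token first:

```typescript
// Plain-TypeScript stand-ins for an HTTP request/response pair.
type Req = { method: string; headers: Record<string, string>; body?: unknown };
type Res = { status: number; body: string };

const posts: string[] = []; // stand-in for the site's database

// Vulnerable: anyone who discovers the endpoint can write to the database.
function handlePostUnverified(req: Req): Res {
  posts.push(String(req.body));
  return { status: 201, body: "created" };
}

// Fixed: reject mutations that lack a valid credential.
function handlePostVerified(req: Req, validToken: string): Res {
  if (req.headers["authorization"] !== `Bearer ${validToken}`) {
    return { status: 401, body: "unauthorized" };
  }
  posts.push(String(req.body));
  return { status: 201, body: "created" };
}
```

The unverified variant is effectively what "just unauthenticated CRUD endpoints" means: the mutation path exists and nothing gates it.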
Smells exactly like an LLM-created solution.
Or just what happens when you hire a bunch of 20-year-olds and let them loose.
That's currently how I model my usage of LLMs in code. A smart veeeery junior engineer that needs to be kept on a veeeeery short leash.
Yes. LLMs are very much like a smart intern with no real experience who is very eager to please you.
Even at 20 years old I would not have done this.
One who thinks "open source" means blindly copy/pasting code snippets found online.
It's definitely both. A bunch of 20-year-olds were let loose to be "super efficient." So, to be efficient, they used LLMs to implement what should be a major government oversight webpage. Even after the fix, the list is a few half-baked partial document excerpts with a few sentences saying, "look how great we are!" It's embarrassing.
Does it? At least my experience is that ChatGPT goes super hard on security, heavily promoting the use of best practices.
Maybe they used Grok ;P
> At least my experience is that ChatGPT goes super hard on security, heavily promoting the use of best practices.
Not my experience at all. Every LLM produces lots of trivial SQLi/XSS/other-injection vulnerabilities. Worse, they seem to completely omit authorization, business logic, error handling, and logging even when prompted to include them.
Does it, though? The saying goes that we shouldn't attribute to malice what incompetence can explain, but Musk's retinue strains that more than usual.
Smells like getting a backdoor in early.
Apparently they get backdoors in as incompetently as they create efficiency.
My first guess is that this is an unauthenticated server action.[0]
0 - https://blog.arcjet.com/next-js-server-action-security/
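To illustrate the pitfall that post describes: in Next.js, every exported "use server" function becomes a public POST endpoint whether or not any UI calls it, so the action itself has to check authorization. Here's a plain-TypeScript sketch of that pattern (getSession and deletePageAction are made-up names standing in for a real session lookup and server action, not actual Next.js APIs):

```typescript
type Session = { userId: string; isAdmin: boolean } | null;

// Stand-in for a real session lookup (e.g. one backed by signed cookies);
// the token value here is purely illustrative.
function getSession(token: string | undefined): Session {
  return token === "admin-token" ? { userId: "u1", isAdmin: true } : null;
}

// An action that mutates data must re-check the session on every call,
// because the framework will happily accept a hand-crafted POST.
function deletePageAction(token: string | undefined, pageId: string): string {
  const session = getSession(token);
  if (!session?.isAdmin) return "forbidden";
  return `deleted ${pageId}`; // placeholder for the real mutation
}
```

The bug class is simply shipping the action without the session check, on the assumption that "nothing in the UI calls it" equals "nobody can call it."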
Maybe DOGE should have used an LLM to generate defenses
They did, and this is what they got.
Just checked the DOGE website; I'm not too sure about this theory, given that POST requests are blocked and the only APIs you can find (i.e. /api/offices) only support GET requests and 404 if the UUID doesn't match.
I don't see any CRUD endpoints for modifying the database.
DOGE noticed. They might have "fixed" the vulnerability by now.
https://doge.gov/workforce?orgId=69ee18bc-9ac8-467e-84b0-106... is what's linked to by the "Workforce" header, and it now looks different than the screenshots
Good thing we have the best and brightest at DOGE!
well they pay for a blue checkmark, they _must_ be the cleverest we have
It's been a while since I last saw a CMS pulling data live from a database on every request... It's a miracle the website didn't crumble under the load.
Put a CMS behind a well-configured CDN and it's essentially a static site generator. If you have cache invalidation figured out, you get all the speed and scalability benefits of a static site without ever having to regenerate your content.
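A toy sketch of that setup, using made-up names (buildHeaders, Cdn) rather than any real CDN's API: the origin marks pages as cacheable at the edge, and publishing a change purges (invalidates) that path so the next request regenerates it:

```typescript
// Response headers the CMS origin would send: edge caches may keep the
// page for maxAgeSeconds, while browsers revalidate on each visit.
function buildHeaders(maxAgeSeconds: number): Record<string, string> {
  return {
    "Cache-Control": `public, s-maxage=${maxAgeSeconds}, stale-while-revalidate=60`,
  };
}

// Tiny in-memory model of an edge cache with purge-on-publish.
class Cdn {
  private cache = new Map<string, string>();

  // Serve from cache if present; otherwise hit the origin and store the result.
  get(path: string, origin: (p: string) => string): string {
    const hit = this.cache.get(path);
    if (hit !== undefined) return hit;
    const fresh = origin(path);
    this.cache.set(path, fresh);
    return fresh;
  }

  // Cache invalidation: called when content is published or updated.
  purge(path: string): void {
    this.cache.delete(path);
  }
}
```

Once the purge hook fires on publish, the origin only sees one request per path per update, which is the "static site without the regeneration step" effect described above.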
I’m guessing it didn’t have much in front of it, because the management endpoints were accessible from the public Internet. I think your mentioning the “well-configured CDN” is key here: if there was a CDN in front of it, it wasn’t well configured.
BTW, I spent a lot of my career configuring load balancing, caches, proxies, sharding, and CDNs for websites running Plone (a CMS that’s popular with governments).
https://m.youtube.com/watch?v=woPff-Tpkns&pp=ygUSdW5kZXJ0YWx...