Comment by com2kid
1 year ago
> OpenAI clearly downgrades some of their APIs from their maximal theoretic capability, for the purposes of response time/alignment/efficiency/whatever.
When ChatGPT 3.5 first came out, people were using it to simulate entire Linux system installs and even browse a simulated Internet.
Cool use cases like that aren't even discussed anymore.
I still wonder what sort of magic OpenAI had and then locked away from the world in the name of cost savings.
Same thing with GPT-4 vs. 4o: 4o is obviously worse in some ways, but after the initial release (when a bunch of people pointed this out), the issue has just been collectively ignored.
You can still do this. People just lost interest in this stuff because it became clear how shallow the simulation really is.
Yet I do wish we had access to less finetuned/distilled/RLHF'd models.
People are doing this all the time with Claude 3.5.
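For reference, the "simulated Linux terminal" trick is just a prompt; a minimal sketch in Python with the OpenAI SDK looks roughly like this (the model name and the exact prompt wording are my own guesses, not what anyone in this thread actually used):

```python
# Minimal sketch: ask a chat model to role-play a Linux terminal.
# Assumptions: OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Act as a Linux terminal. I will type commands and you will reply "
    "only with what the terminal would print, inside one code block. "
    "Do not write explanations."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def run(command: str) -> str:
    """Send one shell command to the simulated terminal and return its output."""
    history.append({"role": "user", "content": command})
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any current chat model works here
        messages=history,
    )
    output = resp.choices[0].message.content
    history.append({"role": "assistant", "content": output})
    return output

print(run("uname -a"))
print(run("ls /"))
```

The same prompt works with Claude 3.5 via Anthropic's messages API; the "depth" of the simulation is just the model improvising plausible output, which is why it falls apart under close inspection.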