Comment by karim79
7 months ago
It's amazing just how ill-understood this tech is, even by its creators who are funded by gazillions of dollars. Reminds me of this:
https://www.searchenginejournal.com/researchers-test-if-thre...
It just doesn't reassure me in the slightest. I don't see how super duper autocomplete will lead to AGI. All this hype reminds me of Elon colonizing Mars by 2026 and millions or billions of robots by 2030 or something.
I took a continuing education class from Stanford on ML recently and this was my main takeaway. Even the experts are just kinda poking it with a stick and seeing what happens.
That's just how science happens sometimes, and how new discoveries are made. Heck, even I have to do that sometimes with the codebase of a large legacy application. It's not an unreasonable tactic.
Incompetent people waiting for "science to happen" while the merchant class lies to the peasants about what science will deliver, in order to make money. That explains what's going on.
As I was reading that prompt, it looked like a large blob of if/else case statements.
Maybe we can train a simpler model to come up with the correct if/else-statements for the prompt. Like a tug boat.
Hobbyists (random dudes who use LLM models to roleplay locally) have already figured out how to "soft-prompt".
This is when you use ML to optimize an embedding vector to serve as your system prompt instead of guessing and writing it out by hand like a caveman.
Don't know why the big cloud LLM providers don't do this.
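The "soft-prompt" idea (known in the literature as prompt tuning) can be sketched in a few lines. This is a toy illustration, not a real LLM setup: the frozen "model" is just a linear map, and we run gradient descent on the prompt vector alone, leaving the model weights untouched. A real version would do the same thing with a frozen transformer and learned prompt embeddings prepended to the input.

```python
import numpy as np

# Toy sketch of "soft-prompting" (prompt tuning): the model weights W are
# frozen; we optimize only a continuous prompt vector p so the model's
# output on [p; x] moves toward a desired target.

rng = np.random.default_rng(0)
d_prompt, d_input, d_out = 4, 3, 2

W = rng.normal(size=(d_out, d_prompt + d_input))  # frozen "model"
x = rng.normal(size=d_input)                       # fixed user input
target = np.array([1.0, -1.0])                     # desired model output

p = np.zeros(d_prompt)                             # learnable soft prompt
lr = 0.01

def loss(p):
    y = W @ np.concatenate([p, x])
    return float(np.sum((y - target) ** 2))

initial = loss(p)
for _ in range(5000):
    y = W @ np.concatenate([p, x])
    # Gradient of the squared error with respect to the prompt block only;
    # the model weights never receive an update.
    grad_p = 2 * W[:, :d_prompt].T @ (y - target)
    p -= lr * grad_p

final = loss(p)
print(f"loss: {initial:.3f} -> {final:.6f}")
```

The point of the technique is exactly what the comment says: instead of hand-writing prose and guessing, you let the optimizer search the continuous embedding space directly. One catch for the big providers is that a learned embedding is opaque and model-specific, which hand-written prompts are not.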
This is generally how prompt engineering works
1. Start with a prompt
2. Find some issues
3. Prompt against those issues*
4. Condense into a new prompt
5. Go back to (1)
* ideally add some evals too
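The loop above can be sketched in code. This is a toy: the "evals" are just string checks on the prompt itself, and the refinement step mechanically appends a rule per failing eval; in a real workflow the evals would run a model against test cases and the refinement would be a human (or another model) rewriting the prompt. All names here are made up for illustration.

```python
# Toy sketch of the eval-driven prompt-iteration loop.
# Hypothetical evals: (name, rule text that would fix the issue).
EVALS = [
    ("refuses-medical-advice", "Do not give medical advice."),
    ("cites-sources", "Cite sources for factual claims."),
    ("stays-concise", "Keep answers under 200 words."),
]

def run_evals(prompt):
    """Step 2: return the names of evals the current prompt fails."""
    return [name for name, rule in EVALS if rule not in prompt]

def refine(prompt, failures):
    """Steps 3-4: patch the failing issues, condense into a new prompt."""
    rules = dict(EVALS)
    return prompt + "".join("\n- " + rules[name] for name in failures)

prompt = "You are a helpful assistant."   # step 1: start with a prompt
for iteration in range(10):               # step 5: loop until evals pass
    failures = run_evals(prompt)
    if not failures:
        break
    prompt = refine(prompt, failures)

print(prompt)
```

The important part is that the evals persist across iterations, so a later rewrite can't silently regress an issue you already fixed.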
If you could see how, it would basically already be done. But it not being obvious doesn't prevent us from getting there (superhuman in almost all domains) in a few more breakthroughs.
Reminds me of Elon saying that self-driving a car is essentially ballistics. It explains quite a bit of how FSD is going.
How is it going? I use it every day in NYC and I think it's incredible.
You are not. There is no car that has FSD. If you are relying on Tesla's Autopilot thinking it is full self-driving, you are gambling with your own life and everyone else's on the road, especially in urban traffic like NYC's.
How often do you need to intervene?
FSD is going pretty well. Have you looked at real drives recently, or just consumed the opinions of others?
Musk has been "selling" it for a decade. When are Model 3s from 2018 getting it?
Every single piece of hype coverage that comes out about anything is really just geared towards pumping the stock values
That's really all there is to it, imo. These executives are all just lying constantly to build excitement and pump value based on wishes and dreams. I don't think any of them genuinely cares even a single bit about truth, only money.
That's exactly it. It's all "vibe" or "meme" stock with the promise of AGI right around the corner.
Just like Mars colonisation in 2026 and other stupid promises designed to pump it up.
What stock value? OpenAI and Anthropic are private.
(If they were public it'd be illegal to lie to investors - if you think this you should sue them for securities fraud.)
> illegal to lie to investors
Unfortunately, in practice it's only illegal if they can prove you lied on purpose
As for your other point, hype feeds into other financial incentives like acquiring customers, not just stocks. Stocks was just the example I reached for. You're right it's not the best example for private companies. That's my bad
Extremely accurate. Each and every OpenAI employee just got a $1.5 million bonus. They must be printing money!
Charitable of you to think it's "printing money" and not "burning investors' cash".
Welcome to for-profit enterprises? The fact that anyone even for a moment thought otherwise is the really shocking bit of news.
The fact this is normalized and considered okay should make us more angry, not just scoff and say "of course it's all fake and lies, did you really think otherwise?"
We should be pissed at how often corporations lie in marketing and get away with it
Wasn't it a nonprofit at one point?
> I don't see how super duper auto complete will lead to AGI
Autocomplete is the training algorithm, not what the model "actually does". Autocomplete was chosen because it has an obvious training procedure and it generalizes well to non-autocomplete stuff.
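That "autocomplete" objective is just next-token prediction trained with cross-entropy: at each position the model emits scores over the vocabulary and is penalized against the actual next token. A minimal sketch with a toy vocabulary and made-up logits (no real model involved):

```python
import numpy as np

# Sketch of the next-token-prediction ("autocomplete") training loss:
# one row of logits per context position, cross-entropy against the
# token that actually came next. Toy vocabulary, random stand-in logits.

vocab = ["the", "cat", "sat", "on", "mat"]
tokens = [0, 1, 2, 3, 0, 4]          # "the cat sat on the mat"

rng = np.random.default_rng(0)
logits = rng.normal(size=(len(tokens) - 1, len(vocab)))

def next_token_loss(logits, tokens):
    """Mean cross-entropy of predicting tokens[t+1] from position t."""
    # Numerically stable log-softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    targets = tokens[1:]
    return float(-log_probs[np.arange(len(targets)), targets].mean())

loss = next_token_loss(logits, tokens)
print(f"cross-entropy: {loss:.3f}")
```

Minimizing this over a large corpus is the training procedure; what the resulting model does at inference time is not constrained to "completion" in any interesting sense, which is the commenter's point.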