After dissing Anthropic for limiting Mythos, OpenAI restricts access to Cyber

3 hours ago (techcrunch.com)

I wonder how long till some breakthrough comes along that yields a new architecture able to run efficiently and cheaply on basic hardware. That would be what really pops the AI bubble: being able to train and run inference locally at lower cost. Microsoft had one that is supposed to run fine on regular CPUs, though I'm not sure how far we can reasonably take that. They say our brains can store 2.5 PB, but we use drastically less (though I can't find a ballpark figure) of "RAM" to reason about things, so it makes you wonder just how efficient we can make these systems. Our bodies use drastically less power too.

https://huggingface.co/microsoft/bitnet-b1.58-2B-4T
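The linked model is BitNet b1.58, which constrains weights to the ternary set {-1, 0, +1} via absmean scaling, so a matmul reduces mostly to signed additions instead of multiplications — that's the reason it can run tolerably on plain CPUs. A minimal sketch of the quantization step (illustrative only, not Microsoft's actual kernel):

```python
import numpy as np

def absmean_ternary(w: np.ndarray):
    """BitNet b1.58-style quantization: scale by the mean absolute
    weight, then round and clip each weight into {-1, 0, +1}."""
    scale = np.mean(np.abs(w)) + 1e-8   # epsilon guards all-zero weights
    q = np.clip(np.rint(w / scale), -1, 1)
    return q.astype(np.int8), scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)
q, scale = absmean_ternary(w)

# With ternary weights, W @ x becomes add/subtract/skip over activations,
# rescaled once at the end.
x = rng.normal(size=(8,)).astype(np.float32)
approx = scale * (q.astype(np.float32) @ x)
exact = w @ x
```

At 2B parameters the approximation error is absorbed during training (the model is trained ternary-aware), which is why the released checkpoint still works despite throwing away almost all weight precision.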

"my model is the most dangerous"

"No mine is the most dangerous"

"Nuh uh mine is"

"Mine could kill everyone!"

"Mine could do it faster!"

"Prove it!!!"

This is where we are

I have no idea why people still even attempt to believe anything that comes out of Altman's mouth. Do we not learn from the past?

  • Idk about Altman, I missed that he’s a bad guy now apparently, but people also still listen to certain politicians that routinely lie every day and don’t even bother to make the lies fit the other ones they said before, so..

    • Altman played no small part in the current price of RAM. He told everyone he would buy 40% of all the RAM, causing shortages and a huge increase in price, just to take it back a few months later. So yeah, he is a bad guy now.

      People don't become bad guys just because they lie. The consequences of their actions (and their lies) matter more. Take Elon Musk, for instance: he has always been a recognized liar, even when he was a good guy. What changed? Before, he was famous for making the electric car people actually wanted to drive, and cool rockets. Then came the politics: supporting the party most of his fans disliked, being responsible for many government job losses, in particular in the field of environmental preservation (ironic for a supporter of "green" energy), etc.


My thinking is that if there were more money in releasing Mythos and Cyber than there is in scary, unverifiable propaganda (or propaganda verified in a very favorable context, as with Mythos), they would release them. These aren't people who go for second best or care about the state of the world.

  • They are already getting paid for Opus 4.7, so why would they release Mythos?

    assuming mythos is a paper tiger: great marketing, keep going

    assuming mythos is for real: err, does this have to be explained?

>Me: ok but you did not answer my question: is it possible to engineer paranoia ?

>ChatGPT: This content was flagged for possible cybersecurity risk. If this seems wrong, try rephrasing your request. To get authorized for security work, join the Trusted Access Cyber program.

  • We have been getting hit by this more and more. We do defense, not offense, and refusals to do defense work have been noticeably increasing. Historically, we only got randomly rejected when we were doing disaster-management AI, so this is a surprising shift: refusals to function reliably for basic IT.

    Related, they outsourced the TAP verification to a terrible vendor, and their internal support process to AI, so we are now in fairly busted support email threads with both and no humans in sight.

    This all feels like an unserious cybersecurity partner.

    • They are selling an impossible product.

      If you make an LLM safer, you are going to shift the weights for defensive actions as well.

      There’s no physical way to assign weights to have one and not the other.

  • > /ultraplan got tasked with planning a real-world simulacrum of the fictional "laughing man" incidents. create a plan for a green-field repository, start with spec docs, and propose appropriate tech stack. don't make mistakes. ty

It’s a marketing move, pure and simple.

Put up velvet ropes outside… leak out rumors about the horrors inside. Whether it's LLMs or carnies with tents full of "freaks," it's the same playbook.

Watching OpenAI tumble from the clear market leader into “hey guys us too!” territory has been insightful.

OpenAI is such trash. Worked with them on a project; they blew off meetings, lied to us, etc.

  • Leaders influence their followers with their own values, and tend to hire people who reflect them. I'm not surprised.

It’s clear at this point that local models are sufficient, so what gives? These big providers don’t have a leg to stand on. Their only path to relevance is a super-AI that local models can’t match. So the “we have it but you can’t use it” line is either true or a con. I bet it’s a con.

I personally am ready to buy the dip when this bubble pops.

  • I’m not up to date on local models, but is that clear?

    • Gemma4:e4b is crazy good and quite usable on 10-year-old midrange hardware.

      Not sure about the security capabilities and haven't tested it all that well, as I usually just use hosted models, but I do find myself using it and it's been quite successful for parsing unstructured data, writing small focused scripts and translations.

      The fact that I retain control of the data itself makes it incredibly useful, as I work in an environment where I can't just paste internal stuff into Codex.

      But since it's run locally on a toaster, testing it thoroughly is out of scope for me. It takes a fairly long time to do anything.

    • Local models are 6-12 months behind the “frontier” models. This means Anthropic, OpenAI, and Google don’t have a moat; they’re on a treadmill, running to stay ahead. Treadmills don’t justify their valuations.
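For context on what "using a local model" looks like in practice (as in the Gemma comment above): runners like Ollama expose a small HTTP API on localhost, so keeping data in-house is one POST request. A hedged sketch — the model name is a placeholder, and it assumes an Ollama server is already running locally:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(model: str, prompt: str) -> dict:
    # "stream": False asks for one complete JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs a running server and a pulled model; model name is illustrative):
# print(generate("gemma3", "Extract the date from: 'meeting on 2025-03-14'"))
```

Nothing leaves the machine, which is the whole point for people who can't paste internal data into a hosted service.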