Comment by squibonpig

16 hours ago

The people doing the downvoting are different people from the whiners. I'm one of the whiners.

That's kinda fair, and it's still useful to prepare a list, but if you didn't research the information yourself, why would I start from a position of charity when I read it? When I research something with LLMs, I know to double-check everything myself before I use it as a basis for my thinking or repeat it to other people. Knowing an article is AI-written forces me to doubt every sentence. Or maybe it's worse: I have to assume nobody cared about the sentence. The old format was a guarantee that someone gave enough of a shit to put the article together; relevance came implicitly bundled into every sentence. It's like someone talking to you in public: there's usually a reason to pay attention.

It's not as though that person is going to say something correct, or ethical, but I've had a lifetime of dealing with human kinds of wrongness. When stuff is wrong, I'll know it's wrong because the article is slanted, or wrong because the author was lazy, etc., which lets me discount it selectively and still get value from it when, e.g., a slanted author contradicts themselves. Reading an LLM article, I have no clue whether the person who put it up even read the whole thing, so I have no guarantee that any given sentence communicates something worth paying attention to. I dislike that ambiguity and would rather guarantee the text is slop by asking a bot myself; then I know its worth upfront. I'd be fine with it if these sites included a direct statement in bold at the top: HEY, THIS IS AI SLOP. IF YOU DON'T WANT THAT, LEAVE. Then I know exactly how to parse it.

You might like my new startup, then: https://safebots.ai

I spent way too much time actually building this (with Claude, and double-checking everything) so that an article I publish is OK to push out. We aren't building a bridge for thousands of cars here; it's an article.

A lot of things are automated, and 95% of the time they're correct. The key is knowing whether the last mile is worth fixing when the consequences are minor.

  • I read through your presentation, but I still feel pretty confused about what your startup does. Could you explain it?

    • Shimman is wrong. The goal is much bigger, and almost the opposite of what he thinks. It's trying to solve the problem of people chasing "slivers" of money and selling out, which happened in Web2 and Web3: https://safebots.ai/singularity.pdf

      What the startup does is provide a verifiably trusted, zero-configuration, turnkey environment that businesses can move their data into and run AI workloads on, without worrying about their data being stolen or about Agents doing unpredictable things. The environment is locked down hard, with no SSH. It's an appliance, with over-the-air M-of-N updates. Think more "Tesla car" and less "OpenClaw". That's the foundation.

      That environment then builds everything around a graph database, for people, organizations, and even code. We have Grokers that can statically ingest a codebase once and then present the graph database as a far better "RAG" than cosine similarity over Pinecone-style vector databases.
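      If the retrieval idea here is unfamiliar: instead of ranking text chunks by cosine similarity, a graph-based approach walks relationships (calls, imports) outward from a symbol and hands that neighborhood to the model. A minimal sketch in Python, with every name hypothetical (this is not the actual safebots.ai API):

```python
# Hypothetical sketch of graph-based code retrieval: collect the symbols
# reachable within a few "calls"/"imports" hops of a starting symbol,
# rather than ranking chunks by embedding similarity. All names are
# illustrative, not a real API.
from collections import defaultdict

class CodeGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # symbol -> symbols it calls/imports

    def add_edge(self, src, dst):
        self.edges[src].add(dst)

    def context_for(self, symbol, depth=2):
        """Symbols reachable within `depth` hops: the retrieval set
        a graph-RAG would feed to the model."""
        seen, frontier = {symbol}, {symbol}
        for _ in range(depth):
            frontier = {n for s in frontier for n in self.edges[s]} - seen
            seen |= frontier
        return seen - {symbol}

g = CodeGraph()
g.add_edge("billing.charge", "stripe.api")
g.add_edge("billing.charge", "db.save_invoice")
g.add_edge("db.save_invoice", "db.connect")

# Retrieves the transitive neighborhood of billing.charge
print(g.context_for("billing.charge"))
```

      The point of the structure: `db.connect` is retrieved because it is two hops away in the call graph, even if its text shares no vocabulary with the query, which is exactly where pure cosine similarity tends to miss.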

      At its most basic level: Agents can't be trusted. We want predictable Workflows, not Agents. Done properly, Workflows can do 99% of everything Agents can, and the remaining 1% is the dangerous part: https://safebots.ai/agents.html

      It's a lot of innovations at once, including:

      • Collaborative Bots that are safer than Agents.

      • Workflows and tools that can read, reason, and propose actions.

      • Policies that must be satisfied before actions can be taken.

      • Logging of everything, plus verifiable security and audits for SOC 2 compliance, etc.

      Everything is configurable and designed for serious businesses, not for a grandma who just finished a Chinese course on how to install OpenClaw in her terminal without getting pwned.
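      The "Workflows plus Policies plus Logging" idea above can be sketched in a few lines: tools propose actions, policies gate them, and everything lands in an audit log. All names here are hypothetical, not the actual safebots.ai API:

```python
# Hypothetical sketch of a policy-gated workflow: nothing executes
# unless every policy approves, and every decision is logged.
# Illustrative only, not a real API.

POLICIES = [
    lambda a: a["amount"] <= 100,           # spending cap
    lambda a: a["target"] != "production",  # no writes to prod
]

def run_action(action, audit_log):
    """Execute a proposed action only if all policies pass."""
    if all(policy(action) for policy in POLICIES):
        audit_log.append(("executed", action))
        return True
    audit_log.append(("blocked", action))   # blocked actions are logged too
    return False

log = []
run_action({"amount": 50, "target": "staging"}, log)   # passes both policies
run_action({"amount": 500, "target": "staging"}, log)  # blocked by spending cap
print(log[0][0], log[1][0])  # executed blocked
```

      The design choice this illustrates: the dangerous 1% never runs, because policies are checked before execution rather than after, and the audit log records blocked proposals as well as executed ones.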