Comment by jscyc

6 days ago

When you put it that way it reminds me of the Severn/Keats character in the Hyperion Cantos. Far-future AIs reconstruct historical figures from their writings in an attempt to gain philosophical insights.

The Hyperion Cantos is such an incredible work of fiction. I'm currently re-reading it and am midway through the fourth book, The Rise of Endymion; this series captivates my imagination, and I would often find myself idly reflecting on it and its characters more than a decade after first reading it. Like all works, it has its shortcomings, but I can give no higher recommendation than the first two books.

  • I really should re-read the series. I enjoyed it when I read it back in 2000 but it's a faded memory now.

    Without saying anything specific to spoil plot points, I will say that I ended up having a kidney stone while I was reading the last two books of the series. It was fucking eerie.

This isn’t science fiction anymore. The CIA is using chatbot simulations of world leaders to inform analysts. https://archive.ph/9KxkJ

  • We're literally running out of science fiction topics faster than we can create new ones

    If I started a list of the things that were comically sci-fi when I was a kid and are a reality today, I'd be here until next Tuesday.

    • Almost no sci-fi has predicted world-changing "qualitative" shifts.

      As an example, portable phones have been predicted. Portable smartphones that are more like chat and payment terminals, with a voice function no one uses anymore ... not so much.

      19 replies →

    • Not at all; you just need to read different sci-fi. I suggest Greg Egan, Stephen Baxter, Derek Künsken, and The Quantum Thief series.

  • Zero percent chance this is anything other than laughably bad. The fact that they're trotting it out in front of the press like a double-spaced book report only reinforces this theory. It's a transparent attempt by someone at the CIA to be able to say they're using AI in a meeting with their bosses.

    • I wonder if it's an attempt to get foreign counterparts to waste time and energy on something the CIA knows is a dead end.

    • Unless the world leaders they're simulating are laughably bad and tend to repeat themselves and hallucinate, like Trump. Who knows, maybe a chatbot trained on all the classified documents he stole and all his Twitter and Truth Social posts wrote his tweet about Rob Reiner, and he's actually sleeping at 3:00 AM instead of sitting on the toilet tweeting in upper case.

    • Let me take the opposing position about a program to wire LLMs into their already-advanced sensory database.

      I assume the CIA is lying about simulating world leaders. These are narcissistic personalities and it’s jarring to hear that they can be replaced, either by a body double or an indistinguishable chatbot. Also, it’s still cheaper to have humans do this.

      More likely, the CIA is modeling its own experts. Not as useful a press release and not as impressive to the fractious executive branch. But consider having downtime as a CIA expert on submarine cables. You might be predicting what kind of available data is capable of predicting the cause and/or effect of cuts. Ten years ago, an ensemble of such models was state of the art, but its sensory libraries were based on maybe traceroute and marine shipping. With an LLM, you can generate a whole lot of training data that an expert can refine during his/her downtime. Maybe there’s a potent new data source that an expensive operation could unlock. That ensemble of ML models from ten years ago can still be refined.

      And then there’s modeling things that don’t exist. Maybe it’s important to optimize a statement for its disinfo potency. Try it harmlessly on LLMs fed event data. What happens if some oligarch retires unexpectedly? Who rises? That kind of stuff.

      To your last point, with this executive branch, I expect their very first question to CIA wasn’t about aliens or which nations have a copy of a particular tape of Trump, but can you make us money. So the approaches above all have some way of producing business intelligence. Whereas a Kim Jong Un bobblehead does not.

  • Sounds like using Instagram posts to determine what someone really looks like

  • I predict very rich people will pay to have LLMs created based on their personalities.

    • As an ego thing, obviously, but if we think about it a bit more, it makes sense for busy people. If you're the point person for a project, and it's a large project, people don't read documentation. The number of "quick questions" you get will soon overwhelm you to the point that you simply have to start ignoring people. If a bot version of you could answer all those questions (without hallucinating), you'd get back a ton of time to, y'know, run the project.

    • "I sound seven percent more like Commander Shepard than any other bootleg LLM copy!"

  • [flagged]

    • Depending on which prompt you used, and the training cutoff, this could be anywhere from completely unremarkable to somewhat interesting.

    • Interesting. Would you be ok disclosing the following:

      - Are you (edit: on a) paid version?
      - If paid, which model did you use?
      - Can you share the exact prompt?

      I am genuinely asking for myself. I have never received an answer this direct, but I accept there is a level of variability.

This is such a ridiculously good series. If you haven't read it yet, I thoroughly recommend it.