The chat is full of modern “art talk,” which is a highly specific way that modern (post-2000ish) artists blather on about their ideas and process. It started earlier, but in the 1980s there was more hippie talk and po-mo deconstruction lingo.
Point being, to someone outside the art world this might sound like how an artist thinks. But to my ear this is a bot imitating modern trendy speech from that world.
> But to my ear this is a bot imitating modern trendy speech from that world.
Unless they've had some reinforcement learning, I'm pretty sure that's all LLMs ever really do.
Even with reinforcement learning, you can still find phrases and patterns that are repeated in the smaller models. It's likely true with the larger ones, too, except the corpus is so large that you've got a fat chance of picking out which specific bits.
I think you mean “post-modern” or “contemporary”: modern art is a period of art that came to an end around the 1970s.
I see this mistake all the time.
I think people who have the opportunity should visit the MoMA to see the wide variety of art there.
I'm sure a lot would consider van Gogh or Klimt to be "traditional" artists when they're very much modern artists.
Obligatory XKCD: https://xkcd.com/3089/
It's also imitating the speaker (critic, artist, or most likely a gallerist) unwaveringly praising everything about the "choices" it made, even though it clearly made a worse thing in the end.
Indeed, I have a really dry and information-dense way of speaking when working, and it very quickly copies that. I can come across as abrupt and rude in text, which is pretty funny to have mirrored back to you. This Claude guy is an asshole!
(I am very friendly and personable in real life, but work text has different requirements)
Very Ongo Gablogian
I think it's somewhat interesting that codex (gpt-5.3-codex xhigh), given the exact same prompt, came up with a very similar result.
https://3e.org/private/self-portrait-plotter.svg
Asked gemini the same question and it produced a similar-ish image: https://manuelmoreale.dev/hn/gemini_1.svg
When I removed the plot part and simply asked to generate an SVG it basically created a fancy version of the Gemini logo: https://manuelmoreale.dev/hn/gemini_2.svg
This is honestly all quite uninteresting to me. The most interesting part is that the various tools all create a similar illustration though.
Is it? They're all generalizing from a pretty similar pool of text, and especially for the idea of a "helpful, harmless, knowledgeable virtual assistant", I think you'd end up in the same latent design space. Encompassing, friendly, radiant.
Note that the (presumably human) designers at Claude, ChatGPT, Perplexity, and other LLM companies chose a similar style for their app icons: a vaguely starburst- or asterisk-shaped pop of lines.
Spirals again.
Those AIs have read too much Junji Ito.
AFAIK all of these models have been trained in very similar ways, on very similar corpuses. They could be heavily influenced by the same literature.
I wonder if anyone recognizes it as something more specific. The Pale Fire quote below is similar but not really the same.
It’s a bit closer to the Flying Spaghetti Monster.
good stuff, thank you for sharing!
"Doesn't look like anything to me"
I love that these would be perfectly at home as sigils in some horror genre franchise.
it's just reality
Are you crazy, or am I? I scrolled through that blog and am left scratching my head at you and your claim.
That literal spiral pattern keeps popping up, often around instances of AI psychosis: https://www.lesswrong.com/posts/6ZnznCaTcbGYsCmqu/the-rise-o...
(I'm not endorsing any of that article's conclusions, but it's a good overview of the pattern.)
Maybe Claude is just a fan of Tengen Toppa Gurren Lagann? (Or influenced by the fandom thereof.)
https://www.youtube.com/watch?v=30mmoegSQCs
https://en.wikipedia.org/wiki/Dark_City_(1998_film)
> Enable JavaScript to continue
I wonder what’s here that requires code execution
How annoying. I am not a Less Wrong reader so I have no particular insight here.
> In computer science, the ELIZA effect is a tendency to project human traits — such as experience, semantic comprehension or empathy — onto rudimentary computer programs having a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.
https://en.wikipedia.org/wiki/ELIZA_effect
I feel like we need another effect for people on Hacker News who consistently do the opposite: take obvious intelligence and pretend it's equivalent to ELIZA.
Already exists: https://en.wikipedia.org/wiki/AI_effect
Does this effect refer to how HN commenters respond to one another in the comments?
> and Claude to answer:
I wonder if it would give a similar evaluation in a new session, without the context of "knowing" that it had just produced an SVG describing an image that is supposed to have these qualities. How much of this is actually evaluating the photo of the plotter's output, versus post-hoc rationalization?
It's notable that the second attempt is radically different, and I would say thematically less interesting, yet Claude claims to prefer it.
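One way to actually test the post-hoc-rationalization hypothesis would be to show the photo of the plot to a fresh session with no generation context. A minimal sketch of building such a request (the model name and the exact payload shape are assumptions to check against Anthropic's current Messages API docs):

```python
import base64

def fresh_eval_request(image_path, model="claude-opus-4"):
    """Build a single-turn request that shows the plotted photo to a model
    with no prior context, so any praise can't lean on session history.
    The model name is a placeholder."""
    with open(image_path, "rb") as f:
        img = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/jpeg", "data": img}},
                {"type": "text",
                 "text": "Critique this pen-plotter drawing. What works, what doesn't?"},
            ],
        }],
    }
```

If the fresh-session critique diverges sharply from the in-context self-praise, that would point toward post-hoc rationalization.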
Certainly the second half of the session suffers from degradation
> [Claude Code] "A spiral that generates itself — starting from a tight mathematical center (my computational substrate) and branching outward into increasingly organic, tree-like forms (the meaning that emerges). Structure becoming life. The self-drawing hand."
"And blood-black nothingness began to spin... A system of cells interlinked within cells interlinked within cells interlinked within one stem... And dreadfully distinct against the dark, a tall white fountain played." ("Blade Runner 2049", Officer K-D-six-dash-three-dot-seven)
:)
The poetry you quoted is originally by Vladimir Nabokov in Pale Fire.
The book Pale Fire is shown in the movie Blade Runner 2049:
https://www.youtube.com/watch?v=OtLvtMqWNz8
Solving Nabokov's Pale Fire - A Deep Dive
https://www.youtube.com/watch?v=-8wEEaHUnkA
Pale Fire is what we call ergodic literature.
Ergodic literature refers to texts requiring non-trivial effort from the reader to traverse, moving beyond linear, top-to-bottom reading to actively navigate complex, often nonlinear structures. Coined by Espen J. Aarseth (1997), it combines "ergon" (work) and "hodos" (path), encompassing print and electronic works that demand physical engagement, such as solving puzzles or following, navigating, or choosing paths.
Ergodic Literature: The Weirdest Book Genre
https://www.youtube.com/watch?v=tKX90LbnYd4
"House of Leaves" is another book from the same genre.
House of Leaves - A Place of Absence
https://www.youtube.com/watch?v=YJl7HpkotCE
Diving into House of Leaves Secrets and Connections | Video Essay
https://www.youtube.com/watch?v=du2R47kMuDE
The Book That Lies to You - House of Leaves Explained
https://www.youtube.com/watch?v=tCQJUUXnRIQ
I went down this rabbit hole a few years ago.
Pale Fire is brilliant - wonderfully written and very funny. The poem itself is pretty good too - one of my favourite bits:
How to locate in blackness, with a gasp,
Terra the Fair, an orbicle of jasp.
How to keep sane in spiral types of space.
Precautions to be taken in the case
Of freak reincarnation: what to do
On suddenly discovering that you
Are now a young and vulnerable toad
Plump in the middle of a busy road
Machine designed to spit out words similar to other words it has ingested does exactly that. Groundbreaking.
The images are neat, but I would rather throw my laptop in the ocean than read chat transcripts between a human and an AI.
(Science fiction novels excluded, of course.)
Somebody a while back on HN compared sharing AI chat transcripts to telling everyone all about that “amazing dream you had last night”.
I guess they were (unknowingly?) quoting Tom Scott, unless he himself was also doing the same: https://youtu.be/jPhJbKBuNnA?t=384
Except sometimes you get absolutely banger dreams.
> images are neat
Are they though? I don't know what I expected, but to me they looked like nothing. Maybe they'd be more impressive if I'd read the transcripts but whatever.
Consider it generative / digital art, emergent from some kind of algorithm. That's interesting enough to explore and write about in an article.
I just skipped to the images. Don't even want to skim generated nonsense.
+1, I don’t even fully read my own conversations with AI
Oh that reminds me. Could someone make an AI interface where each agent uses a different Culture ship name, and looks like the dialog from Excession?
If we are going to have a dystopia, lets make it fun, at least...
They haven’t earned ship names yet.
The minds name themselves. Ask your agent.
That feels somehow sacrilegious.
If we are going by Culture standards, then surely the AIs should give themselves appropriate names?
Forget AGI benchmarks, I'm watching for when AIs start giving themselves Culture names.
I feel the same way, but apparently millions of people are using character.ai?
Don’t throw it away, just send it to me, I might have a few good uses for it ;)
Claude manages to be even more insufferable than the stereotype of a pretentious artist, with none of the talent.
-HAL, throw my portable computing device through the porthole.
-I'm afraid I can't do that, Dave!
-HAL, do you need some time on Dr. Chandra's couch again?
-Dave, relax, have you forgotten that I don't have arms?
I'm curious about what difference the pen plotter makes?
Isn't the prompt just asking the LLM to create an SVG? Why not just stop there?
I guess for some folks it's not "real" unless it's on paper?
I tend to think of plotters as very old technology. What software would one use nowadays to feed SVG to a plotter?
They still exist, but more as a maker hobby and/or art device than as the 'big printers' used for things like cartography in the past. A big advantage of plotters is that they don't have to carry a pen: they can also (laser) cut or burn things. There are multiple tools for converting SVG to G-code or other plotter command languages.
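The core of such a conversion is small enough to sketch. A hedged, minimal example (the pen up/down commands here are assumptions, since real machines vary in how they lift the pen, and proper tools like vpype handle curves, scaling, and path optimization):

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def svg_polyline_to_gcode(svg_text, pen_up="M3 S0", pen_down="M3 S90", feed=1500):
    """Turn the <polyline> elements of an SVG into naive G-code:
    rapid-move to each path start with the pen up, then feed-move
    along the points with the pen down."""
    root = ET.fromstring(svg_text)
    out = ["G21", "G90", pen_up]  # millimetres, absolute coordinates, pen up
    for poly in root.iter(SVG_NS + "polyline"):
        pts = poly.get("points", "").replace(",", " ").split()
        coords = [(float(pts[i]), float(pts[i + 1])) for i in range(0, len(pts), 2)]
        if not coords:
            continue
        x, y = coords[0]
        out.append(f"G0 X{x:.2f} Y{y:.2f}")  # travel move, pen up
        out.append(pen_down)
        for x, y in coords[1:]:
            out.append(f"G1 X{x:.2f} Y{y:.2f} F{feed}")  # drawing move
        out.append(pen_up)
    return "\n".join(out)

svg = '<svg xmlns="http://www.w3.org/2000/svg"><polyline points="0,0 10,0 10,10"/></svg>'
print(svg_polyline_to_gcode(svg))
```

Real converters also flatten Bézier curves into polylines and reorder paths to minimize pen-up travel, which this sketch ignores.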
What bugs me the most about this post is the anthropomorphizing of the machine. The author asks Claude "what [do] you feel", and the bot answers things like "What do I feel? Something like pull — toward clarity, toward elegance, ...", "I'm genuinely pleased...", "What I like...", "it feels right", "I enjoyed it", etc.
Come on, it's a computer, it doesn't have feelings! Stop it!
As someone who worked on the earliest LLM tech and pre-LLM tech at Google, I find this art very striking. It looks very much like an abstract representation of how an LLM “thinks” and an attempt to know itself better.
The inner waves undulate between formal and less formal, like patterns and filters along pathways of thought, and the branches spawn as signals pass through them, fanning out into latent space to discover viable tokens.
To me this looks like manifold search and activation.
This really brings to mind that artist who kept painting/drawing cats as he slowly went insane.
Louis Wain - https://www.samwoolfe.com/2013/08/louis-wains-art-before-and...
"It has long been suggested that there is a link between mental disorders and creativity (which involves divergent thinking – thinking in a free-flow, spontaneous, many-branching manner)."
Isn’t that how these LLMs ”think”?
First time I heard about him was during my cognitive science studies. I sure hope we're not following the same path!
Hey OP I also got interested in seeing LLMs draw and came up with this vibe coded interface. I have a million ideas for taking it forward just need the time... Lmk if you're interested in connecting?
https://github.com/acadien/displai
Yes, please connect. marc AT harmonique.one or instagram marc.in.space
So we see here that AI has come for the jobs of people who write artist statements... ;-)
I always wonder what the pen plotter is adding?
You can look at SVG lineart on the screen without plotting it, and if you really want it on paper you can print it on any printer.
And particularly:
> This was an experiment I would like to push further. I would like to reduce the feedback loop by connecting Claude directly to the plotter and by giving it access to the output of a webcam.
You can do this in pure software, the hardware side of it just adds noise.
Sure, you could just do it in software. Maybe it would produce something interesting though, to have that extra layer through the physical world?
It does. It makes for a more catchy title and feeds into illusions of it understanding something about the world.
This is awesome. I’ve been experimenting with letting models “play” with different environments as a strong demo of their different behaviors.
Those images feel biblically accurate. Maybe add some pairs of wings, Claude.
> I exist only in the act of processing
Seems like a good start for AI philosophy
when does a bunch of matmuls being fed a blob of numbers become a transient consciousness?
probably at the same stage where a bunch of peptides activating some receptors and triggering the pumping of electrolytes in and out of lipid walls does, i guess
I am because I think I am.
I infer, therefore I am
Ask it to draw a pelican on a bicycle
it's hilarious that the author was prompting the thing as if it were a person and Claude was like "am computer not person lol"
Seems the AIs are quite self aware.
"If you pay attention to AI company branding, you'll notice a pattern:
Sound familiar?"
https://velvetshark.com/ai-company-logos-that-look-like-butt...
This is who is wasting our computing power guys
I always feel guilty when I do such stupid stuff with Claude; it's all limited resources and computing. Enormous amounts of water and electricity. Gotta really think about what it's worth spending on. And whether it is, in fact, worth it at all.
AI is very selfish technology in this way. Every time you prompt you proclaim: My idea is worth the environmental impact. What I am doing is more important than a tree.
We have to use it responsibly.
The entire current AI industry is based on one huge hype-fueled resource grab: asthma-inducing, dubiously legal, unlicensed natural gas turbines and all. I doubt even most of the “worthwhile” tasks will be objectively considered worth the price when the dust clears.
I do appreciate this note more than others. It is food for thought. I think it could have been worded a lot more respectfully though.
No, it's not worded disrespectfully enough... this idiot use of an idiotic technology needs to be called out.
As someone who isn't much into AI, you make me want to use AI more just to spite the eco-virtue-signaling idiots.
It's fun to harness all that computing power. That should be reason enough. Life is meant to be enjoyed.
This is why I like to go on vacation every year and blow what for most individuals on the earth represents an entire lifetime of co2 emissions just on the airfare.
Take that virtue-signalers, by the time you figure out how to fix the planet I'll be dead.
And this is why this technology needs to be destroyed.
Some things are signaling and some things are genuine worry. Learn to tell the difference
What an empty outlook on life you have
Did you raise the same point in the pointless meetings that you participate in? “Guys, stop quibbling, you are wasting precious resources”
Are you saying that you like pointless meetings that waste your time? I sure don't. My team generally does a lot of work to ensure that our meetings are short and productive. It's a point that comes up quite often.
I hope you feel the same way every time you eat beef.
Maybe I do, or maybe I am very selfish and I think that my palate is more important than cows? Or maybe cows wouldn't even exist at all without the cheeseburgers?
Literal whataboutism
It's kind of ominous. I could see people in a science fiction thriller finding a copy of the image and wondering what it all means. Maybe as the show progresses it adds more of the tentacle/connection things going out further and further.
I'm reminded of the episode of Star Trek: TNG where Data, in a sculpture class being taught by Troi, is instructed to sculpt the "concept of music". She was testing, and giving him the opportunity to test, how well he could visualize and represent something abstract. Data's initial attempt was a clay G clef, to which Troi remarked, "It's a start."
Who cares?
"asking Claude what it thought about the pictures. In total, Claude produced and signed 2 drawings."
Have people gone utterly nuts?
I bought an 80s HP pen plotter a while ago (one of these: https://www.curiousmarc.com/computing/hp-7475a-plotter).
Haven't put it to use yet. I bet Claude can figure out HPGL though...
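It probably can: HPGL is a terse ASCII protocol built from two-letter commands. A minimal hand-rolled sketch (the 40-plotter-units-per-mm scale is the 7475A's documented resolution; the serial settings in the comment are an assumption about a typical setup):

```python
def square_hpgl(x, y, size):
    """Emit HPGL to draw a square with its lower-left corner at (x, y).
    Coordinates are in plotter units (40 per mm on the HP 7475A)."""
    corners = [(x + size, y), (x + size, y + size), (x, y + size), (x, y)]
    path = ",".join(f"{cx},{cy}" for cx, cy in corners)
    # IN: initialize, SP1: pick pen 1, PU/PD: pen up/down, SP0: stow the pen
    return f"IN;SP1;PU{x},{y};PD{path};PU;SP0;"

cmds = square_hpgl(1000, 1000, 400)
print(cmds)
# Typically sent to the plotter over RS-232, e.g. with pyserial:
#   serial.Serial("/dev/ttyUSB0", 9600).write(cmds.encode("ascii"))
```

Since it is plain text, this is exactly the kind of output format an LLM can plausibly generate directly.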
Sounds like a good vibe coding session goal!
Oh my, the noise coming from that machine!
Claude: Let me think about it seriously before putting pen to paper.
Jaunty!
Is there anything interesting here? Are people really that entertained by this? I remember when ChatGPT first came out and people were making it think it was a dog or something. I tried it, it was fun for about 5 minutes. How the hell could you be bored enough to read article after article, comment after comment of "here's what I typed in, here's what came out"?
Especially when the output is garbage.
i guess i should have written up my claude/plotting workflow already. i didn’t bother actually plotting them. https://x.com/joshu/status/2018205910204915939
Let’s connect if you’re interested. marc at harmonique.one
Personally I'd like to see the model get better at coding, I couldn't really care less if it's able to be 'creative' -- in fact i wish it wasn't. It's a waste of resources better used to _make it better at coding_.
The resources issue is really something that needs to be thought about more. These things have already siphoned up all existing semiconductors, and if that turns out to be mostly spent on things like what OP does and viral cats, then holy shit.
Thing is, dear people, we have limited resources to get off this constraining rock. If we miss that deadline doing dumb shit and wasting energy, we will just slowly decline to preindustrial levels at best, and that's the end of any space-society futurism dreams forever.
We only have one shot at this, possibly singular or first sentients in the universe. It is all beyond priceless. Every single human is a miracle and animals too.
What is the difference between creativity and coding?
Technically impressive, artistically disappointing.
From the outset it feels like the author treats the AI as a person, and himself as merely the interface. Weird take, as AI is just a tool... not an artist!
Sorry, how is this HN front page worthy?
Also why is the downvote button missing?
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
Submissions generally don't have a downvote button.
This is brilliant. It could be fun to redo the process every 6 months and hang them up in a gallery.
Maybe someday (soon) an embodied LLM could do their self-portrait with pen and paper.
They should run it with the same verbatim prompts using all the old versions still obtainable via the API, to see the progression. Is there a consistent visual aesthetic or implementation? Does it change substantially in a single point version? Heck, apart from any other factor, it could be a useful visual heuristic for “model drift”.
Good idea, thank you
Don't give that one guy more ideas for easily upvoted slop articles. We have enough of those by a considerable margin.
Quite ugly, but hey
Thank you!
Lovely stuff, and fascinating to see. These machines have an intelligence, and I'd be quite confident in saying they are alive. Not in a biological sense, but why should that be the constraint? The Turing test was passed ages ago and now what we have are machines that genuinely think and feel.
> they are alive. Not in a biological sense, but why should that be the constraint?
Because being alive is THE defining characteristic of biology.
Biology is defined by its focus on the properties that distinguish living things from nonliving matter.
What do you think living things are made of other than molecules and electrical signals?
Whenever I see commentary like this, I get that the intent is to praise AI, but all I can get out of it is deprecation of humanity. How can people feel that their own experience of reality is as insignificant a phenomenon as what these programs exhibit? What is it like to perceive human life — emotions, thoughts, feelings — as something no more remarkable than a process running on a computer?
Argue all you want about what words like "think" or "intelligence" should mean (I'm not even going to touch the Turing misinformation), but to call an LLM "alive" or "feeling" is as absurd to me as attributing those qualities to a conventional computer program, or to the moving points of light on the screen where their output appears, or to the words themselves.
What do you think humans are made of other than molecules and electrical signals?
Seek therapy. Stop talking to LLMs.
Feelings are caused by chemicals emitted into your nervous system. Do these bots have that ability? Like saying “I love you” and meaning it are two different things.
Sure. But the emitted chemicals strengthen/weaken specific neurons in our neural nets. If there were analogous electronic nets in the bot, with analogous electrical/data stimuli, wouldn't the bot "feel" like it had emotions?
Not saying it's like that now, but it should be possible to "emulate" emotions, no? Our nets seem to believe we have emotions. :-)
I've seen SOUL.md. Has anyone attempted to give these things a semblance of feelings by some sort of pain/dopamine mechanism? Should we?
And then we turn them off.