Comment by Kostchei

3 months ago

So the things I have seen in generative AI art lead me to believe there is more complexity than that. Ask it to do a sci-fi scene inspired by Giger but in the style of Van Gogh. Pick 3 concepts and mash them together and see what it does. You get novel results. That is easy to understand because it is visual.

Language is harder to parse in that way. But I have asked for haiku about cybersecurity, workplace health and safety documents in Shakespearean sonnet style, etc. Some of the results are amazing.

I think actual real creativity in art, as opposed to incremental change or combinations of existing ideas, is rare. Very rare. Look at style development in the history of art over time. A lot of standing on the shoulders of others. And I think science and reasoning are the same. And that's what we see in LLMs, for language use.

There is plenty more complexity, but it emerges more from embeddings, where the less superficial elements of information (such as syntactic dependencies) let the model home in on the higher-order logic of language.

E.g. when preparing the corpus, embedding the documents and then duplicating some of them as versions where the tokens are swapped for their hex representations could allow an LLM to learn to "speak hex", as well as intersperse hex with the other languages it "knows". We would see a bunch of encoded text, but the LLM would be generating based on the syntactic structure of the current context.
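
For concreteness, here's a minimal sketch of that kind of corpus duplication. It assumes whitespace tokenization and UTF-8-to-hex as the "hex repr" (a real pipeline would use the model's own tokenizer), and the names `to_hex_tokens` and `augment_corpus` are hypothetical:

```python
# Sketch: duplicate a fraction of the corpus with each token
# swapped for its hex representation, so both forms appear in training.
# Assumptions: whitespace tokens, UTF-8 bytes -> hex string.

def to_hex_tokens(text: str) -> str:
    """Replace each whitespace-delimited token with its hex encoding."""
    return " ".join(tok.encode("utf-8").hex() for tok in text.split())

def augment_corpus(docs: list[str], dup_fraction: float = 0.1) -> list[str]:
    """Append hex-swapped duplicates for a fraction of the documents."""
    n_dup = int(len(docs) * dup_fraction)
    return docs + [to_hex_tokens(doc) for doc in docs[:n_dup]]

if __name__ == "__main__":
    corpus = ["the cat sat", "hello world"]
    for doc in augment_corpus(corpus, dup_fraction=1.0):
        print(doc)
    # "hello world" -> "68656c6c6f 776f726c64"
```

Because the hex duplicates keep the same token boundaries and ordering as the originals, the syntactic structure is preserved across both encodings, which is what would let the model mix them in one context.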