Comment by walrus01

14 hours ago

I think one could also take a much larger model (35B or 122B sized) and give it a thorough system prompt telling it to speak only in the manner of a well-educated Victorian/Edwardian era gentleman, if you want an "old timey" LLM.
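
A minimal sketch of what that setup could look like, assuming llama-cpp-python and a local GGUF file (the model path and the prompt wording are placeholders, not a specific recommendation):

    # Hypothetical persona prompt for a local chat model via llama-cpp-python.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/big-model-q6_k.gguf", n_ctx=4096)

    SYSTEM_PROMPT = (
        "You are a well-educated Victorian/Edwardian gentleman. Reply only in "
        "the diction and idiom of circa 1890-1910 English, and never mention "
        "events, technology, or slang from after 1910."
    )

    resp = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "What do you make of these horseless carriages?"},
        ],
        max_tokens=256,
    )
    print(resp["choices"][0]["message"]["content"])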

It's hard to know how accurate that is. Is the LLM truly imitating text from that era, or is it imitating a modern idea of text from that era? Also, safety/alignment training would probably prevent it from embracing many of the ideas from that era, even in roleplay.

  • >Also, safety/alignment training would probably prevent it from embracing many of the ideas from that era, even in roleplay.

    Lobotomy is an *optional* step. Had this technology emerged before 9/11 and Twitter, SOTA models wouldn't bat an eye if you asked one to write a recipe for meth in Ebonics.

  • There are 'uncensored' versions of Qwen 3.6 35B at Q6 and Q8 quantization levels (somewhere from 28GB to 40GB on disk as GGUF files; see the sizing sketch after this list) out there now that won't refuse any prompt. Imitating a Victorian-era person is very tame compared to what you can get them to output.
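
Rough sanity check on those file sizes, assuming effective rates of about 6.56 bits per weight for Q6_K and 8.5 for Q8_0 (approximate figures for llama.cpp k-quants including block scales, not exact format constants):

    # GGUF file size is roughly parameter_count * bits_per_weight / 8.
    PARAMS = 35e9  # a 35B-parameter model
    for name, bpw in [("Q6_K", 6.56), ("Q8_0", 8.5)]:
        size_gb = PARAMS * bpw / 8 / 1e9
        print(f"{name}: ~{size_gb:.0f} GB on disk")
    # Prints: Q6_K: ~29 GB, Q8_0: ~37 GB -- consistent with the quoted 28-40 GB.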

As we learn how to train smarter models on less data, it’ll become more and more interesting to see whether models like this can invent post-1930 math, science, etc. and make predictions.

[Edit: serves me right for not reading tfa. My points are well covered.]