Comment by gabriel666smith

2 days ago

I found some weird results when messing around with different embeddings in text generation.

I'm not sure if this is meaningful, and - if anyone on here is interested - I could use some help figuring out what's going on.

I was invited to repost this through the mods' second-chance system (thank you!).

In the meantime, I've added a small study I did to the repo. It initially seems to indicate that appending Fib words generated by the model to a prompt quite drastically improves LLM output on creative writing tasks.
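The prompt-augmentation side is simple enough to sketch here; the function names below are just placeholders for whatever model calls you use, and the interesting part (the Fib-word generator itself) is what I still need to get into the repo:

```python
def augmented_completion(prompt, fib_generate, llm_generate, n_seed_words=12):
    """Rough shape of the study setup: sample a handful of Fib words from
    the Fibonacci-trained model, append them to the prompt, then ask an
    ordinary LLM for the creative-writing completion.

    fib_generate and llm_generate are placeholders for your own model calls.
    """
    fib_words = fib_generate(prompt, max_words=n_seed_words)
    # The study just appends the words; the exact formatting is a detail.
    return llm_generate(prompt + "\n\n" + " ".join(fib_words))
```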

Again, I'd love to know if anyone could take this thing further.

I understand you’re worried about publishing your code. I’d be happy to help do something with this at a larger scale, but I think I could use a little more detail. Are you saying the training task is to ask for the (fib_i)th token rather than the next token?
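For concreteness, here's the kind of target construction I'm picturing; this is purely my guess at your setup, and the cycling offset scheme is made up:

```python
def fibonacci_target_pairs(token_ids, n_offsets=24):
    """Build (input position, target token) pairs where position i is trained
    to predict the token a Fibonacci offset ahead instead of the next token.
    The offset scheme below is a guess, not a known detail of the repo.
    """
    fibs = [1, 1]
    while len(fibs) < n_offsets:
        fibs.append(fibs[-1] + fibs[-2])

    pairs = []
    for i in range(len(token_ids)):
        j = i + fibs[i % n_offsets]          # Fibonacci jump instead of +1
        if j < len(token_ids):               # drop positions that run off the end
            pairs.append((i, token_ids[j]))
    return pairs


# Standard next-token prediction, for comparison:
# pairs = [(i, token_ids[i + 1]) for i in range(len(token_ids) - 1)]
```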

If that’s all you did, then I think you’ll probably benefit more from just publishing the code than holding it back. Check out, for instance, the lucidrains (Phil Wang) repository on GitHub to see the speed at which a full academic paper is turned into a Python codebase for replication.

Anyway, I suggest you add a little code snippet illustrating the key point, or just confirm my question above. I think it would be fun to train a larger model!

  • Thank you! I agree; I think it'd be helpful to publish aspects of it.

    > Are you saying the training task is to ask for the (fib_i)th token rather than the next token?

    Yes, functionally - I explained in more detail in another comment.

    I'm not sure which is the key point (sort of what I'm trying to work out), but I'll get the model-generation code into the repo. Is that the best thing for you?