Comment by bhickey

2 months ago

Some of the non-determinism mentioned above manifests as sensitivity to _where_ data falls within a batch.
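
For a concrete illustration of that batch-shape sensitivity, here's a minimal sketch (PyTorch, the toy shapes, and the single-matmul framing are my assumptions, not anything from the systems discussed above):

```python
import torch

torch.manual_seed(0)

# Toy stand-in for one layer's matmul; shapes are made up for illustration.
x = torch.randn(8, 512)    # a batch of 8 rows of hidden states
w = torch.randn(512, 512)  # a weight matrix

# Row 0 computed alone vs. row 0 computed inside the full batch.
# Mathematically identical, but the kernel may pick a different
# reduction order for each batch shape, so the floating-point
# results can drift slightly depending on who you're batched with.
alone = x[:1] @ w
batched = (x @ w)[:1]

print(torch.equal(alone, batched))    # may be False, especially on GPU
print((alone - batched).abs().max())  # small but possibly non-zero drift
```

On deterministic CPU kernels the two may match exactly; GPU BLAS libraries often select different algorithms per shape, which is one way the same prompt can come out differently depending on its batchmates.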

In my experience with other, more conventional models, quality starts to degrade once the context starts to fill up.

Wouldn't landing at the end of a batch have a similar _effect_ on the results, where your prompt receives less overall attention if the context window is almost full?

Idk, just going by the vibes.