Comment by appcustodian2

1 day ago

Source on the 8096 tokens number? I'm vaguely aware that some previous models attended more to the beginning and end of conversations, which doesn't seem to fit a simple contiguous "attention window" within the greater context, but I'd love to know more.

Well, 8096 is just the first number that came to my mind; frontier models obviously have 32k or above. But essentially they have a layer which "looks" at a limited view of the entire context window: {[1M x 3-4 weights] attention layer to determine what is actually important} -> {all other layers}.
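
FWIW, I read that as something like learned token pruning: a very cheap scoring layer looks at every token in the full context, and only the top-k survivors get handed to the expensive layers. Here's a toy PyTorch sketch of that idea; the TokenSelector name, the sizes, and the top-k mechanism are all my own assumptions, not how any frontier model is documented to work:

    import torch
    import torch.nn as nn

    class TokenSelector(nn.Module):
        """Tiny scoring layer: keep only the top-k tokens from a long context."""
        def __init__(self, d_model: int, k: int):
            super().__init__()
            self.score = nn.Linear(d_model, 1)  # ~d_model weights, very cheap
            self.k = k

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq_len, d_model) -- the entire context window
            scores = self.score(x).squeeze(-1)          # (batch, seq_len)
            idx = scores.topk(self.k, dim=-1).indices   # k "important" positions
            idx, _ = idx.sort(dim=-1)                   # keep original token order
            batch = torch.arange(x.size(0)).unsqueeze(-1)
            return x[batch, idx]                        # (batch, k, d_model)

    # Toy usage: score 32768 tokens, hand only 4096 to the expensive layers.
    sel = TokenSelector(d_model=64, k=4096)
    ctx = torch.randn(2, 32768, 64)
    reduced = sel(ctx)   # shape (2, 4096, 64) -> fed to "all other layers"

Real systems are presumably fancier (sliding windows, attention sinks, learned routing), but that's the general shape: a cheap layer decides what's important, and everything downstream runs on the reduced set.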