Comment by WithinReason
1 year ago
They are different, but transformers don't have fixed windows; you can extend the context or make it smaller. I think you can extend a positional encoding as long as it isn't a learned one.
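For context, the fixed sinusoidal encoding from the original Transformer paper ("Attention Is All You Need") is one such non-learned encoding: it's a closed-form function of position, so it can be evaluated at any position, including ones beyond the training context. A minimal sketch:

```python
import math

def sinusoidal_encoding(pos: int, d_model: int) -> list[float]:
    # Fixed (non-learned) encoding from "Attention Is All You Need":
    # even dims get sin(pos / 10000^(2i/d_model)), odd dims get the cos.
    enc = []
    for i in range(0, d_model, 2):
        angle = pos / (10000 ** (i / d_model))
        enc.append(math.sin(angle))
        enc.append(math.cos(angle))
    return enc[:d_model]

# Because the formula is closed-form in `pos`, nothing stops you from
# evaluating it far past any position seen during training:
short = sinusoidal_encoding(5, 8)        # position inside a typical window
long = sinusoidal_encoding(100_000, 8)   # position far beyond it
```

A learned encoding, by contrast, is a lookup table with one row per position, so positions past the table's size simply have no embedding (this is the usual argument for why sinusoidal encodings extrapolate and learned ones don't).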