Comment by thomquaid (1 year ago):
It says 4.25 TPS in the first para.

  ricardobeat (1 year ago):
  Honest mistake. Some people think HN is just a series of short tweets and haven’t realized they are links yet!

    4ndrewl (1 year ago):
    It's the modern way. Why read when you can just imagine facts straight out of your own brain.

      plagiarist (1 year ago):
      I agree but also found your comment funny in the context of LLMs. People love getting facts straight out of their models.

    thomquaid (1 year ago):
    4.25 is enough TPS for a lot of use cases.

  weatherlight (1 year ago):
  That's still pretty slow, considering there's that "thinking" phase.

    thomquaid (1 year ago):
    True, but 4.25 is the number we all want to know.
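For a rough sense of what 4.25 TPS means in practice, here is a minimal back-of-the-envelope sketch in Python. The 4.25 tokens/second figure comes from the thread; the 300-token "thinking" prelude and 200-token visible answer are purely hypothetical numbers chosen for illustration.

```python
# Back-of-the-envelope latency at a fixed decode rate.
# 4.25 TPS is the figure discussed in the thread; the token counts below
# are illustrative assumptions, not measurements.
TPS = 4.25

def seconds_to_generate(tokens: int, tps: float = TPS) -> float:
    """Time to stream `tokens` output tokens at a constant tokens-per-second rate."""
    return tokens / tps

# Hypothetical reply: 300 "thinking" tokens before a 200-token visible answer.
thinking_tokens = 300
answer_tokens = 200

wait_before_answer = seconds_to_generate(thinking_tokens)
total = seconds_to_generate(thinking_tokens + answer_tokens)

print(f"~{wait_before_answer:.0f}s before the first visible token")  # ~71s
print(f"~{total:.0f}s for the full response")                        # ~118s
```

Under these assumed token counts, the thinking phase alone adds over a minute before any visible output, which is the latency concern weatherlight raises.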