Comment by snek_case
13 hours ago
You can also work on building LLMs that use less compute and run locally. There are some pretty good open models already, and they could probably be made even more computationally efficient.
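For example, a 4-bit quantized 7B open model can already run on a laptop CPU. A rough sketch using llama-cpp-python (the model path and settings below are just placeholders, not a specific recommendation):

```python
# Minimal local inference with a quantized open model.
# Assumes: pip install llama-cpp-python, and a GGUF model file
# downloaded locally (the path below is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical 4-bit quantized model
    n_ctx=2048,   # context window size
    n_threads=4,  # CPU threads; tune for your machine
)

output = llm("Explain quantization in one sentence.", max_tokens=64)
print(output["choices"][0]["text"])
```

Quantizing weights from 16-bit to 4-bit cuts memory roughly 4x, which is what makes this feasible on consumer hardware.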