Comment by snek_case
1 month ago
You can work on building LLMs that use less compute and run locally as well. There are some pretty good open models. They could probably be made even more computationally efficient.