Comment by rybosome
5 days ago
Yep!
So my use case is admittedly very specific right now. My company uses LLMs to automate hardware design, a skill most LLMs are very poor at due to the dearth of training data.
For tasks that involve generating code or other non-natural-language output, we’ve found that fine-tuning with the right dataset can lift performance rapidly and decisively.
An example task is taking in potentially syntactically incorrect HDL (Hardware Description Language) code and fixing the syntax issues. Fine-tuning boosted corrective performance significantly.
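A minimal sketch of what one supervised fine-tuning example for that syntax-repair task might look like. Everything here is an illustrative assumption, not the commenter's actual pipeline: the Verilog snippet, the chat-message format, and the hdl_syntax_repair.jsonl filename are all made up.

    import json

    # Hypothetical training pair (illustrative only): syntactically
    # broken Verilog (missing semicolons) mapped to the corrected version.
    broken = (
        "module counter(input clk, output reg [3:0] count)\n"
        "  always @(posedge clk)\n"
        "    count <= count + 1\n"
        "endmodule"
    )
    fixed = (
        "module counter(input clk, output reg [3:0] count);\n"
        "  always @(posedge clk)\n"
        "    count <= count + 1;\n"
        "endmodule"
    )

    example = {
        "messages": [
            {"role": "system",
             "content": "Fix any syntax errors in the Verilog below. "
                        "Output only the corrected code."},
            {"role": "user", "content": broken},
            {"role": "assistant", "content": fixed},
        ]
    }

    # Chat-style JSONL is a common input format for fine-tuning APIs;
    # the filename is a placeholder.
    with open("hdl_syntax_repair.jsonl", "a") as f:
        f.write(json.dumps(example) + "\n")

The appeal of a task like this for fine-tuning is that correct outputs are cheap to verify (a parser either accepts the result or it doesn't), so a large, clean dataset of broken/fixed pairs can be generated mechanically.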