Comment by Kiyo-Lynn
6 days ago
I feel the gains from fine-tuning are often short-lived, and it can end up overwriting what the model has already learned (catastrophic forgetting), degrading its general capabilities in the process. I lean more toward adaptive methods, prompt optimization, and other lightweight ways of handling tasks; that feels more practical and resource-efficient than fine-tuning by default. Rather than relying on fine-tuning alone, we should focus on getting the most out of existing models without damaging the capabilities they already have.
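A minimal sketch of one "adaptive" approach the comment may have in mind: attaching LoRA adapters via the Hugging Face peft library, which keeps the base weights frozen so existing capabilities are not overwritten. The choice of LoRA, the model name, and the hyperparameters are illustrative assumptions, not something the comment specifies.

```python
# Illustrative sketch only: LoRA adapters as a lighter alternative to full fine-tuning.
# Assumes transformers and peft are installed; "gpt2" and the hyperparameters are stand-ins.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # frozen base model

# LoRA trains small low-rank matrices alongside the frozen base weights,
# so adaptation does not overwrite what the model already learned.
config = LoraConfig(
    r=8,                       # adapter rank
    lora_alpha=16,             # scaling factor
    target_modules=["c_attn"], # attention projection in GPT-2
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Because only the adapters are trained, the base model can be reused unchanged across tasks, which is the kind of resource-efficient adaptation the comment argues for.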