Comment by rockinghigh

12 hours ago

Wasn't the original LLaMA developed by FAIR Paris?

I hadn't heard that, but he was heavily involved in a cancelled project called Galactica that was an LLM for scientific knowledge.

  • Yeah that stuff generated embarrassingly wrong scientific 'facts' and citations.

    That kind of hallucination is somewhat acceptable for something marketed as a chatbot, less so for an assistant helping you with scientific knowledge and research.

    • I thought it was weird at the time how much hate Galactica got for its hallucinations compared to the hallucinations of competing models. I get your point, and it partially explains things, but it's not a fully satisfying explanation.