
Comment by repeekad

7 months ago

What about fresh data, like an extremely relevant news headline that was published 10 minutes ago? Or private data that I don’t want stored offsite but am okay entrusting to an enterprise no-log API? Providing real-time context to LLMs isn’t “hacky”; model intelligence and RAG can complement each other and advance in tandem.

I don't think the parent's idea was to bake all information into the model, just that current RAG feels cumbersome to use (but then again, so do most things AI right now) and that information access should be an intrinsic part of the model.

  • Is there a specific shortcoming of the model that could be improved, or are we simply seeking better APIs?

One of my favorite cases is sports chat. I'd expect ChatGPT to be able to talk about sports legends but not about a game that happened last weekend. Copilot usually does a good job because it can look up the game on Bing and then summarize, but the other day I asked it "What happened last week in the NFL?" and it told me about a Buffalo Bills game from last year (did it know I was in the Bills' geography?).
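
(For what it's worth, that look-it-up-then-summarize pattern is simple to sketch. The Python below is only an illustration: `fetch_recent_scores` and `call_llm` are hypothetical stand-ins for a real sports API and whatever chat model you'd use. The point is that the results get filtered to the last week before they ever reach the prompt, so the model can't fall back on last year's Bills game.)

```python
import datetime
import json

def fetch_recent_scores(league: str) -> list[dict]:
    """Hypothetical stand-in for a real sports scores API."""
    # e.g. requests.get(f"https://scores.example.com/{league}").json()
    return [
        {"date": "2024-11-10", "home": "Bills", "away": "Colts", "score": "30-20"},
        {"date": "2023-10-15", "home": "Bills", "away": "Giants", "score": "14-9"},
    ]

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever chat-completion API you're using."""
    raise NotImplementedError

def answer_sports_question(question: str) -> str:
    today = datetime.date.today()
    cutoff = today - datetime.timedelta(days=7)
    # Filter to the last week *before* anything reaches the prompt,
    # so stale games never appear in the context at all.
    games = [
        g for g in fetch_recent_scores("nfl")
        if datetime.date.fromisoformat(g["date"]) >= cutoff
    ]
    prompt = (
        f"Today is {today}. NFL results since {cutoff}:\n"
        f"{json.dumps(games, indent=2)}\n\n"
        f"Using only the results above, answer: {question}"
    )
    return call_llm(prompt)
```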

Some kind of incremental fine-tuning is probably necessary to keep a model like ChatGPT up to date, but I can't picture that happening every time something appears in the news.

  • For the current game, it seems solvable by providing it the box score and the radio commentary as context, perhaps with some additional data derived from recent games and news (rough sketch below).

    I think you’d get a close approximation of speaking with someone who was watching the game with you.
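
    A rough sketch of what I mean, in Python, with a hypothetical `call_llm()` and made-up data shapes: keep the current box score plus a rolling window of the most recent commentary lines, and rebuild the prompt from that fresh state every time you ask a question.

```python
import json
from collections import deque

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever chat-completion API you're using."""
    raise NotImplementedError

class LiveGameContext:
    """Holds the current box score plus a rolling window of commentary lines."""

    def __init__(self, max_commentary_lines: int = 50):
        self.box_score: dict = {}
        self.commentary = deque(maxlen=max_commentary_lines)

    def update_box_score(self, box_score: dict) -> None:
        self.box_score = box_score

    def add_commentary(self, line: str) -> None:
        # Oldest lines fall off automatically once the window is full.
        self.commentary.append(line)

    def build_prompt(self, question: str) -> str:
        return (
            "You are watching this game live. Current box score:\n"
            f"{json.dumps(self.box_score, indent=2)}\n\n"
            "Recent commentary:\n"
            + "\n".join(self.commentary)
            + f"\n\nAnswer as if we're watching the game together: {question}"
        )

# Usage: push updates as they arrive, then ask against the fresh context.
ctx = LiveGameContext()
ctx.update_box_score({"Bills": 17, "Chiefs": 14, "quarter": 3, "clock": "08:42"})
ctx.add_commentary("Allen hits Shakir for 23 yards on 3rd and 7.")
prompt = ctx.build_prompt("Who has the momentum right now?")
# answer = call_llm(prompt)
```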