Comment by bbor

2 years ago

In my view it’s very simple, which is what makes it so exciting. Here’s a summary of the design doc that I imagine Microsoft and Google are each spending millions of dollars’ worth of man-hours per day building their own versions of:

1. User enters query.

2. The LLM augments the query if necessary, adding extra terms or clauses.

3. Normal search pipeline returns ranked links, just like it does now.

4. The LLM reads the content of the first 100 links, decides which are best based on your stated preferences and past behavior, and uses that to adjust the ranking a bit.

5. LLM generates various summaries depending on the type of query, such as laying out a few common answers to a controversial political question or giving a summary of a technical Wikipedia article tailored to your expertise level in that field.

6. Finally, for a tiny subset of queries, maybe the user wants to converse with the AI in a chat-like format, where it cites all of its claims with direct links.
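To make steps 2–5 concrete, here’s a minimal sketch of that pipeline. Everything is a placeholder of my own invention — in a real system each `llm_*` function would call out to a model and `keyword_search` would be the existing search stack — but it shows how the stages would compose:

```python
def llm_augment_query(query: str) -> str:
    # Step 2: the LLM expands the query with extra terms or clauses.
    # (Stubbed: a real version would be a model call.)
    return query + " tutorial OR explanation"

def keyword_search(query: str) -> list[dict]:
    # Step 3: the normal pipeline returns ranked links (stubbed here).
    return [{"url": f"https://example.com/{i}", "score": 1.0 / (i + 1)}
            for i in range(100)]

def llm_rerank(links: list[dict], profile: dict) -> list[dict]:
    # Step 4: the LLM reads page content and nudges the ranking using the
    # user's stated preferences and past behavior (here: a flat boost).
    boost = profile.get("preference_boost", 0.0)
    for link in links:
        link["score"] += boost
    return sorted(links, key=lambda l: l["score"], reverse=True)

def llm_summarize(query: str, links: list[dict], profile: dict) -> str:
    # Step 5: generate a summary tailored to the query type and the
    # user's expertise level, citing the top results.
    top = ", ".join(l["url"] for l in links[:3])
    return f"Summary for '{query}' ({profile['level']} level): see {top}"

def search(query: str, profile: dict) -> str:
    augmented = llm_augment_query(query)          # step 2
    links = keyword_search(augmented)             # step 3
    ranked = llm_rerank(links, profile)           # step 4
    return llm_summarize(query, ranked, profile)  # step 5
```

Step 6, the chat mode, would just be a loop around `search` that keeps the conversation history and attaches the cited links to each claim.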

It’s gonna be awesome :)