Comment by paulteehan

5 days ago

THANK YOU. I keep thinking this as well. I'm rolling my own skills to actually make my job easier, which is all about gathering, surfacing, and synthesizing information so I can make quick, informed decisions. I feel like nobody is thinking this way and it's bizarre.

The value prop is tenuous and most people still think agents aren't capable of doing this type of work reliably yet (which is... kind of true). You won't get punished too much by users for false positives when summarizing tasks, but you will get absolutely eviscerated for false negatives (e.g. dropping a critical task from the summary). Can you guarantee that your agent won't forget to tell you about something super important?

I am completely convinced this is because of a gap at the intersection of knowledge. Somehow the people making the best agents are focused on extending the capabilities of the models, while the people who could best build the application layer still just think of LLMs as a chat prompt.

We need a product person, maybe with a turtleneck sweater and a horrid work-life attitude, to fix this up, instead of a weirdly philosophical, basilisk-fearing idealist.