Comment by nothinkjustai
12 hours ago
People will eventually figure out that LLMs have no capacity for intent and are fundamentally unreliable for tasks such as summarization, note-taking, etc.
Smart people and those with basic common sense have already figured that out. AI leaders and CEOs still haven't noticed.