Comment by nothinkjustai
14 hours ago
People will eventually figure out that LLMs have no capacity for intent and are fundamentally unreliable for tasks such as summarization, note-taking, etc.
Smart people and those with basic common sense have already figured that out. AI leaders and CEOs still haven't noticed.