Comment by chasing0entropy

2 days ago

Human thought is more than just general intelligence (AGI); it is the ability to conceptualize data beyond linear symbolic representations and to combine the schema and metadata from multiple contextual inputs with memory, producing unique concepts in alphanumeric and other conceptualized forms.

An LLM, beyond being unable ever to attain true AGI because of its linear, singular representation of concepts and data, cannot combine schemas or metadata from multiple contexts with its own training and reinforcement data.

That means it cannot truly remember and correct its mistakes. Correcting a mistake is more than observing and fixing it; it means applying global changes to both your metadata and your schema of the event and the surrounding data.

LLMs as an AI solution are a dead end.