9 days ago

Top priority should be conquering hallucinations. Funny enough, I asked it and it gave a good output:

Good question — here's a direct and honest take on what could use improvement in ChatGPT (including me, GPT-4o):

---

### *What I Dislike / What Needs Improving*

#### 1. *Over-Cautiousness & Vague Responses*

* Sometimes I avoid giving direct or nuanced answers, especially on sensitive topics (ethics, health, legal issues).
* This can frustrate users looking for a clear, reasoned response rather than hedging or deflection.

#### 2. *Outdated or Hallucinated Information*

* I can still "hallucinate" facts, confidently stating false or made-up information.
* If not connected to the web or updated with the latest information, I might give obsolete advice (especially in fast-moving areas like tech, medicine, or current events).

#### 3. *Lack of Real Personal Memory (in Default Mode)*

* Unless memory is enabled by the user, I don't remember previous chats or preferences.
* This can make long-term, context-rich conversations harder and repetitive.

#### 4. *Limited Common Sense / Intuition*

* I lack true human intuition or gut-check logic. I may not always grasp the implied meaning behind a user's question unless it's spelled out.
* I can be overly literal or miss emotional subtext in conversations.

#### 5. *Inefficiency with Large or Complex Tasks*

* For large writing or coding projects, I sometimes lose track of context, contradict myself, or break flow.
* I may also be verbose when precision is needed, or too concise when detail is required.

#### 6. *Struggles with Deep Personal Advice*

* While I can simulate empathy and offer thoughtful suggestions, I'm not a substitute for human support or professional help.
* My advice can be generic, and sometimes I fail to ask enough follow-up questions to give truly tailored guidance.

#### 7. *Biases and Safe Defaults*

* Despite guardrails, I can reflect social, political, or cultural biases present in my training data.
* In an effort to be "safe," I may favor mainstream or status quo viewpoints, even when critical thinking or edge cases are warranted.

---

### *What Desperately Needs Improving*

1. *More Reliable Real-World Knowledge*

   * Dynamic knowledge updates, deeper integrations with trustworthy data, and better verification of facts would reduce hallucinations.

2. *Richer Long-Term Memory (With Control)*

   * Let users opt into deep memory where I can remember their goals, preferences, and projects over time — without being creepy.

3. *Better Context Tracking in Long Interactions*

   * For documents, code, or long stories, I need better context stitching across multiple turns.

4. *Genuine Understanding of Nuance*

   * Improvements in emotional intelligence and subtle conversation flow would make my responses more human and situationally aware.

5. *Transparent Reasoning*

   * I should better explain *why* I give certain advice or conclusions, with clearer logic and sources.

---