
Comment by tomgp

8 hours ago

The issue is that whilst the warning exists and is front and centre, the marketing around ChatGPT etc. - which is absolutely deafening in volume and enthusiasm - insists that these models are PhD-level experts and can do anything.

This marketing obscures what the software is _actually_ good at and gives users a poor mental model of what's going on under the hood. Dumping years' worth of undifferentiated health data into a generic ChatGPT chat window seems like a fundamental misunderstanding of the strengths of large language models.

A reasonable approach would be to explain what kinds of tasks these models do well and what kinds of situations they handle poorly.