Comment by ggm

1 year ago

I am probably suffering from confirmation bias. But that said, this LLM smartness continues to be impressively shit. There's a level of "yeah, that's cool", but it's outweighed by "that'd be wrong, and suggests you understand nothing about me or my data".

It's a little (ok, a lot) like targeted ads. I'll believe an ad is targeted when it tries to sell me ancillary, related goods for, say, that fridge freezer I bought, rather than more ads for a fridge freezer I no longer need.

Likewise, I do wonder how much of my enthusiasm is confirmation bias. Could it just be a Clever Hans? I think it has to be at least a little smarter than that, if only to produce code that usually compiles. But still, I'm aware it may be more smoke and mirrors than it feels like: that I may be in the cargo cult, metaphorically putting a paper slip into the head of a clay golem shaped like Brent Spiner.

Targeted ads are a useful reference point. A decade back, everyone was horrified (or amazed) by that story of a supermarket knowing a teenager was pregnant before her father did. But today, the category your fridge example falls into is the best it gets for me. Even Facebook, for the most part, is on par with my actual spam folder: ads for both boob surgery and dick pills, ads for lawyers based in a country I don't live in who specialise in renouncing a citizenship I never had, recommendations for sports groups devoted to a sport I don't follow, in a state I've never visited, of a country I haven't set foot in since 2018. Plus, very occasionally, ads for things I already have.

Yeah, framing is the key. Put an LLM behind autocomplete, and it's "oh wow, this thing reads my mind". Present it as an expert counselor, and it's "this stupid bot doesn't even know we have no bridge in our city", or something.