Comment by furyofantares

1 year ago

I used deep research with o1-pro to try to fact/sanity check a current events thing a friend was talking about, read the results and followed the links it provided to get further info, and ended up on the verge of going down a rabbit hole that now looks more like a leftist(!) conspiracy theory.

I didn't want to bring in specifics because I didn't feel like debating the particular claim, but I guess that made this post pretty hard to parse, and I should have added more info.

I was trying to convey that it had found some sources that, if I had come across them naturally, I probably would have immediately recognized as fringe. Those sources were threading together a number of true facts into a fringe narrative. The AI was able to find other sources for the true facts, but it had no common sense, and I think it ended up producing a MORE convincing presentation of the fringe theory than the original source of the narrative. It sounded confident and cited a number of extra sources to check the facts, even though the fringe narrative that threaded them all together came from only one site, one you'd be somewhat apt to dismiss by domain name alone if it were the only source you found.