
Comment by madeofpalk

2 years ago

I've come around to there being legitimate use cases for this type of generative AI, but I don't think producing anything that's supposed to be "True" or "Correct" is one. I think the only useful use case is when you want to generate fiction.

If you tell GPT-4 specifically to respond with the proper jargon for the domain, like that found in a textbook or journal, it provides much, much more useful replies. It's silly that prompt engineering is what's required, but at least for my purposes, wherein I fact-check its output, it's right nearly all the time, and I've learned a great deal.

  • Even then there's literally nothing stopping it from making shit up.

    • Sure yeah, for now. Just saying I literally use it to mine out things to confirm (/ not believe until I do) and so far it has very rarely led me astray and even then it's been small nuance. It's striking.


  • It doesn't mean it's less biased. All of these styles are exploited as a form of rhetoric. Many people simply take information written in a textbook style as authoritative.

  • And that's why I ignore the people that laugh at the whole prompt engineering thing, because it's a genuine skill.

    At the moment GPTs are trained on so much data across so many domains that you have to treat it like a person who has similar knowledge.

    If I just walk up to you and start sputtering jargon about a very specific complex topic, when you were just chatting to another friend about all sorts of everyday topics, you're not going to be able to reply to me immediately.

    With these GPTs it helps to get it "in the mood" for your topic by preloading keywords and shifting the topic over and deeper so your desired topic is clearer to the attention mechanism.
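The "preloading" idea above can be sketched as a small message-building helper. The role/content dict structure follows the common chat-completion format, but the domain, keywords, and prompt wording here are purely illustrative assumptions, not a tested recipe:

```python
# Hypothetical sketch: prime a chat model with domain context before
# asking the real question, so the attention mechanism is already
# "in the mood" for the topic.

def primed_messages(domain: str, keywords: list[str], question: str) -> list[dict]:
    """Build a message list that shifts the conversation into the
    target domain before the actual question is asked."""
    priming = (
        f"You are an expert in {domain}. Respond with the precise "
        f"jargon found in textbooks and peer-reviewed journals. "
        f"Relevant concepts: {', '.join(keywords)}."
    )
    return [
        {"role": "system", "content": priming},  # domain preload
        {"role": "user", "content": question},   # the actual query
    ]

msgs = primed_messages(
    "electrochemistry",
    ["overpotential", "Tafel slope", "exchange current density"],
    "Why does my cell's efficiency drop at high current?",
)
```

The list returned here would be passed as the `messages` argument to whatever chat API you use; the point is only that the domain keywords arrive before the question does.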

I've come around to there being legitimate use cases for journalism, but I don't think producing anything that's supposed to be "True" or "Correct" is one.

I’m to the point where I’d probably put more trust in an AI-generated news summary than in many of the sites that purport to give me accurate and trustworthy news.

  • For a lower-tech approach, The Flip Side is pretty good at doing one story each day from 2 different sides. I was a bit annoyed when an excited friend signed up my email address without asking me, but I have never unsubscribed because I find it refreshing in a typical world of frenzied news.

    https://www.theflipside.io

  • Screwing it up isn't a crime. Papers can retract and re-visit a line of thinking. While nobody loves doing that, I think it'd help.

    In addition I think it'd help if papers hammered the party line of Dems and Republicans far harder. My running joke / dare is: send sportscasters to DC for a year. At some point they'll call BS on everything and everyone, and start questioning with both barrels. BS is less tolerated in sports.

    Take taxes. Trickle-down is BS. But it's also true that the top 5% or so pay 40%-60% of taxes while the US Congress continues to spend into debt. How'd we get here? Who's primarily to blame? (Congress.) And what is Congress gonna do to fix it?

    Show votes by Congress members year by year against deficit, debt, and ratio paid by corps, rich, middle, and poorer Americans. I wanna see both aisles running for cover.

    Biden's budget envoy was in Congress about 6-8 weeks ago. She mentioned Biden's plan was raising taxes on corporations and individuals with $400k or more in earnings. But when the Republicans pointed out the fact above (5% paying more than half), she had nothing.

    What's the Republican code here to deconstruct? Well, a lack of fairness, and an implied destruction of jobs and income for workers if taxes are higher. OK, how do you defeat that? The Dems are empty. And taxpayers will ultimately have to bear up under both parties' stupidity if this continues.

    I didn't grouse too much about how the government (mis)spends money so long as debt to GDP isn't stupid and there are some attempts to get real. But in the last 10 years, I've changed. Who wants to send cash to DC? DC has serious trust problems.

The calm, serene assurance and objectivity of the GPT outputs have been a breath of fresh air amidst the stupidity of the average social media discourse. If this style somehow prevails it will be a net positive for the internet. I for one welcome our new LLM overlords!

Writing summaries of documents and correspondence is one of the major use cases of these models. De-sensationalization and de-bullshittification are very similar to summarization, so it stands to reason LLMs should handle these tasks just as well.

  • Summarized bullshit is still bullshit akin to a polished turd.

    Given that the choice of which articles to write is incredibly biased to begin with this approach does not seem effective.

    What could theoretically work is an “AI news agency” that “summarizes” many different sources to generate unbiased articles.

    • > Given that the choice of which articles to write is incredibly biased to begin with this approach does not seem effective.

      Selection bias is a given. You always have to keep that in mind. But when you actually want to read a specific article, summarizers are useful. For news and general population content, debullshitifiers could come in handy too.

      Point being, the texts are not random. There's some nugget of valuable content in there, but it's usually wrapped in an enormous layer of SEO, ad hooks, word-count padding, and/or general nonsense. Improving the signal-to-noise ratio here - stripping away all those layers of bullshit - is strictly useful.


    • > What could theoretically work is an “AI news agency” that “summarizes” many different sources to generate unbiased articles.

      NewsMinimalist does this, and it’s quite interesting. I’ve been using it since its introduction, and it's been a fun way to get lots of summarized, de-sensationalized headlines. Specifically, I enjoy setting it to 6.0 and reading the headlines whose impact didn’t quite reach the 6.5+ threshold.

      https://news.ycombinator.com/item?id=35795388


  • I would not and do not trust them to do this in cases where I care about the accuracy of the output.

    • If you care about the accuracy of the output, don’t read news in the first place? I think you’re trumping up the importance of this use case.