Comment by chrismorgan
11 hours ago
The current title (“Pakistani newspaper mistakenly prints AI prompt with the article”) isn’t correct: it wasn’t the prompt that was printed, but trailing chatbot fluff:
> If you want, I can also create an even snappier “front-page style” version with punchy one-line stats and a bold, infographic-ready layout—perfect for maximum reader impact. Do you want me to do that next?
The article in question is titled “Auto sales rev up in October” and is an exceedingly dry slab of statistic-laden prose, of the sort that LLMs love to err in (though there’s no indication of whether they have or not), and for which alternative (non-prose) presentations can be drastically better. Honestly, if the entire thing came from “here’s tabular data, select insights and churn out prose”… I can understand not wanting to do such drudgework.
The newspaper in question is Pakistan's English-language "newspaper of record", which has a wide readership.
For some reason, they rarely ever add any graphs or tables to financial articles, which I have never understood. Their readership is all college educated. One time I read an op-ed where the author wrote something like: if you go to this government webpage, take the data, put it into Excel, and plot this thing vs. that thing, you will see X trend.
Why would they not just take the Excel graph, clean it up, and put it in their article?
Maybe the model just wasn’t multi-modal back then ;)
Because it was BS opinion, dressed in scientific-sounding clothing?
The AI is prompting the human here, so the title isn't strictly wrong. ;)
Gemini has been doing this to me at the end of basically every single response for the past few weeks now, and it often seems to make the subsequent responses get off track and drop in quality as all these extra tangents start polluting the context. Not to mention how distracting it is: it throws off the reply I was already halfway through composing by the time I read it.
This is why I wish chat UIs had separate categories of chats (like a few generic system prompts) that let you do more back-and-forth-style discussions, or "answers only" without any extra noise, or even an "exploration"/"tangent" slider.
The fact that system prompts / custom instructions have to be typed in manually in every major LLM chat UI is a missed opportunity, IMO.
Add "Complete this request as a single task and do not ask any follow-up questions," or some variation of that. They keep screwing with the default behavior, but you can explicitly direct the LLM to override it.
1 reply →
Why do you respond to its prompting? It's a machine
2 replies →
Occasionally I find it helpful, but it would be good to have the option to remove it from the context.
1 reply →
I think AI should present those continuation prompts as dynamic buttons, like "Summarize", "Yes, explain more" etc. based on the AI's last message, like the NPC conversation dialogs in some RPGs
1 reply →
I have decided to call it engagement bait.
For years, both the financial and sports news sides of things have generated increasingly templated "articles", this just feels like the latest iteration.
This dates back to at least the late 1990s for financial reports. A friend demoed such a system to me at that time.
Much statistically-based news (finance, business reports, weather, sport, disasters, astronomical events) is heavily formulaic and can, at least in large part or for the initial report, be automated, which speeds information dissemination.
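(To illustrate what that kind of formulaic automation looks like in practice, here's a minimal sketch of a template-fill generator. The field names, figures, and wording are entirely invented for illustration, not taken from any real system or from the article in question.)

```python
# Hypothetical template-driven report generator of the kind described above.
# All field names and numbers are made up for illustration.

def generate_auto_sales_blurb(stats: dict) -> str:
    """Fill a fixed prose template from structured sales statistics."""
    change = (stats["units"] - stats["units_prev"]) / stats["units_prev"] * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"Auto sales {direction} {abs(change):.1f}% in {stats['month']}, "
        f"to {stats['units']:,} units from {stats['units_prev']:,} "
        f"a year earlier."
    )

print(generate_auto_sales_blurb({
    "month": "October",
    "units": 18_586,
    "units_prev": 15_480,
}))
```

Because every number comes straight from the structured input, this kind of system can't hallucinate a statistic; the worst it can do is phrase it woodenly.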
Of course, it's also possible to distribute raw data tables, charts, or maps, which ... mainstream news organisations seem phenomenally averse to doing. Even "better" business-heavy publications (FT, Economist, Bloomberg, WSJ) do so quite sparingly.
A few days ago I was looking at a Reuters report on a strategic chokepoint north of the Philippines which it and the US are looking toward to help contain possible Chinese naval operations. Lots of pictures of various equipment, landscapes, and people. Zero maps. Am disappoint.
Obviously the solution is to use AI to extract the raw data from their AI generated fluff.
It's like the opposite of compression.
1 reply →
At least in the case of Bloomberg they would like you to pay for that raw data. That's their bread and butter.
1 reply →
https://www.npr.org/sections/money/2015/05/20/406484294/an-n...
https://www.wired.com/story/wordsmith-robot-journalist-downl... https://archive.ph/gSdmb
And this has been going on for a while... https://en.wikipedia.org/wiki/Automated_journalism
Sports and financial are the two easiest to do since they both work from well structured numeric statistics.
I like Quakebot as an example of how to do this kind of thing ethically and with integrity: https://www.latimes.com/people/quakebot
> Quakebot is a software application developed by the Los Angeles Times to report the latest earthquakes as fast as possible. The computer program reviews earthquake notices from the U.S. Geological Survey and, if they meet certain criteria, automatically generates a draft article. The newsroom is alerted and, if a Times editor determines the post is newsworthy, the report is published.
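The pipeline described in that quote (fetch notices, filter by criteria, draft, hand to a human editor) can be sketched roughly like this. The LA Times hasn't published Quakebot's source, so the criteria, thresholds, and data shapes below are assumptions for illustration only:

```python
# Hedged sketch of a Quakebot-style pipeline; the magnitude threshold and
# notice fields are invented, not the LA Times' actual criteria.
from dataclasses import dataclass

@dataclass
class QuakeNotice:
    magnitude: float
    place: str
    reviewed: bool  # whether the event notice has been reviewed upstream

def meets_criteria(n: QuakeNotice, min_magnitude: float = 3.0) -> bool:
    """Only draft articles for reviewed events above a magnitude floor."""
    return n.reviewed and n.magnitude >= min_magnitude

def draft_article(n: QuakeNotice) -> str:
    """Generate a draft; a human editor still decides whether to publish."""
    return (f"A magnitude {n.magnitude} earthquake was reported "
            f"{n.place}, according to the U.S. Geological Survey.")

notice = QuakeNotice(magnitude=4.2, place="near Ridgecrest, Calif.", reviewed=True)
if meets_criteria(notice):
    # In the real system this goes to a newsroom queue, not straight to print.
    print(draft_article(notice))
```

The integrity comes from the last step: the machine only drafts, and a human makes the publish decision.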
1 reply →
In the mid-to-late aughts, there was a content farm called "Associated Content". They would get daily lists of top searched terms from various search engines (Yahoo, Dogpile, AltaVista, etc.) and, for each search term, pay an English major to write a two-page fluff article. Regardless of the topic, they churned out articles by the bushel. Then they placed ads on these articles and sat back and watched the dollars roll in.
A non-"AI" template is probably getting filled in with numbers straight from some relevant source. AI may produce something more conversational today, but as someone else observed, this is a high-hallucination point for them. Even if they get one statistic right, they're pretty inclined to start making up statistics that weren't provided to them at all, if they sound good.
Not just that: we know from heavy Reddit posters that they have branching-universe templates for all eventualities, so that they are "ready" whatever the outcome.
Both categories are, and have long been, bottom-feeder copy, since well before the prevalence of LLMs.
Legitimate news organizations announce their use of A.I.
I believe the New York Times weather page is automated, but that started before the current "A.I." hype wave.
And I think the A.P. uses LLMs for some of its sports coverage.
I guess in the end the journalist didn't feel it necessary to impact his readers with punchy one-line stats and bold, infographic-ready layouts, considering he opted for the first draft.
> it wasn’t the prompt that was printed, but trailing chatbot fluff
I've seen that sort of thing copy/pasted in several emails at work, usually ones that are announcing something on a staff email list.
Sort of a giveaway that the email isn't very important.
Thank you, yes, that's accurate. I'm not sure whether the article itself is accurate, though; I doubt it has no incorrect stats.
By "AI prompt" I mean "prompted by AI"
Edit: Note about prompt's nature.
It might be better to mention “Dawn newspaper” instead of “Pakistani newspaper”.
Only Pakistanis know where the Dawn newspaper is from, so the current title is more informative.
4 replies →
Nobody outside Pakistan knows Dawn, even though it was founded by Muhammad Ali Jinnah (considered the founding father of the nation) and is one of the country's largest and most prestigious newspapers.
1 reply →
> and is an exceedingly dry slab of statistic-laden prose
That's the kind of thing I'd be worried AI would make up a stat in: something really boring that most people aren't going to follow up on to verify.
I think "AI prompt" is synonymous with the chat before an LLM prints the intended garbage.
The prompt is the chat before it prints the intended garbage. This is the engagement bait the LLM appends after the intended garbage.
Do we know it was an AI? I realize that it rings with a sycophantic tone that the AIs love to use, but I've worked with some humans who speak the same way. AIs didn't invent brownnosing.