I think you're referencing https://kite.kagi.com/
In my view, it's different to ask AI to do something for me (summarizing the news) than it is to have someone serve me something that they generated with AI. Asking the service to summarize the news is exactly what the user is doing by using Kite—an AI tool for summarizing news.
(I'm a Kagi customer but I don't use Kite.)
I'm just realizing that while I understand (and think it's obvious) that this tool uses AI to summarize the news, they don't actually mention it anywhere on the page. Unless I'm missing it? I think they used to, but maybe I'm misremembering.
They do mention "Summaries may contain errors. Please verify important information." on the loading screen, but I don't think that's good enough.
"Kagi News reads public RSS feeds of thousands of (community-curated) world-wide news sources and utilizes AI to distill them into one perfect daily briefing."
https://news.kagi.com/about
https://news.kagi.com/world/latest
Where's the part where you ask them to do this? Is this not something they do automatically? Are they not contributing to the slop by republishing slopified versions of articles without as much as an acknowledgement of the journalists whose stories they've decided to slopify?
If they were big enough to matter they would 100% get sued over this (and rightfully so).
> Where's the part where you ask them to do this? Is this not something they do automatically?
It's a tool. Summarizing the news using AI is the only thing that tool does. Using a tool that does one thing is the same as asking the tool to do that thing.
> Are they not contributing to the slop by republishing slopified versions of articles without as much as an acknowledgement of the journalists whose stories they've decided to slopify?
They provide attribution to the sources. They're listed under the headline "Sources" right below the short summary/intro.
I've been using Kagi for two years now. Their consistent approach to AI is to offer it, but only when explicitly requested. With that in mind, this is not that surprising.
> Their consistent approach to AI is to offer it, but only when explicitly requested.
Kagi News does not even disclose its use of AI.
"Kagi News reads public RSS feeds of thousands of (community-curated) world-wide news sources and utilizes AI to distill them into one perfect daily briefing."
https://news.kagi.com/about
I think it's generally understood among their users (paying customers who make an active choice to use the service) but I agree—they should be explicit re: the disclosure.
The code is open source; you can submit it as a PR as you see fit.
Not all "AI"-generated content can be categorized as "slop". "Slop" has a specific meaning, usually associated with spam and low-effort content. What Kagi News is doing is summarizing news articles from different sources, and applying a custom structure and format. It is a branded product supported by a reputable company, not a low-effort spam site.
I'm a firm skeptic of the current hype around this technology, but I think it is foolish to claim it has no good applications. Summarizing text content is one such use case, and IME the chances of the LLM producing wrong content or hallucinating are very small. I've used Kagi News a number of times over the past few months, and I haven't spotted any content issues, aside from the tone and structure not quite matching my personal preferences.
Kagi is one of the few companies that is pragmatic about the positive and negative aspects of "AI", and this new feature is well aligned with their vision. It is unfair to criticize them for this specifically.
> "Slop" has a specific meaning, usually associated with spam and low-effort content.
Slop means different things to different people. And in my view, anything that isn't human-reviewed is low-effort.