Comment by lxgr
15 hours ago
I can’t design wallpapers/stickers/icons/…, but I can describe what I want to an image generation model verbally or with a source photo, and the new ones yield pretty good results.
For icons in particular, this opens up a completely new way of customizing my home screen and shortcuts.
Not necessary for the survival of society, maybe, but I enjoy this new capability.
So we get a fresh, cheap new way to spread propaganda and lies and erode trust across society, while cementing power and control for a few at the top. And in return we get a few measly icons (as if there weren't already literally thousands freely available) and silly images for momentary amusement?
What a rotten exchange.
I wonder what will happen to the entire legal system. It used to be fairly difficult to create convincing photos and videos.
AI can probably fool most court judges now. Or the defense can refute legitimate evidence by saying “it’s AI / false”. How would that be refuted?
For better or worse, the only admissible evidence going forward will probably be either completely physical or originated in attestation-capable recording devices, i.e. something like a "forensics grade" camera with a signing key in trusted hardware issued by somebody deemed trustworthy.
Given the obvious personal safety upsell ("our phone/dashcam/... produces court-admissible evidence!"), I think we'll even see this in consumer devices before too long.
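The attestation idea above can be sketched in a few lines. This is a hypothetical illustration, not a real device API: the frame is hashed together with a timestamp and signed by a device key. A real scheme would use an asymmetric key (e.g. Ed25519) sealed in a secure element so that verifiers never hold the secret; here an HMAC over the digest stands in for that hardware-backed signature.

```python
import hashlib
import hmac

# Hypothetical stand-in for a signing key sealed inside trusted hardware.
# In a real attestation scheme this would be an asymmetric private key that
# never leaves the secure element; verifiers would use the public half.
DEVICE_KEY = b"secret-key-inside-trusted-hardware"

def attest(frame: bytes, timestamp: str) -> str:
    """Bind a captured frame to a timestamp with a device signature."""
    digest = hashlib.sha256(frame + timestamp.encode()).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify(frame: bytes, timestamp: str, tag: str) -> bool:
    """Check that neither the frame nor the timestamp was altered."""
    return hmac.compare_digest(attest(frame, timestamp), tag)
```

Any edit to the frame or the timestamp invalidates the tag, which is the whole value proposition: the court checks the signature, not the pixels.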
Trials have rules of evidence. You can't just pull some footage out of nowhere. Where did it come from? From what camera? What was the chain of custody on its footage? Etc.
Yes, that is a major worry of mine, too. CCTV evidence is worth nil now (it could be generated in whole or in part), and even eye-witness testimony can't be fully trusted (sure, a witness may think they saw the alleged perpetrator, but perhaps they just saw an AI-generated video/projection of someone).
MS13 was literally tattooed on his knuckles!
Multiple data sources, considering the trustworthiness of the source of the information, and accountability for lying.
You might generate an AI video of me committing a crime, but the CCTV on the street didn't show it happening and my phone's cell tower logs show I was at home. For the legal system I don't think this is going to be the biggest problem. It's going to be social media that is hit hardest, when a fake video can go viral far faster than fact checking can keep up.
By having people also testify to authenticity and coming down like the hand of God on fakers, the same way we make sure evidence is real now.
If it means anything, I have a 1990 almanac from an old encyclopedia that issues the exact same warning about digital photo manipulation. I don't think it really matters at this point.
AI can also be used to fight propaganda, for instance BiasScanner makes you aware of potentially manipulative news: https://biasscanner.org .
So that makes AI a "dual good", like a kitchen knife: you can cut your tomato or kill your neighbor with it, entirely up to the "user". Not all users are good, so we'll see an intense amplification of both good and bad.
AI is certainly a dual good but I think the project is misguided at best.
I put in one of the driest descriptions of the Holocaust I could find and it got a very high score for bias, calling a factual description of a massacre emotional sensationalism because it inevitably contains a lot of loaded words.
It also doesn't differentiate between reporting, commentary, poetry, or anything else. It takes text and spits out a number, which is a very shallow analysis.
It's more work to fight bullshit than it is to generate it, though. Saying "Use AI to fight it" is inherently a losing strategy when the other side also has an AI that is just as powerful.
Don’t blame the tools. Stalin, Mao and Hitler didn’t need AI.
That pro forma response grows oh so very tiresome.
For the nth time: scale, ease, and access matter. AI puts propaganda abilities far beyond the reach of those men in the hands of many more people. Do you not understand the difference between one man with a revolver and an army with machine guns? They are not the same.
Nowhere in my comment am I "blaming the tools". I'll ask you to engage with the argument honestly instead of simply parroting what you already believe without reading.
Is that worth the cost of this technology? Both in terms of financial shenanigans and its environmental cost?
Are you asking if the 10 seconds it takes AI to generate an image is more costly to the environment than a commissioned graphics artist using a laptop for 5-6 hours, or a painter who uses physical media sourced from all over the world?
In short, yes.
A modern laptop is running almost fanless, like a 486 from the days of yore.
A single H200 pumps out 700W continuously in a data center, and you run thousands of them.
Also, don't forget the training and fine tuning runs required for the models.
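The per-image comparison in this exchange can be checked back-of-envelope. All figures here are assumptions from the thread (700 W for one H200, 10 s per image) or my own guesses (30 W average laptop draw, 5.5 hours as the midpoint of the commission estimate), and this deliberately ignores training runs and the induced-demand effect discussed below, which is where the real disagreement lies.

```python
# Back-of-envelope inference-only energy comparison (illustrative numbers).
H200_POWER_W = 700        # claimed continuous draw of one H200
GEN_TIME_S = 10           # seconds to generate one image
LAPTOP_POWER_W = 30       # assumed average draw of an artist's laptop
COMMISSION_HOURS = 5.5    # midpoint of the 5-6 hour estimate

ai_wh = H200_POWER_W * GEN_TIME_S / 3600       # ~1.9 Wh per generated image
artist_wh = LAPTOP_POWER_W * COMMISSION_HOURS  # 165 Wh per commission
```

On these numbers a single inference is roughly two orders of magnitude cheaper than a laptop commission, which is exactly why the training amortization and the "thousands of images instead of one" volume effect matter so much to the overall accounting.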
Mass transportation / global logistics can be very efficient and cheap.
Before the pandemic, it was in some cases cheaper to import fresh tomatoes from half a world away than to grow them locally. A single container of painting supplies is nothing in the grand scheme of things, especially compared with what data centers are consuming and emitting.
Cheaper/faster tech increases overall consumption though. Without the friction of commissioning a graphics artist to design something, a user can generate thousands of images (and iterate on those images multiple times to achieve what they want), resulting in way more images overall.
I'm not really well versed on the environmental cost, more just (neutrally) pointing out that comparing a single 10s image to a 5-6 hour commission ignores the fact that the majority of these images probably would never have existed in the first place without AI.
Wow, do you hold a degree in false dichotomies?
The environmental cost is significantly overblown, especially water usage.
I work with direct liquid cooled systems. If the datacenter runs open DLC systems (most AI datacenters in the US in fact do), a lot of water is wasted, 24/7/365.
A mid-tier top-500 system (think #250-#325) consumes about 0.75 MW of power. AI data centers consume orders of magnitude more. To cool that behemoth, you need to pump tons of water per minute through the inner loop.
Outer loop might be slower, but it's a lot of heated water at the end of the day.
To prevent water wastage, you can go closed loop (for both inner and outer loops), but you can't escape the heat you generate and pump to the atmosphere.
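"Tons of water per minute" checks out with a simple heat balance, Q = ṁ · c_p · ΔT. The 0.75 MW load is from the comment above; the 10 K loop temperature rise is my assumption (real inner loops vary).

```python
# Water flow needed to carry away 0.75 MW of heat.
# Heat balance: Q = m_dot * c_p * delta_T
Q_W = 750_000        # heat load in watts (mid-tier top-500 system, per above)
C_P = 4186           # specific heat of water, J/(kg*K)
DELTA_T_K = 10       # assumed inlet/outlet temperature rise of the loop

m_dot_kg_s = Q_W / (C_P * DELTA_T_K)   # ~17.9 kg/s
m_dot_t_min = m_dot_kg_s * 60 / 1000   # ~1.1 tonnes of water per minute
```

So roughly a tonne of water per minute circulates for 0.75 MW; scale that by orders of magnitude for an AI datacenter and the flow rates get enormous, even in a closed loop where the water itself is reused.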
So, the environmental cost is "overblown" in the same way Chernobyl or fallout from a nuclear bomb is overblown.
So, it's not.
Depends on whether you believe it will ever become cheaper: the hardware, more efficient smaller models, or energy itself. The techno-optimist believes that is the inevitable and investable future. But on what horizon, and will it get "Zip-drived" before then?
absolutely without a doubt it is
If that energy is used for research, maybe. If used to answer customer questions or generate Studio Ghibli knock-offs, it's not worth it, even a bit.