Will Smith's concert crowds are real, but AI is blurring the lines

8 months ago (waxy.org)

I think AI-"upscaled" videos are as jarring to look at as a newly bought TV before frame smoothing has been disabled. Who seriously thinks this looks better, even if the original is a slightly grainy recording from the 90's?

I was recently sent a link to this recording of a David Bowie & Nine Inch Nails concert, and I got a seriously uneasy feeling, as if I was on a psychedelic and couldn't quite trust my perception, especially at the 2:00 mark: https://www.youtube.com/watch?v=7Yyx31HPgfs&list=RD7Yyx31HPg...

It turned out that the video was "AI-upscaled" from an original which is really blurry and sometimes has a low frame rate. These are artistic choices, and I think the original, despite being low resolution, captures the intended atmosphere much better: https://www.youtube.com/watch?v=1X6KF1IkkIc&list=RD1X6KF1Ikk...

We have pretty good cameras and lenses now. We don't need AI to "improve" the quality.

  • The weird thing is that people are seemingly enjoying this.

    Yesterday we went to a store to look at a few smartphones for my partner. She wants a good camera above any other parameter, and I watched her prefer the phones that counterfeited reality the most: she was like, "look, I can zoom and it's still sharp", while there was an obvious delay between zooming and the end result, which was a reconstructed, liquid-like distorted image similar to the upscaling filters people use on 8/16-bit game console emulators. I was cringing at seeing the person I love most prefer selfies and pictures of us with smoothed faces and a terrible fake bokeh in the background over something closer to reality.

    • I’m a photographer, and am on a bunch of beginner photography groups.

      These groups used to be a mix of people confused about how their camera worked and wanting help, people wanting tips on how to take better pictures, and sometimes requests to edit pictures on their behalf (eg “I found this old black and white faded picture of my great grandparents, can anyone help restore it?”)

      These days, 99.9% of the posts are requests that involve synthesizing an entirely new picture out of one or more other pictures. Examples: “can someone bring in my grandpa from this picture into this other family picture?”. Or “I love this photo of me with my kids, but I hate how I look. Can someone take the me from this other picture and put it in there? Also please remove the cups from our hands and the trees in the background, and this is my daughter’s ex boyfriend please also remove him”.

      What’s even crazier is that the replies of those threads are filled with dozens of people who evidently just copy pasted the prompt + picture into ChatGPT. The results look terrible… but the OP is always pleased as punch!

      People don’t care about “reality”. Pictures have lost their status of “visual record of a past event”* and become “visual interpretation of whatever this person happens to want”.

      There’s no putting back the genie in the bottle.

      *: yes, you can argue they were never 100% that, but still, that’s effectively what they were.

      30 replies →

    • Yes, this is the exact same reason that frame smoothing exists. When you walk into a store, all the TVs are lined up showing some random nature show or sports event, and frame smoothing will make your TV look a little more smooth than the others, even though it completely ruins the content.

      It's made for making sales, not for making things actually look good.

      9 replies →

    • At some point it became unacceptably rude to gatekeep, king-make, or be otherwise judgemental of taste. It was at around the same time that subcultures and counterculture melted into a homogeneous mass.

      I think we lost something in that. Embarrassment can be useful for moving us out of our comfort zones.

    • It is weird.

      One funny thing I've noticed is that software developers (including myself) seem to rebel against it the most. A surprising number of software developers I know shoot film. No digital cameras, they just take photos, get the prints, and they're done.

      It seems to be the non-technical people who are most OK with the inauthenticity that comes with AI "enhanced" photos.

    • Couldn't you pretty reasonably create Bokeh algorithmically, since it's destroying information rather than creating it?
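
A minimal sketch of that idea, assuming you already have a mask marking background pixels (in a real pipeline it would come from a depth sensor or a segmentation model; all names here are illustrative). Blur the whole frame, then composite the blur in only where the mask says "background" — detail is only discarded, never invented:

```python
import numpy as np

def box_blur(img, k=7):
    """Naive box blur: average over a k x k window, edges clamped."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape[:2]
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def fake_bokeh(img, background_mask, k=7):
    """Blur only the background: information is discarded, not created."""
    blurred = box_blur(img.astype(np.float64), k)
    mask = background_mask[..., None]          # broadcast over channels
    return np.where(mask, blurred, img.astype(np.float64))

# Tiny demo: an 8x8 "image" whose right half is bright background.
img = np.zeros((8, 8, 3))
img[:, 4:] = 255.0
mask = np.zeros((8, 8), dtype=bool)
mask[:, 4:] = True                             # right half = background
out = fake_bokeh(img, mask)                    # left half untouched, right edge softened
```

Real portrait modes use disc-shaped kernels and per-pixel blur radii, but the principle is the same.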

    • May I ask how religious (or woowoo) your partner is?

      The number of people who care about having an objectively true understanding of as much of reality as possible is disappointingly small and I suspect that these photo trends are just making that fact more obvious.

  • That's hilarious https://i.imgur.com/TVfncya.png

    • That's insane. Here's the same-ish frame from the original: https://imgur.com/a/dWS20oP

      The extreme blur here was obviously a creative choice by the director/editor, the rest of the video has lower resolution but it's not nearly that bad (which is why Bowie still looks like himself in other parts of the upscaled video).

      The process used to upscale the video has no subtlety, it's just "make everything look crisp, even if you have to create entirely made-up faces".

      2 replies →

  • I remember watching an episode of one of my favorite shows on my parents’ brand new TV, and thought to myself something about this episode is off, like the production is cheap, the acting feels worse, even the dialog is bad.

    Over time I noticed everything looks cheaper on their TV.

    It was the auto-smoothing.

    https://en.m.wikipedia.org/wiki/Soap_opera_effect

    • It is especially bad for animated shows that have made an explicit artistic choice to let (parts of) the animation progress at a lower frame rate. My kids watched "Spider-Man: Across the Spider-Verse" at a friend's place where smoothing was not turned off, and it completely ruined the artistic feel and made the movie feel like a stuttering video game.

      7 replies →

    • I had the exact same experience watching Goodfellas on my parents' TV. It felt like a cheap soap opera and I was thoroughly confused about what was happening. Afterwards I did some research and learned about motion interpolation in modern TVs.

      8 replies →

    • >everything looks cheaper

      Specifically to you, because you grew up with soap operas. Young people today grew up with 60 fps games and video; to them, 24-30 fps looks broken.

    • Thing is, to some people this is seen as better. To me it feels like a journalist's camera: too real to pass as a story.

  • This phenomenon of pushing technology that end consumers don't want seems to be driven by a simple sequence of incentives: pressure from shareholders to maintain/increase stock price -> pressure on business to increase market share, raise prices, or at least showcase promising future tech -> pressure on PMs to build new features -> combined with developers' desire to try out new technologies -> result: AI chatbots/summaries on things we didn't ask for, touchscreens on car dashboards, AI upscaling, etc.

    • After decades of consumerism, most consumers already have most of what they need/want, so in order to keep selling widgets, corporations must manufacture demand. Enough big screen TVs have already been built and sold to give every American a fully functional 60"+ screen in every room in their residence, enough lightly used ones to go around to completely negate the need to manufacture more. But profit must not go down for any reason, so they must invent gimmicks to push the latest and greatest model onto a public that can't even tell the difference without marketing propaganda.

      The entire global economic system depends on the unceasing transformation of natural resources into a stream of disposable crap for the benefit of the ownership class and shareholding leeches. It's obviously unsustainable, but so are the mortal lives of those who benefit from the system. What incentive have they to save a world in which they will no longer have any stake? Better to live out their days in comfort and wealth by cutting down the saplings under whose shade they will never sit.

      I say enough is enough.

    • > pushing technology that end consumers don't want

      Flashback to when every TV at CES had 3D functionality. Turns out nobody really wanted that. What an immense waste of resources that was.

      2 replies →

  • > Who seriously thinks this looks better

    I don’t think people notice. I don’t own a TV, but twice now I’ve been to some friend’s house and I immediately noticed it on theirs. Both times I explained the Soap Opera effect and suggested disabling the feature. They both agreed, let me do it, and haven’t turned it on again. But I also think that is a mix of trusting me and not caring, I’m not convinced they could really tell the difference.

    Tip for those aiming to do the same: search online for “<tv brand> soap opera effect” and you're bound to find a guide telling you where the setting lives. It may not be 100% correct, so be on the lookout for whatever dumb name the manufacturer gave the “feature” (usually described in the same guide).

    > I got a serious uneasy feeling as if I was on a psychedelic and couldn't quite trust my perception, especially at the 2:00 mark

    You weren’t kidding. That bit at 02:06 really makes you start to blink and look closer. The face morphs entirely.

    https://youtu.be/7Yyx31HPgfs?t=126

    Looking at the original, it’s obvious why: that section was really blurry. The AI version doesn’t understand camera effects.

    https://youtu.be/1X6KF1IkkIc?t=126

    Thank you for providing both links, it made the comparison really simple.

    • If you watch the average person watch TV, they don’t actually pay attention to it. Everyone is just on their phone. It drives me crazy watching just about anything with people because I look around and no one even has their eyes on the TV. It’s just background noise.

  • That is terrible.

    I see this upscaling a lot in YouTube videos about WWII that use very grainy B+W film sources (which themselves weren't struck from the best sources), and it just turns the footage into some weird, flat-paneled, cartoonish mess. It's not video anymore, it's an animated approximation.

  • The close-ups of the bass player are like six slow-motion frames in the original, and upscaled they look like an interpolated mess with inhuman body joints.

  • What makes it uneasy is not only the upscaling but also that they're generating new frames to reach 60fps. 60fps by itself feels fake (check some footage of The Hobbit, which tried 48fps). It feels like video games.

    It's kind of funny to aim for 60fps, because modern video productions often have 60fps footage that's too sharp and clean, so they heavily post-process it: you add film grain and lower the frame rate to 30 or even 24 (cinema) so it looks much more natural.

    The question is whether this is just a habit/taste thing. We most likely wouldn't prefer 24fps if the movie industry had started with 50fps.
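
For what it's worth, the crudest form of frame interpolation is just blending neighbouring frames. Real TV interpolators estimate motion vectors, but even this toy sketch (names illustrative) shows that the in-between frames are synthesized rather than recorded:

```python
import numpy as np

def double_fps(frames):
    """Insert a 50/50 blend between each consecutive pair of frames.

    Real interpolators estimate per-pixel motion; this naive blend only
    illustrates that the extra frames are invented, not captured.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append((a.astype(np.float64) + b.astype(np.float64)) / 2.0)
    out.append(frames[-1])
    return out

# Two 1-pixel "frames": black, then white.
frames = [np.array([[0.0]]), np.array([[255.0]])]
smooth = double_fps(frames)   # black, invented 50% grey, white
```

The blended frame never existed in the source, which is exactly why fast motion turns to mush.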

    • I actually went out of my way to watch The Hobbit in theaters with the 48fps copy, because I thought it was incredible despite the wretched 3D it was paired with. 24fps has always seemed choppy and confusing to me with any kind of action, and The Hobbit was a breath of fresh air.

      I consider it a genuine shame there's no way to release the 48fps cut on home media.

    • It is just habitual, and I feel it's making movies look terrible: panning shots especially look like a stuttery mess that is almost unwatchable for me at 24 FPS.

  • > Who seriously thinks this looks better, even if the original is a slightly grainy recording from the 90's?

    Whatever you had as a kid feels "natural", these things feel "natural" for new generations.

    Same thing for a proper file system vs "apps": a teenager on an iPad will do things you didn't know were possible, but put them on Windows XP and they won't be able to create a file or a folder; they don't even know what those words mean in the context of computers.

    • This sounds like two completely different things. I know people from my parents' generation who would say that the scenes on new TVs look "weird" until the motion smoothing is switched off. This is neurological, not generational.

      2 replies →

  • The first video induced actual physical nausea.

    I had to stop playback or I’m sure I would have thrown up. And I don’t suffer from motion sickness etc.

    There’s definitely something “uncanny valley” about it.

  • Holy... wtf...

    At 2:04 the original deliberately has everyone on stage way out of focus, and the AI upscaler (or the person operating it) decided to just replace it with an in-focus version sporting what looks like late 90s video game characters. That is terrible.

  • I also think it looks like garbage, but I wonder if maybe it looks better on small mobile screens - where you can't actually see the mangled details, but can perceive that it "looks sharper"

  • > I got a serious uneasy feeling as if I was on a psychedelic and couldn't quite trust my perception

    When I took LSD for the first time, I realised it was hitting when everything started looking like stable diffusion

  • Wow, you're not kidding. In some shots David Bowie barely looks like David Bowie because the algorithm's taken such liberties with the original image to try and make it look sharp.

  • I pulled up a podcast on YouTube the other week of just two people facing the camera, their faces side by side on the screen. I had to switch to audio only; I couldn't actually look at the video. The guest on the show was using some kind of AI filter for his video stream, I guess because he was on a low-quality phone camera and thought the filter would be better?? But the result was very disturbing. I almost barfed on my desk when his image came up.

  • Your post just made me realise that as soon as the technology is ready, built-in AI upscaling will be just as ubiquitous as motion smoothing.

    • Not sure if you're serious, but wouldn't it be more efficient to upscale at source and stream the result? Extra bandwidth versus a million TVs all doing the same computation.

      2 replies →

  • I think this kind of smooth, unreal effect is preferred for services like YouTube not because it looks better, but because it compresses better for storage. Less fine detail overall helps video compression.

  • I like upscaling and frame interpolation but as always, the TV does not have the hardware to do a good job. If you use neural network models, it works and looks a lot better without looking plastic-like.

  • I think the only way to future-proof 24 fps content is to render it as 120fps, but repeat every frame 5 times. 5 * 24 = 120fps.

    I don't think TVs can frame smooth that. It should display as intended.
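
That repetition scheme is trivial to express; a hypothetical sketch (ffmpeg's `fps` filter performs the analogous duplication when raising the rate):

```python
def repeat_to_120(frames_24):
    """Repeat each 24 fps frame 5 times, yielding a nominal 120 fps stream.

    Within each group of 5 identical frames an interpolator has no motion
    to fabricate, though it could still blend across group boundaries.
    """
    return [frame for frame in frames_24 for _ in range(5)]

clip = list(range(24))        # one second of 24 fps frames (integers as stand-ins)
padded = repeat_to_120(clip)  # 120 entries: 0,0,0,0,0,1,1,...
```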

    • It was probably a Technology Connections video or something, but I learned that film projectors actually flash each frame 3 times before advancing to the next, so the light flickers 72 times a second while the image is only changing at 24 fps.

  • This reminds me of colorized black-and-white movies from the 90s, although I can now imagine AI being used to do that and upscale the footage, creating new hyper-real versions of the past.

  • You will see the same thing on any online real estate listing. I spend more time wondering why it looks so weird than I do looking at what the picture is attempting to show.

  • It’s surprising how many people don’t notice when frame smoothing is on, when it looks so bad.

Two root comments (so far) are focusing on YouTube, but the article claims most of the AI was done by Will’s team, using AI to convert stills to video:

> The video features real performances and real audiences, but I believe they were manipulated on two levels:

1. Will Smith’s team generated several short AI image-to-video clips from professionally-shot audience photos

2. YouTube post-processed the resulting Shorts montage, making everything look so much worse

You can see the side-by-side [1] of the YouTube post-processing, and, while it definitely alters the original, it isn’t what’s causing most of the really bad AI artifacts.

Most of what YouTube appears to be doing is making it less blurry, sometimes successfully, and sometimes not. And, even with that, it is only done on Shorts.

[1] https://youtu.be/Bx5GzIsmEBI

  • I don't see any differences in that video

    • Yeah, it’s definitely subtle, for the most part. But if you pause it and step through frames, you can definitely see where the post-processing attempted to sharpen some of the out-of-focus parts.

      But, again, the AI artifacts are from taking still shots and using AI to generate videos, which was done separately/intentionally by Will’s team (according to the article).

I wonder if there are two people reading this and wishing that Coldplay had employed this technology earlier this summer?

On a serious note, I find this trend of shoving AI everywhere pretty disturbing. For instance, I used to enjoy Spotify’s “Discover Weekly” feature to find new music, but these days it’s offering so many AI generated songs the experience is pretty jarring.

  • I've been really tempted to drop out of Spotify and start buying CDs from my local music shop instead. Then I can get a USB CD reader and start building my own little music collection. In the pursuit of keeping "up to date" and having access to "everything" we've totally lost touch with the human element of sharing music.

    • Spotify is an unbelievably shitty company, literally any other music service is better for everyone involved.

  • Yeah, I can see how that would feel jarring. Music discovery is supposed to feel like serendipity—stumbling across a track you didn’t know you needed—so when AI-generated filler creeps into that space, it can cheapen the experience.

    Spotify hasn’t officially said they’re flooding “Discover Weekly” with AI songs, but there’s definitely been a surge of AI-produced music uploaded to streaming platforms in the past year. Since Spotify’s algorithms don’t always distinguish between human and synthetic content, it can end up mixing both in your recommendations. That’s especially noticeable in genres where production is relatively easy for AI to mimic (ambient, lo-fi, EDM, generic pop).

    I think the larger unease you’re feeling—AI creeping into places where you expected human curation or artistry—is being shared by a lot of listeners. There’s a debate brewing about whether platforms should label AI music clearly, or even let users opt out of algorithmic recommendations that include it.

    Do you want me to check what tools or tricks people are using to filter out AI-generated songs on Spotify (or elsewhere), so you can get back to the human-made discovery experience?

    (Sorry, I couldn't help myself with this one. I'll see myself out now.)

If I were a marketing person I would also make genuine images look AI generated for the free publicity. Nothing gets attention like mistakes or fakes. The fact that they aren't actually fake means there is no downside for WS and team. I once spoke to a social media manager for a large brand, and he said they intentionally put typos in posts on a semi-regular basis, and it always results in more post engagement (people correcting the typo).

Am I the only person here that thinks the bigger issue here is consent?

These people reasonably consented to being in photos and videos, as we all do going to gigs.

But they almost certainly did not give informed consent to have the artist's team fake video of them using AI?

Is that even legal in the countries it was done in?

  • I’m sure they “consented” by the fine print on the ticket checkout screen and some form of signs at the venue saying “by entering you consent to be filmed etc etc etc”

    Not that I think that’s valid morally but they are probably covered from a legal angle.

    • Please excuse any perceived tone in my reply. I'm furious, but not at you, and I've re-drafted this several times and can't even tell what tone it conveys any more. I appreciate your thoughtful response, and you're likely right that they've ticked a box or walked past a sign or whatever, as we always used to for photos and video. And you've explicitly noted you're not commenting on the morality of it.

      I'm certainly not saying you're wrong. Although you might be, if the specific country has laws around deepfakes that don't explicitly specify they need to be sexually explicit to be illegal.

      Or if you're not allowed to bury ridiculous stuff in small print.

      I've just checked the terms and conditions of entry for the next venue I'm going to, and you're right that buried in the T&Cs of entry is:

      >By entering the Venue you agree to your actual or simulated likeness being included for no fee within any film, photograph, audio and/or audio-visual recording to be exploited in any and all media for any purpose at any time throughout the world. This includes filming by the police or security staff which may be carried out for the security of customers or the prevention of crime. However, you may object to such use by specific request to privacy@livenation.co.uk .

      The signage for the above venue did not make this clear last time I was there. I wonder if that was the case for the venues involved?

      But there's no way people walking in the venue can reasonably be expected to have given informed consent to the artist producing deepfakes of them.

      1 reply →

Some PM at YouTube: “yes, let’s make it harder to tell real videos from AI, to make people who don’t know better more susceptible to and accepting of it”

  • I don't even think there is this much thought involved.

    There are two things happening. There are true believers who think AI is legitimately magic and should be put into every product, and then there are people who are putting AI into every product because their director or VP thinks AI is legitimately magic and is insisting on it. Brainstorming sessions aren't "how can we solve problem X for users" but instead "where can AI change our product."

  • Never attribute malice where bad incentives suffice.

    Someone's KPI was to sprinkle AI somewhere, and someone hit it by shoehorning "AI enhancement" in place of the previous sharpen + denoise filter at YouTube.

    > Some PM at YouTube: “yes, let’s make it harder to tell real videos from AI, to make people who don’t know better more susceptible to and accepting of it”

    This can backfire, perhaps making people believe that real, important news is in reality AI-generated to brainwash them, thus making people less susceptible, and more disbelieving.

My hardware/software or my eyes are borked, because I cannot tell much of a difference between YouTube and Instagram side by side. Gosh. If it is not my eyes, what are the recommendations? What are the top 1 (or 5) reasons I cannot see it, if it is not my eyes? Do I need to upgrade my monitor? I have a relatively recent GPU, but it is not a beast, and I use an HDMI -> VGA converter.

The pictures, however, look god-awful! I presume the video is filled with stuff like these.

  • The videos are more subtle and it's not apparent in every frame. Look for things in the background snapping into and out of focus, weird textures appearing on Will's head and neck, and people's faces looking unnaturally sharp at the edges, while their skin is uncannily smooth (sort of like Max Headroom.)

There's not a single person I know in my life who would want this as a consumer. WHY does the world keep doing things that are so complicated and unnecessary?

  • Sometimes countries get involved in military conflicts so that they can test their hardware and learn important lessons for the next big war. They are building the leadership, logistics and institutional knowledge for when it's really needed.

    Perhaps it's Google warming up their teams for when they have a proper use of their technology and know-how.

  • Before AI was a thing, that same sentiment was true for almost every startup, especially the ones that had "get" in their domain name. Yet here we are.

The "upscaling" significantly changes the content of the picture, altering half of the faces, their expressions, whether eyes are closed, mouths open, etc, on top of making it actually HARDER to see details. The "upscaling" on android smartphone cameras has always been similarly trash. There is always some odd fractal-looking noise filter added. Who is actually asking for this stuff?

Open a company that sells t-shirts with "AI glitched" text on them, so people can make every photo of this kind look illegitimate.

  • In a couple of years, they can go into the same junk drawer as sixth-finger prosthetics (the generative AI problem) and 5-eyes masks (the face recognition problem).

  • Oh those already exist, they just print out whatever ChatGPT gives them without double-checking.

On this episode of "Trying to make AI useful"...

Seriously, whose idea was this? It can't be a money-saving feature; surely it costs more to upscale all these videos than to just host the HD version.

And even if you argue it can be used only on low res videos to provide a "better experience", the resulting distortion of reality should be very concerning.

https://www.theverge.com/youtube/765485/is-youtubes-shorts-e...

Today on The Verge, GenAI upscaling in YT shorts. Yes, AI is here to stay, but I do hope the icky parts go away soon.

  • > GenAI upscaling in YT shorts

    I cannot watch the linked video, but its description quotes “not generative AI”; is The Verge or someone else showing something different?

    • This is being unnecessarily pedantic. They're saying "yes we're doing post-processing, but that's not technically generative AI."

      Personally I couldn't care less what they call it; I care that it makes the same video look more artificial on YouTube than it looks elsewhere.

      > Hi! I'm a tech nerd and I try to be precise about the terminology I use

      > GenAI typically refers to technologies like transformers and large language models, which are relatively new

      > Upscaling typically refers to taking one resolution (like SD/480p) and making it look good at a higher resolution (like HD/1080p)

      > This isn't using GenAI or doing any upscaling

      > It's using the kind of machine learning you experience with computational photography on smartphones, for example, and it's not changing the resolution

      > And sincerely appreciate the feedback!

      1 reply →

    • There's no strict definition of these things. The quote is from YT, and they're saying something like: this isn't that _bad_ generative AI people are worried about, this is just good old-fashioned, wholesome machine learning.

I wonder if the fact that the original video was AI generated made the upscaling look worse than it would on a real video? Not that it can necessarily be detected, but an actual video is likely different from an AI-generated one in ways that could lead their "computational photography" processing astray.

They're not blurring the lines, they're lying. They're generating video that didn't happen and passing it off as real video of events that actually happened. That's lying. Lying a little bit is lying.

Soon you won't be able to tell the difference between AI generated and 'real' content, since the 'real' content will be all processed by AI automatically. Quality in -> Garbage out.

  • Non-AI mucked content will be considered "artisanal" and "high end" instead of the base line.

  • Already happening. Take a photo on your phone of something at a distance at maximum zoom. The amount of digital processing going on is crazy. People didn't want blurriness so instead these highly zoomed photos now look like impressionist paintings.

The most incredible part about this story is that Will Smith is still a performing (and touring???) musician with any audience at all, AI or otherwise. I thought he was an actor now. Wut happened?

  • Will Smith has been both a successful musician and actor since the Fresh Prince days. People do both. I don't know why this would be confusing to you.

    • I think it was confusing to me as well, because this guy is some sort of '90s legend and I don't understand what kind of relevance he still has, especially to anybody under 35. It's not like he successfully transitioned into some kind of culturally relevant behemoth such as Ozzy Osbourne or Snoop Dogg; he kind of plateaued at least a decade ago, potentially more...

      I used to watch and enjoy the Fresh Prince, but you couldn't offer me enough money to go to a Will Smith concert, because, why the hell would I do that...

      2 replies →

I'm not sure about this specific instance, but AI-generated movies will absolutely be the future. When you can create the exact shots you want, with stability of the foreground, background, and characters, and edit it all together, it'll be an explosion of creativity, just as with image generation currently.

To be clear, I don't think it'll be telling an AI to "create me a movie with X, Y, and Z" because AI reasoning is not there yet, but for the raw video generation, it's progressing steadily, as seen in r/aivideo.

  • I don't exactly disagree, but I do suggest reading "Trickster Makes This World: Mischief, Myth, and Art" by Lewis Hyde.

    There is a reasonable argument to be made that a lot of art is enlivened by the cantankerous, unpredictable and unyielding nature of the media we use to create art. I don't think this is a necessary feature of art per se, but I do think limitations often help humans create good art and that eliminating them often produces things which feel tossed off, trivial, thoughtless.

    I think for commercial producers, creating "the exact shot you want" might be what shareholders demand of you. But many artists don't set out to create "the exact shot they want"; they set out to collaborate with the world to create an impression that captures both their intent and the unpredictable substance of the situation, in whatever sense that might mean.

  • > […] it'll be an explosion of creativity just as with image generation currently.

    I'm mostly seeing people who lack the skills or means to create their own works go nuts with prompting gen-AI tools, but it rarely strikes me as creative in either the 'having the ability to create' sense — they've outsourced that — or the 'original, expressive, imaginative' sense.

      They don't have the mechanical means, yes, but they decide what to create, so it'd be the latter; not sure why you think it's not. The AI isn't independently coming up with ideas and generating the media. Plus, with ComfyUI, I'd say there's some of the former too, similar to how music producers aren't literally playing each instrument simulated in their software, but they do assemble it all together.

      3 replies →

  • The line between movie and games will blur. Once you can do generative movies, you can do games, and vice versa, there's no obvious delineation, and the technical problem is heavily overlapping. Games just has some scoped control inputs, like this: https://demo.dynamicslab.ai/chaos

  • > it'll be an explosion of creativity just as with image generation currently.

    I haven't seen anything breathtaking yet, just a tsunami of slop. Arguably we already had a tsunami of video slop; you just have to log in to Netflix to witness it.

    For a long time I disliked the term "content" to describe photos/movies/art/&c. but now I feel it's a very appropriate term, an infinite amount of meaningless "content" to fill bottomless "containers"

  • Nope. Limitations feed creativity. When you have unlimited power/resources, you end up with unlimited slop. That's one reason old movies were better on average; now we get so many average movies with no lasting effect. Another, slightly orthogonal example: a gold ring or Rolex in a neatly designed photo shoot vs. a Middle Eastern head of state's "throne room". When you have something in limited quantities, you get the best out of it; when it's unlimited, you go crazy.

    • Survivorship bias: there is no indication that older movies were better on average. While I can agree that constraints breed creativity, as they say, the opposite can also be true. Look at software: one can theoretically code an unlimited number of things, and from that we get people creating software and connecting devices at a never-before-seen level of scale and creativity.

      2 replies →

So... the videos showing the difference between the AI-tainted youtube version and the supposedly untainted instagram version are hosted on... youtube?

  • Apparently the sharpening algorithm is only applied to YouTube Shorts, not to regular YouTube videos.

I think the biggest takeaway for people in the industry is that the reaction to perceived AI videography is overwhelmingly negative. Using AI to generate footage of real people immediately makes watchers suspect something needs to be covered up or counterfeited. People know what reality looks like, and sloppily subverting it will never be popular.

> "Conclusion - Virtually all of the commenters on YouTube, Reddit, and X" SNIP

That's fine and commenters on those sites are entitled to their opinions, but it's strange they didn't mention Hacker News.

What's the point of using videos like this if it's a risk to reputation just to use them?

  • The people with whom this is a reputational risk were not going to buy Will Smith concert tickets anyway.

  • There's no risk to reputation. You get a massive rage boost then reveal that "a social media contractor used authentic crowd photographs in an unauthorized manner and is no longer employed by the company". You reveal the photos, everyone either celebrates the contractor getting canned or that this wasn't AI and you get huge lift.

    People are suckers. You can tell them you are going to do this, do it, and they'll still fall for it. Don't tell them, and they'll think better of themselves and of you for obliging them (cf. the fictional firing), and you're done!

  • There's no risk to reputation until you use it. Furthermore, even within creative professions, using gen AI is already acceptable to some degree.

    • The entire point of this article is that the "make it look worse" filter is being applied by YouTube automatically, whether creators want it or not.

  • I’m wondering at what point the minority are going to finally accept ai is here to stay.

What happens when AI gets trained on AI slop?

If there's code to stop AI from being trained on AI, I would like to have it to stop me from seeing it.

From a PR perspective, I wonder why YouTube is simultaneously forcing unwanted AI features down people's throats[1], a move many companies now make to drum up their perceived AI competence, but then, when asked, downplaying this use of AI by splitting hairs.

The combination of the two confuses me. If this was about shareholders, they'd hype up the use of AI, not downplay it. And if this was about users, they'd simply disable this shit.

[1] I mean, they're sacrificing Google Search of all things to push their AI crap. Also, as a bilingual YouTube user, AI-translated titles and descriptions make the site almost unusable now. In addition to some moronic PM forcing this feature onto users, they somehow also seem to have implemented the worst translation model in the industry. The results are often utterly incomprehensible, and there's no way to turn this off.

  • The customers are advertisers and marketers. The product is access to the userbase. This is how we arrive where we are today: major clients have made significant investments in AI and expect further return on that investment through proliferation of the technology, while the users couldn't care less or even balk at it. But we are also at a time when there is no longer any viable alternative to the monopolized corners of the internet, such as YouTube, so the userbase has nowhere to flee even if they wanted to.

I find it hilarious that the Youtube spokespersons go out of their way to clarify that this is "not the bad GenAI shit that we know everyone hates but the good kind, you know, machine learning and stuff, you know, trust us"

  • The YouTube liaison statement is truly something to behold. It's not "upscaling", it's "unblurring". There's no gen AI, just "traditional machine learning". Oh, phew! I wouldn't want to have misidentified post-processing as unblurring when it was really upscaling.

    I think we may see more of these, to be blunt, stupid equivocations that are orthogonal to the actual concern: namely, that this violates a fundamental trust that images represent something that really happened. We saw this already with Samsung's head-spinning justification of their post-processing of a fake moon.

    Strap in for more JV-debate-team-level "gosh, what is reality anyway" equivocations, which I suspect will become increasingly prevalent.

This is due to “upscaling”. I remember, before the whole AI bubble craziness, there was a piece of software I used that would upscale images pretty well, except that fine details like these would be turned into the mushy stuff you see in the article.

At the end of the day it doesn't really matter for music performances, though, as long as we all have ears.
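That "mushy detail" failure mode is just interpolation at work. Here's a minimal pure-Python sketch (a toy 1-D illustration, not the actual software mentioned above): interpolation-based upscaling can only average neighbouring samples, so it cannot invent the missing high-frequency information, and a sharp one-pixel feature becomes a wide, dim ramp.

```python
def upscale_linear(signal, factor):
    """Linearly interpolate a 1-D signal by an integer factor."""
    out = []
    for i in range(len(signal) - 1):
        a, b = signal[i], signal[i + 1]
        for k in range(factor):
            t = k / factor
            out.append(a * (1 - t) + b * t)
    out.append(signal[-1])
    return out

# Fine detail: a 1-pixel-wide bright stripe on a dark background.
stripe = [0, 0, 255, 0, 0]
upscaled = upscale_linear(stripe, 4)
print(upscaled)
# The sharp 0 -> 255 -> 0 edge becomes a ramp of in-between grey
# values: the stripe gets wider and softer ("mushy"), never sharper.
```

Real upscalers use fancier kernels (bicubic, Lanczos), but they share the same limitation; AI upscalers "fix" it by hallucinating detail instead, which is the article's problem.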

his music is terrible