Comment by dusted
3 years ago
Basically this. As "neat" as AI "improvement" is, I don't think it has any actual value; I can't come up with any use case where I could accept it. "Make pictures look good by just hallucinating stuff" is one of the harder ones to explain, but you did it well.
Another thing: pictures serve as proof and documentation, maybe not when they're taken but after the fact, for historical reasons or forensics. We can't have every picture automatically compromised the moment it's taken. (Yes, I know Photoshop is a thing, but that's a very deliberate action, which I believe it should be.)
I think the main use case is "I'm a crummy photographer and all I want is something to remind me that I was there" and "Look at my cat. Look! Look at her!"
That's me. I'm a lousy photographer, as evidenced by all of the photos I shot back when film actually recorded what you pointed it at. My photography has been vastly improved by AI. It hasn't yet reached the point of "No, you idiot, don't take a picture of that. Go left. Left! Ya know what, I'm just gonna make something up," but it should.
I imagine there will remain a use case for people who can actually compose good shots. For the remaining 99% of us, we'll use "Send the camera on vacation and stay home; it's cheaper and produces better pictures" mode.
As a kid I was taking a photo in a tourist spot with a film camera and standard 50mm lens. An elderly local guy grabbed me by the shoulder as I framed the photo. We shared no common language and he (not so gently) pulled me over to where I should stand to get the better shot.
That would actually be a useful feature: I'm aiming the camera, and based on what makes "good professional" photos, it suggests "move to the left so you frame the picture well" or "those two people should be more spread out so it's not one person with two heads", kind of like lane warnings on cars.
You don't need AI for taking better photos; for most people, the phone just automatically taking a burst/video and picking a frame out for the still, or stacking frames, would be plenty. Lots of photos suck because of shit lighting. A camera intelligently stacking frames would fix a lot of people's photos.
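To make the stacking point concrete, here's a minimal sketch of the idea (not any phone's actual pipeline) using NumPy, assuming a burst of already-aligned frames: averaging N noisy frames cuts random sensor noise by roughly sqrt(N), which is most of what rescues a dim, grainy shot.

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of aligned frames to reduce random sensor noise."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst: the same scene with independent noise in each frame.
rng = np.random.default_rng(0)
scene = rng.uniform(50, 200, size=(4, 4))          # "true" light hitting the sensor
burst = [scene + rng.normal(0, 10, scene.shape) for _ in range(16)]

single_err = np.abs(burst[0] - scene).mean()        # error of one noisy frame
stacked_err = np.abs(stack_frames(burst) - scene).mean()
assert stacked_err < single_err  # the stacked frame is closer to the real scene
```

No hallucinated pixels involved: every output value comes from averaging real measurements.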
That's already AI. Auto white balance and ISO are already AI too.
"I'm a crummy photographer and all I want is something to remind me that I was there"
This is fine. I can take good shots, but at the same time, this is the level of shot I care about most of the time too!
But then instead of a 20MP image, which:
* takes more space, and ergo, more flash storage
* takes more space to store, to back up, to send
* is only made 20MP by inserting fake data
why not have a 2MP image, which is real, and let people's end-use device "fix" it? All that post-processing can be done at 2x or 4x the view size, too!
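The storage arithmetic behind this argument, sketched with illustrative numbers (real JPEG sizes vary with content and quality settings; the bytes-per-pixel figure is an assumption, not a measurement):

```python
# Rough cost of shipping a 20MP upscale vs. a 2MP original.
big_mp, small_mp = 20, 2
bytes_per_pixel = 0.5  # assumed average for a typical JPEG (illustrative)

big_bytes = big_mp * 1_000_000 * bytes_per_pixel    # 10,000,000 bytes
small_bytes = small_mp * 1_000_000 * bytes_per_pixel  # 1,000,000 bytes

ratio = big_bytes / small_bytes
print(ratio)  # 10.0: every copy, backup, and email costs ~10x more
```

Whatever the exact bytes-per-pixel, the ratio tracks the pixel count, and that ratio is paid again in every mailbox and backup that holds a copy.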
Because advertising.
And that's sad. We'd rather think we have a better pic, and destroy the original.
And the space thing is real, because that same pic gets stored in Gmail by 20 people, backed up, kept on all the devices, and so on!
And the LOL of it all is that I bet when it's uploaded to Facebook... it gets downsized!
edit: in fact, my email app lets me resize on send, so I downsize it there too! Oh, those poor electrons.
This is like Huffman coding for your photos, except the AI companies built an exabyte-sized dictionary, and now you can store your photo in a few kilobytes.
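For anyone who hasn't seen it, the idea the analogy leans on is simple: Huffman coding gives frequent symbols short codes, so shared knowledge of the symbol statistics (the "dictionary") shrinks what you have to transmit. A toy sketch, nothing like a real image codec:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table: frequent symbols get shorter bit codes."""
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreaker, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

text = "look at my cat look look"
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)
# Shorter than 8 bits per character, because the code table carries the stats:
assert len(encoded) < 8 * len(text)
```

The receiver needs the same code table to decode, which is the point of the joke: push enough shared "dictionary" to both ends and the per-photo payload gets tiny, whether or not what comes out was ever really in.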
I'm a decent photographer and still use my phone for this. It's good enough, and I can even skip the AI stuff if I want to. Or even better: I can keep the AI stuff in the raw and edit its impact on the final photo later.
Interestingly enough, one of the reasons Sony's flagships perform really badly in comparisons is that they are weak at computational photography. So even when the sensor is great, the result looks too real, which people don't like.
> it looks too real
Yeah, I have a phone with a great camera; nature shots are great, but people don't like themselves in these photos. When pressed, they talk about the defects in skin and teeth and eye position... Their phone's beauty filters created a fake mental image of themselves in their minds, and they dissociate from their real image.
It's weird. My mom's brand loyalty exists because Huawei's specific algorithm has become part of her self-image.
How about using AI for sensor fusion when you have images from multiple different kinds of lenses (like most smartphones today)? I was under the impression this was the main reason AI techniques became popular in smartphone cameras to begin with.
I'm not aware of much fusion happening between different lenses (although I saw an article using it for better portrait-mode bokeh), but AI is used to stack multiple images from the same sensor. You can do de-noise, HDR and other stacking with clever code, but AI just makes it better.
Good for situations where you aren't expecting, or don't care about, realism at this level of detail. AI hallucinations will be amazing for entertainment, especially games.
I want game content generated by AI, like dungeon generation in an ARPG. It likely won't be as good as a level hand-crafted by a developer, but it should be more interesting than the current techniques, where prefab pieces of a dungeon are randomly put together.
Removing noise from low-light pictures, or removing motion blur from shaky hands. Lots of use cases for "AI" or computational photography.
> We can't have every picture automatically compromised as soon as it's taken.
Isn't it a good thing for privacy?
I think it's neutral. True photos can be just as incriminating (at least there might be some moral high ground, if you're into that sort of thing), but with AI-faked pictures you may have no choice but to be incriminated by photos that lie.
I genuinely can’t recall people saying “hallucinate” with any regularity - in the context of “AI” - until people started talking about ChatGPT.
So, we’ll see what people say in a year.
The term has been around in this context since at least 2018[1], and indeed I have chat logs from 2019 talking about how MT [machine translation] hallucinates, so no, this is what people have been calling it for a while now. Perhaps what you're seeing is just rising awareness that this is a weakness of current-gen ML models, which is great; now even monoglots get to feel my pain :V
[1]: https://www.wired.com/story/ai-has-a-hallucination-problem-t...
I think it started a bit earlier, with image-generation AI like DALL-E.
I first recall seeing it in the context of DeepDream in 2015
Even earlier with Deep Dream