AI camera with no lens

3 years ago (theprompt.io)

I can't find it now, but there was a prototype someone made in the 2000s of a camera that, when you pressed the shutter, would fetch the image on Flickr that most closely matched your GPS coordinates + time of day, acting as a sort of "crowdsourced camera with no lens".
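
(The guts of that idea would still only be a handful of lines against Flickr's public API today. A rough sketch, where the API key and the "same time of day" window are my own guesses at how the original worked:)

    # Rough sketch of a "crowdsourced camera with no lens": given GPS + time
    # of day, fetch the nearest matching geotagged Flickr photo.
    import requests

    def lensless_shot(lat, lon, hour, api_key):
        params = {
            "method": "flickr.photos.search",   # Flickr's geo-aware search
            "api_key": api_key,                 # you'd need your own key
            "lat": lat,
            "lon": lon,
            "radius": 1,                        # km around the "camera"
            "extras": "url_m,date_taken",
            "format": "json",
            "nojsoncallback": 1,
            "per_page": 100,
        }
        photos = requests.get("https://api.flickr.com/services/rest/",
                              params=params).json()["photos"]["photo"]
        # Crude time-of-day filter: keep shots taken within two hours of "now"
        matches = [p for p in photos
                   if "url_m" in p and abs(int(p["datetaken"][11:13]) - hour) <= 2]
        return matches[0]["url_m"] if matches else None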

Fun to see a modern reincarnation of that idea.

(While digging around to find the above, I did find yet another camera project that does the opposite: "Matt Richardson's 'Descriptive Camera' sends your pictures to Amazon's Mechanical Turk and jobs out the task of writing a brief description of each image, then outputs the text on a thermal printer. It's a camera that captures descriptions, not pictures." https://boingboing.net/2012/04/25/descriptive-camera-prints-...)

If anything, this is a neat art piece asking "Why do people take pictures?".

I've often found that the gut feeling that makes you take the shot doesn't necessarily know whether you have the right composition or what the subject of the shot actually is; you just hit the shutter, knowing that this was the shot.

Then, when looking at the shots, you have all the time in the world to analyse and find meaning and beauty in this sliver of an instant.

By replacing this with a random seed, a 20-word prompt and GPS localisation, I doubt that anyone would have a personal connection to the image, or to the instant it was taken. It becomes a "clean", "sanitized" image that's only aesthetic (or arguably memetic, depending on your prompt), and is wholly separate from the person who took it.

You also lose all of the information that you cannot consciously perceive while taking the shot/writing the prompt, since you filter what you see through the lens of language, and then back into the visual.

It's neat!

  • A fun exercise if you're new to photography is taking photos and cropping them to, let's say, a quarter of their size. You often get photos giving you a completely new perspective on things you've seen many times.

    For example, it's easy to make a photo look like it was taken from a non-existent tower if you crop a regular photo down to its upper part. Or you can focus on details that you always skip over because there's something more eye-grabbing nearby.

    This is also why I love to see photos made by people who visited my city for the first time. They don't know which parts are pretty so they capture stuff I wouldn't think to photograph.

    • “giving you a completely new perspective on things you've seen many times.”

      This has always been my idea of what makes a great photograph.

  • I am a lousy photographer. I never get that "this was the shot" feeling.

    I assume I could develop it with practice. I just never did. I rarely had a camera growing up, and now that I have one with me all of the time, I treat it like the Instamatics I used to have. The pictures are terrible.

    At best, they're a kind of bookmark that I was at that place and saw that thing. It won't have an emotional resonance for anybody but me, and for me it's just bringing up the much better picture in my head. If they want a good picture of the thing, I'll go find one that somebody else took.

    All of which is to say... I really admire good photographers. I respect their work, and the diligence it took to develop their eye.

    This is, as you note, an interesting art piece on that same subject. I'm afraid I'm better with words, so this is mine.

    • It's really hard, maybe impossible, to arrive at a new place and instantly take a good shot.

      You need to stay there, move around, explore the space, the light, the interaction with the living.

      Eventually you find a good way to tell the story of the place. You get lucky. Sometimes it's fast, sometimes it isn't.


    • For what it's worth, maybe try to write down your thoughts on places that matter to you, when they do (kind of like a picture). Or a quick freeform poem.

Idea: AI police sketch artist.

3D printed case. Resting on a table. Witness describes the suspect. 10 seconds later it prints out an AI generated ink sketch.

I mean, why not? Sell it to every police station in your state. You can even put a cute little police badge emblem on the case.

  • These ideas are what we should be worried about, not the paperclip thing.

    • businesses, states, markets, any organization or other system that incorporates super-human agency is already AI, so far performed manually

      the progression of technological "AI" has just been the automation and acceleration of their logic and operations

      what paperclips are the police maximizing?

      everything the alarmists are afraid of has already happened

      1 reply →

  • Well, maybe it will be a plus then for minorities that all of the training data is of white people. I only joke, as this is a horrible idea all around, but I appreciate your creativity.

  • Not an expert, but my intuition is that most of the sketch artist's job is asking the right questions. I would assume that most people would have trouble describing close friends or even their partners from memory.

    Somewhat tangential: the "part of the brain" that is responsible for recognizing faces is incredibly well developed. That "peek-a-boo" game that you play with children? Every time you uncover your face, millions of neurons in the child's brain suddenly fire, giving them a jolt of "joy". Face recognition is so developed that we tend to see faces where there are none (face pareidolia).

    ... the point being that the brain does a lot of unconscious work recognizing individuals. Describing those individuals later consciously is pretty error prone.

  • As I understand it, most police departments already use some kind of computer-aided facial composite software instead of a traditional sketch artist. I can think of several dystopian reasons throwing AI into the mix might not be great, but the larger problem with this is: why does it need to be sketched in pen, and why does it need to be cute?

    Might make a neat coin-op caricature thing though.

  • yeah, something I thought about before: all of a sudden you're the most wanted person and the police just comply because that's what the system says

    would be crazy, probably a movie plot somewhere

    • I doubt it would change anything from what they do currently with police sketches; it would just be a faster, more accurate version. It's still just one piece of data they have to work from. The victim could describe the person to an AI, and it would update the 3D model on the fly.

      "White Male, Curly hair, mole on face"

      Generate.

      "Good, but he had a larger nose, and blue eyes."

      Generate.

      "He was a bit more gaunt, and had some stubble."

      Generate.

      "Nearly there. More pronounced check bones, and make the jaw a bit softer"

      Generate.

      In 5 minutes or less, you could get a near-exact picture of the potential criminal; something that might normally take an hour or more with a professional police sketch artist, and it could easily be in 3D too. There's tremendous value in that.
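
      Mechanically, that loop is almost trivial. A minimal sketch, where generate_image is a stand-in for whatever text-to-image backend you'd plug in (not any real product's API), returning a PIL-style image:

          # Sketch of the witness-feedback loop: each round of description is
          # appended to the running prompt and a fresh composite is generated.
          def composite_session(generate_image, base="police sketch, ink drawing"):
              details = []
              while True:
                  feedback = input("Describe the suspect (blank to finish): ").strip()
                  if not feedback:
                      break
                  details.append(feedback)
                  prompt = base + ", " + ", ".join(details)
                  image = generate_image(prompt)      # hypothetical backend call
                  image.save("composite_v%d.png" % len(details))
              return details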


    • This is basically the plot of "The Net" starring Sandra Bullock. A group of hackers steals her identity and creates a new one for her in various systems to cause the police to believe she is a wanted felon.

  • hahaha that's not a bad idea at all

    • It's intriguing, because I wonder how this would affect police work. I'm imagining things here, but I assume that when a profile sketch is developed, all officers using that image know that it's "just a sketch": it looks like a drawing, because it is one.

      So what happens if you now generate a photorealistic "sketch" based on a description? Are officers going to be sufficiently aware to know that's not an actual photo of the guy they are looking for, and act accordingly? Or is it going to heavily bias a manhunt effort? Moreover, what happens when the photo randomly ends up looking close to someone present in the dataset?


Cute and fun idea, but it'd be nice if it could take better indoor photos.

Jokes aside, I think this demonstrates that AI generation isn't great if you have something very specific in mind; at least, it looks like the generated picture deviates from the real one, though it's impressive that it's still so similar.

The virtual one doesn't load for me, unfortunately.

At first, I thought, "Cool, a sensor but no lens." Nope, no sensor.

  • Same here: "Hey, let the photons hit the sensor, and the AI do the lensing".

    Neat if at all possible; probably not. Perhaps make a cheap lens work like a good one, with some calibration on known images?
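
    The "calibration on known images" part is at least plausible without any AI: if you can estimate the cheap lens's blur (its point spread function) from shots of a known target, classic deconvolution recovers some of the lost detail. A toy sketch with scikit-image, where the Gaussian PSF stands in for a real measured one:

        # Toy example: "fix" a cheap lens by deconvolving with a calibrated PSF.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage import data, restoration

        sharp = data.camera().astype(float) / 255.0    # known calibration target
        blurred = gaussian_filter(sharp, sigma=2)      # what the cheap lens "sees"

        # PSF you'd estimate from the calibration shots; a Gaussian stand-in here
        x = np.arange(-8, 9)
        psf = np.exp(-(x[:, None]**2 + x[None, :]**2) / (2 * 2.0**2))
        psf /= psf.sum()

        # Wiener deconvolution: noticeably sharper, though far from a good lens
        recovered = restoration.wiener(blurred, psf, balance=0.01)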

I see no reason to consider this a camera. It's just an image generator.

  • Think it's more interesting as an art piece when we take into account the amount of AI and processing phone cameras do, which at times crosses into the territory of "just an image generator".

  • It's a camera in the general sense of "the result is an image", but not a camera in the sense of capturing physical properties of our world and converting them into bytes/pixels/analog film.

    I guess it's a "camera" as much as 3D software has "cameras" for controlling what the viewport is pointing towards.

    • > It's a camera in the general sense of "the result is an image"

      So is the midjourney discord bot a camera? Microsoft paint? Your printer?

      You could say it's kinda a camera because it takes pictures at your location. But it is in no way seeing anything (obviously, because there is no lens). It's just sliding some preset parameters around based on location.

      Again, not that it's not impressive, but "camera" seems like the wrong word for it.

Interesting, but (to me at least) the point of photography will always be the technical process behind it, and capturing a _specific_ aspect of the environment. This will turn out fine generic images, but if you want to capture more, that's not something you can just synthesise.

This isn't really a camera, it's a GPS hooked up to Stable Diffusion.
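
Which is a pipeline you could sketch in a dozen lines: reverse-geocode the coordinates into a text description of the place, then hand that to a text-to-image model. A rough approximation of the idea (not the project's actual code; the prompt template and checkpoint are placeholders):

    # Rough approximation of "a GPS hooked up to Stable Diffusion":
    # coordinates -> place description -> text-to-image prompt -> image.
    from geopy.geocoders import Nominatim
    from diffusers import StableDiffusionPipeline

    def lensless_photo(lat, lon, time_of_day="late afternoon", weather="overcast"):
        place = Nominatim(user_agent="no-lens-camera-sketch").reverse((lat, lon))
        prompt = (f"A photo taken at {place.address}, {time_of_day}, "
                  f"{weather} weather, 35mm, realistic")   # invented template
        pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
        return pipe(prompt).images[0]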

Unfortunately the major weakness of this camera is the one thing I actually use my camera for: photos of my kids (and other people I know). But for stuff like landmarks, I never even bother taking photos of them—I’ll never take a photo of Delicate Arch that is as good as one million others I can find by doing an image search.

The title is really misleading. I was expecting a camera without a lens, using some type of light-sensitive array and reconstructing the focused image using AI.

Very cool idea. I wonder if the design is a reference to the star-nosed mole. Appropriate for a blind camera!

  • Correct! From the site https://bjoernkarmann.dk/project/paragraphica

    “The star-nosed mole, which lives and hunts underground, finds light useless. Consequently, it has evolved to perceive the world through its finger-like antennae, granting it an unusual and intelligent way of "seeing." This amazing animal became the perfect metaphor and inspiration for how empathizing with other intelligences and the way they perceive the world can be nearly impossible to imagine from a human perspective.”

this kind of technology is just what we need. It will always take a photo of the past, with a more youthful self, hallucinated.

If you brought this back to the '80s and showed it to anyone, they'd think you were doing witchcraft.

  • This comment reminded me of some thoughts...

    When Stable Diffusion was released and I saw the whole model was ~4 GB, I instantly thought how insane it would be if it were somehow possible to take the model and a compiled binary of the inference code for x86_64 (without any modern extensions) on a DVD back in time, say to around 2005-2006, and the implications that would have, psychologically, on the world.

    You could load that model on a moderate desktop with a 64-bit Core 2 Duo and 8 GB of RAM and let it chug; without GPU acceleration, on CPU only, it would take ~2 hours to make an image (roughly the kind of run sketched at the end of this comment). But it would do it without an internet connection, without any inspectable code or heuristics... just... numbers, spitting out an image from text of whatever_people_want.

    It would be called a hoax. (In fact, I came across people on reddit when DALL-E 2 came out claiming it was somehow a trick or a hoax, and that all the images it produced must have somehow existed beforehand, prerendered.)

    Scientists who dissected the weights file and the machine code for the inference engine would eventually figure out it was a neural net, but how such a net was trained would be a complete mystery. Theories involving aliens would likely appear.

    I wonder if it would be allowed to be made public, just the knowledge that such a thing was working. It would scare people, I think. Having it make these images without anyone knowing how.

    Hell, it is kinda scary now, even knowing how it all works.
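
    For what it's worth, the CPU-only run I mean is just a few lines with today's tooling; the library names are of course the modern API, nothing a 2005 machine could actually have:

        # Forcing Stable Diffusion onto CPU only, full precision, and timing it.
        import time
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            torch_dtype=torch.float32,      # no half precision on an old CPU
        ).to("cpu")

        start = time.time()
        image = pipe("a photograph of whatever_people_want",
                     num_inference_steps=50).images[0]
        image.save("out.png")
        print(f"one image in {(time.time() - start) / 60:.1f} minutes")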

Exhibit A of how prompt engineering won't be a valued salaried skill, but it is a valuable skill to create a revenue stream from.