Comment by perardi

5 years ago

Are there diminishing returns for an image sensor, given that so many of your features are going to be waaaaaay bigger? So much of the chip will be taken up by photosites, which have to be an order of magnitude larger to sense the visible spectrum.

I don't know. The parts of the chip that are shuffling the data off the sensor obviously benefit from having the latest process node (minimizing rolling shutter is a huge deal), and reducing heat is also a big benefit (see Canon's R5 overheating problems), but maybe the design is gated by the photosite size?

Yes, the photosites are big, but FPGAs and CIS will merge, especially for AI applications. The photosites might even feed directly into an analog first layer, or each pixel will have its own ADC.

Or memory and CIS will merge, and each photosite gets its own ADC and a 4-byte memory location. Putting CIS sensors directly on the DRAM or PCIe bus means they could feed a DL model with higher bandwidth and lower latency. Even at 20 Mpix with 4-byte pixels at 120 fps, that is just under 10 GB/s, which could be DMAed directly into the GPU. So the other place to put a sensor is in RDMA hardware; or, if it is on the PCIe bus, it could talk directly to InfiniBand NICs.
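
A quick back-of-the-envelope check of that figure, as a minimal Python sketch:

    # 20 Mpix, 4 bytes/pixel, 120 fps -- the numbers from the comment above.
    pixels, bytes_per_pixel, fps = 20_000_000, 4, 120
    rate = pixels * bytes_per_pixel * fps   # bytes per second
    print(f"{rate / 1e9:.1f} GB/s")         # -> 9.6 GB/s, just under 10 GB/s
    # For scale: PCIe 3.0 x16 carries roughly 15.75 GB/s of payload, so one
    # such stream fits on a single slot with headroom.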

Imagine a CIS device that is also a PCIe device. It could expose many device classes (network, memory, storage, display). It could DMA directly to a NIC or memory controller. In the OS you could trap a read to a specific inode, interpose the call, and return an image. No drivers necessary.
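
To make the interposition idea concrete, here is a minimal userspace sketch using FUSE (via the fusepy package); the mount point, file name, and grab_frame() stand-in are all hypothetical, and a real CIS-as-PCIe device would answer the read in hardware with no filesystem in the loop:

    #!/usr/bin/env python3
    # Every read() on /mnt/camera/frame.raw is trapped and answered with
    # "sensor" data instead of stored file contents. pip install fusepy.
    import errno
    import stat
    import time

    from fuse import FUSE, FuseOSError, Operations

    FRAME_BYTES = 20_000_000 * 4  # hypothetical 20 Mpix, 4 bytes/pixel

    def grab_frame():
        # Stand-in for reading a frame out of the sensor's memory window.
        return bytes(FRAME_BYTES)

    class CameraFS(Operations):
        def getattr(self, path, fh=None):
            now = time.time()
            if path == '/':
                return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2,
                            st_ctime=now, st_mtime=now, st_atime=now)
            if path == '/frame.raw':
                return dict(st_mode=stat.S_IFREG | 0o444, st_nlink=1,
                            st_size=FRAME_BYTES,
                            st_ctime=now, st_mtime=now, st_atime=now)
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return ['.', '..', 'frame.raw']

        def read(self, path, size, offset, fh):
            # The interposition: no stored data, just the live frame.
            return grab_frame()[offset:offset + size]

    if __name__ == '__main__':
        FUSE(CameraFS(), '/mnt/camera', foreground=True, ro=True)

After mounting, cat /mnt/camera/frame.raw > shot.raw reads a frame with no camera driver involved, which is the spirit of the idea (FUSE itself still relies on a kernel module, so the real win needs the hardware version).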

Or an NN that runs on the chip and detects objects, infers a depth map, colorizes, smooths, augments, up-reses, removes objects. With enough compute, you could run all the kernels, or a subset, on every frame.
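
A toy sketch of that dispatch pattern, with numpy stand-ins where the on-chip NNs would run; the kernel names and the process() helper are illustrative, not a real API:

    import numpy as np

    # Placeholder "kernels"; on real hardware each would be an NN sitting
    # next to the photosites. These numpy stand-ins keep the sketch runnable.
    KERNELS = {
        "depth":  lambda f: f.mean(axis=-1),              # fake depth map
        "smooth": lambda f: (f[:, :-1] + f[:, 1:]) / 2,   # crude blur
        "upres":  lambda f: np.repeat(np.repeat(f, 2, 0), 2, 1),
    }

    def process(frame, enabled=("depth", "smooth", "upres")):
        # Run all the kernels, or a subset, on every frame, as compute allows.
        return {name: KERNELS[name](frame) for name in enabled}

    frame = np.random.rand(1080, 1920, 3).astype(np.float32)
    outputs = process(frame, enabled=("depth", "smooth"))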

28nm might be OK for a vanilla MIPI-interface CIS, but it would make a lot of sense for future innovative CIS sensors to use the smallest node they can.