TSMC Builds a Dedicated 28nm Fab for Sony Orders

5 years ago (image-sensors-world.blogspot.com)

CIS = CMOS Image Sensor, apparently. Gotta love macronyms/nested acronyms.

  • Thanks. I hate it so much when people do not explain the acronyms they use, at least once.

    It is like a map without a legend. It is so confusing.

    And I am an expert in electronics, so I can guess it; most people will read that as nonsense.

    • My wife was talking about OHP which she used when teaching while explaining something today. I was a bit confused as we didn’t have anything called OHP when I was in school. After a bit of explaining we realised she had only ever known it as OHP. She has never stopped to think about what it was an acronym for. Turns out she was talking about the overhead projector which we had plenty of in school. Reminded me of the five monkeys experiment https://workingoutloud.com/blog/the-five-monkeys-experiment-...

      5 replies →

    • That is the part I don't understand.

      CMOS is widely known, and in the context of image sensors, as long as you write CMOS everyone should understand. But CIS isn't, not to mention it keeps repeating itself, as in this sentence below:

      "In the face of Samsung's close pursuit, Sony decided to expand its partnership with TSMC, hoping to win 60% of the global Market Share of CIS Image Sensors by 2025."

      What exactly is CIS Image Sensors? CMOS Image Sensor Image Sensor?

      Just call it CMOS Image Sensor. Not everything has to be an acronym.

      1 reply →

  • A good way of dealing with this kind of thing in the future would be to include some clarifying words in the post title to HN. Like "TSMC Builds a Dedicated 28nm Fab for Sony CMOS image sensors".

    HN gathers all kinds of people from the technology world. We can't all be expected to know all of the acronyms from all of the various sectors.

    The parent comment received 15 upvotes in a few hours, so I'm guessing most people on here don't know what "CIS" means in this context.

    As always, tailor your message to the likely reader.

    • > The parent comment received 15 upvotes in a few hours, so I'm guessing most people on here don't know what "CIS" means in this context.

      People are more likely to have opinions on writing style than imaging chip economics and manufacturing.

      By posting on HN to complain about writing style, that comment took the discussion off topic and provided plenty of room for people to bikeshed.

      2 replies →

  • and then there's gnu hurd...

    It's time [to] explain the meaning of "Hurd". "Hurd" stands for "Hird of Unix-Replacing Daemons". And, then, "Hird" stands for "Hurd of Interfaces Representing Depth". We have here, to my knowledge, the first software to be named by a pair of mutually recursive acronyms. — Thomas (then Michael) Bushnell

  • AFAIK, the usage of CIS became common when CMOS based sensors started replacing CCD (charge coupled device) sensors in scanners (and cameras?).

Sony is trying to own the market for image sensors.

The Nikon D850's image sensor was designed by Nikon but manufactured by Sony.

They have a 'firewall' in place between the team doing the custom-contracted fab work for Nikon and the team that designs Sony's own sensors, so that the Nikon IP stays only with Nikon.

See https://m.dpreview.com/news/1234108119/nikon-d850-sensor-con...

Only Canon at this point has stuck with their own image sensor IP and designs, as I understand it.

Not sure where Ricoh/Pentax gets their sensors from, it's believed some are Samsung and some are Sony.

Why does Sony want 28nm specifically?

  • Because image sensors need to have a certain size, not a certain number of transistors.

    • Surprised no one has mentioned yield. With larger chip size, getting good yield becomes a challenge, especially with smaller nodes. 28nm is an extremely mature node with 8+ years of prod volume (in excess of 3k wafer starts a week in multiple fabs).

    • Ding ding ding!

      The physical size of the sensor is locked because it is tied to entire families of camera, lens, etc sizes that are very difficult to change. So that is the given constraint, and the other parameters flow from it.

      It's not that they really want this size/node; it's just the optimum within the constraints they have to work with.

      4 replies →
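The yield point above can be made concrete with the standard Poisson yield model, Y = exp(-D*A): the bigger the die and the higher the defect density, the faster yield collapses. A minimal sketch; the defect densities and die areas below are hypothetical, purely for illustration:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Fraction of dice with zero defects under a Poisson defect model."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

full_frame = 8.6  # cm^2: roughly a 36x24 mm full-frame sensor die (hypothetical)
logic_die = 0.8   # cm^2: a typical large logic die (hypothetical)

for d in (0.05, 0.1, 0.3):  # defects/cm^2: mature node vs. newer node
    print(f"D={d}/cm^2: full-frame yield {poisson_yield(d, full_frame):.1%}, "
          f"logic die yield {poisson_yield(d, logic_die):.1%}")
```

Even a modest rise in defect density hits the huge sensor die far harder than a small logic die, which is exactly why a mature node with years of defect reduction behind it is attractive here.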

  • According to the Chinese article, for TSMC the 28nm node is their most profitable node. They have mastered its production process.

    Sony used to make their own CIS and only used TSMC for CMOS logic chips. But starting this year, Sony switched production of their CIS chips from their own foundry to TSMC (40nm). The 28nm move is a continuation of that collaboration. Both sides are taking this very seriously, with TSMC asking the surrounding factories to move out as quickly as they can.

    As for why, the article claims that advances in 5G will lead to more IoT devices and self-driving cars. These devices will need more CIS to sense their surroundings. Due to this trend the CIS market has grown ~17% year over year. Samsung, Sony's biggest CIS competitor and 2nd in market share, is also targeting this particular market, and is rapidly converting some of its DRAM foundries in Taiwan to CIS production.

    • I believe it's not just TSMC. I saw a presentation from one of the ARM architects a couple of years back where she said that 28nm would likely stay the most economical node for quite some time to come, as the nodes below it have increasingly expensive mask costs due to multi-patterning, or get into very exotic light sources.

      1 reply →

  • IIRC from my Computer Architecture class, 28nm is the cheapest per transistor, so it’s stuck around for cases where performance isn’t important.

    I don’t think there’s a fundamental reason for that, it’s just a feature of the processes that exist. As you get smaller, the savings from miniaturizing components are outweighed by the need for more complicated equipment.

    • Cost as in cost passed to the buyer?

      I would think opex per transistor is always much lower on smaller processes. Even with the higher per-wafer opex of smaller nodes, the geometry (far more transistors per wafer) heavily favors them.

      5 replies →
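A toy model of the cost-per-transistor argument in this sub-thread: if per-wafer cost rises faster than transistor density past a certain node, cost per transistor bottoms out at a mid-range node. All the cost and density numbers below are made up purely to illustrate the shape of the trade-off, not real foundry pricing:

```python
# Hypothetical relative numbers, for illustration only.
nodes = {
    # node: (relative_wafer_cost, relative_transistor_density)
    "40nm": (1.0, 1.0),
    "28nm": (1.3, 1.9),
    "16nm": (2.6, 3.2),
    "7nm":  (5.5, 7.0),
}

# Cost per transistor is wafer cost divided by density.
cost_per_transistor = {n: c / d for n, (c, d) in nodes.items()}
cheapest = min(cost_per_transistor, key=cost_per_transistor.get)

for n, cpt in cost_per_transistor.items():
    print(f"{n}: relative cost/transistor = {cpt:.2f}")
print(f"cheapest node in this toy model: {cheapest}")
```

With these (invented) inputs the minimum lands at 28nm: shrinking further buys more transistors per wafer, but the wafer itself gets disproportionately more expensive due to multi-patterning and equipment costs.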

  • Are there diminishing returns for an image sensor, given that so many of your features are going to be waaaaaay bigger? So much of the chip will be taken up by photosites, which have to be an order of magnitude larger in order to sense the visible spectrum.

    I don't know. The parts of the chip that are shuffling the data off the sensor obviously benefit from having the latest process node (minimizing rolling shutter is a huge deal), and reducing heat is also a big benefit (see Canon's R5 overheating problems), but maybe the design is gated by the photosite size?

    • Yes, the photosites are big, but FPGAs and CIS will merge, especially for AI applications: the photosites might even feed directly into an analog first layer, or each pixel might have its own ADC.

      Or memory and CIS will merge, and each photosite gets its own ADC and a 4-byte memory location. Having CIS sensors directly on the DRAM or PCIe bus means they could feed a DL model with higher bandwidth and lower latency. Even at 20Mpix with 4-byte pixels at 120 fps, that is just under 10GB/s; it might DMA directly into the GPU. So the other place to put a sensor is in RDMA hardware, or if it is on the PCIe bus, it could talk directly to Infiniband NICs.

      Imagine having a CIS device that is also a PCIe device. It could present many device classes (network, memory, storage, display). It could DMA directly to a NIC or memory controller. In the OS you could trap a read to a specific inode, interpose the call, and return an image. No drivers necessary.

      Or an NN that runs on the chip and detects objects, infers a depth map, colorizes, smooths, augments, upscales, removes objects; with enough compute you could run all the kernels, or a subset, on every frame.

      28nm might be OK for a vanilla MIPI-interface CIS, but it would make a lot of sense for future innovative CIS designs to use the smallest node they can.
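A quick sanity check of the "just under 10GB/s" figure above, using the numbers straight from the comment (20 Mpix, 4 bytes per pixel, 120 fps):

```python
# Bandwidth of the hypothetical sensor stream from the comment above.
mpix = 20e6          # pixels per frame
bytes_per_pixel = 4  # 4-byte pixels
fps = 120            # frames per second

bandwidth_bytes = mpix * bytes_per_pixel * fps
print(f"{bandwidth_bytes / 1e9:.1f} GB/s")  # prints "9.6 GB/s"
```

9.6 GB/s does fit on a single modern PCIe link: a PCIe 3.0 x16 slot delivers roughly 16 GB/s of usable bandwidth, and PCIe 4.0 x16 roughly double that, so the direct-to-GPU DMA idea is at least plausible on bandwidth grounds.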

This is really interesting from a business relationship perspective. Sony helped TSMC build and install a new fab that TSMC will run exclusively for Sony. This is like a cloud provider bringing up a new DC just for one customer using that customer's equipment.

Is TSMC just going to start rolling up every mom and pop fab? Are they the Sinclair of Silicon?

I can’t wait for the same efficiencies we’ve seen in mobile CPUs to translate to other areas, like image sensors. The latest Sony a7S III is supposed to have a giant heat sink as one of its flagship features; just imagine if it were built on TSMC 5nm tech.

  • Sensors such as CCDs are largely analog devices (up to the A-to-D converter), where thermal noise, dark current, etc. matter. Cooling a CCD via active cooling or a heat sink will help lower the noise floor and improve the signal-to-noise ratio and thus picture quality.
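That temperature/noise trade-off can be sketched with a toy SNR model (all numbers hypothetical): photon shot noise scales as the square root of the signal, while dark current drops roughly by half for every ~6-8 °C of cooling, which is why the heat sink helps.

```python
import math

def snr(signal_e: float, dark_e: float, read_noise_e: float) -> float:
    """Simple SNR model: noise variances (shot, dark, read) add."""
    noise = math.sqrt(signal_e + dark_e + read_noise_e**2)
    return signal_e / noise

signal = 10_000.0  # photoelectrons collected in one photosite (hypothetical)
read = 3.0         # read noise in electrons RMS (hypothetical)

warm_dark = 400.0                      # dark-current electrons at warm temp
cool_dark = warm_dark / 2 ** (12 / 6)  # ~12 C cooler => roughly 1/4 the dark current

print(f"warm SNR: {snr(signal, warm_dark, read):.1f}")
print(f"cool SNR: {snr(signal, cool_dark, read):.1f}")
```

The effect is small for bright, short exposures, but for long exposures (where dark current accumulates and can rival the signal) cooling makes a much bigger difference.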

  • My understanding is that higher density leads to greater noise. They prefer to use larger transistors, which produce cleaner images.