Not an expert, but it seems that the size of the logic wafer is locked to the size of the pixel wafer in Sony's stacked CMOS image sensors: https://fuse.wikichip.org/news/763/iedm-2017-sonys-3-layer-s... It also seems that 65nm was the current generation for CIS in 2018: https://www.azom.com/article.aspx?ArticleID=16321
Because image sensors need to have a certain size, not a certain number of transistors.
Surprised no one has mentioned yield. With a larger chip size, getting good yield becomes a challenge, especially on smaller nodes. 28nm is an extremely mature node with 8+ years of production volume (in excess of 3k wafer starts a week across multiple fabs).
Ding ding ding!
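To make the yield point concrete, here is a minimal sketch using the classic Poisson defect-yield model, Y = exp(-A * D0). The defect densities below are illustrative assumptions, not published fab numbers:

```python
import math

def poisson_yield(die_area_cm2, defect_density_per_cm2):
    """Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Illustrative defect densities (assumptions, not published fab data):
# a mature node such as 28nm vs. a leading-edge node early in its ramp.
D0_MATURE, D0_NEW = 0.05, 0.30  # defects per cm^2

for name, area_cm2 in [("small logic die, 0.5 cm^2", 0.5),
                       ("full-frame sensor, 8.64 cm^2", 3.6 * 2.4)]:
    print(f"{name}: mature node {poisson_yield(area_cm2, D0_MATURE):.1%}, "
          f"new node {poisson_yield(area_cm2, D0_NEW):.1%}")
# small logic die:   ~97.5% vs ~86.1%
# full-frame sensor: ~64.9% vs ~7.5%
```

The huge sensor die gets hit far harder by the immature node than the small logic die does, which is why a big sensor on a mature 28nm line is the safe bet.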
The physical size of the sensor is locked because it is tied to entire families of camera bodies, lenses, and other gear whose sizes are very difficult to change. So that is the given constraint, and the other parameters flow from it.
It's not that they really want this size/node. It's just the optimum within the constraints they're allowed to work within.
You can still keep the physical sensor size the same but use a smaller process. Of course, if you can use 450mm+ wafers at 28nm vs. 300mm wafers at <=10nm, then suddenly it's a massive price difference.
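For scale: usable wafer area grows with the square of the diameter, so the 450mm wafer above would carry 2.25x the area of a 300mm one (450mm never reached volume production, so this stays hypothetical):

```python
import math

# Usable area scales with the square of the wafer diameter (edge losses ignored).
for d_mm in (300, 450):
    area_cm2 = math.pi * (d_mm / 2 / 10) ** 2  # radius in cm
    print(f"{d_mm}mm wafer: ~{area_cm2:.0f} cm^2")  # ~707 and ~1590
print(f"area ratio: {(450 / 300) ** 2:.2f}x")  # 2.25x
```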
According to the Chinese article, 28nm is TSMC's most profitable node; they have mastered its production process.
Sony used to make their own CIS and only used TSMC for CMOS logic chips. But starting this year, Sony switched production of their CIS chips from their own foundry to TSMC (40nm). The 28nm move is a continuation of that collaboration. Both sides are taking this very seriously, with TSMC asking the surrounding factories to move out as quickly as they can.
As for why: the article claims that advances in 5G will lead to more IoT devices and self-driving cars, and these will need more CIS to sense their surroundings. Thanks to this trend, the CIS market has grown ~17% year over year. Samsung, Sony's biggest CIS competitor and 2nd in market share, is also targeting this market and is rapidly converting some of its DRAM fabs to CIS production.
I believe it's not just TSMC. I saw a presentation from one of the ARM architects a couple of years back where she said that 28nm would likely remain the most economical node for quite some time to come, as the nodes below it have increasingly expensive mask costs due to multi-patterning, or require very exotic light sources.
Maybe one day we'll get EUV for 16nm, making that the cheapest node instead.
IIRC from my Computer Architecture class, 28nm is the cheapest per transistor, so it’s stuck around for cases where performance isn’t important.
I don’t think there’s a fundamental reason for that, it’s just a feature of the processes that exist. As you get smaller, the savings from miniaturizing components are outweighed by the need for more complicated equipment.
Cost as in cost passed to the buyer?
I would think opex per transistor is always much lower on smaller processes. Even though per-wafer opex is higher on smaller nodes, the geometry (far more transistors per wafer) heavily favors them.
For digital electronics, yes. But not true for image sensors, since the sensor size (and thus the die size) is fixed. For example, a full-frame CMOS sensor has an imaging area of 36 × 24 mm, regardless of the process node, because full-frame is _defined_ as 36 × 24 mm.
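Since the format pins the die area, the only question is how many candidates fit on a wafer. A rough sketch using a common first-order dies-per-wafer approximation (scribe lines and edge exclusion ignored, so treat the count as an estimate):

```python
import math

def gross_dies(wafer_d_mm, die_area_mm2):
    """First-order dies-per-wafer estimate:
    DPW ~= pi*d^2 / (4*S) - pi*d / sqrt(2*S)."""
    return int(math.pi * wafer_d_mm ** 2 / (4 * die_area_mm2)
               - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))

# Full-frame is defined as 36 x 24 mm, so die area is fixed at 864 mm^2
# no matter which node prints the logic.
print(gross_dies(300, 36 * 24))  # ~59 gross dies per 300mm wafer
```

A node shrink changes the transistor budget inside those 864 mm^2, not the number of dies per wafer.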
The non-opex costs can also be very significant. This is from a few years ago (https://www.extremetech.com/computing/272096-3nm-process-nod...), but you can see significant differences in start-up costs per node.
In addition, it's just going to be harder to get fab time on more cutting-edge nodes. You're going to be competing for fab time against bigger competitors with more at stake, and that's going to cost you as well.
Finally, it's only true that the long-run opex per working transistor is lower on smaller processes. Many processes are actually more expensive on a "per working chip" basis at introduction than their predecessors. It's only as a process is worked on and improved that the per-chip cost (ignoring opex) actually beats the preceding node.
Usually a fab will start shipping product on a process once it's "good enough": the yields and costs (and therefore profit per unit) are sufficient to make sense, even if it's not necessarily cheaper than the old process.
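Putting those two points together, here is a rough sketch of cost per working die under the same Poisson yield model as above; the wafer prices, die counts, and defect densities are made-up illustrative values, not real quotes:

```python
import math

def cost_per_good_die(wafer_cost_usd, gross_dies, die_area_cm2, d0):
    """Cost per *working* die: wafer cost spread over yielded dies."""
    return wafer_cost_usd / (gross_dies * math.exp(-die_area_cm2 * d0))

# Mature node: cheap wafer, low defect density.
# New node: the shrink packs more (smaller) dies per wafer, but the
# wafer is pricier and early-ramp defect density is much higher.
mature = cost_per_good_die(3000, 600, 0.50, 0.05)  # ~$5.13
shrunk = cost_per_good_die(9000, 900, 0.33, 0.30)  # ~$11.04
print(f"mature: ${mature:.2f}, new node: ${shrunk:.2f} per good die")
```

With these invented numbers, the new node loses on a per-working-chip basis at introduction, exactly the pattern described above.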
Are there diminishing returns for an image sensor, given that so many of your features are going to be waaaaaay bigger, since so much of the chip will be taken up by photosites, which have to be an order of magnitude larger to sense the visible spectrum?
I don't know. The parts of the chip that are shuffling the data off the sensor obviously benefit from having the latest process node (minimizing rolling shutter is a huge deal), and reducing heat is also a big benefit (see Canon's R5 overheating problems), but maybe the design is gated by the photosite size?
Yes, the photosites are big, but FPGAs and CIS will merge, especially for AI applications. The photosites might even feed directly into an analog first layer, or each pixel might have its own ADC.
Or memory and CIS will merge, and each photosite gets its own ADC and a 4-byte memory location. Putting CIS sensors directly on the DRAM or PCIe bus means they could feed a DL model with higher bandwidth and lower latency. Even at 20Mpix with 4-byte pixels at 120 fps, that is just under 10GB/s, so it might DMA directly into the GPU. The other place to put a sensor is in RDMA hardware; or, if it is on the PCIe bus, it could talk directly to InfiniBand NICs.
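Sanity-checking the bandwidth figure above:

```python
pixels, bytes_per_px, fps = 20e6, 4, 120
print(pixels * bytes_per_px * fps / 1e9)  # 9.6 GB/s -- "just under 10GB/s"
```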
Imagine having a CIS device that is also a PCIe device. It could expose many device classes (network, memory, storage, display). It could DMA directly to a NIC or memory controller. In the OS you could trap a read to a specific inode, interpose the call, and return an image. No drivers necessary.
Or an NN that runs on the chip and detects objects, infers a depth map, colorizes, smooths, augments, upscales, and removes objects. With enough compute, you could run all the kernels, or a subset, on every frame.
28nm might be OK for a vanilla MIPI-interface CIS, but it would make a lot of sense for future innovative CIS designs to use the smallest node they can.
Cheapest and most stable to mass-produce. A lot of semiconductors don't need CPU-class feature sizes.
Why does anyone want a lower nm number?
Lower usually means faster and more advanced. So that's why I'm asking.
Cheaper and more yield/wafer maybe?
What's why you're asking?
They are getting lower... that's what the article is about?
You did... read the article, right?