Sony continues to increase the number of layers on its latest CMOS sensor

By dispersing the components across additional layers, a better-quality image can be extracted from a sensor of a given surface area than before.

For years, Sony has been producing image sensors whose components are placed on separate layers, which allows for better image quality without increasing the sensor's size. This is especially important for smartphone cameras, where manufacturers cannot fit very large sensors, but these innovations can of course also be useful in traditional cameras.


(source: Sony)

The Japanese company has now taken another step forward: Sony has announced the development of its newest sensor, whose major innovation is the separation of the photodiodes that perform light detection from the transistors that control them. On current sensors these components sit next to each other within a single layer, but in the new design most of the transistors are moved one level down, to a layer of their own.

This is a logical decision for two reasons. First, it leaves more room for the photodiodes, which allows the sensor's saturation signal level to be doubled; in practice this means each pixel can collect more light before clipping, widening the dynamic range and reducing overexposure in high-contrast scenes. Second, the control transistors can be spread over a larger area, so the manufacturer can also use larger transistors for amplifying the signals, which benefits the noise level of the images, particularly in low light.


(source: Sony)

It is not yet clear whether Sony will first debut the new sensor in smartphones or in its own Alpha series cameras, but there is a good chance that the technology will sooner or later appear in both categories.

Source: prohardver.hu