Saturday, September 15, 2012

The Science of Photography -- Part Eleven


In the race to invent and produce better and denser photo sensors for use in digital cameras, large corporations and chip manufacturers are combining physics and engineering in state-of-the-art ways. At stake is a marketplace that shipped 139 million cameras in 2010, not counting the cameras in cell phones. This was a 7% increase over 2009. I couldn't find the total revenue number, but most of these cameras cost more than $100, many more than $200, and some more than $1,000. So you do the math. It's a big market. There are billions of dollars in revenue to be had.

So how do these little sensors work? Well, it takes degrees in physics and electronics engineering and computer science to understand them in depth. But let me give a quick overview of how they work. I'll start with the individual active element in the sensor. In modern cameras this is a photodiode. That is, a diode, a two-wire device that allows current to flow more easily in one direction than the other. A photodiode conducts in a manner that depends on the amount of light shining on it. Combined with an electronic circuit with transistors and other components, the change in conductivity of the photodiode can be used to measure the amount of light.

That is where we'll start: with the design of the photodiode. Then we'll examine how these are arranged on a substrate or "chip" with optical filters and what-not to provide a light sensor that can distinguish color. Focus a sharp image onto this sensor chip with a camera lens, capture each individual photodiode's response to the light along with knowledge of where that diode is located on the sensor, run it through more amplifiers and electronic filters, and convert it to a digital value -- a number. Then process all these individual number values with computer programs and -- voila -- you get a digital image good for saving and viewing and printing -- just like film, only without the need to develop.


To begin with, I assume you all realize that normal or "white" light is made up of a combination of all the colors. A rainbow is a good example of splitting up light into individual colors. Although most people name seven colors of the rainbow, there is actually a continuum of colors from the deepest red to violet. These different colors are different frequencies of light, with red the lowest frequency and blue or "violet" the highest.

Light consists of electromagnetic waves, just like radio waves, only the frequency of light is much higher than either the AM band or the FM band or even the RADAR band. Light is of such a high frequency that it exhibits some characteristics of particles. This dual wave/particle behavior is part of what is called quantum physics and names like Einstein are associated with the discoveries in the early twentieth century that culminated in atomic bombs and nuclear power plants.

Light isn't radioactive, but it is a form of radiation. Radiation of a frequency below red is called Infrared, and heat is radiated as Infrared waves. Above the frequency of visible light is the Ultraviolet, which is how we get sunburns. Above Ultraviolet are X-rays, which can pass right through flesh to photograph bones, but X-rays can really give you a bad sunburn.

It is time to dig into your favorite EE text book and review the facts of solid state physics.


Silicon Image Sensors

Nearly every modern image sensor is produced using silicon. The main reason is that silicon, being a semiconductor, has an energy gap between its valence band and conduction band, referred to as the bandgap, that is perfect for capturing light in the visible and near-infrared spectrum. The bandgap of silicon is 1.1 electron volts (eV). If a photon hits silicon, and that photon has an energy of more than 1.1 eV, then that photon will be absorbed in the silicon and produce an electrical charge, subject to the quantum efficiency of silicon at that wavelength. The energy of a photon is Planck's constant, h, times the speed of light, c, divided by the wavelength of the light, λ. Visible light has wavelengths between about 450 nm and 650 nm, which correspond to photon energies of 2.75 eV and 1.9 eV respectively. These wavelengths are absorbed exponentially from the surface based on their energy, so blue light (450 nm) is mostly absorbed near the surface, while red light penetrates deeper into the silicon.
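
If you want to check those photon-energy numbers yourself, here is a minimal sketch in Python. It just evaluates E = hc/λ for a few wavelengths using the standard values of Planck's constant and the speed of light; the 1100 nm entry is roughly where silicon's 1.1 eV bandgap cuts off absorption.

# Evaluate the photon-energy formula E = h*c / wavelength for a few
# wavelengths, converting the result from joules to electron volts.

H = 6.626e-34   # Planck's constant, joule-seconds
C = 2.998e8     # speed of light, meters per second
EV = 1.602e-19  # joules per electron volt

def photon_energy_ev(wavelength_nm):
    """Photon energy in electron volts for a wavelength given in nanometers."""
    return H * C / (wavelength_nm * 1e-9) / EV

for nm in (450, 550, 650, 1100):
    print(f"{nm} nm -> {photon_energy_ev(nm):.2f} eV")

# Prints roughly 2.76, 2.25, 1.91, and 1.13 eV. Wavelengths much beyond
# 1100 nm carry less energy than silicon's 1.1 eV bandgap, so silicon
# sensors stop responding in the deeper infrared.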


[Figure: cross section of a silicon photodiode, showing the PN junction and the depletion region between x1 and x2]


Most silicon photodetectors are based on a diode structure like the one shown in the figure. The crystalline structure of pure silicon is doped with different types of materials to produce silicon that uses holes (positive charges, hence p-type) or electrons (negative charges, or n-type) as the majority carriers. When these two types of silicon are joined, forming what is called a PN junction, they create a diode. Furthermore, a region depleted of mobile charge forms around the junction, called the depletion region. This is shown in the figure as the area between x1 and x2.

When a photon hits the silicon, it penetrates to a depth that depends on its wavelength and, if it is absorbed, creates an electron-hole pair at the point of absorption. A simple view would be that the photon knocks an electron off the outer shell of a silicon atom, creating both a free electron and a "hole" where the electron was. Both the free electron and the hole are available to be part of an electric current. These materials are called "semiconductors" precisely because they are not full conductors such as copper wire, nor are they full insulators like the plastic surrounding a copper wire. They are in a "middle area" where their resistance or conductivity can change based on, in this example, the number of photons striking the material.

The result of this absorption is an electrical current (a "diffusion current" if the absorption is not in the depletion region and a "drift current" if it is), with the total amount of current depending on how many photons are hitting the sensor as well as on the area of the sensor. A larger sensor can collect more photons and can be more sensitive to lower light intensities.

There you have it. Several things to note. First is the fact that light comes in quanta called photons. It is the actual photons of light that cause the electrical change in the photodiode. So there is a low end to light sensitivity. Depending on the physical size of the photodiode, you can get light levels so low that only one photon strikes the detector during the shutter opening. You just can't detect light lower than that. To increase light sensitivity, you must make the individual diode bigger.
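
To get a feel for how few photons a small pixel actually collects, here is a rough back-of-the-envelope sketch in Python. Every number in it (the irradiance, the pixel pitch, the exposure time, the quantum efficiency) is an illustrative value picked just for the example, not a measurement of any real camera.

# Estimate how many electrons one square pixel generates in one exposure.
# All input values below are hypothetical, for illustration only.

H, C = 6.626e-34, 2.998e8      # Planck's constant, speed of light

def electrons_collected(irradiance_w_m2, pixel_pitch_um, exposure_s,
                        wavelength_nm=550, quantum_efficiency=0.5):
    """Photons landing on one pixel during the exposure, times the QE."""
    pixel_area = (pixel_pitch_um * 1e-6) ** 2           # square meters
    energy_per_photon = H * C / (wavelength_nm * 1e-9)  # joules
    photons = irradiance_w_m2 * pixel_area * exposure_s / energy_per_photon
    return photons * quantum_efficiency

# A dim scene, a small phone-camera-sized pixel, a 1/30 second exposure:
print(electrons_collected(1e-3, pixel_pitch_um=1.4, exposure_s=1/30))  # roughly 90
# The same scene and exposure with a much larger pixel:
print(electrons_collected(1e-3, pixel_pitch_um=6.0, exposure_s=1/30))  # roughly 1660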

Basically the light causes a change in the resistivity or electrical resistance of the diode. That's how CMOS detectors work. With Charge Coupled Devices (CCD) it is a similar mechanism, but what is measured is the change in electrical capacitance or charge caused by the light. In addition, CCD devices pass the charge down the line like a bucket brigade. This design is called a "shift register" architecture. In the early days of sensors, this charge coupling made for simpler circuit design on the integrated circuit. Modern circuit manufacturing makes it possible to add circuits or "wiring" to the sensors, and that is how CMOS sensors are connected. For that and other reasons I discussed in the previous chapter of "The Science ...," CMOS is used in most modern digital cameras.
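
Here is a toy sketch of that bucket-brigade idea, just to make the "shift register" picture concrete. It is a conceptual illustration in Python, not anything resembling real sensor readout firmware: each list element stands for one pixel's charge packet, and each clock cycle shifts every packet one step toward the output end, where it is measured.

# Simulate reading out one CCD row, bucket-brigade style.

def read_out_row(charges):
    """Shift a row of charge packets toward the output, one clock per pixel."""
    row = list(charges)
    readings = []
    for _ in range(len(row)):
        readings.append(row[-1])   # measure the packet at the output end
        row = [0] + row[:-1]       # every remaining packet moves one step over
    return readings

print(read_out_row([12, 50, 7, 33]))   # -> [33, 7, 50, 12]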

Photon energy is measured in electron volts (which are different from the 110 volts in your wall socket). You should also know that the higher the frequency of the light, the higher the energy. Violet light has more energy than red light. However, we typically describe light in terms of its wavelength rather than its frequency. Wavelength is inversely related to frequency (it equals the speed of light divided by the frequency) and is given in nanometers or nm. Nano is ten to the minus 9, or a thousandth of a millionth. That's pretty small! The higher energy with higher frequency (or shorter wavelength) is why Ultraviolet can burn the skin and X-rays can do even more damage.

Second, notice that the depth of light penetration depends on color or light frequency (or photon energy), with blue only penetrating part way and red going the deepest into the photodiode. (The higher energy of blue light means it is likely to interact with the semiconductor material sooner, at a shallower depth in the diode.) Recall that the Foveon X3 sensor is based on that fact: there are three photodiodes stacked on top of each other.
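
The exponential absorption works out something like the following sketch. The characteristic absorption depths plugged in for blue, green, and red are rough illustrative figures, not measured silicon data, but they show why stacking three diodes at different depths (as the Foveon X3 does) can separate color.

# Fraction of light absorbed in the top 2 microns of silicon, assuming the
# remaining intensity at depth x falls off as exp(-x / d) for a
# wavelength-dependent absorption depth d. The depths are illustrative only.
import math

def fraction_absorbed(depth_um, absorption_depth_um):
    return 1.0 - math.exp(-depth_um / absorption_depth_um)

for color, d in (("blue", 0.4), ("green", 1.5), ("red", 3.0)):
    print(f"{color}: {fraction_absorbed(2.0, d):.0%} absorbed in the top 2 microns")

# Roughly 99% of the blue, 74% of the green, and 49% of the red is absorbed
# near the surface, so a diode buried deeper in the silicon sees mostly red.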

Most CMOS sensors, however, use colored filters over the photodiodes, each admitting only blue or green or red light. This reduces the total light or energy striking an individual photodiode, and it takes several individual sensors to fully resolve the color of one pixel.

Third is the fact that the diode also detects infrared light. That light is too low a frequency to be visible to the eye, although infrared cameras can be used to detect heat. Normally the infrared is detrimental to the camera’s image process, so we typically add an IR (Infrared) filter in front of the diode to block the light that is not part of the visual image. This filter is a piece of plastic (or glass) that blocks the IR radiation.

Now that we understand how the photodiode detects light (and I’m certain you understood all about bandgaps and n-type and p-type semiconductor materials), we can describe how the individual photodiodes get packaged in the millions to make a digital camera sensor.

Pixel Layout

The most common measurement used in comparing image sensors is the pixel count, usually expressed in megapixels, or millions of pixels. The number of pixels in a sensor is a function of how large a chip area is available. Another metric that goes with pixel count is the pixel pitch, the distance in microns between the centers of two adjacent photodiodes. Since pixels are created in repeated arrays, the pitch defines how much area each pixel in an array occupies. Typically, the smaller the pitch, the more pixels that can fit on the sensor. Smaller pixels, however, will not collect as much light, and thus may not be desirable for short shutter speeds or for sensors designed for low light.
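
The arithmetic connecting chip area, pixel pitch, and megapixel count is simple enough to show directly. The sensor dimensions and pitches in this sketch are illustrative numbers, not any particular camera's specification.

# How many pixels fit on a chip of a given size at a given pixel pitch.

def megapixels(sensor_width_mm, sensor_height_mm, pitch_um):
    cols = (sensor_width_mm * 1000) // pitch_um   # pixels across
    rows = (sensor_height_mm * 1000) // pitch_um  # pixels down
    return cols * rows / 1e6

# The same hypothetical 23.6 x 15.7 mm chip at two different pitches:
print(megapixels(23.6, 15.7, pitch_um=6.0))  # about 10 megapixels
print(megapixels(23.6, 15.7, pitch_um=4.0))  # about 23 megapixels, but each
                                             # pixel collects less light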

The pixel fill factor is another metric that deals with how a pixel is physically laid out. The fill factor is the percentage of the pixel area that is consumed by the actual photodiode. A fill factor of 100% would be ideal, as then all of the pixel area is used for light collection. Some chips can achieve this by using a backside-illuminated design, where the readout circuitry is placed below the photodiode, so the diode can occupy the entire pitch of the pixel. Most sensors do not have a 100% fill factor, and a sensor designer has many considerations to weigh when deciding how much of the pixel to leave for the diode and how much the readout or charge-transfer circuitry will occupy. Microlenses may be used to improve the effective fill factor. Microlenses are tiny lenses placed directly on top of each pixel; they focus the incident light so that more of it hits the photosensitive portion of the pixel. Geometry has something to do with it, too. You can't stack circles as close together as squares. Remember that these photodiodes are on the order of 1 to 10 microns across, so it is harder to manufacture certain shapes.
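
Here is the fill-factor arithmetic in the same spirit. The photodiode area, the pixel pitch, and the fraction of light a microlens recovers are all hypothetical numbers chosen only to make the calculation visible.

# Fill factor: the share of each pixel's area occupied by the photodiode,
# and how a microlens raises the effective light-collecting fraction.
# All values are hypothetical.

def fill_factor(photodiode_area_um2, pixel_pitch_um):
    return photodiode_area_um2 / pixel_pitch_um ** 2

ff = fill_factor(photodiode_area_um2=16.0, pixel_pitch_um=6.0)
print(f"fill factor: {ff:.0%}")   # 16 / 36, about 44%

# Suppose a microlens redirects 80% of the light that would have missed the
# photodiode onto it; the effective collection fraction rises accordingly.
effective = ff + 0.8 * (1 - ff)
print(f"effective fill factor with microlens: {effective:.0%}")   # about 89%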

Given the limited ability to add illustrations and photographs to an FB note, at this point I'm going to send you to another web site. I won't copy the text here, but if you want to see several nice drawings of how the individual photodiodes in a typical photo sensor are arranged, then check this web site.

http://www.cambridgeincolour.com/tutorials/camera-sensors.htm

Recall I said that the RGB array has twice as many green sensors as red or blue. This web site gives a very good, non-technical explanation of how that is done, and you can see why it takes several photodiodes to actually create one color pixel.

This site also has a very good description of how the Bayer Array is processed. The array is named after the Kodak scientist who developed it. I said that Kodak had many key digital camera patents ... this is one. The color filter pattern itself is called a Bayer filter, and the processing that reconstructs full color from it is called a demosaicing algorithm. Read carefully and you'll understand why the Bayer approach gives roughly half the total photodiode count as resolution instead of the one-third you would expect with three (or even four) detectors per pixel. Anyway, remember they all lie and count every photodiode as a pixel, while Foveon X3 triples its diode count to try to match them in the literature.
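
If you are curious what demosaicing looks like in practice, here is a deliberately crude sketch that fills in the missing colors of a tiny made-up RGGB mosaic by averaging neighboring samples. Real demosaicing algorithms are far more sophisticated; this only shows why every location in the raw data has one measured color and two estimated ones.

# Toy demosaicing of a 4x4 RGGB Bayer mosaic (made-up sample values).
#   R G R G
#   G B G B
#   R G R G
#   G B G B
raw = [
    [110,  90, 120,  95],
    [ 85,  40,  88,  42],
    [115,  92, 125,  96],
    [ 87,  41,  90,  44],
]

def bayer_color(row, col):
    """Which color filter sits over this photodiode in an RGGB layout."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def neighbor_average(color, row, col):
    """Average the surrounding samples that were taken through 'color'."""
    samples = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if (dr or dc) and 0 <= r < 4 and 0 <= c < 4 and bayer_color(r, c) == color:
                samples.append(raw[r][c])
    return sum(samples) / len(samples)

def demosaic_pixel(row, col):
    """Reconstruct a full (R, G, B) triple for one location."""
    rgb = {}
    for color in ("R", "G", "B"):
        if bayer_color(row, col) == color:
            rgb[color] = raw[row][col]                      # measured here
        else:
            rgb[color] = neighbor_average(color, row, col)  # interpolated
    return rgb["R"], rgb["G"], rgb["B"]

print(demosaic_pixel(1, 1))   # a "B" site: the R and G values are interpolated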

So now you should have a better understanding of what is going on inside the photo sensor. Either that or your eyeballs have rolled to the back of your head, and you’re snoring loud enough to wake the next-door neighbor.

There are still more relatively important things to discuss, such as dynamic range, digital color measurement, and bit depth. But I think this is enough for now. So thanks again to "Cambridge in Color," and you should have a little better understanding of the sensor in your camera. More important, you can see why jamming many pixels onto a small sensor makes it harder for the camera's electronics to produce a high-quality, noise-free image. Bigger really is better. Now go get that physics book down off the shelf and dig into the chapter on photons and quantum theory. I'll wait here while you read up on that.

http://mickey-cheatham.blogspot.com/2012/09/the-science-of-photography-part-twelve.html 
