In this part we will learn how digital camera sensors are made, their limitations, and the various engineering tricks that have been polished to the “state-of-the-art” in order to provide maximum camera performance.
In the digital camera market the key performance indicator is typically the total number of pixels, measured in the millions: “megapixels.” Even the name, “mega,” screams advertising. It is a wonder of modern science and engineering just how many of these mega-things can be squeezed onto tiny sensors. So what is the importance of megapixels? Are they as irrelevant as horsepower has become in today’s automobile marketplace? Do megapixels really count? And, just as horsepower typically trades off against fuel economy, are megapixels a trade-off too? What do we give up to get a high pixel count?
These and other questions will be answered in a few short, and maybe not so short, paragraphs. Read on.
The sensors used in digital cameras, from the cheapest point-and-shoot or smartphone camera to the sensor in a professional DSLR camera with a price that rivals the cost of a new automobile, are all pretty much made the same way. Regardless of whether the design is CCD or CMOS, a camera sensor is simply an integrated circuit, one originally designed as a form of memory storage. Today, CCD designs are used only for sensors, while CMOS is also used for computer memory, processing, and other digital functions.
Charge Coupled Device (CCD) sensors were some of the earliest sensors developed; I’ve mentioned previously their use in early video cameras. CCDs, though eclipsed long ago for use as memory devices, and needing additional off-chip processing circuitry, are still in use at the high end of image capture, including in all medium format digital cameras. (Medium format is larger than 35mm but smaller than very large cameras like the 4” x 5” press camera.)
The reason for this is their superior image quality. They have the potential for greater light sensitivity, lower noise, and higher dynamic range. This is not to say that there aren't CMOS cameras capable of very good, even excellent, performance in these areas. But one has to ask why companies like Phase One, Leaf, and Hasselblad/Imacon, as well as various military and scientific users, prefer CCD chips in spite of their higher costs, greater power needs, and other drawbacks. The answer likely comes down to one thing – image quality.
But those of us with smaller budgets, consumers of point-and-shoot cameras and even pro-am DSLRs, will find Complementary Metal Oxide Semiconductor (CMOS) sensors in our cameras. CMOS has become the current mainstream of chip design. It is capable of incorporating a great many additional functions right on the chip, including analog to digital conversion (ADC) and other aspects of image processing. (Recall I mentioned earlier that these sensors are actually analog devices, so the output must be converted to a digital format, numbers consisting of zeros and ones. Hence the need for the ADC function on the sensor chip.)
Because more capabilities can be built right onto the chip, and because the same fabrication plants are used for chips found in many other contemporary silicon products, especially memory chips, CMOS designs are much less expensive to build. Combined with the higher circuit density allowed by integrating more functions on the chip, you end up with lower costs and smaller final size, an important factor in small cameras, such as those in cell phones. CMOS sensors also use less power than CCDs, which improves camera battery life.
Imaging sensors come in different sizes, ranging from smaller than the head of a nail to the size of a saltine cracker, or even bigger. A sensor chip's size is a key variable in determining its cost. Chips are made from silicon wafers. Whereas literally thousands of sensor chips, such as those used in a webcam, can be cut from a single wafer, only a handful of medium format chips can be derived from a standard 6" or 12" wafer. This means that their wholesale costs can range from less than a dollar to several thousand dollars per finished chip.
The value of wafer real estate is just one factor in determining a chip's cost. Rejection rate is another. A wafer with hundreds or even thousands of individual chips on it is tested, and only the chips that function properly are passed to the next manufacturing stage. The percentage of good chips on a wafer is called “yield.” With large chips, there is a greater chance that a defective area falls inside the boundary of the chip; with smaller chips, the probability is lower. For that reason alone, the yield is higher when manufacturing smaller chips out of a wafer of silicon.
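To get a feel for just how quickly yield falls off as chips get bigger, here is a minimal sketch using a simple Poisson defect model. The defect density and chip areas are made-up illustrative numbers, not figures from any real fab, so treat the output as a demonstration of the trend rather than real-world data.

```python
import math

def poisson_yield(chip_area_cm2, defects_per_cm2):
    """Expected fraction of good chips under a simple Poisson defect model:
    yield = exp(-defect_density * chip_area)."""
    return math.exp(-defects_per_cm2 * chip_area_cm2)

# Illustrative (made-up) defect density: 0.5 defects per square centimeter.
defect_density = 0.5

# A tiny webcam-class sensor vs. a full-frame 35mm-class sensor (36 x 24 mm).
small_chip_area = 0.2        # cm^2, roughly a compact-camera sensor
full_frame_area = 3.6 * 2.4  # cm^2, about 8.64 cm^2

print(f"Small chip yield:      {poisson_yield(small_chip_area, defect_density):.1%}")
print(f"Full-frame chip yield: {poisson_yield(full_frame_area, defect_density):.1%}")
```

With those invented numbers, roughly nine out of ten small chips survive, while only about one full-frame chip in a hundred does, which is exactly why big sensors cost so much more per finished chip.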
With millions of components (diodes and transistors) on modern sensor chips, it's almost impossible with today's technology to make a perfect chip. Consequently both chip manufacturers and camera makers set their own criteria for whether a chip is usable or not. These criteria include how many defective pixels, defective rows, and consecutive defective rows are permitted. The exact criteria are closely held corporate secrets. Regardless, the conclusion is that the larger the sensor, the higher the wholesale cost to the camera manufacturer.
Sensor size has several effects on overall picture quality. First, the larger the sensor, the more light it will gather. Plus, the larger the captured image, the less the image has to be enlarged or “blown up” to reach a final usable size. Of course, this final usable size depends on what that use is. Is it to be sent over the internet, or displayed on a computer monitor? Or do we want to produce prints? What size prints? 4x5, or large wall hangings?
How much you can enlarge a digital photograph depends on the picture's resolution. The maximum resolution you can obtain from a given digital camera is equal to the resolution of the picture sensor in the camera. And resolution is also related to the size of the sensor. Obviously, the larger the sensor, the greater the number of individual sensors, called pixels, that can fit on the chip. The total number of pixels on a sensor is the “megapixel” measurement.
What is most interesting is how small sensors can have as many pixels, or megapixels, as a large sensor. Obviously, from the rules of geometry, fitting the same number of sensors on a smaller chip requires each individual sensor to be smaller.
Pixel Size
Chip size and individual pixel size are factors which combine to determine not only a sensor's cost but also many aspects of its performance. More pixels means higher resolution. But having more pixels to record images means either putting more of them on a given chip size or making the chip bigger.
In the race to get high “mega” numbers, even small sensor chips are designed with a large number of these sensor sites. But the only way to add more pixels to a chip of a given size is to make the pixels smaller. Further, as the pixels become smaller they are less able to capture photons (the discrete components of light), and therefore their signal-to-noise ratio decreases. All electronic circuits have inherent noise. The more signal (light) there is, the lower the noise is relative to that signal. In engineering we call this the signal-to-noise ratio, or “S/N.”
You may have noticed, when listening to AM radio stations on your car radio, that strong stations sound fine, but weak stations seem to have a lot of noise or static. That is because the weak stations have to be amplified more in the radio circuitry. There are automatic level controls in the radio that adjust the amplifier gain based on signal strength. The greater the amplifier gain, the more noise is introduced. That's why digital cameras experience noise problems in low light. So this noise issue is tied directly to low-light performance.
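As a rough illustration of why more captured light means a cleaner signal, here is a small sketch that assumes the noise is dominated by photon shot noise, which grows only as the square root of the number of photons collected. The photon counts are invented for illustration, not measurements from any real sensor.

```python
import math

def shot_noise_snr(photons):
    """Shot-noise-limited signal-to-noise ratio: signal = N photons,
    noise = sqrt(N), so S/N = sqrt(N)."""
    return math.sqrt(photons)

# Invented photon counts collected by one pixel during one exposure.
for label, photons in [("Bright daylight", 40000),
                       ("Indoor light   ", 2500),
                       ("Dim room       ", 100)]:
    print(f"{label}: {photons:6d} photons -> S/N of about {shot_noise_snr(photons):.0f}:1")
```

The dim-room pixel has to be amplified far more to reach the same output level, and its much lower S/N is what shows up as visible noise in the final image.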
Camera “noise” is directly related to the size of the individual pixel sensors. We will measure something called “pixel pitch,” which is the distance between the centers of adjacent pixels. It is a good approximation of the individual pixel diameter. We can assume that the individual pixel sensors are packed together very tightly. In reality there is some space between sensors, but it is quite small, and we'll ignore that.
Compare the pixel pitch of two cameras with the same megapixel count, one an inexpensive point-and-shoot camera and the other an expensive DSLR with an APS-C or full-size sensor: obviously, the larger sensor with the same number of pixels has larger individual pixel sensors. This results in greater sensitivity to light and lower noise in the image capture. If the sensor is not as sensitive to light, then the captured signal must be amplified more than with larger sensor pixels.
This is why point-and-shoot cameras produce noisy images at high ISO settings, and why the Nikon D3 with its large pixel pitch (8.4 µm) has much lower noise at high ISO settings than a point-and-shoot that might have a pixel pitch as small as 1.7 µm.
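If you want to estimate pixel pitch yourself, you can get close using only the sensor's dimensions and its megapixel count. The sketch below assumes a square, tightly packed pixel grid (ignoring the small gaps, as above) and uses approximate published sensor dimensions for the two cameras just mentioned.

```python
import math

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate pixel pitch in microns, assuming square pixels packed
    edge to edge: pitch = sqrt(sensor_area / pixel_count)."""
    area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)
    return math.sqrt(area_um2 / (megapixels * 1e6))

# Nikon D3: sensor roughly 36 x 23.9 mm, 12.1 MP.
print(f"Nikon D3 pitch:        {pixel_pitch_um(36.0, 23.9, 12.1):.1f} um")

# A typical small point-and-shoot sensor: roughly 6.2 x 4.6 mm, 10 MP.
print(f"Point-and-shoot pitch: {pixel_pitch_um(6.2, 4.6, 10.0):.1f} um")
```

Those estimates come out right around the 8.4 µm and 1.7 µm figures quoted above: the D3 pixel is about five times wider, so it has roughly 25 times the light-gathering area.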
So what is the optimum size for a pixel sensor? The size of a pixel directly impacts how much noise an image will have in low light, and in some cases even in daylight. The bigger the pixel, the lower the noise, because more photons can reach a bigger pixel sensor. But the optimum size depends on the state of the art -- and that state is continually improving. Certainly, a pixel pitch of 6-8 microns is very good, but sensors with smaller pixel pitches keep getting better. And, since smaller sensors work with smaller lenses, which can be made more precisely for less money, the total camera package could outperform a larger-sensor camera.
Camera and sensor manufacturers are both continually improving their products, and a lot of powerful engineering goes into today's products to let them perform better than what was sold just one year ago. At this point you can't say what the optimum pixel sensor size is. It keeps getting smaller.
An example of just one of those state-of-the-art improvements is “backside illumination,” used in the camera chip in the iPhone 4s. In order to increase the amount of light reaching the sensor on the latest iPhone, the image chip is actually mounted upside down. (The chip isn't so much “mounted upside down” as designed to allow it to be mounted upside down.) This puts the circuits, the small “wires” that connect the components, on the bottom of the chip. They are normally on the top, getting in the way of light falling on the sensor. So backside illumination increases the total amount of light striking the sensor. This is one example of the engineering improvements that appear as sensor design improves.
So, again, what is the optimum size for a pixel sensor? This is a bit of a moving target. Chip makers and camera makers continue to improve their circuitry and noise reduction capabilities. But the laws of physics can't be denied. Plus, all technologies used to reduce noise on new designs of chips with smaller pixels can equally be applied to those with larger pixels. So while absolute improvements are being seen, relatively speaking the gap between them remains roughly the same.
As noise reduction and other improvements continue to be developed, we expect to see continued increases in pixel density. However, all engineering improvements reach their limits, and we may be pretty near those limits now. The physics of light itself, and the fact that it comes in discrete packets called photons, provides a lower limit to pixel pitch. So we will eventually see the smallest possible pixels, and that will be the end of the road for size reduction.
Sensor Size
Now that we know about pixel size and the accompanying issues, we can make some sense of the entire situation of relative sensor size, image quality, and costs. Put simply, bigger is better, and costs more. That's the core of any discussion about digital image sensors.
The statement that bigger is better has implications for the competitive marketplace. In the days of film no one argued with the fact that large format (4” x 5” film) produced superior image quality to medium format (2-1/4” x 2-1/4” film), and that medium format offered higher image quality than 35mm, and so on down the line -- if you ignore the negative issues of large size such as camera weight and cost.
With digital many argue that as long as an image doesn't need to be enlarged beyond the resolving ability of the output medium (say, 300 dpi when making an inkjet print) there is no disadvantage to a smaller imaging chip. Well, maybe, but then, maybe not.
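One way to put numbers on that argument is to ask how large a print a given megapixel count can support at a chosen output resolution. This little sketch just does the arithmetic, using the 300 dpi inkjet figure from above and assuming an ordinary 3:2 sensor aspect ratio; the megapixel counts are simply round example values.

```python
import math

def max_print_size_inches(megapixels, aspect=(3, 2), dpi=300):
    """Largest print (width x height, in inches) a sensor can fill at the
    given dpi, assuming the stated aspect ratio."""
    total_pixels = megapixels * 1e6
    w_px = math.sqrt(total_pixels * aspect[0] / aspect[1])
    h_px = w_px * aspect[1] / aspect[0]
    return w_px / dpi, h_px / dpi

for mp in (5, 8, 12, 16, 24):
    w, h = max_print_size_inches(mp)
    print(f"{mp:2d} MP -> about {w:.1f} x {h:.1f} inches at 300 dpi")
```

By this measure a 12 MP sensor already fills roughly a 14 x 9 inch print at 300 dpi, which is why, for ordinary print sizes, the megapixel count is rarely the limiting factor.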
In my opinion, for most everyday camera uses, a resolution of 10-16 MP is probably adequate. Certainly 8MP cameras can perform pretty well, but when you get down below 5MP, you are starting to sacrifice quality. To illustrate this, the iPhone 3GS had only a 3 MP camera. The iPhone 4 has a 5MP camera and is noticeably better, although some of the improvement is in the electronic processing and some is in a better lens. The iPhone 4s has an 8MP sensor, the electronics were again improved, and the lens is now a sophisticated 5-element stack. I would argue that the current iPhone camera is about as good as you can expect from such a small fixed-lens design.
On the high end of resolution, I don't think there is a specific limit. As long as the total sensor area is large enough to support adequately sized pixels, the more resolution, the greater the ability to enlarge the photo. I don't know the upper limit for normal use. Certainly when enlarging a photo to fit on an entire wall, higher resolution is useful. But then, how often do we need to enlarge a digital photograph to eight feet tall? The digital sensor in a spy satellite is designed with little concern for cost, and I expect there is no limit to the resolution goals of the governments designing such high-powered systems.
Further, we know from personal experience that you need to at least double the number of pixels to really get a significant improvement in the pictures, and some think the needed change is even greater, as much as a four-times increase, to see visible improvement. So there is probably little difference between a 12MP camera and a 16MP camera. It would probably take at least a 24MP sensor to show significant improvement over a 12MP camera, all else being equal. For all of these reasons, I think the MP race is probably over, with 10-15MP as the sweet spot for cameras using small sensors.
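The reason doubling the megapixels feels underwhelming is that the visible detail scales with the linear pixel dimensions, not with the total pixel count. This sketch, using round example numbers, shows how little the linear resolution actually grows as the megapixel count climbs.

```python
import math

def linear_gain(old_mp, new_mp):
    """Linear resolution improvement: pixels along each edge grow with the
    square root of the megapixel count."""
    return math.sqrt(new_mp / old_mp)

for old_mp, new_mp in [(12, 16), (12, 24), (12, 48)]:
    gain = linear_gain(old_mp, new_mp)
    print(f"{old_mp} MP -> {new_mp} MP: about {100 * (gain - 1):.0f}% more linear resolution")
```

Going from 12 MP to 16 MP buys only about 15% more detail along each edge, while it takes quadrupling to 48 MP to truly double it.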
I once saw a chart of all the various pixel pitches, and there was a big gap between what we saw in the small-sensor point-and-shoot cameras and the larger-sensor, higher-quality digital cameras such as the DSLRs. There are some good quality cameras in the marketplace with sensors smaller than APS-C size, for example the new Nikon CX format and the smaller Panasonic and Olympus 4/3” sensors. I expect to see more quality cameras using those sensor sizes and benefiting from the small size and lower weight of an overall design matched to these sensors. But for the shallow depth-of-field effects that you can obtain from a full-format camera, and the higher resolution and low-noise performance of full 35mm sensors (and even larger), simple physics will always give these larger format sensors the edge -- albeit at the expense of size and weight, as well as the dollars needed to purchase the camera and compatible lenses.
There are so many different situations and variables that I'm not ready to state a particular best pixel count, but I have noted that some new camera models actually have lower megapixel counts than the previous model. However, people are still impressed by the number of pixels, so it is hard to determine whether more pixels are for better photographs or just to sell more cameras. In photography, the basic rule of physics applies: “You get what you pay for!”
What it is that we're seeing in these higher-resolution photos is another matter. It can be called micro-contrast: very fine tonal transitions that seem to get lost with smaller sensors. This could well be caused by the relative lack of strain on the camera's lens design when lower magnifications are called for. It is also something that we've always seen when comparing larger formats to smaller ones in the film world. I will discuss methods to measure this fine-detail sensitivity in a future article, when I describe the Modulation Transfer Function, a precise method of measuring resolution in images. The final measure is a combination of sensor, lens, light, and several other factors.
So, regardless of advances in engineering, the fact remains that physically larger sensors will always have an advantage over smaller ones. This means that (other factors aside) image quality from medium format cameras (2-1/4” x 2-1/4”) will be higher than full-frame 35mm, which will be better than APS size, which will have an edge over 4/3, which is better than 1/2.3”, etc., etc.
Of course, as I said, pixel size is not the sole determinant of image quality. There are others:
- CCD sensors generally perform better than CMOS sensors.
- Sensor quality differs between sensor chip manufacturers.
- Sensor quality varies within the range of products from any one sensor manufacturer. Every manufacturer's product range has a high end and a low end.
- Furthermore, any one camera maker, e.g. Nikon, may use sensor chips from different manufacturers in different camera models. So they might put a cheap sensor from one manufacturer in a cheap camera, but a pricey sensor from another manufacturer in their high-end camera.
- Newer sensor technology is surely better than older technology, because chip manufacturers are getting better every year, improving the optical and electrical characteristics of sensors. Thus the technology's generation matters.
- Optics can play a big role at high megapixel counts. Not all lenses are equally finely made, and some high-MP cameras, including some with more than 12 megapixels, are being sold with lenses that are just barely able to resolve that many pixels.
- Lastly, whether sub-pixels (red, green, and blue) are adjacent or stacked is a big factor in image sharpness. (Stacked sub-pixels are used in Foveon sensors; adjacent sub-pixels are used in Bayer sensors.)
Now that we’ve delved into the macro world of sensors, and how they impact image and photo quality, it is time to jump into the micro world of sensors and see how they are made. I’ve alluded to specifications such as pixel pitch in microns, but there is a lot more to discuss such as IR filters, sensor lenses, and many mechanical and electronic issues. That will be our next topic as we continue our journey through the science of photography. So, until next time, I won't say good-bye, I'll say "see you soon."
Click here to jump to the next article.