Knowledge Base Presented by Altavian: “DSLR vs Point-and-Shoot Cameras”

Series Introduction: The aim of this knowledge base series is to answer the question “Why use DSLR (digital single-lens reflex) cameras when smaller cameras are lighter, cheaper, and get good results?” This is a question we hear a lot, and it is worth answering. But while the question is simple to ask, the answer is complex. Over a three-part series of articles, we’re going to provide a technical explanation of why we opt for the larger DSLR cameras and why they’re better for data collection.

Payload design at Altavian is always a fun challenge. We aim for the bullseye of utility, product quality, and value. In short, we want to build payloads people will use, that won’t break, and are cost-effective. This involves everything from component selection for things we don’t directly manufacture to the convergence of efforts on the mechanical, electrical, and software fronts. For mapping payloads, the single most important choice we can make is which camera and lens to use.

But how do we classify cameras?

We’ll start by dividing them into two broad categories: digital single-lens reflex (DSLR) cameras and point-and-shoot cameras. Generally, DSLR cameras will have larger sensors than point-and-shoot cameras. While sensor size matters, it’s really the size of the individual pixels that counts. For example, two sensors having the same overall area but different pixel densities will produce images of differing quality. On the other hand, two sensors of different overall size but with identical pixel size will generally produce similar images at the pixel level, all other things being equal—and leaving field of view aside, for now.

So, why is pixel size important? To answer this, let’s start by breaking down what a pixel actually is.

WHAT IS A PIXEL?

Think of a pixel as a bucket that captures photons and converts them to electrons. Then, your camera samples the number of electrons in the bucket, records it, and empties the bucket. This is repeated for every pixel present, and the combined values captured are what comprise an image.

That’s the job of a digital imaging device: to convert photons to electrons. Ideally, one would want a perfectly linear correspondence: for every photon received, a fixed number of electrons would be generated. Let’s assume for this article that the proportion actually is one-to-one. (For a deeper discussion of quantum efficiency, download the technical addendum to this article.)

These electrons accumulate at each pixel or photosite on the sensor according to how the image is focused through the lens. Darker areas of a scene will therefore have smaller charge packets stored in the corresponding pixel than brighter ones. The electric potential generated by this accumulation of electrons in a pixel is then sampled and the voltage is converted to a digital number (DN). After this is finished and the DN for a pixel is stored, depending on the underlying technology, the charge is either already dissipated or it must be actively dumped, allowing the pixel to start “empty” for the next image.
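
To make the bucket picture concrete, here is a minimal Python sketch of one pixel’s capture chain, from photons to a stored DN. The quantum efficiency, well capacity, and ADC depth are illustrative placeholder values, not figures for any particular sensor; the well limit used here is the subject of the next section.

    # A minimal sketch of one pixel's capture chain. QE, FULL_WELL,
    # and ADC_BITS are illustrative placeholders, not real sensor specs.
    QE = 0.5            # fraction of photons converted to electrons
    FULL_WELL = 25_000  # electrons the pixel can hold before saturating
    ADC_BITS = 12       # ADC resolution; DN range is 0..4095

    def pixel_to_dn(photons: int) -> int:
        """Convert a photon count to a digital number (DN)."""
        electrons = photons * QE               # photon-to-electron conversion
        electrons = min(electrons, FULL_WELL)  # the "bucket" can only hold so much
        gain = (2**ADC_BITS - 1) / FULL_WELL   # map the full well to the DN range
        return round(electrons * gain)         # sample, digitize, and store

    print(pixel_to_dn(10_000))   # mid-tone: 819
    print(pixel_to_dn(100_000))  # overexposed: clipped at 4095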

However, there is a limit to the number of electrons a pixel can capture and hold, and this limit affects image quality. The maximum number of electrons is determined by a pixel’s full well capacity (FWC).

FULL WELL CAPACITY, READ NOISE, AND IMAGE QUALITY

Full well capacity governs a sensor’s ability to capture a wide range of brightness, from deep shadow to spectacular sunlight reflections. If we return to the bucket metaphor, a pixel cannot continue to accumulate electrons once its FWC has been reached, just as a bucket can only hold so much water. When this occurs, the corresponding area of the image becomes saturated and no further detail can be recorded.

Similarly, at the dark end the limit is governed by a combination of the camera’s read noise and a fundamental physical property of light (more on that below). The span between the darkest and brightest values a sensor can record without encountering these limits is its dynamic range. The larger the FWC and the smaller the read noise, both of which are specified in numbers of electrons, the larger the dynamic range. A high dynamic range means that even challenging scenes containing very bright and very dark areas are more likely to retain detail instead of being crushed to pure black or clipped to pure white. This equates to high image quality.

In terms of our bucket analogy: if you want to determine exactly how much rain is falling in an area, it helps to have a large bucket to capture rainfall, be it a downpour or barely a sprinkle. A smaller bucket will fill up too quickly in a downpour, or be too small to collect any drops at all in a very slight rainfall.


DSLRs have larger pixels, which have a higher FWC and lower read noise, and thus a greater dynamic range. For a look at the formula behind calculating dynamic range, download the technical addendum.
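
As a quick sketch of that relationship (the addendum derives it properly), dynamic range is commonly expressed as the ratio of full well capacity to read noise, quoted in decibels or in stops. The pixel figures below are representative values chosen for illustration, not measurements of specific cameras.

    import math

    def dynamic_range(full_well_e: float, read_noise_e: float):
        """Dynamic range as the FWC-to-read-noise ratio, in dB and stops."""
        ratio = full_well_e / read_noise_e
        return 20 * math.log10(ratio), math.log2(ratio)

    # Representative pixel figures, for illustration only:
    for label, fwc, noise in [("large DSLR-class pixel", 50_000, 5),
                              ("small point-and-shoot pixel", 8_000, 4)]:
        db, stops = dynamic_range(fwc, noise)
        print(f"{label}: {db:.0f} dB, {stops:.1f} stops")
    # -> large DSLR-class pixel: 80 dB, 13.3 stops
    # -> small point-and-shoot pixel: 66 dB, 11.0 stops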

SHOT NOISE

The last aspect of pixel size we need to discuss is shot noise. No matter how good the sensor, there will always be additional noise present in an image due to the nature of how photons are captured by pixels.

Let’s return to the metaphor of a pixel as a bucket. If we put four buckets side by side in a steady rain shower for 10 minutes, they will all capture water. If you then simultaneously covered all four buckets and measured the water in each, the volumes would differ. The variation is caused by slight differences in the number of raindrops that fell into each bucket.

The same is true with photons and pixels.

If you zoom in close enough on light, you’ll see that it is not a constant, steady flow but a stream of discrete photons arriving with some random variation. This is a natural feature of light. So, just like the buckets in the rain, each pixel will see variation in the number of photons (energy) it captures. This property of light and nature manifests as a phenomenon that we call ‘shot noise’.

In other words, if you calculate the amount of energy that should be received given knowledge of the relevant physical quantities for the scene, and then convert that to an equivalent number of photons, that calculated count is only the expected value of an underlying probability distribution; the count actually recorded in any single exposure will vary around it. For a closer look at the mathematics behind this, download the technical addendum.
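
A small simulation makes the effect concrete. Modeling photon arrivals as a Poisson process (the standard assumption, adopted here for illustration rather than anything prescribed by the addendum), the spread of repeated counts is roughly the square root of the mean, so the relative noise shrinks as more photons are collected. The photon counts below are arbitrary illustrative values.

    import numpy as np

    rng = np.random.default_rng(0)

    # Expected photon counts for a dim patch and a bright patch (illustrative).
    for mean_photons in (100, 10_000):
        # 100,000 repeated "exposures" of the same patch of scene
        counts = rng.poisson(mean_photons, size=100_000)
        snr = counts.mean() / counts.std()  # signal-to-noise ratio
        print(f"mean={mean_photons}: spread={counts.std():.1f} "
              f"(sqrt of mean={mean_photons**0.5:.1f}), SNR={snr:.1f}")
    # -> the bright patch collects 100x the photons but has only about
    #    10x the noise, so its SNR is roughly 10x better.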

Figure: Identical crops from a full-frame camera having 6.25-micron pixels (top) and a sensor having 4.3-micron pixels (bottom). The noisier result in the darker patches of the bottom image is largely a function of shot noise.

CONCLUSION

Larger pixels allow more photons to be gathered in any given amount of time and therefore produce images in which the noise constitutes a smaller portion of the recorded signal. All other things being equal, larger pixels produce cleaner images. Certainly this is of aesthetic importance, but it also matters in the quantitative realm of map accuracy, since noise in images propagates through feature detection and matching all the way to the geometric errors in the densely matched point cloud. In the debate between DSLR and point-and-shoot cameras, this is why we use DSLR cameras. Point-and-shoot cameras tend to have pixels in the 1.5-2.5 micron range, whereas DSLR cameras tend to have pixels in the 4-6 micron range. There are trade-offs to be made, however, which will be covered in part two.
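
As a back-of-the-envelope check on those numbers (ignoring differences in quantum efficiency, microlenses, and fill factor, which all matter in practice), photon count scales with pixel area, and shot-noise SNR scales with the square root of that count.

    # Back-of-the-envelope sketch using pixel pitches from the ranges
    # cited above; real sensors differ in QE, microlenses, and fill factor.
    dslr_pitch = 5.0  # microns, representative DSLR pixel
    pns_pitch = 2.0   # microns, representative point-and-shoot pixel

    area_ratio = (dslr_pitch / pns_pitch) ** 2  # photons scale with area
    snr_ratio = area_ratio ** 0.5               # shot-noise SNR scales with sqrt(N)

    print(f"photon ratio: {area_ratio:.2f}x, SNR ratio: {snr_ratio:.1f}x")
    # -> a 5-micron pixel gathers ~6.25x the photons of a 2-micron pixel
    #    in the same exposure, for roughly 2.5x better shot-noise SNR.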

Download the Technical Addendum to this article here.

Be on the lookout for parts two and three in the coming weeks.

For more information visit: altavian.com
