Tuesday 29 October 2024

How do smartphone cameras work?

The first cellphone with a camera landed on American shores in 2002. Since then, OEMs have been racing to produce better mobile cameras. The dawn of the smartphone era in 2008 and the rise of social media made a good phone camera a must for anyone who wants to keep up with the Joneses. But before mobile cameras began taking most of the world's photos, pictures were taken with devices far larger than the phones that replaced them. How is that possible? What's going on inside your phone that lets it take 200MP photos?

All smartphone cameras are made of three basic parts. The first is the lens that directs light into the camera. The second is the sensor that converts the focused photons of light into an electrical signal. And the third is the software that converts those electrical signals into an Instagram-ready photo. Let's take a closer look at each of these parts.

Lenses

Before light reaches the image sensor, it must pass through the lens. And before that, it passes through a small hole in the phone's body. The size of that hole is called the aperture, and it determines how much light makes it into the camera's sensor. Generally, a larger aperture is a good thing for mobile cameras because it means the camera has more light to work with.

Aperture is measured in f-stops: the ratio of the camera's focal length to the physical diameter of the aperture. Because the diameter sits on the bottom of that ratio, a lower f-number means a wider opening, so an f/1.7 aperture is wider than an f/2 aperture.
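
Here's a minimal sketch of that arithmetic in Python, using a made-up 5 mm focal length purely for illustration:

```python
# The f-number is focal length divided by aperture diameter, so diameter = focal length / f-number.
# The 5 mm focal length below is a round illustrative number, not any particular phone's spec.

def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """Physical aperture diameter implied by a focal length and an f-number."""
    return focal_length_mm / f_number

print(aperture_diameter_mm(5.0, 1.7))  # ~2.94 mm
print(aperture_diameter_mm(5.0, 2.0))  # 2.50 mm -> the f/1.7 opening is wider
```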

Once the light enters the camera module, the lens gathers the incoming light from your shot and directs it to the sensor. Smartphone cameras are made up of several plastic lenses called elements. Due to the nature of light, different wavelengths (colors) are refracted (bent) at different angles as they pass through a lens. That means the colors from your scene are projected onto the camera sensor slightly out of alignment, an effect known as chromatic aberration. Cameras need multiple lens elements to correct this and similar effects and deliver a clear image to the sensor.

smartphone image sensor and lens elements
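
As a rough illustration of why colors end up misaligned, here's a small Snell's-law calculation. The refractive indices below are generic, illustrative values rather than the specs of any real lens element:

```python
import math

# Snell's law: n_air * sin(angle_in) = n_glass * sin(angle_out). Glass bends blue light a little
# more than red because its refractive index is slightly higher at shorter wavelengths.
# The indices below are illustrative values for a generic glass, not for a real lens element.
def refraction_angle_deg(incidence_deg: float, n_glass: float, n_air: float = 1.0) -> float:
    return math.degrees(math.asin(n_air * math.sin(math.radians(incidence_deg)) / n_glass))

print(refraction_angle_deg(30, 1.51))  # red light:  ~19.3 degrees inside the glass
print(refraction_angle_deg(30, 1.53))  # blue light: ~19.1 degrees -> bent a bit more, so the colors separate
```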

Focus

One essential function of the lenses that has traditionally been abstracted away from the user is focus. Some camera apps let you control focus manually, but most phones handle it automatically, using data from the image sensor, extra hardware like a laser rangefinder, or a combination of the two.

Software-based autofocus (known as passive autofocus) uses data from the image sensor to determine whether the image is in focus and adjusts the lenses to compensate. The most common passive autofocus technique detects the contrast of the image and adjusts the focus until that contrast is maximized. Because this method is entirely software-based, it's the cheapest option, but it's slow and doesn't work as well in low-light conditions.
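
Here's a simplified sketch of the contrast-detection idea. The capture_frame and set_lens_position hooks are hypothetical stand-ins for a camera driver, and the sharpness metric (the variance of a Laplacian-style high-pass filter) is just one common choice:

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Contrast metric: variance of a simple Laplacian-style high-pass filter.
    More edge detail -> higher variance -> (usually) better focus."""
    frame = frame.astype(float)
    lap = (-4 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return float(lap.var())

def contrast_detect_af(capture_frame, set_lens_position, positions):
    """Sweep the lens through candidate positions and keep the sharpest one.
    `capture_frame` and `set_lens_position` are hypothetical camera-driver hooks."""
    best_pos, best_score = positions[0], -1.0
    for pos in positions:
        set_lens_position(pos)
        score = sharpness(capture_frame())
        if score > best_score:
            best_pos, best_score = pos, score
    set_lens_position(best_pos)
    return best_pos
```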

A faster method is phase-detection autofocus (PDAF), which compares the light reaching pairs of closely spaced photosites on the image sensor. Traditional PDAF systems dedicate a small fraction of photosites (around 5%) to measuring the light coming from either the right side or the left side of the lens. If the right-facing photosites measure the same light intensity as their left-facing counterparts, the image is in focus. If they don't, the difference reveals how far out of focus the camera is and in which direction, which is what makes PDAF faster than contrast detection. Modern PDAF systems (like the Isocell HP2 image sensor in the Samsung Galaxy S23 Ultra) use 100% of the photosites for phase detection and add top-facing and bottom-facing photosites alongside the left- and right-facing ones.
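
A toy illustration of the phase-detection idea: given 1-D signals from the left- and right-facing photosites (made-up values here), cross-correlation reveals how far apart the two views are, and therefore how far out of focus the lens is:

```python
import numpy as np

def phase_offset(left: np.ndarray, right: np.ndarray) -> int:
    """Shift (in photosites) between the left- and right-facing signals.
    Zero means the two views line up (in focus); the sign and size of a nonzero
    offset tell the lens which way, and how far, to move."""
    left = left - left.mean()
    right = right - right.mean()
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

# Toy example: the same edge pattern seen by the two photosite groups, offset by 3 positions.
edge = np.array([0, 0, 0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0, 0], dtype=float)
print(phase_offset(edge, np.roll(edge, 3)))  # -3 -> out of focus; 0 would mean in focus
```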

Active autofocus uses extra hardware to determine the distance from the phone to your subject. The first active autofocus systems used sonar, bouncing sound waves off the scene in much the same way the radar-based Soli sensor on the Google Pixel 4 bounces radio waves. Modern active autofocus systems instead use a low-powered infrared laser to estimate distance, though it can be tricked by other sources of IR light, like fires or other smartphones.
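
Under the hood, laser autofocus is a time-of-flight measurement. A tiny sketch, with a made-up round-trip time rather than real sensor output:

```python
# Laser autofocus is a time-of-flight measurement: distance = speed of light x round-trip time / 2.
# The round-trip time below is a made-up illustrative value, not real sensor output.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def distance_m(round_trip_seconds: float) -> float:
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

print(distance_m(6.67e-9))  # ~1.0 m: a roughly 6.7 ns round trip puts the subject about a meter away
```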

Sensor

The sensor is a thin wafer of silicon. Its only job is to turn photons (light) into electrons (electrical signals). This photoelectric conversion happens at millions of photosites across the tiny surface of the sensor. If no photons reach a photosite, the sensor registers that pixel as black. If a lot of photons reach the photosite, that pixel is white. The number of shades in between that the sensor can distinguish is determined by its bit depth.
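
A quick illustration of how bit depth translates into brightness levels (not tied to any particular sensor):

```python
# A sensor's bit depth sets how many distinct brightness levels it can record per photosite:
# 2 ** bit_depth values, from 0 (black) up to 2 ** bit_depth - 1 (white).
for bit_depth in (8, 10, 12, 14):
    print(f"{bit_depth}-bit: {2 ** bit_depth} brightness levels")
# 8-bit: 256, 10-bit: 1024, 12-bit: 4096, 14-bit: 16384
```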

Huawei RYYB CFA and image sensor

So how does your phone take color photos? Overlaid on top of each photosite is a color filter that only lets certain colors of light through. The Bayer color filter array is the most common, and it divides every 2×2 square of photosites into one red, one blue, and two green filters (RGGB). Huawei's SuperSpectrum sensors are a notable exception to the traditional RGGB CFAs. Huawei uses yellow filters (RYYB) instead of green filters to allow more light through, improving luminance data. This mosaic of color and light data must be converted into a full-color image after the fact via complicated demosaicing algorithms.

Five different CFA arrangements
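
To picture what the sensor actually records, here's a small sketch of how an RGGB Bayer pattern tiles across the photosites; the 4×4 size is arbitrary:

```python
import numpy as np

# The RGGB Bayer pattern repeats a 2x2 tile of color filters across the whole sensor,
# so each photosite records only one of the three channels.
def bayer_pattern(height: int, width: int) -> np.ndarray:
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    return np.tile(tile, (height // 2, width // 2))

print(bayer_pattern(4, 4))
# [['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']
#  ['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']]
```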

The two sensor metrics to pay attention to are its size and the size of its pixels. Sensor sizes are measured in fractions of an inch. Generally speaking, a bigger sensor (like the 1/1.31-inch sensor on the Google Pixel 7 Pro) produces better pictures because it has more and larger photosites.

Photosite size is measured in micrometers (µm), and the number of photosites determines the camera's potential megapixel count. Larger photosites are better able to gather light, making them ideal for low-light conditions. Smaller photosites don't necessarily mean lower-quality photos, but photosite size is a metric worth considering if you anticipate using your phone camera in low light. Samsung compensates for small photosites (they're the only way to fit 200 million of them on a chip) by "binning" multiple photosites. On its latest sensors, it treats 4×4 squares of 16 photosites as a single "pixel," allowing it to collect more luminance data.
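
Here's a rough sketch of the binning idea on a toy 8×8 "raw" frame; the values are random placeholders, not real sensor data:

```python
import numpy as np

# Binning in the style described above: treat each 4x4 block of 16 photosites as one output
# "pixel" by pooling their values.
rng = np.random.default_rng(0)
raw = rng.integers(0, 1024, size=(8, 8))           # pretend 10-bit readings from 64 photosites

binned = raw.reshape(2, 4, 2, 4).sum(axis=(1, 3))  # each output value pools 16 photosites
print(raw.shape, "->", binned.shape)               # (8, 8) -> (2, 2)
```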

Image stabilization

When trying to take a good picture, one of the essential ingredients is a stable platform. Smartphone makers know you're not likely to have a tripod in your pocket, so they pack their phones with technology to mitigate camera movement as much as possible. Image stabilization comes in two basic flavors: optical and electronic.

Optical image stabilization (OIS) relies on a gyroscope to detect phone movement and tiny motors or electromagnets to move the lenses and sensor to compensate. OIS is ideal for low light situations where the image sensor needs more time to gather light.

Electronic image stabilization (EIS) relies on the phone's accelerometer to sense movement. Instead of moving camera parts, it shifts and crops the image frames or exposures to compensate. Because the exposures are aligned based on the content of the image rather than the full frame of the image sensor, the final image or video has a reduced resolution.
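
A minimal sketch of the EIS idea: crop a slightly smaller window out of each frame, offset to cancel the measured shake. The per-frame shifts are made-up stand-ins for motion-sensor data:

```python
import numpy as np

# EIS-style stabilization: instead of moving hardware, crop a slightly smaller window out of
# each frame, shifted to cancel the measured shake. The crop is why EIS costs some resolution.
def stabilize(frames, shifts, margin=8):
    out = []
    for frame, (dy, dx) in zip(frames, shifts):
        h, w = frame.shape[:2]
        y0, x0 = margin - dy, margin - dx          # move the crop window against the motion
        out.append(frame[y0:h - 2 * margin + y0, x0:w - 2 * margin + x0])
    return out

frames = [np.zeros((108, 192)) for _ in range(3)]
shifts = [(0, 0), (3, -2), (-1, 4)]                # hypothetical jitter, in pixels (|shift| <= margin)
print(stabilize(frames, shifts)[0].shape)          # (92, 176): smaller than the 108x192 input
```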

Many newer phone models use a combination of both systems, sometimes called hybrid image stabilization. Using both in concert gives you the best of both worlds: the hardware correction of OIS and the software frame alignment of EIS.

Software

Once the image sensor has done its job and converted the light brought to it by the lenses into an electrical signal, it's the job of the image signal processor (ISP) to turn those 1s and 0s into a Snapchat-worthy image.

The data sent to the ISP is essentially a black-and-white image (a RAW image for the shutterbugs out there). The first job of the ISP is to bring back the color data based on the known arrangement of the CFA. Now we have an image, but its pixels are varying intensities of either red, green, or blue (or red, yellow, and blue for Huawei phones).

The next step is a process called demosaicing. This is where the ISP fills in each pixel's missing color values based on the colors of its neighbors. If, for example, an area has a lot of green and red but not much blue, the demosaicing algorithm renders those pixels as yellow.
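
Here's a rough sketch of that neighbor-averaging idea for an RGGB mosaic. It's a generic bilinear-style interpolation, not any vendor's actual demosaicing algorithm:

```python
import numpy as np

def simple_demosaic(raw: np.ndarray) -> np.ndarray:
    """Bilinear-style demosaic of an RGGB mosaic: keep each photosite's measured value for
    its own channel, and fill the two missing channels with the average of the known
    same-channel neighbors in the surrounding 3x3 window."""
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    masks = {
        0: (y % 2 == 0) & (x % 2 == 0),   # red photosites
        1: (y % 2) != (x % 2),            # green photosites
        2: (y % 2 == 1) & (x % 2 == 1),   # blue photosites
    }
    rgb = np.zeros((h, w, 3))
    padded = np.pad(raw.astype(float), 1)
    for channel, mask in masks.items():
        pmask = np.pad(mask, 1)
        vals = np.zeros((h, w))
        counts = np.zeros((h, w))
        for dy in (-1, 0, 1):             # gather known samples of this channel in a 3x3 window
            for dx in (-1, 0, 1):
                win = (slice(1 + dy, 1 + dy + h), slice(1 + dx, 1 + dx + w))
                vals += padded[win] * pmask[win]
                counts += pmask[win]
        interpolated = vals / np.maximum(counts, 1)
        rgb[:, :, channel] = np.where(mask, raw, interpolated)
    return rgb
```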

We finally have a photo! Most ISPs apply denoising and sharpening after demosaicing. Still, every OEM has its own pipeline and algorithms for producing a final image. Google, in particular, is known for using AI-developed algorithms to produce some of the best smartphone photos.
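
As a generic illustration of the sharpening step (real ISP pipelines are far more sophisticated, and proprietary), here's an unsharp-mask-style sketch for a grayscale image:

```python
import numpy as np

# Sharpening in the spirit of an unsharp mask: blur the image, then push each pixel away
# from its blurred version to exaggerate edges. Purely illustrative.
def box_blur(img: np.ndarray) -> np.ndarray:
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    return sum(padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def sharpen(img: np.ndarray, amount: float = 1.0) -> np.ndarray:
    return img + amount * (img - box_blur(img))
```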

Are you ready to snap some photos?

Now that you have a better understanding of how your camera works, it's time to get out and take more pictures. And speaking of more pictures, use our Google Photos tips and tricks to tame your photo collection. If you're looking for a phone with a fancier camera, knowing how a phone camera works will help you decide which camera phone is right for you.
