Modern smartphones can already compete with full-fledged cameras in image quality (for example, we recently reviewed movies shot on smartphones). Even if you have no plans for full-time film production and are just getting into mobile photography, it helps to understand how it all works.

Modern smartphones are routinely used for shooting movies. Source: pixabay.com

This guide will help you navigate the world of mobile photography and find a smartphone with a good camera.

The main characteristics of camera phones

Sensor size

A camera phone is essentially a camera that can make calls, so the sensor is its key component: image quality depends on it first and foremost. The main parameter that determines sensor quality is its size. A larger sensor can accommodate larger pixels (more on pixel size a little later). Sensor size is measured in fractions of an inch, denoted with a prime symbol, for example 1/3″.

Now for the pixels, their number and size. Consider this example: the camera of the Samsung Galaxy S21 Ultra (to be clear, the phone's camera is good) has 108 megapixels, while the high-end Sony Alpha A7R III has 42.4. Does that mean the smartphone easily beats the Japanese camera? Obviously not: a smartphone sensor is dozens of times smaller than a camera's, and it is sensor size that image quality chiefly depends on.
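To see how stark the size gap is, here is a quick back-of-the-envelope calculation of pixel pitch (sensor width divided by horizontal pixel count). The sensor widths and resolutions below are approximate figures for the two devices mentioned, not official specifications, and the function name is my own:

```python
def pixel_pitch_um(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Approximate pixel pitch in micrometres: sensor width / horizontal resolution."""
    return sensor_width_mm * 1000 / horizontal_pixels

# Sony Alpha A7R III: full-frame sensor, roughly 35.9 mm wide, ~7952 px across
print(round(pixel_pitch_um(35.9, 7952), 2))   # 4.51

# Samsung Galaxy S21 Ultra main sensor: roughly 9.6 mm wide, ~12000 px across
print(round(pixel_pitch_um(9.6, 12000), 2))   # 0.8
```

Even with two and a half times as many megapixels, each of the smartphone's pixels comes out more than five times smaller across.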

Megapixels are, physically, the photosensitive elements on the sensor. For clarity, imagine them as containers being filled with light: the deeper the container, the more light fits in it. It follows that if there are many megapixels on a tiny sensor, each individual pixel will be small, and that affects the quality of the final photos: zoom in and you will notice a lot of noise.

And what if two sensors are the same size, but one has fewer, larger pixels while the other has many small ones? Let's figure it out.

Choose a camera phone based on the size of the pixels on the sensor. Source: deep-review.com

Number of megapixels

Suppose that, on average, 4 photons of light land on each pixel of the sensor. We can estimate the noise using the Poisson distribution, according to which shot noise equals the square root of the total photon count. In our case the noise is 2 photons, meaning the brightness of points in the photo will differ by as much as 50% on average. If we quadruple the pixel size (while reducing the pixel count by the same factor), each pixel can now accommodate 16 photons. Applying the Poisson distribution again, the average noise is now 4 photons, or 25%, so the photo has become twice as clean. This is why large pixels are preferable to small ones.
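The arithmetic above can be checked in a couple of lines. This is just the 1/√N shot-noise relation from the Poisson distribution; the function name is my own:

```python
import math

def shot_noise(photons: float) -> tuple[float, float]:
    """Poisson shot noise: absolute noise is sqrt(N), relative noise is sqrt(N)/N."""
    noise = math.sqrt(photons)
    return noise, noise / photons

print(shot_noise(4))    # (2.0, 0.5)  -> 2 photons of noise, 50% relative
print(shot_noise(16))   # (4.0, 0.25) -> 4 photons of noise, 25% relative
```

Quadrupling the light collected per pixel halves the relative noise, exactly as the worked example says.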

Another problem with small pixels is crosstalk, when light meant for one pixel spills into a neighboring one. Manufacturers fight this by choosing more reflective materials for the partitions between pixels.

Modern smartphones use pixel binning. On such sensors, pixels are grouped in fours, and during shooting one pair collects light for longer than the other. If all pixels collected light for the same amount of time, bright spots in the scene would risk overflowing them, and those pixels would appear as white dots in the final image. With binning, this risk disappears, because the pair that collected less light does not have time to overflow. By combining the information from the two pairs of pixels, we get an image with neither crushed black nor blown-out white dots, that is, a higher dynamic range.
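The highlight-recovery idea behind this kind of binning can be sketched as follows. This is a deliberately simplified model (the saturation value, exposure ratio, and function name are all invented for illustration), not any manufacturer's actual pipeline:

```python
SATURATION = 255  # toy full-well value at which a pixel clips to white

def combine_exposures(long_px: int, short_px: int, ratio: int = 4) -> int:
    """Merge a long- and a short-exposure reading of the same pixel group.
    If the long exposure clipped, recover the highlight from the short one."""
    if long_px >= SATURATION:
        return short_px * ratio  # scale the short exposure up to match
    return long_px               # long exposure is valid: better signal-to-noise

print(combine_exposures(120, 30))  # 120: long exposure kept
print(combine_exposures(255, 90))  # 360: clipped highlight recovered
```

The merged value can exceed the single-exposure maximum, which is precisely the dynamic-range gain described above.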

The main advice here is simple: don't chase megapixel counts. Pixel size matters more.

For example, the acclaimed Google Pixel 6 camera phone has a wide-angle camera with a large sensor and a pixel size of 1.2 microns, while the base iPhone 13 has 1.7-micron pixels on its wide-angle camera. Comparing the shots, the Pixel is, as you would expect, somewhat inferior, but not fatally so. Google's algorithms sometimes brighten shadows and smooth out rough textures, making the picture look flatter.


Thanks to an external lens, you can achieve a stronger zoom than the built-in one allows. Source: pricevdom.ru

Lens

Image quality does not depend on the sensor alone. Camera phones in 2022 are no different from their big camera brethren in this respect: smartphone photos are strongly influenced by lens quality.

If a bad lens sits in front of a good sensor, then no matter how many megapixels there are or how large they are, the pictures will come out fuzzy and full of chromatic aberrations.

Most modern smartphones have several cameras that differ precisely in their lenses: a standard module that is used by default, an ultra-wide-angle lens, and a telephoto lens. But the ultra-wide-angle lens is most likely not as wide as on full-frame cameras (smartphone sensors are small, which means a large crop factor), and telephoto lenses can suffer from distortion.

You can work around this with external lenses. A selection of lenses for smartphones can be viewed here.


Aperture

On average, the aperture of smartphone lenses ranges from f/1.7 to f/2.2. The f-number determines how wide the lens can open, that is, how much light it can pass to the sensor. Different apertures are used even within the same iPhone 13 line (more on this in our analysis). For comparison, here is how the f-numbers differ across price categories:

– Budget Samsung Galaxy A12 with f/2.0;

– Mid-range Samsung Galaxy A52 with f/1.8;

– Flagship Huawei P40 Pro with f/1.28.
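Since the amount of light a lens passes scales with the inverse square of the f-number, these lenses are easy to compare. A small sketch (the helper name is my own):

```python
def light_ratio(f_slow: float, f_fast: float) -> float:
    """How many times more light the faster lens passes: (N_slow / N_fast) ** 2."""
    return (f_slow / f_fast) ** 2

# Budget Galaxy A12 (f/2.0) vs flagship Huawei P40 Pro (f/1.28)
print(round(light_ratio(2.0, 1.28), 2))  # 2.44
```

So the flagship lens gathers roughly two and a half times more light than the budget one, everything else being equal.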

The aperture also determines whether you get bokeh. But since sensor size affects bokeh too, on smartphones the background will only blur when shooting small objects at macro distances.

To compensate, smartphones implement portrait modes that create bokeh programmatically. This approach has its own problems: the algorithms sometimes blur hair along with the background, or, say, the temples of glasses. They improve from model to model; for example, they work better on the iPhone 13 than on the iPhone 11.

This is how portrait mode artifacts look on different iPhone models: 13 Pro on the left, 11 on the right. Source: iphones.ru

Viewing angle and zoom

Some smartphone models have cameras that provide a wide angle of view. Usually this is an additional camera alongside the main one (more on this below).

As for zoom, most smartphones implement it digitally: the picture is simply cropped and upscaled. This only works well with a really good sensor. True optical zoom, which lets you magnify a subject without losing quality, is rare and usually tops out at two to three times magnification. External lenses can help here as well.
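Digital zoom really is just a center crop followed by upscaling. A minimal sketch on a toy 4×4 "image" of brightness values (the upscaling step is omitted):

```python
def digital_zoom(image: list[list[int]], factor: int) -> list[list[int]]:
    """Return the central 1/factor-sized crop; a real phone then upscales it."""
    h, w = len(image), len(image[0])
    ch, cw = h // factor, w // factor          # crop dimensions
    top, left = (h - ch) // 2, (w - cw) // 2   # centre the crop window
    return [row[left:left + cw] for row in image[top:top + ch]]

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 gradient
print(digital_zoom(img, 2))  # [[5, 6], [9, 10]]: the central 2x2 block
```

Because only a quarter of the original pixels survive a 2× crop, a dense, high-quality sensor is exactly what keeps the result usable.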


Autofocus

Different smartphones may use different autofocus technologies:

- The most common option is contrast-detection autofocus, found in most smartphones. It works as follows: the camera looks for high-contrast zones, then moves the lens elements, comparing the contrast at different positions. This focusing is slow, works best on static subjects, and in poor lighting, which means poor contrast, it may not work at all.

- Laser autofocus relies on an emitter located next to the camera. The emitter fires a beam that reflects off objects in the frame, and the smartphone calculates the distance to them. This kind of focusing only works at close range; for distant subjects the camera falls back to contrast or phase detection.

- Phase-detection autofocus relies on sensors that analyze a doubled image of the subject, adjusting focus until the two copies match. It is faster than contrast detection and, in daylight, can lock onto fast-moving subjects, but it performs poorly at night.
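The contrast-detection search from the first bullet can be sketched as a sweep over lens positions that keeps the position where measured contrast peaks. The sharpness curve here is made up for illustration:

```python
def contrast_autofocus(sharpness_at, positions) -> int:
    """Contrast AF in miniature: try each lens position, keep the sharpest."""
    return max(positions, key=sharpness_at)

# Hypothetical contrast measurements at five lens positions, peaking at 3
curve = {0: 0.1, 1: 0.4, 2: 0.7, 3: 0.9, 4: 0.6}
print(contrast_autofocus(curve.get, range(5)))  # 3
```

The sweep explains why contrast AF is slow (many positions to try) and why it fails in the dark: with no contrast, the curve is flat and there is no peak to find.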

Different manufacturers have their own implementations of phase-detection autofocus: Apple calls its version Focus Pixels, while Samsung calls its Dual Pixel. What distinguishes these implementations is that special phase sensors are built right into the pixels on the sensor; in Dual Pixel they are built into literally every pixel.

Dual Pixel provides much faster focusing and can hold focus on fast-moving subjects more accurately. This matters when you need to pull your smartphone out of your pocket and capture something quickly. Source: mob-mobile.ru


Stabilization

Smartphones have three main types of stabilization:

– digital stabilization is used in most smartphones when shooting video. Since video recording uses only the central part of the sensor, the rest of it serves as a margin: shake is compensated by shifting the readout window one way or another within that margin.

– optical stabilization works via a special mechanism that shifts the lens elements in the direction opposite to the shake, keeping the image sharp.

– hybrid stabilization is implemented, for example, in the Google Pixel 4. It combines optical and digital stabilization, controlled by algorithms that analyze what is happening in the frame.

Video filming

In addition to sensor and lens quality, video shooting on a smartphone depends on the processor. Video has three main parameters:

  • resolution;
  • frame rate (determines smoothness);
  • bitrate (determines image quality).
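Bitrate also translates directly into file size, which is worth keeping in mind before a long shoot. A quick estimate (the 50 Mbit/s figure is just a plausible 4K bitrate for illustration, not the spec of any particular phone):

```python
def video_size_mb(bitrate_mbps: float, seconds: float) -> float:
    """Approximate file size in megabytes: Mbit/s x seconds / 8 bits per byte."""
    return bitrate_mbps * seconds / 8

print(video_size_mb(50, 60))       # 375.0 MB for one minute
print(video_size_mb(50, 15 * 60))  # 5625.0 MB for a 15-minute clip
```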

To shoot good video with all three of these parameters high, you need a powerful processor. Also, if the device is prone to overheating, it may shut recording down after about 15 minutes of video, so you simply cannot shoot longer.

This problem is solved, for example, in the OnePlus 9 Pro, which is equipped with an additional cooling system.

Number of cameras

As noted above, different cameras can have different viewing angles, letting you shoot with a zoom on one and wide-angle on another.

Having multiple cameras can also improve photo quality. For example, a smartphone can take a series of shots across them at different shutter speeds, and algorithms then merge the photos, much like the exposure-combining trick in pixel binning described above.

This photo was taken on an iPhone 11 using night mode. Source: ilounge.ua


Flash

Smartphone flashes only illuminate a small subject well, and only at short distances; they also shine straight ahead. So, strictly speaking, a smartphone flash is more useful as a flashlight.

Modern smartphones use LED flashes, either with a single diode or a paired one, where the two diodes have different color temperatures to improve color reproduction and reduce the likelihood of overexposure.

Why physical characteristics are not always the most important

Most of the shortcomings of mobile photography that are purely physical in nature, such as the small sensor, are solved by algorithms. The processing power of modern smartphones allows neural networks to reconstruct and enhance photos.

Thanks to this, a smartphone can cope with night shooting where even professional cameras can struggle. Neural network algorithms in smartphones are also used for other tasks: zoom, detail enhancement, noise reduction.

The smartphone starts capturing frames as soon as you launch the camera app. And to achieve bokeh when photographing a person (which, as we wrote above, normally requires a large sensor and a fast lens), the neural networks in the smartphone perform complex calculations.

First, they segment the image, that is, determine which objects it contains; this is needed to separate the subject from the background. Then they build a depth map, and only then blur the background.
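The last step of that pipeline can be modelled in one line: keep the pixels the segmentation marked as subject, blur the rest. A toy one-dimensional sketch with an invented mask and blur function:

```python
def portrait_blur(pixels, mask, blur):
    """Toy portrait mode: subject pixels (mask == 1) stay sharp,
    background pixels are replaced by a blurred value."""
    return [p if m == 1 else blur(p) for p, m in zip(pixels, mask)]

row = [10, 200, 210, 30]           # one row of brightness values
subject_mask = [0, 1, 1, 0]        # what a segmentation network might output
print(portrait_blur(row, subject_mask, blur=lambda p: p // 2))  # [5, 200, 210, 15]
```

The hard part, of course, is producing a good mask and depth map; the artifacts mentioned earlier (blurred hair, glasses temples) are exactly mask errors at object boundaries.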


The phone has become a full-fledged tool in a photographer's arsenal. The duo of powerful hardware and smart algorithms lets you take great pictures; you just need to be mindful of their limitations and make an informed choice.