Pixel is a unit of length and area

5 days ago (nayuki.io)

I'd say it's better to call it a unit of counting.

If I have a bin of apples, and I say it's 5 apples wide, and 4 apples tall, then you'd say I have 20 apples, not 20 apples squared.

It's common to specify a length by a count of items passed along that length. E.g., a city block is a roughly square area of ground bounded by roads, yet if you're traveling in a city you might say "I walked 5 blocks." This is a linguistic shortcut, skipping implied information. If you're trying to talk about both senses in an unclear context, additional words are required to sufficiently convey the information; that's just how language works.

  • Exactly. Pixels are indivisible quanta, not units of any kind of distance. Saying pixel^2 makes as much sense as counting the number of atoms on the surface of a metal and calling it atoms^2.

  • That is exactly how it is and it makes the whole article completely pointless. Especially as the article in the second sentence correctly writes "1920 pixels wide".

  • Is it that, or is it a compound unit that has a defined width and height already? Something can be five football fields long by two football fields wide, for an area of ten football fields.

    • This example illustrates potential confusion around non-square pixels. 5 football fields long makes perfect sense, but I'm not sure if 2 football fields wide means "twice the width of a football field" or "width equaling twice the length of a football field". I would lean towards the latter in colloquial usage, which means that the area is definitely not the same as the area of 10 football fields

      2 replies →

    • No, it is a count. Pixels can have different sizes and shapes, just like apples. Technically football fields vary slightly too but not close to as much as apples or pixels.

      5 replies →

  • I think the point of the article is that you don't say "5 pixels wide x 4 pixels tall" but just "5 pixels x 4 pixels", though I would say that "5x4 pixels" is the most common and most correct terminology.

    And the article concludes with: "But it does highlight that the common terminology is imperfect and breaks the regularity that scientists come to expect when working with physical units in calculations". Which matches your conclusion.

    • > And the article concludes with: "But it does highlight that the common terminology is imperfect and breaks the regularity that scientists come to expect when working with physical units in calculations". Which matches your conclusion.

      But it's not true. Counts (like "number of pixels" or "mole of atoms") are dimensionless, which is a precise scientific concept that perfectly matches the common terminology.

  • > If I have a bin of apples, and I say it's 5 apples wide, and 4 apples tall

    ...then you have a terrible bin for apple storage and should consider investing in a basket ;)

What a perplexing article.

Isn't a pixel clearly specified as a picture element? Isn't the usage as a length unit just as colloquial as "It's five cars long", which is just a simplified way of saying "It is as long as the length of a car times five", where "car" and "length of car" are very clearly completely separate things?

> The other awkward approach is to insist that the pixel is a unit of length

Please don't. If you want a unit of length that works well with pixels, you can use Android's "dp" concept instead, which are "density independent pixels" (kinda a bad name if you think about it) and are indeed a unit of length, namely 1dp = 158.75 micro meter, so that you have 160 dp to the inch. Then you can say "It's 10dp by 5dp, so 50 square dp in area.".
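To make that concrete, here is the conversion as a quick sketch (the constants follow from Android's definition of 160 dp to the inch; the helper names are mine, not a real API):

```python
# dp <-> physical length, per the 160-dp-per-inch definition.
UM_PER_INCH = 25400
DP_PER_INCH = 160
UM_PER_DP = UM_PER_INCH / DP_PER_INCH  # = 158.75 micrometres

def dp_to_mm(dp: float) -> float:
    """Physical length of a span given in dp, in millimetres."""
    return dp * UM_PER_DP / 1000

# A 10dp x 5dp rectangle: 50 square dp, about 1.59 mm by 0.79 mm.
area_sq_dp = 10 * 5
```

Since dp is an actual length, areas in square dp multiply out with no dimensional surprises.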

  • Yeah, this isn't really that complicated. It's just colloquial usage, not rigorous dimensional analysis. Roughly no one is actually confused by either usage ("1920 by 1080" or "12 megapixels").

    It's nearly identical to North American usage of "block" (as in "city block"). Merriam Webster lists these two definitions (among many others):

    > 6 a (1): a usually rectangular space (as in a city) enclosed by streets and occupied by or intended for buildings

    > 6 a (2): the distance along one of the sides of such a block

  • Another colloquial saying to back this up is that "Oh, that house is five acres down the road" or, for a non-standard unit, "The store is three blocks away". We often use area measurements for length if it's convenient.

    The pixel is a unit of area - we just occasionally use units of area to measure length.

    • > Another colloquial saying to back this up is that "Oh, that house is five acres down the road" or, for a non-standard unit, "The store is three blocks away". We often use area measurements for length if it's convenient.

      I have never heard someone use the first instance, and I wouldn't understand what it meant. I mean, I could buy that it meant that there is a five-acre plot between that house and where we are now, but it wouldn't give me any useful idea of how far the house is other than "not too close." Perhaps you have in mind that, since the "width" of an acre is a furlong, a house 5 acres away is 5 furlongs away?

      1 reply →

See also:

A Pixel Is Not a Little Square (1995) [pdf] – http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf

  • This is written from a rather narrow perspective (of signal processing) and is clearly wrong in other contexts. For an image sensor designer, gate length, photosensitive area and pixel pitch are all real-valued measurements. That pixels are laid out on a grid simply reflects the ease of constructing electronic circuits this way. Alternative arrangements are possible, Sigma's depth pixels being one commercial example.

    • Ok, but that’s not what a digital image is. Images are designed to be invariant across camera capture and display hardware. The panel driver should interpret the DSP representation into an appropriate electronic pixel output.

      36 replies →

  • If you want a good example of what happens when you treat pixels like they're just little squares, disable font smoothing. Anti-aliasing, fonts that look good, and smooth animation are all dependent upon subpixel rendering.

    https://en.wikipedia.org/wiki/Subpixel_rendering

    Edit: For the record, I'm on Win 10 with a 1440p monitor and disabling font smoothing makes a very noticeable difference.

    People are acting like this is some issue that no longer exists, and you don't have to be concerned with subpixel rendering anymore. That's not true, and highlights a bias that's very prevalent here on HN. Just because I have a fancy retina display doesn't mean the average user does. If you pretend like subpixel rendering is no longer a concern, you can run into situations where fonts look great on your end, but an ugly jagged mess for your average user.

    And you can tell who the Apple users are because they believe all this went away years ago.

    • This might have been a good example fifteen years ago. These days with high-DPI displays you can't perceive a difference between font smoothing being turned on and off. On macOS for example font smoothing adds some faux bold to the fonts, and it's long been recommended to turn it off. See for example the influential article https://tonsky.me/blog/monitors/ which explains that font smoothing used to do subpixel antialiasing, but the whole feature was removed in 2018. It also explains that this checkbox doesn't even control regular grayscale antialiasing, and I'm guessing it's because downscaling a rendered @2x framebuffer down to the physical resolution inherently introduces antialiasing.

      4 replies →

    • You’re conflating different topics. LCD subpixel rendering and font smoothing are often implemented by treating the subpixels as little rectangles, which is the same mistake as treating pixels as squares.

      Anti-aliasing can be and is done on squares routinely. It’s called ‘greyscale antialiasing’ to differentiate it from LCD subpixel antialiasing, but the name is confusing since it works on, and is most often used on, colors.

      The problem Alvy-Ray is talking about is far more subtle. You can do anti-aliasing with little squares, but the result isn’t 100% correct and is not the best result possible no matter how many samples you take. What he’s really referring to is what signal processing people call a box filter, versus something better like a sinc or Gaussian or Mitchell filter.

      Regarding your edit, on a high DPI display there’s very little practical difference between LCD subpixel antialiasing and ‘greyscale’ (color) antialiasing. You don’t need LCD subpixels to get effective antialiasing, and you can get mostly effective antialiasing with square shaped pixels.

      4 replies →

    • I don't think that's true anymore. Modern high-resolution displays have pixels small enough that they don't really benefit from sub-pixel rendering, and logical pixels have become decoupled from physical pixels to the point of making sub-pixel rendering a lot more difficult.

      1 reply →

  • Agreed. The fact that a pixel is an infinitely small point sample - and not a square with area - is something that Monty explained in his demo too: https://youtu.be/cIQ9IXSUzuM?t=484

    • A pixel is not the sample(s) that its value came from. Given a pixel (of image data), you don't know what samples are behind it. It could have been point-sampled with some optical sensor far smaller than the pixel (but not infinitely small, obviously). Or it could have been sampled with a gaussian bell shaped filter a bit wider than the pixel.

      A 100x100 thumbnail that was reduced from a 1000x1000 image might have pixels which are derived from 100 samples of the original image (e.g. a simple average of a 10x10 pixel block). Or other possibilities.

      As an abstraction, a pixel definitely doesn't represent a point sample, let alone an infinitely small one. (There could be some special context in which it does but not as a generality.)

      3 replies →

    • Eh, calling it infinitely small is at least as misleading as calling it a square. While they are both mostly correct, neither Monty’s explanation nor Alvy Ray’s is all that good. Pixels are samples taken at a specific point, but pixel values do represent area one way or another. Often they are not squares, but on the other hand LCD pixels are pretty square-ish. Camera pixels are integrals over the sensor area, which captures an integral over a solid angle. Pixels don’t have a standard shape; it depends on what capture or display device we’re talking about, but no physical capture or display devices have infinitely small elements.

      5 replies →

Well the way I see them I don't think they are a unit at all.

And in the end, pixels are "physical things". Like ceramic tiles on a bathroom wall.

Your wall might be however many meters in length and you might need however many square meters of tile in order to cover it. But still, if you need 10 tiles high and 20 tiles wide, you need 200 tiles to cover it. No tension there.

Now you might argue that pixels in a scaled game don't correspond with physical objects on the screen any more. That's ok. A picture of the bathroom wall will look smaller than the wall itself. Or bigger, if you hold it next to your face. It's still 10x20=200 tiles.

The article starts out with an assertion right in the title and does not do enough to justify it. The title is just wrong. Saying pixels are like metres is like saying metres are like apples.

When you multiply 3 meter by 4 meter, you do not get 12 meters. You get 12 meter squared. Because "meter" is not a discrete object. It's a measurement.

When you have points A, B, C. And you create 3 new "copies" of those points (by geometric manipulation like translating or rotating vectors to those points), you now have 12 points: A, B, C, A1, B1, C1, A2, B2, C2, A3, B3, C3. You don't get "12 points squared". (What would that even mean?) Because points are discrete objects.

When you have 3 apples in a row and you add 3 more such rows, you get 4 rows of 3 apples each. You now have 12 apples. You don't have "12 apples squared". Because apples are discrete objects.

When you have 3 pixels in a row and you add 3 more such rows of pixels, you get 4 rows of 3 pixels each. You now have 12 pixels. You don't get "12 pixels squared". Because pixels are discrete objects.

Pixels are like points and apples. Pixels are not like metres.

  • > When you multiply 3 meter by 4 meter, you do not get 12 meters. You get 12 meter squared.

    "12 meter(s) squared" sounds like a square that is 12 meters on each side. On the other hand, "12 square meters" avoids this weirdness by sounding like 12 squares that are one meter on each side, which the area you're actually describing.

    • that's just a quirk of the language.

      If you use formal notation, 12 m^2 is very clear. But I have yet to see anyone write 12px^2

      1 reply →

> A Pixel Is Not A Little Square!

> This is an issue that strikes right at the root of correct image (sprite) computing and the ability to correctly integrate (converge) the discrete and the continuous. The little square model is simply incorrect. It harms. It gets in the way. If you find yourself thinking that a pixel is a little square, please read this paper.

> A pixel is a point sample. It exists only at a point. For a color picture, a pixel might actually contain three samples, one for each primary color contributing to the picture at the sampling point. We can still think of this as a point sample of a color. But we cannot think of a pixel as a square—or anything other than a point.

Alvy Ray Smith, 1995 http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf

  • A pixel is simply not a point sample. A camera does not take point sample snapshots, it integrates lightfall over little rectangular areas. A modern display does not reconstruct an image the way a DAC reconstructs sounds, they render little rectangles of light, generally with visible XY edges.

    The paper's claim applies at least somewhat sensibly to CRTs, but one mustn't imagine the voltage interpolation and shadow masking a CRT does corresponds meaningfully to how modern displays work... and even for CRTs it was never actually correct to claim that pixels were point samples.

    It is pretty reasonable in the modern day to say that an idealized pixel is a little square. A lot of graphics operates under this simplifying assumption, and it works better than most things in practice.

    • > A camera does not take point sample snapshots, it integrates lightfall over little rectangular areas.

      Integrates this information into what? :)

      > A modern display does not reconstruct an image the way a DAC reconstructs sounds

      Sure, but some software may apply resampling over the original signal for the purposes of upscaling, for example. "Pixels as samples" makes more sense in that context.

      > It is pretty reasonable in the modern day to say that an idealized pixel is a little square.

      I do agree with this actually. A "pixel" in popular terminology is a rectangular subdivision of an image, leading us right back to TFA. The term "pixel art" makes sense with this definition.

      Perhaps we need better names for these things. Is the "pixel" the name for the sample, or is it the name of the square-ish thing that you reconstruct from image data when you're ready to send to a display?

      8 replies →

    • A slightly tangential comment: integrating a continuous image on squares paving the image plane might be best viewed as applying a box filter to the continuous image, resulting in another continuous image, then sampling it point-wise at the center of each square.

      It turns out that when you view things that way, pixels as points continues to make sense.

    • The representation of pixels on the screen is not necessarily normative for the definition of the pixel. Indeed, since different display devices use different representations as you point out, it can't really be. You have to look at the source of the information. Is it a hit mask for a game? Then they are squares. Is it a heatmap of some analytical function? Then they are points. And so on.

  • The 'point' of Alvy's article is that pixels should be treated as point samples when manipulating them, not when displaying them.

    Obviously, when a pile of pixels is shown on a screen (or for that matter, collected from a camera's CCD, or blobbed by ink on a piece of paper), it will have some shape: The shape of the LCD matrix, the shape of the physical sensor, the shape of the ink blot. But those aren't pixels, they're the physical shapes of the pixels expressed on some physical medium.

    If you're going to manipulate pixels in a computer's memory (like by creating more of them, or fewer), then you'd do best by treating the pixels as sampling points - at this point, the operation is 100% sampling theory, not geometry.

    When you're done, and have an XY matrix of pixels again, you'll no doubt have done it so that you can give those pixels _shape_ by displaying them on a screen or sheet of paper or some other medium.

  • Every response in this thread rephrased as a reply to “an integer is a 32 bit binary twos complement”

    1. there exist other ways to represent an integer

    2. An old computer uses a different representation

    3. numbers are displayed in base 10 on my monitor

    4. when I type in numbers I don’t type binary

    5. twos complement is confusing and unintuitive

    6. it’s more natural to multiply by 10 when using base 10

    7. I’ve used 32 bits to represent other data before.

  • This is one of my favorite articles. Although I think you can define for yourself what your pixels are, for most it is a point sample.

This isn't just pixels, it's the normal way we use rectangular units in common speech:

* A small city might be ten blocks by eight blocks, and we could also say the whole city is eighty blocks.

* A room might be 13 tiles by 15 tiles, or 195 tiles total.

* On graph paper you can draw a rectangle that's three squares by five squares, or 15 squares total.

A pixel is a dot. The size and shape of the dot is implementation-dependent.

The dot may be physically small, or physically large, and it may even be non-square (I used to work for a camera company that had non-square pixels in one of its earlier DSLRs, and Bayer-format sensors can be thought of as “non-square”), so saying a pixel is a certain size, as a general measure across implementations, doesn’t really make sense.

In iOS and MacOS, we use “display units,” which can be pixels, or groups of pixels. The ratio usually changes, from device to device.

So, the author answers the question:

> That means the pixel is a dimensionless unit that is just another name for 1, kind of like how the radian is length divided by length so it also equals one, and the steradian is area divided by area which also equals one.

But then for some reason decides to ignore it. I don’t understand this article. Yes, pixels are dimensionless units used for counting, not measuring. Their shape and internal structure is irrelevant (even subpixel rendering doesn’t actually deal with fractions - it alters neighbors to produce the effect).

Pixel, used as a unit of horizontal or vertical resolution, typically implies the resolution of the other axis as well, at least up to common aspect ratios. We used to say 640x480 or 1280x1024 – now we might say 1080p or 2.5K but what we mean is 1920x1080 and 2560x1440, so "pixel" does appear to be a measure of area. Except of course it's not – it's a unit of a dimensionless quantity that measures the amount of something, like the mole. Still, a "quadratic count" is in some sense a quantity distinct from "linear count", just like angles and solid angles are distinct even though both are dimensionless quantities.

The issue is muddied by the fact that what people mostly care about is either the linear pixel count or pixel pitch, the distance between two neighboring pixels (or perhaps rather its reciprocal, pixels per unit length). Further confounding is that technically, resolution is a measure of angular separation, and to convert pixel pitch to resolution you need to know the viewing distance.
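That last conversion can be sketched with the usual visual-angle formula (a hypothetical helper; the 0.25 mm pitch and 600 mm viewing distance are illustrative values, not from the comment):

```python
import math

def pixels_per_degree(pitch_mm: float, viewing_distance_mm: float) -> float:
    """Pixels per degree of visual angle implied by a pixel pitch at a
    given viewing distance: one pixel subtends atan(pitch / distance)."""
    degrees_per_pixel = math.degrees(math.atan2(pitch_mm, viewing_distance_mm))
    return 1.0 / degrees_per_pixel

# A 0.25 mm pitch monitor viewed from 600 mm comes out to roughly 42 px/degree.
ppd = pixels_per_degree(0.25, 600)
```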

Digital camera manufacturers at some point started using megapixels (around the point that sensor resolutions rose above 1 MP), presumably because big numbers are better marketing. Then there's the fact that camera screen and electronic viewfinder resolutions are given in subpixels, presumably again for marketing reasons.

  • Digital photography then takes us on to subpixels, Bayer filters (https://en.wikipedia.org/wiki/Color_filter_array) and so on. You can also separate out the luminance and colour parts. Most image and video compression puts more emphasis on the luminance profile, treating the colour more approximately. The subpixels on a digital camera (or a display, for that matter) take advantage of this quirk of human vision.

Happens to all square shapes.

A chessboard is 8 tiles wide and 8 tiles long, so it consists of 64 tiles covering an area of, well, 64 tiles.

I think the author forgets that pixels inherently have both width and height; a single pixel is inherently a two-dimensional entity, whereas the meter is a purely one-dimensional concept. You don't usually talk about whether your meters are as wide as they are tall, because they don't have those two dimensions. You don't talk about how your centimeters are arranged within your meter either (whereas you can talk about how your subpixels are arranged, and even whether there are 3 or 4 of them).

So, I don't think it's entirely valid to talk about pixels as if they are pure, one-dimensional units.

They're _things_, and you can talk about how many things wide or tall something is, and you can talk about how many things something has. Very much the same way you can with bricks (which are mostly never square) or tiles (which are, yet you never talk about how many kilotiles are in your bathroom either, though you can easily talk about how many tiles wide or tall a wall is).

So, no, a pixel is not a unit in the mathematical sense; it's an item, in the physical sense.

There are also things like scanners, which may have only one row of pixels on the sensor. That row does not have an area of zero, and you don't need to specify that there's one pixel on the other axis, because it's an inherent property of pixels that they have both width and height, and thus area, in and of themselves.

A pixel is two dimensional, by definition. It is a unit of area. Even in the signal processing "sampling" definition of a pixel, it still has an areal density and is therefore still two-dimensional.

The problem in this article is it incorrectly assumes a pixel to be a length and then makes nonsensical statements. The correct way to interpret "1920 pixels wide" is "the same width as 1920 pixels arranged in a 1920 by 1 row".

In the same way that "square feet" means "feet^2" as "square" acts as a square operator on "feet", in "pixels wide" the word "wide" acts as a square root operator on the area and means "pixels^(-2)" (which doesn't otherwise have a name).

  • It is neither a unit of length nor area, it is just a count; a pixel - ignoring the CSS pixel - has no inherent length or area. To get from the number of pixels to a length or area, you need the pixel density. 1920 pixels divided by 300 pixels per inch gives you a length of 6.4 inches, and it all is dimensionally consistent. The same for 18 megapixels: with a density of 300 by 300 pixels per square inch you get an image area of 200 square inches. Here pixels per inch times pixels per inch becomes pixels per square inch, not square pixels per square inch.
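    The bookkeeping in that comment, written out as plain arithmetic (values taken from the comment itself):

```python
# Pixel counts are dimensionless; the density carries the dimensions.
width_px = 1920
density_ppi = 300                        # pixels per inch
width_in = width_px / density_ppi        # length: 6.4 inches

pixel_count = 18_000_000                 # 18 megapixels
density_pp_sq_in = density_ppi ** 2      # pixels per square inch
area_sq_in = pixel_count / density_pp_sq_in  # area: 200 square inches
```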

  • CSS got it right by making pixels a relative unit. Meters are absolute. You cannot express pixels in meters. Because they are relative units.

    If you have a high-resolution screen, a CSS pixel is typically 4 actual display pixels (2x2) instead of just 1. And if you change the zoom level, the number of display pixels might actually change in fractional ways. The unit only makes sense in relation to what's around it. If you render vector graphics or fonts, pixels are used as relative units. On a high-resolution screen it will actually use those extra display pixels.

    If you want to show something that's exactly 5cm on a laptop or phone screen, you need to know the dimensions of the screen and figure out how many pixels you need per cm to scale things correctly. CSS has some absolute units, but they typically only work as expected for print media.
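    A sketch of that calculation (the helper name and the 254 ppi / devicePixelRatio figures are illustrative assumptions, not a real API):

```python
MM_PER_INCH = 25.4

def css_px_for_length(length_cm: float, device_ppi: float,
                      device_pixel_ratio: float) -> float:
    """CSS pixels spanning a desired physical length on a known display."""
    device_px = length_cm * 10 / MM_PER_INCH * device_ppi
    return device_px / device_pixel_ratio

# 5 cm on a hypothetical 254 ppi screen with devicePixelRatio 2:
# 500 device pixels, i.e. 250 CSS pixels.
```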

  • > The correct way to interpret "1920 pixels wide" is "the same width as 1920 pixels arranged in a 1920 by 1 row".

    But to be contrarian, the digital camera world always markets how many megapixels a camera has. So in essence, there are situations where pixels are assumed to be an area, rather than a single row of X pixels wide.

    • The digital camera world also advertises the sensor size. So a 24MP APS-C camera has smaller pixels than a 24MP Full-frame camera, for example.

  • > in "pixels wide" the word "wide" acts as a square root operator on the area and means "pixels^(-2)"

    Did you mean "pixels^(1/2)"? I'm not sure what kind of units pixels^(-2) would be.

    • pixel^(-2) is "per squared pixel". Analogously, 1 pascal = 1 newton / 1 metre^2. (Pressure is force per squared length.)

  • Same as if you were building a sidewalk and you wanted to figure out its dimensions, you’d base it off the size of the pavers. Because half pavers are a pain and there are no half pixels.

  • > A pixel is two dimensional, by definition.

    A pixel is a point sample by definition.

    • An odd definition. A pixel is a physical object, a picture element in a display or sensor. The value of a pixel at a given time is a sample, but the sample isn't the pixel.

      1 reply →

> But it does highlight that the common terminology is imperfect and breaks the regularity that scientists come to expect when working with physical units in calculations

Scientists and engineers don't actually expect much; they make a lot of mistakes, are not very rigorous, and are not demanding towards each other. It is common for units to be wrong, context-defined, socially dependent, and even sometimes added together when the operator + hasn't been properly defined.

Hopefully most people get that the exact meaning of "pixel" depends on context?

It certainly doesn't make sense to mix different meanings in a mathematical sense.

E.g., when referring to a width in pixels, the unit is pixel widths. We shorten it and just say pixels because it's awkward and redundant to say something like "the screen has a width of 1280 pixel widths", and the meaning is clear to the great majority of readers.

A bathroom tile is also a unit of length and area. A wall can be so many tiles high by so many wide, and its area the product, also measured in tiles.

It is just word semantics revolving around a synecdoche.

When we say that an image is 1920 pixels wide, the precise meaning is that it is 1920 times the width of a pixel. Similarly 1024 pixels high means 1024 times the height of a pixel. The pixel is not a unit of length; its height or width are (and they are different when the aspect ratio is not 1:1!)

A syntax-abbreviating semantic device in human language where part of something refers to the whole or vice versa is called a synecdoche. Under synecdoche, "pixel" (the whole) can refer to "pixel width" (part or property of the whole).

Just like the synecdoche "New York beats Chicago 4:2" refers to basketball teams in its proper context, not literally the cities.

Pixels are not measurement units. They're samples of an image taken a certain distance apart. It's like eggs in a carton: it's perfectly legitimate to say that a carton is 6 eggs long and 3 eggs wide, and holds a total of 18 eggs, because eggs are counted, they're not a length measure except in the crudest sense.

A pixel is neither a unit of length nor area, it is like a byte, a unit of information.

Sometimes it is used as a length or area, omitting a conversion constant, but we do that all the time; the article gives mass vs. force as an example.

Also worth mentioning that pixels are not always square. For example, the once-popular 320x200 resolution has pixels taller than they are wide.

  • It's not a unit of information. How many bytes does a 320×240 image take? You don't know until you specify the pixel bit depth in bpp (bits per pixel).
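    As arithmetic (a hypothetical helper; the bit depths are illustrative):

```python
def image_bytes(width_px: int, height_px: int, bits_per_pixel: int) -> int:
    """Storage for an uncompressed image: the pixel count alone isn't
    enough, you also need the bit depth."""
    return width_px * height_px * bits_per_pixel // 8

# 320x240 at 24 bpp -> 230400 bytes; the same 320x240 at 8 bpp -> 76800.
```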

This kind of thing is common in english, though. "an aircraft carrier is 3.5 football fields long"

The critical distinction is the inclusion of a length dimension in the measurement: "1920 pixels wide", "3 mount everests tall", "3.5 football fields long", etc.

I’m surprised the author didn’t dig into the fact that not all pixels are square. Or that pixels are made of underlying RGB light emitters. And that those RGB emitters are often very non-square. And often not 1:1 RGBEmitter-to-Pixel (stupid pentile).

  • Or the fact that a 1 megapixel camera (counting each color-filtered sensing element as a pixel) generates less information than a 1 megapixel monitor (counting each RGB triad as a pixel) can display.

  • > "Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte."

    or

    > "I have made this longer than usual because I have not had time to make it shorter."

This article wastes readers' time by pretending to a command of the subject in a manner that is authoritative only in its uncertainty.

Pixel is an abbreviation for 'picture element' which describes a unit of electronic image representation. To understand it, consider picture elements in the following context...

(Insert X different ways of thinking about pictures and their elements.)

If there is a need for a jargon of mathematical "dimensionality" for any of these ways of thinking, please discuss it in such context.

Next up:

<i>A musical note is a unit of...</i>

For those who programmed 8-bit computers or worked with analog video, a pixel is also a unit of time. An image is a long line with some interruptions.

A pixel is a sample, or a collection of values of the red, green, and blue components of light captured at a particular location in a typically rectangular area. Pixels have no physical dimensions. A camera sensor has no pixels; it has photosites (four colour-sensitive elements per rectangular area).

Also a measurement of life. Back in the 320x200 game days, when playing something with a health bar, we used to joke that someone had one pixel of life left when near death.

Pixel is just contextual. When you are talking about one-dimensional things it's a unit of length. In all other cases it's a unit of area.

Or perhaps it's multivariate and there's no point in trying to squish all the nuance into a single solid definition.

The pixel ain't no problem.

A "megapixel" is simply defined as 1024 pixels squared ish.

There is no kilopixel. Or exapixel.

No-one doesn't understand this?

Should be pixel as area and pixel-length as 1-dimensional unit.

So an image could be 1 mega pixel, or 1000 times 1000 pixel-lengths.

This is a fun post by Nayuki - I'd never given this much thought, but this takes the premise and runs with it