Comment by injb

3 years ago

I don't understand: in what sense is this "1 bit"?

Generally, "n bit" describes the color depth of an image by the number of bits used to store the color information of a pixel.

Some common formats:

  1 bit .... b/w  (foreground or background color set per pixel)
  4 bit .... 16 colors (indexed color palette or 16 shades of grey in 1 channel)
  8 bit .... 256 colors (indexed color palette or 1-channel greyscale image)
  12 bit ... 16 x 16 x 16 colors, 3 color channels at 4 bits for 4,096 colors total
  24 bit ... 256 x 256 x 256 colors, 3 color channels at 8 bits for 16,777,216 colors total
  32 bit ... as above, but including an extra alpha channel (mask) for 256 levels of transparency
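
The relationship in the table above is just powers of two: n bits per pixel gives 2^n representable colors. A minimal sketch (the helper name is my own, not from any library):

```python
def color_count(bits_per_pixel: int) -> int:
    """Number of distinct colors representable with the given bit depth."""
    return 2 ** bits_per_pixel

# Check a few of the depths from the table:
for bits in (1, 4, 8, 12, 24):
    print(f"{bits:2d} bit -> {color_count(bits):,} colors")
# 1 bit -> 2, 4 bit -> 16, 8 bit -> 256,
# 12 bit -> 4,096, 24 bit -> 16,777,216
```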

There are also high-precision color modes (e.g. in Photoshop) using 16 or even 32 bits per color channel. (So "8-bit", "16-bit" or "32-bit" may also describe the depth of each individual color channel in imaging software.)

"8-bit" is also used for graphics on 8-bit computers, most often a palette of 8 or 16 indexed colors selected through some sort of sub-palette (e.g., 2-bit = 4 colors chosen out of 8 or 16), in rare cases even 128 colors (as on the Atari 2600). Here, "8-bit" refers to the platform, not to image depth or any specific implementation.

If you transmitted it one pixel at a time in a left-to-right, top-to-bottom scan (or boustrophedon, alternating left-right then right-left, if you prefer), then the pixel stream would be a single 1-bit value toggling on or off.