Understanding Color: A Comprehensive Guide for Developers
Introduction
- The talk begins with a question about understanding color, highlighting the audience's curiosity.
- The speaker reassures that there will be no math, only physics, to explain color.
What is Color?
- Color is defined as a visual perception described by hue, brightness, and colorfulness (saturation).
- It is emphasized that color is a perception created by our brains, not a physical entity.
The Science of Light
- Light consists of photons and exhibits wave-particle duality.
- The electromagnetic spectrum is introduced, focusing on the visible spectrum (400-700 nanometers).
- Human eyes contain three types of cones (short, medium, long) that detect different wavelengths of light.
Perception of Color
- Color perception involves the interaction of light with objects, where objects reflect certain wavelengths.
- The concept of spectral power distribution is explained, showing how different light sources affect color perception.
- The speaker discusses the famous dress phenomenon as an example of varying color perception.
Chromaticity Diagram
- The chromaticity diagram represents all perceivable colors, with the outer edge showing pure spectral colors.
- Imaginary colors, which cannot be seen, are also mentioned.
Color Models and Spaces
- Color is defined as a tuple of numbers within a color model associated with a color space.
- Common color models like RGB and CMYK are introduced, with a focus on RGB for application development.
- The importance of color spaces is discussed, highlighting how different devices have varying color capabilities.
Color Management in Android
- The talk emphasizes the need for color management to ensure consistent color representation across devices.
- Android O introduces color management, allowing developers to associate colors with their intended color spaces.
- The speaker explains how to handle color spaces in design applications and the importance of using sRGB as a safe bet.
Practical Applications and APIs
- The new color APIs in Android O are introduced, allowing for better color manipulation and management.
- The speaker discusses how to work with bitmaps and color spaces, including loading and rendering images correctly.
- The concept of wide color gamut rendering is introduced, allowing the use of wider color spaces on compatible devices.
Conclusion
- The speaker reassures developers not to panic about color management complexities, encouraging them to do their best with calibrated displays and sRGB.
- Additional resources for learning about color management and transfer functions are provided.
- The talk concludes with an invitation for further discussion at the Android sandbox.
Good morning everyone, and welcome to this talk about color. So, how many of you think you understand color? Raise your hand. Nobody. Okay, I guess that's why you're here. I gave a similar talk last year at 360|AnDev, and there was a lot of math and a lot of equations, and a lot of worried developers asked me if there's going to be math in this presentation. There's going to be no math, no equations. Instead, there's going to be physics. So let's talk about color.

The first question we have to ask ourselves is: what is color? It sounds like a deep question. The answer is not so deep. Here's a definition that applies to most of us as human beings — not as developers, just as regular people: it's a visual perception that can be described by hue, brightness, and colorfulness. Sometimes colorfulness is also called saturation. If you've ever used the HSL or HSB colors in the Android APIs, you might be familiar with that kind of definition. What's really important to understand is that color is just a perception of our brain. It's not a real thing, and we're going to see why that is.

So obviously we see colors with our eyes — and if any of you sees color through some other means, I would love to talk to you. To understand how we perceive color, we first have to go back, probably to high school or college, and try to understand what light is made of. I'm sure most of you know that light is made of photons, but there's a duality to light: it can be a wave or a particle. We're going to look at the wave nature of light first. Light is an electromagnetic wave, and our eyes are just receptors for those electromagnetic waves. So here's the electromagnetic spectrum.
It's not to scale. From the shortest wavelengths on the left to the longest wavelengths on the right, we have, in order: the gamma rays, the X-rays, the ultraviolet, the visible spectrum — that tiny little bit in the middle full of colors — then the infrared, the microwaves, and then the radio waves. The part that interests us is that tiny bit in the middle. That's what we call the visible spectrum. It goes from about 400 nanometers to 700 nanometers, and these are the only wavelengths that any of us can see.

Why does this matter? Our eyes, like I said, are receptors, and they're actually made of millions of small receptors called cones. Almost everybody has three types of cones that can detect different parts of that spectrum. You can see the spectrum here at the bottom; those are all the wavelengths of color that we can see. This diagram shows, in three different colors, the sensitivity of each type of cone in our eyes. They're called short, medium, and long. Sometimes they're called blue, green, and red, but it's not technically accurate to call them that, so instead we're going to call them short, medium, and long. They're named that way because the short ones help us see in the blue wavelengths — ultraviolets, violets, and blues. Then we have the medium ones, which help us see the greenish colors, and the long ones, which help us see green, orange, yellow, and a little bit of red. You can see there's a lot of overlap between the medium and the long receptors. So now, what is light?
Light is a distribution of several wavelengths. This is what we call the spectral power distribution of a light bulb — the kind of light bulb you can find in any house. It's an orangish light bulb, and you can see the amount of energy it outputs in different parts of the visible spectrum. This type of light bulb outputs most of its energy in the red and orange part of the spectrum, and that's why we perceive it that way.

But what happens when the light from that light bulb hits our eyes? We saw that we have those receptors, and that they have different sensitivities to different parts of the spectrum. We just multiply the distribution of the incoming light with the sensitivity of our different cones, and the result is what we perceive. When we multiply both, we get this: when you look at one of those light bulbs, you see almost nothing in the blues, a little bit of green, and more orange and red, and our brain interprets that as an orange color. The way we actually interpret the output of the cones is a little more complicated, and we don't have time to go into the details here; you can look up how it works on Wikipedia if you're interested.

So here's what happens, exactly, when we look at a light source: wavelengths are emitted by the light source, they hit our eyes, we multiply by the cone sensitivities, and that's what we perceive. But most of the time, you're not looking directly at the light.
You're looking at the different objects that surround us — so how do those objects get their color? Every object also has what's called a spectral power distribution: a description of the wavelengths that the object can reflect. We call that the reflectance. Some of the wavelengths are absorbed — for instance, a black object absorbs almost everything that's incoming, and a white object reflects pretty much all the wavelengths. So we also multiply to perceive the color of an object. What happens is that light comes in, it bounces off an object, and some of it is reflected. We multiply the two distributions, then the result hits our eyes, and we get the final perceived color.

What's very interesting here is that, because it's a multiplication, any combination of light and reflectance can yield the same perceived color in our brain. A very simple example: if you have an orange object lit by white light, you're going to see it as orange. But if you have a white object lit by an orange light, you're also going to see it as orange. Our brain does a lot of post-processing to help us figure out what is the color of the light and what is the color of the object. A few years ago, there was this famous example of the dress: some people were seeing it in black and blue, some people were seeing it in white and gold. That's exactly what was happening — some of us were interpreting the result differently. And nobody was wrong; without the context, you can't really know for sure.
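To make that multiplication concrete, here is a minimal sketch of the pipeline just described, with entirely made-up sample data — the spectral distributions, reflectance, and cone sensitivities below are illustrative numbers, not real measurements:

```java
// Perceived cone response = light SPD x object reflectance x cone sensitivity,
// summed across wavelengths. Five hypothetical samples from 400nm to 700nm.
public class ConeResponse {
    public static void main(String[] args) {
        double[] lightSpd    = {0.1, 0.2, 0.4, 0.8, 1.0}; // orangish bulb
        double[] reflectance = {0.9, 0.8, 0.6, 0.5, 0.4}; // some object
        double[] coneShort   = {1.0, 0.4, 0.05, 0.0, 0.0};
        double[] coneMedium  = {0.1, 0.7, 1.0, 0.4, 0.05};
        double[] coneLong    = {0.0, 0.3, 0.9, 1.0, 0.5};

        System.out.printf("S=%.2f M=%.2f L=%.2f%n",
                response(lightSpd, reflectance, coneShort),
                response(lightSpd, reflectance, coneMedium),
                response(lightSpd, reflectance, coneLong));
    }

    // Sum over all wavelengths of light * reflectance * cone sensitivity.
    static double response(double[] light, double[] object, double[] cone) {
        double sum = 0;
        for (int i = 0; i < light.length; i++) {
            sum += light[i] * object[i] * cone[i];
        }
        return sum;
    }
}
```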
So yes, we can swap the distributions of the light source and the object and still see the same result. That leads us to something called the chromaticity diagram. It was standardized in 1931 by the Commission internationale de l'éclairage, the CIE — it's a French commission, apparently — and it represents all the colors that we can perceive as human beings. On that horseshoe shape, the outside edge is called the spectral locus. It represents all the pure spectral colors: the spectrum of colors we saw in the previous diagrams actually bends all the way around that edge, and everything in between — all the colors we can actually see — is a mix of those different wavelengths.

So it has this weird shape; it's not a rectangle or anything. And there are a lot of colors that live outside of that spectrum. We call them the imaginary colors, because we cannot see them. No matter how hard you try, you won't be able to see those colors. There are some optical illusions you can find online that will kind of help you find them. It's actually really weird — not everybody can see or perceive them. There's one, for instance, that shows a blue rectangle to one of your eyes and a yellow rectangle to the other, and the way I see it is this really weird color that you can't quite describe, that keeps shifting between blue and yellow but never settles on either.

The visible spectrum is actually a little more complicated than that. What you're seeing here is a 2D slice. There's a z-axis coming towards you, and that is the brightness. The large footprint you see at the bottom is the dark colors. That's because our eyes are better at seeing dark colors than they are at seeing light colors.
So we've seen what colors are for us as human beings, but I think everybody in the audience is a developer. The real question is: what is color for us as developers? What does it mean for your application? Color really is an encoding scheme for brain sensations, and the formal definition is that it's a tuple of numbers — a list of numbers — defined within a color model and associated with what we call a color space. We're going to look at some examples to make that clearer.

Here are some of the color models you might be familiar with. RGB is the obvious one; I'm sure everybody's used it. CMYK: if you have ever printed a picture or a book, you might have dealt with that one too. There's another one called Lab, there's XYZ, and there are many others. What's interesting is that the color model defines how many numbers we need to define a color. You're used to RGB, which has three values, but CMYK requires four values, for instance. The one we're interested in today is the RGB color model. It is a triplet of values, hence the name. I'm sure most of you, or all of you, are familiar with the hexadecimal notation. There are many ways of representing that tuple of numbers, and this is one of them. It's pretty popular, especially on the web, and you've probably used it a lot on Android when passing a color directly to one of our APIs. So this is a pinkish color.

This is the same color represented as a triplet of 8-bit unsigned integers. You're most likely familiar with this too. It's something we use a lot on Android — when you set the color on a Paint, or when you use Color.red() to extract the red component of a color. And this is another notation, actually the one I prefer. It uses floats, so the values are between zero and one. It's interesting because it's more versatile: you can use it to represent HDR colors, and we'll see that Android O actually uses the float notation to allow negative colors and colors that go beyond one. So the big question is: once we have those numbers, what colors do they actually represent?
I told you this is a pink, but you've seen that spectrum of colors. There are many, many pinks — an infinity of pinks, actually. So which one of those pinks does this represent? To reproduce colors, all of our displays use additive light: red, green, and blue. Our TVs, our phones, our computers, our old CRT monitors — they just mix those different lights. So the numbers you just saw, the RGB triplets — it might sound obvious, but they are an intensity for each one of those lights.

Let's say we pick three lights: a green light, a red light, and a blue light. They're not perfect spectral lights, just arbitrary lights found somewhere in the visible spectrum, and together they form a triangle. When you have an RGB color in your application, you can only represent one of the colors within that triangle. You cannot represent colors from the entire visible spectrum. That triangle is what we call a color space, and there's an infinity of them: depending on the three lights you choose, you can create any of an infinity of color spaces.

Here, for instance, is one of the widest color spaces we could create. The problem is that with a triangle — with just three lights — we cannot encompass the whole visible spectrum; we have to choose a smaller slice. This one in particular is called the Adobe Wide Gamut RGB color space. There's no device that I know of that can capture or recreate this color space. And the problem is that if we wanted to create an RGB color space containing all the visible colors, we would have to create lights that live in the imaginary space — lights we cannot perceive, because, as we've seen, our eyes cannot perceive anything outside of that horseshoe shape.
But color spaces are a little more complicated than that. Color spaces are actually defined by three components. The first are called the primaries — there are three of them. Then we need a white point, and then we need something called transfer functions.

This might look like a complicated diagram, but what I did is put the visible spectrum on the left and overlay three triangles that represent three common color spaces used for still images. The smallest one, the blue triangle in the middle, is the one we call sRGB. I'm sure you've heard that term before. sRGB stands for standard RGB. It was designed in the '90s, and it roughly matches the reproduction capabilities of the CRT monitors of back then. To this day it's still used everywhere; it's pretty much the universal color space, the only color space you can count on.

There are other color spaces. There's Adobe RGB, for instance — the orange triangle. Because it's bigger than the sRGB color space, we say that it's a wide color space, or that it has a wide gamut. (The triangle, defined by its three vertices, is what we call the gamut.) And then there's something called ProPhoto RGB, which you can see extends beyond the visible spectrum. This is not a color space we use to actually represent colors on screen; it's what we call a working color space. For instance, when I take a picture, my camera is set to Adobe RGB, so it captures everything in the color space you see on screen. Then, when I import my photo into Adobe Lightroom, Lightroom internally works in ProPhoto RGB. It won't be able to recreate on screen all the colors it's working with, but the idea is to have as much precision as possible. You can ignore that kind of color space for your needs in Android applications.
So, I said a color space has three primaries. The primaries really are the coordinates of each of the three vertices of the triangle in that chromaticity diagram, in the visible spectrum. When you say, for instance, that you want a color where red equals 1, green equals 0, blue equals 0, the primaries identify exactly what red we're talking about. If you look at the screen here: sRGB and Adobe RGB, when you say red equals 1, have the exact same red, but ProPhoto RGB has a different red. So those are two different reds for us — though not necessarily for computers; we'll take a look at that. Then we also need a white point, and the white point is the same idea: it gives us the coordinates of white in that color space. We'll get back to that.
I also mentioned transfer functions. Transfer functions are a little bit complicated — that's where a lot of the math comes into play. You've probably heard about them under the name gamma: if you've heard of gamma correction, that's actually transfer functions. I gave a talk last year about transfer functions, and I don't have time to cover them today, but I'll show you a link at the end of the talk. If you're interested, you should definitely go watch it — there's a lot to learn about transfer functions that can impact your applications. So, I've been talking about color spaces, but why do we care so much about them?
color spaces? So, the problem is that every device out there has a different color space. So, for instance, you can
have a phone that has an LCD screen and it's going to be close to sRGB. You have a phone that has a noled screen and it's
going to be closer to P3 to a color space called P3 or to the Adobe RGB color space that we just saw. And same
thing for your laptop, for your computer, for your TV. And things get even worse. Uh because even if you have
two phones, they're the same model. That's the same manufacturer. They're both supposed to be, let's say, P3.
There are variations in the manufacturing process. So, they won't be the exact same P3. So, the colors won't
be exactly the same on those two devices. I'll show you an example of what happens. So, let's imagine that we
Let's imagine we have content that we created for sRGB — it's in the sRGB color space; we designed it at home on a computer that shows sRGB colors. That's the white triangle you see. Then, if we take that content as is, without doing anything to it, and show it on a display that's Adobe RGB, what we're doing is taking those RGB triplets — the values we had — and reinterpreting them in a different color space. Suddenly, the green you had in sRGB, that green-equals-1, is going to be a different green. It's going to look completely different to your user on an Adobe RGB screen.

Here are concrete examples. This is a photo I took. I shot it in Adobe RGB, processed it in ProPhoto RGB, and converted it properly, on my calibrated monitor, to sRGB. This is what it should look like — actually, it's what it should look like on my laptop; the screens in this room have a color space too, I have no idea what it is, but it's definitely not sRGB, so what you see here is really wrong. What matters is the difference between the next photos. Let's say this is proper sRGB; this is how I wanted you to see the picture. Now, if I display the picture as is on a different screen — say, DCI-P3 — it's going to look like this. If I go back and forth, you can see there's a difference in contrast, and some of the colors are a little more saturated. Already my photo does not look the way I intended. Then, if I displayed it on a ProPhoto RGB display, if such a thing existed, it would look like this: super saturated, really garish. I don't know about you, but I really don't like this picture.
And that's only what happens when we change the primaries. You can have similar issues when you change the white point. The next slide is the same sRGB photo — we kept the primaries, but I changed the white point to something bluish, and suddenly my photo... well, it's underwater, so it kind of makes sense for it to be blue, but it is not the way I wanted it to look. Again, if you take multiple Android devices and put them side by side, chances are that some screens will appear yellow to you, and some will appear blue, or green. That's because of the white points: we have different white points across displays. And once again, even for the same model of device from the same manufacturer, there will be white point variations.

When this happens, we say that colors are unmanaged. This is what Android has been doing since Android 1.0, and I hate it — absolutely hate it. It's horrible. Your designers spend hours and hours slaving away on their computers creating a beautiful UI; then you test it on a phone and it looks completely different, and you test it on another phone and it looks completely different again. So you might be wondering how we can make our designs look exactly the way we want. The solution is something called color management, and it's new in Android O.
The idea is that every color we want to display needs to be associated with a color space. We need to know what the original intent of the design was; we need that information. Then, through the magic of a lot of math and matrices — it's actually a bit more complicated than just a matrix, but that's most of it — we do a controlled conversion to the destination color space. When we manufacture a device, we use special instruments to measure the capabilities of the display: we measure the primaries and the white point of the device, we create the destination color space from that, and then we can convert your original content to the destination color space and preserve the colors you intended.
This is something we're introducing in Android O, and there are two parts to it: first there's color management proper, and then there's something called wide color gamut rendering. We're going to take a look at both. But before you worry about Android, you need to make sure that you, or your designers, are using color spaces properly in your design application. There are two things you can do with pictures: you can assign a color space, or you can convert to a color space.
I'm going to switch to a demo. But first — earlier I said that this chromaticity diagram, the horseshoe shape, is a slice of the space of colors, and so are the gamuts, those triangles. So this is sRGB in 3D. It's interesting because I mentioned that our eyes see more in the darks, and you can see that in this diagram: the brighter the colors, the fewer hues we can perceive.
All right, for the demo I'm using a tool called Affinity Photo. This is a picture I took — same process: I shot it in Adobe RGB, using RAW files from my DSLR; I did my work in Lightroom in ProPhoto RGB; and I created an sRGB version of the picture. If we zoom in, pretty much any good design tool will show you, somewhere, the color space of your image. Here we know it's an sRGB image. I'm on a calibrated display, so it looks fine.

Like I said, you can assign a color space — often called an ICC profile — or you can convert. If you assign, what you're saying is: keep the colors the way they are, keep the same values, I just want to move them to a different color space. So let's say we assign the ProPhoto color space — and now it looks like this. This is exactly what Android does today, and it's not the right way of doing it. Sometimes it's what you want, because you know what you're doing, but very often it's not. So instead, what you want to do is Convert ICC profile; it's sometimes called match. We pick ProPhoto, and when I click, you'll see no difference — and that's what we wanted. What happened is that every single RGB value has changed; the image just looks the same. We simply have different values stored inside the image.
To give you an idea of what happens to your Android design: this is a screenshot I took on my 2016 Pixel running Android N. On Android, we pretty much assume that all content is sRGB, so when I opened this file in the tool, I was warned that there was no color space and that the tool would assume sRGB. Now, when you display that screenshot on an OLED display — because we don't manage color — what happens on the 2016 Pixel is this. I hope you can see the difference: if you look at the red icons at the bottom, everything becomes more saturated. So that's what you designed, and that's what you see on an actual phone.

All right: when you don't know what the color space is, just assume it's sRGB. It's the only assumption you can make. This is why, for instance, all my photos are sRGB: when I put them online, I have no idea what display you're going to use, I have no idea what device you're going to use, and I don't know whether your app will be color managed or not. sRGB is your safest bet. It won't always be correct, but it's your safest bet.
When you use a design tool with a color picker and there's no information about the color space anywhere in the tool, you are most likely using either sRGB or what we call the native gamut — whatever the screen can do. That's what Apple Keynote does, for instance: when you pick a color, you pick it directly for your display, and not for another display. If your application is color managed and your document has a color space, you're picking a color in that color space. Some color pickers are more advanced. This, for instance, is macOS Sierra: if you click that little gear in the second tab, you can choose the color space you want to use for the color picker. Actually, when I was working on the slides, I picked colors for them, then created diagrams in a different tool and tried to match the colors — and I forgot that I had to switch the picker to sRGB. It took me five minutes to understand why my colors were different. So even when you know about color spaces and color management, it can sometimes be confusing.
Another tool you can use — on macOS, for instance; other platforms have similar tools — is the Digital Color Meter. It's in the /Applications/Utilities folder. It lets you pick any color on the screen, and using the dropdown you can choose the color space you want for that color. So you can look at the color in the native gamut of the display, or in sRGB, or P3, or whatever. On Android, if you have a recent version and a Pixel device, for instance, you can go to the developer options and turn on the sRGB mode. What that does is apply color management to the display, to make sure all colors are interpreted and reproduced as sRGB. You might be surprised at first — a lot of people complain that the colors look washed out — but they're actually more accurate. It's just a matter of habit. All right, now some code.
On Android, what you've been using so far is what we call the color int: just an int that contains alpha, red, green, and blue. And the only assumption we can make, since we don't know what color space you're using, is that it's sRGB. Pretty simple.
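As a refresher, here's a small sketch of how such a color int breaks down into channels — the pink value below is made up for illustration:

```java
import android.graphics.Color;

// Unpacking an ARGB color int, both by hand with bit shifts and with the
// static helpers that have always been on the Color class.
int pink = 0xFFFF6090;        // alpha, red, green, blue: 8 bits each

int a = (pink >>> 24) & 0xFF; // 255
int r = (pink >>> 16) & 0xFF; // 255
int g = (pink >>>  8) & 0xFF; // 96
int b =  pink         & 0xFF; // 144

// Equivalent, using the static method mentioned in the talk:
int r2 = Color.red(pink);
float rFloat = r2 / 255f;     // the 0..1 float notation
```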
Now, in O, we're introducing a crazy new API. We've had the Color class for ten years, but it only had static methods. Now you can create instances of the Color class; you can actually use Color the way it was meant to be used. You just call valueOf(), you give it your RGB values, and that gives you an instance of the Color class in sRGB — note that I didn't specify the color space, so the assumption is that it's sRGB. You can also specify the color space: here we also call valueOf(), we pass the alpha (the order is RGBA when using floats), and then we say that we want that color to be in the Adobe RGB space. What this allows us to do is work with colors: we can convert them from one color space to another. I have this Adobe RGB color, and if I call convert(), I can convert it to Display P3, for instance. Someone's happy.
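A short sketch of those calls — the pink channel values are placeholders:

```java
import android.graphics.Color;
import android.graphics.ColorSpace;

// Instances of Color, new in Android O (API 26).
Color srgbPink = Color.valueOf(1.0f, 0.375f, 0.56f);        // sRGB implied

Color adobePink = Color.valueOf(1.0f, 0.375f, 0.56f, 1.0f,  // r, g, b, alpha
        ColorSpace.get(ColorSpace.Named.ADOBE_RGB));

// Controlled conversion from one color space to another:
Color p3Pink = adobePink.convert(ColorSpace.get(ColorSpace.Named.DISPLAY_P3));
```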
Something else we're introducing: color longs. Color ints are really useful because they're easy to manipulate, they're small, and they're easy to store. Color objects, on the other hand — the Color class is very generic, it can represent colors in any model, and it holds a reference to a color space — can be pretty heavy objects. So now we have something we call the color long. It's the same idea as the color int: we use a primitive type to store a color, but we also store the color space for that color.
It's similar to valueOf(): you just call pack() instead, and you get a long. Here's the format of the long: 16 bits for the red channel, 16 bits for the green channel, 16 bits for the blue channel, 10 bits for the alpha channel, and then 6 bits that identify one of the built-in color spaces. Those 16-bit values use something called half floats — floats that use only 16 bits instead of 32. There's a new API in O, android.util.Half, that lets you manipulate half floats. Anybody who does HDR or advanced rendering in OpenGL or Vulkan might find that API useful. If you use the Color class, you don't have to worry too much about it: there's a ton of utility methods on Color that will use the Half API on your behalf.
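A minimal sketch of packing and unpacking with the APIs just described (the channel values are placeholders):

```java
import android.graphics.Color;
import android.graphics.ColorSpace;

// Packing a color and its color space into a single long (API 26).
long packed = Color.pack(1.0f, 0.375f, 0.56f, 1.0f,
        ColorSpace.get(ColorSpace.Named.DISPLAY_P3));

// The channel accessors have long overloads that unpack the half floats:
float red = Color.red(packed);
ColorSpace cs = Color.colorSpace(packed); // recovers Display P3
```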
We also have the ColorSpace class. It's well documented and pretty easy to use: you just call get() to query one of the common color spaces we provide, and you can create your own color spaces if you want. Two methods on ColorSpace are particularly interesting. There's isWideGamut(), which tells you whether the gamut is wider than sRGB's, and getModel(), which tells you the color model of the color space — RGB, CMYK, Lab, things like that — and therefore how many components a color has in that space. If the model is RGB, you can cast the ColorSpace to ColorSpace.Rgb, which gives you access to more APIs: you can query the primaries, you can query the white point, and you have access to the transfer functions. This is also very well documented.
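Here's a short sketch of those queries:

```java
import android.graphics.ColorSpace;

ColorSpace adobe = ColorSpace.get(ColorSpace.Named.ADOBE_RGB);

boolean wide = adobe.isWideGamut();              // wider gamut than sRGB?
ColorSpace.Model model = adobe.getModel();       // RGB, CMYK, LAB, XYZ...

if (model == ColorSpace.Model.RGB) {
    ColorSpace.Rgb rgb = (ColorSpace.Rgb) adobe; // safe only after the check
    float[] primaries = rgb.getPrimaries();      // xy coordinates of R, G, B
    float[] whitePoint = rgb.getWhitePoint();    // xy coordinates of white
}
```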
If you want to do conversions between color spaces, it's pretty simple: you call connect() and give us the source color space and the destination color space. The reason you need to call connect() is that we have to make sure both color spaces use the same white point. When they have different white points, we do a little bit of math internally to effectively make them use the same white point. Then, if you call the transform() method, you give us RGB values and we give you back the corrected RGB values.
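A minimal sketch of that flow:

```java
import android.graphics.ColorSpace;

// Connecting two color spaces; the connector handles white point
// adaptation internally when the two spaces disagree.
ColorSpace.Connector connector = ColorSpace.connect(
        ColorSpace.get(ColorSpace.Named.SRGB),
        ColorSpace.get(ColorSpace.Named.ADOBE_RGB));

// Transform sRGB values into the corrected Adobe RGB values.
float[] converted = connector.transform(1.0f, 0.375f, 0.56f);
```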
You can also change the white point of a color space by calling ColorSpace.adapt(). Here, for instance, we do what I did in one of my examples — the photo of the fish that was very blue: we took the sRGB color space and changed the white point from something called D65 to something called D50. White points are usually defined as a color temperature — the perceived color of a black body when you heat it to that temperature. D50 corresponds to 5,000 kelvin, and D65, which is a very common white point in color spaces for our monitors, to 6,504 kelvin.
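A one-line sketch of that adaptation:

```java
import android.graphics.ColorSpace;

// Re-deriving sRGB with a D50 white point instead of its native D65,
// as in the underwater photo example.
ColorSpace adaptedSrgb = ColorSpace.adapt(
        ColorSpace.get(ColorSpace.Named.SRGB),
        ColorSpace.ILLUMINANT_D50);
```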
Now, bitmaps. We now support color spaces embedded in bitmaps — they're called ICC profiles. Until now, we would just ignore them completely on Android. If you use BitmapFactory to decode a bitmap, you can now call getColorSpace() on the bitmap and we'll tell you what it is. Most likely, for most images you load — and presumably for all the resources that ship inside your APK — the answer is going to be sRGB, and you can call isSrgb() on the color space to check. One thing that's very important: all bitmaps on Android are always in the RGB color model. We might expand on that in the future, but we don't let you use Lab or CMYK or XYZ or any of that; they're always RGB. So right now, when you call getColorSpace() on a bitmap, you can always cast the result to ColorSpace.Rgb — but that might not be future proof, so make sure by checking the color model first.
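A small sketch of that check — it assumes we're inside an Activity or similar Context, and R.drawable.photo is a hypothetical resource:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ColorSpace;

Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.photo);

ColorSpace cs = bitmap.getColorSpace();
if (cs != null && !cs.isSrgb()) {
    // Not sRGB: rendering will convert on our behalf, at a cost.
    if (cs.getModel() == ColorSpace.Model.RGB) {
        ColorSpace.Rgb rgb = (ColorSpace.Rgb) cs; // check the model first
    }
}
```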
You can do more interesting things with BitmapFactory. On BitmapFactory.Options, we have this horribly named field called inJustDecodeBounds. It was originally created to let you query the dimensions of an image without having to decode all of its pixels, so it's really quick; it gives you an idea of how big the image is going to be. Over the years, we've kind of abused this field: it also tells you the configuration of the bitmap — is it ARGB_8888, is it RGB_565 — and now it also tells you what the color space is. So if you want to know the color space of a bitmap ahead of time, you ask us for just the bounds, just the dimensions: you call your decode method on BitmapFactory, you pass your options, and there's a field called outColorSpace that tells you the color space of the bitmap. So if you want to make sure that a bitmap is in the right color space before loading it, you can do this. And if it's not in the right color space, you can use another field on Options, called inPreferredColorSpace, to tell us what you want the color space of the decoded bitmap to be. In this example, for instance, we queried the color space of the bitmap and saw that it was Adobe RGB. I don't want Adobe RGB, I want sRGB, so I can use inPreferredColorSpace to force the system to convert it at load time. Then you just call decode.
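A sketch of the two-step decode just described, again assuming an Activity context and a hypothetical R.drawable.photo resource:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ColorSpace;

BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;   // query metadata, decode no pixels
BitmapFactory.decodeResource(getResources(), R.drawable.photo, options);

if (options.outColorSpace != ColorSpace.get(ColorSpace.Named.SRGB)) {
    // Ask the decoder to convert at load time.
    options.inPreferredColorSpace = ColorSpace.get(ColorSpace.Named.SRGB);
}

options.inJustDecodeBounds = false;  // now decode for real
Bitmap bitmap =
        BitmapFactory.decodeResource(getResources(), R.drawable.photo, options);
```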
We're also introducing 16-bit bitmaps in Android O — bitmaps that use 16 bits per channel. Those bitmaps are always in a color space called linear extended sRGB, and we're going to take a look at what that is. When you have a 16-bit bitmap, don't try to convert its color space; we won't let you, at least for now. And this is how you create a wide-gamut bitmap: you just call createBitmap(), you specify the width, the height, and the configuration, the boolean tells us whether there's alpha in the bitmap, and then you give us a color space. Pretty simple.
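For example (the dimensions are arbitrary):

```java
import android.graphics.Bitmap;
import android.graphics.ColorSpace;

// A wide-gamut bitmap: width, height, config, hasAlpha, color space.
Bitmap wideGamut = Bitmap.createBitmap(512, 512,
        Bitmap.Config.ARGB_8888, /* hasAlpha */ true,
        ColorSpace.get(ColorSpace.Named.DISPLAY_P3));
```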
Bitmap has APIs that let you read and write pixels inside the bitmap. Because getPixel() and setPixel() use color ints, they have to use sRGB. So when you call getPixel() on a bitmap that is not sRGB, we do a conversion for you. Same thing when you call setPixel(): we expect sRGB, so you have to do the conversion yourself beforehand. Now, if you use the other API, copyPixelsToBuffer(), it gives you access to the raw data of the bitmap, and that data is in the native color space of the bitmap, left completely untouched. You have to be a little careful because of the new configuration for 16-bit bitmaps: if you decode a 16-bit PNG, the data in that byte buffer is going to be half floats, not color ints. That's another reason you might want to take a look at android.util.Half. All right, so what happens when you draw bitmaps on the screen?
Until now, what we did was take a bitmap, assume it's sRGB, and send it to the screen; if the screen is not sRGB, too bad — the colors are going to be completely wrong. This is still what we do by default on Android O. But now, if we have a bitmap that we know is not sRGB — it has a color space associated with it — we do an sRGB conversion on your behalf in the rendering pipeline. It can be a little expensive, so you should avoid non-sRGB bitmaps if you don't need them, but at least the colors will be correct on the display.

Now, if you render a bitmap into another bitmap — you create a bitmap, you create a canvas, and then you call drawBitmap() on that canvas — we convert from whatever the source color space is to whatever the destination color space is. So if you want to convert a bitmap from one color space to another, and not at load time, this is the way to do it: you just create the destination bitmap, create a canvas, and draw. Pretty simple.
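A sketch of that conversion — "source" is assumed to be an already-decoded, non-sRGB bitmap:

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.ColorSpace;

// Create the destination with the target color space...
Bitmap destination = Bitmap.createBitmap(
        source.getWidth(), source.getHeight(),
        Bitmap.Config.ARGB_8888, true,
        ColorSpace.get(ColorSpace.Named.SRGB));

// ...then draw the source into it; the conversion happens here.
Canvas canvas = new Canvas(destination);
canvas.drawBitmap(source, 0f, 0f, null);
```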
So we make all those assumptions about sRGB, but like I said earlier, the OLED displays on our phones have wide gamuts: they can show more colors than sRGB. The problem is that if we take your sRGB content and no longer stretch it to the entire gamut of the display — if we keep it in that little sRGB triangle — there are all those unused colors that we're not taking advantage of. So in Android O we're adding this new API, and it's super complicated: it's just one attribute that you add to your manifest, per activity. You have to tell us that you want to render using those extra colors — that you want the wide color gamut mode.
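The attribute is a one-liner in the manifest. A minimal sketch — the activity name is hypothetical:

```xml
<!-- Requesting wide color gamut rendering for one activity (API 26+). -->
<activity
    android:name=".GalleryActivity"
    android:colorMode="wideColorGamut" />
```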
Here's how it works. If you're on a device that does not support that mode — and not all devices will — your window uses the ARGB_8888 format, which is what we've been using for ten years, so there's nothing new there. Your sRGB content is drawn directly to the screen, and if you have non-sRGB content, we convert it to sRGB and send everything to the screen. Colors might be wrong, but that's just because the device does not support the new wide color gamut rendering. On a device that does support wide color gamut rendering, we allocate a much larger window: we use 16 bits per channel. That doubles the size of your window in memory, and it doubles the bandwidth requirements, so it is an expensive thing — if you don't really need wide color gamut rendering, think twice before enabling it. If we have sRGB content, we send it directly to the display as well. And if we have non-sRGB content, we convert it to a color space called extended sRGB.
And what is extended sRGB? It's kind of a weird color space. It's a really big color space — way bigger than the visible spectrum, much, much bigger. Some of the values are negative, and some of the values are greater than one; they go all the way to 7.5. What's interesting about that color space is that all the values between zero and one match the sRGB color space exactly. What that enables us to do is take your existing content — all your sRGB content, basically everything you have in your app today — and draw it directly. We don't need to do any conversion, because it matches sRGB. So we only pay the cost of a conversion when we're drawing non-sRGB content. The expense is that we need 16 bits per channel, because we need a lot of precision and range to encode this extended sRGB space, but it's much better for you, and much simpler: you just have one attribute in your application. Interestingly, because we use 16 bits per channel for extended sRGB, we may be able, in the future, to render in HDR directly. So we could have HDR user interfaces.
We also have a new resource qualifier called widecg, for wide color gamut, so you can create layouts or strings or drawables that are specific to a display that supports the wide color gamut rendering mode. And we have a few APIs you can use to query whether the device supports wide color gamut: if you have a Resources instance, you can grab the Configuration, and the Configuration will tell you whether you have a wide color gamut display; and if you have a View, you can call getDisplay() — we give you a Display object, and you can ask the display whether or not it's wide color gamut.
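A sketch of both queries, assuming an Activity context and some attached view:

```java
import android.content.res.Configuration;
import android.view.Display;

// From the Resources configuration:
Configuration config = getResources().getConfiguration();
boolean wideScreen = config.isScreenWideColorGamut();

// From a View's display:
Display display = view.getDisplay();
boolean wideDisplay = display != null && display.isWideColorGamut();
```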
So the main conclusion of all this is: don't panic. I put a bunch of hearts on the slides to make you feel better about it. I know it's complicated, and no matter how hard you try — I don't want to sound disheartening — your colors are going to be wrong somewhere, for someone. They're wrong for me on that screen, and I know a thing or two about color spaces and color management. You're not in control of everything: not of the final display, and not of all the software that intervenes in the rendering pipeline of an application. So don't worry too much about it. Do your best: make sure your designers work on calibrated displays, make sure they work in sRGB, make sure your images contain color spaces, and you should be okay — at least mostly okay.
If you want to learn more about transfer functions, there's the talk I gave last year with Chet. If you go to that URL and jump to minute 29, that's where the color part of the talk starts; the first half is about animations, and it's also super interesting. You can also look at the documentation for ColorSpace.Rgb — there's a lot of detail there about transfer functions and what they mean for you. If you want to learn even more about color, I gave a talk at Devoxx US a couple of months ago where I talked about banding and dithering: what is banding, how can you fight it, and what do we do in the platform to fight it on your behalf? If you're interested, go to that URL; it starts at minute 36. And I think that's it — I only have one minute left, so I don't have time for questions, but I'll be at the Android sandbox if you want to talk more about color and color management and all that. Thank you.