I will reference this other answer of mine:
Color composition
It is important to keep in mind that although white is the result of the sum of red, green and blue, that does not mean that each of these three colors contributes a third of white. This can easily be perceived empirically by noting that pure green looks bright, while pure red looks dull and pure blue looks dark.
In fact, the exact proportion in which white light is composed depends on the distribution of the different receptor cells in the retina of the observer's eye; on the observer's health, fatigue, age and stress; on the lighting conditions; on the brightness and contrast of the screen; on the angle and direction between the plane of the screen and the observer's line of sight; on the screen type (reflective or anti-reflective, CRT, LED, plasma, LCD, overhead projector, Kindle, etc.); and on many other variables. It may even vary from one eye to the other in the same person with normal, healthy vision.
But disregarding these variables that are outside the programmer's control, and assuming that the user has healthy vision and is using a good-quality screen in adequately lit surroundings, there is a formula I saw in a book a few years ago that gave roughly the following proportion:

white = 30% red + 59% green + 11% blue
It's a shame I don't remember the title, but bfavaretto gave three references to it in the comments: 1, 2 and 3, although they show small variations in the exact factors.
Keeping these brightness composition factors in mind is important if you ever want to write an anti-aliasing algorithm that takes into account that subpixels have different colors.
This same formula given above for white can also be used to measure the brightness of a given color from its red, green and blue components. According to this page, the formula recommended by the W3C (similar to the one above) is:

brightness = 0.299 × red + 0.587 × green + 0.114 × blue
However, this same page says that this formula may still fail. For example, the color (240, 0, 30) is slightly brighter than (80, 80, 80), and yet by the W3C formula the first has a brightness of 75.18 while the second has 80. The reason is that brightness is actually the distance of a color from black, not just the weighted sum of its components.
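Just to check those numbers, here is a minimal sketch (the class and method names are mine, only for illustration):

public class ExemploBrilho {

    // Weighted-sum brightness using the W3C factors quoted above.
    static double brilho(int r, int g, int b) {
        return 0.299 * r + 0.587 * g + 0.114 * b;
    }

    public static void main(String[] args) {
        System.out.println(brilho(240, 0, 30)); // ≈ 75.18
        System.out.println(brilho(80, 80, 80)); // ≈ 80.0
    }
}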
Consider all colors arranged as points inside a parallelepiped where one vertex is black, the opposite vertex is white, the vertices adjacent to black are red, green and blue, and the vertices opposite to those are cyan, magenta and yellow (in this order). One dimension then corresponds to the value of the red component, another to the green component and the third to the blue component. If we define the size of each dimension of this parallelepiped as the intensity of the corresponding color component, we can use the Euclidean distance from the point occupied by any color to the vertex of the black color as a measure of brightness. Thus, to calculate the brightness of a color, just apply the Pythagorean theorem. Using the W3C values, we arrive at this formula:

brightness = √(0.299 × red² + 0.587 × green² + 0.114 × blue²)
By this formula, the brightness of the colors above would be 131.62 and 80.
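Adapting the sketch above to this Euclidean version reproduces those two values:

public class ExemploBrilhoEuclidiano {

    // Euclidean brightness: the distance from black, weighted by the W3C factors.
    static double brilho(int r, int g, int b) {
        return Math.sqrt(0.299 * r * r + 0.587 * g * g + 0.114 * b * b);
    }

    public static void main(String[] args) {
        System.out.println(brilho(240, 0, 30)); // ≈ 131.62
        System.out.println(brilho(80, 80, 80)); // ≈ 80.0
    }
}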
Based on this, if the brightness of a color is its distance from black as explained above, then we can also measure the difference between two colors as a distance. The formula is the last one given above, except that the values of the red, green and blue variables are now the differences between the corresponding components of the two colors.

In Java, it looks like this:
public static double distanciaCores(int r1, int g1, int b1, int r2, int g2, int b2) {
    // Component-wise differences between the two colors.
    int dr = r1 - r2;
    int dg = g1 - g2;
    int db = b1 - b2;
    // Euclidean distance weighted by the W3C brightness factors.
    return Math.sqrt(0.299 * dr * dr + 0.587 * dg * dg + 0.114 * db * db);
}
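As a quick sanity check, the distance between pure black and pure white comes out as 255, since the three factors add up to 1:

double d = distanciaCores(0, 0, 0, 255, 255, 255);
System.out.println(d); // ≈ 255.0, because 0.299 + 0.587 + 0.114 = 1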
In Java, the method getRGB(int, int) of BufferedImage returns colors in ARGB format as an int (32 bits). In this int, the 8 most significant bits are the alpha (opacity), the next 8 bits are the red component, then come 8 bits for the green component, and finally 8 bits for the blue component. Therefore, we can overload the method above to work with two colors each encoded in an int, already anticipating the format returned by getRGB. Auxiliary methods that extract each component (disregarding the alpha) are also used:
public static double distanciaCores(int a, int b) {
    return distanciaCores(red(a), green(a), blue(a), red(b), green(b), blue(b));
}

public static int red(int c) {
    // The red component occupies bits 16 to 23.
    return (c >> 16) & 0xff;
}

public static int green(int c) {
    // The green component occupies bits 8 to 15.
    return (c >> 8) & 0xff;
}

public static int blue(int c) {
    // The blue component occupies bits 0 to 7.
    return c & 0xff;
}
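For instance, unpacking an opaque color (the value here is arbitrary, just for illustration):

int c = 0xFF123456; // alpha 0xFF, red 0x12, green 0x34, blue 0x56
System.out.println(red(c));   // 18 (0x12)
System.out.println(green(c)); // 52 (0x34)
System.out.println(blue(c));  // 86 (0x56)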
To do this for an entire image, we compare the two images pixel by pixel and normalize by the image area:
public static double diferencaImagens(BufferedImage a, BufferedImage b) {
    int h = a.getHeight();
    int w = a.getWidth();
    // The comparison only makes sense if both images have the same dimensions.
    if (w != b.getWidth() || h != b.getHeight()) {
        throw new IllegalArgumentException("The images must have the same size.");
    }
    double dif = 0.0;
    // Accumulate the color distance of every pair of corresponding pixels.
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int c1 = a.getRGB(x, y);
            int c2 = b.getRGB(x, y);
            dif += distanciaCores(c1, c2);
        }
    }
    // Normalize by the image area, so the result does not depend on the image size.
    return dif / (h * w);
}
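A minimal usage sketch, assuming the methods above are in the same class (the file names here are hypothetical):

public static void main(String[] args) throws java.io.IOException {
    BufferedImage a = javax.imageio.ImageIO.read(new java.io.File("imagem1.png"));
    BufferedImage b = javax.imageio.ImageIO.read(new java.io.File("imagem2.png"));
    System.out.println(diferencaImagens(a, b)); // 0.0 means the images are identical
}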
The only caveat is that this algorithm works well only when the pixels of the first image coincide positionally with those of the second, which is useful to compare things like a black-and-white vs. a colored version of the same image, images with different filters, different brightness and contrast, different focus, etc. It will not give good results when the pixel positions do not match (rotations, translations, enlargements and reductions of the image, etc.).