I am calculating a similarity percentage between two images, where the second image is the same as the first but with changed brightness. I've already tried per-pixel comparison (Euclidean distance on grayscale and on 0/1 values), comparison in HSV, and subtractive normalisation (pixel brightness minus mean brightness). The results were quite good (80-90%), but the similarity score for the brightness-changed image was lower than the similarity between images of completely different tables, for example. Is there a simple approach to solve this problem, or should I use neural networks?
2 Answers
1
You can try multiplicative normalization (pixel brightness / mean brightness), or subtractive normalization after transforming with a logarithmic LUT.
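A minimal sketch of both suggestions, assuming grayscale images loaded with OpenCV; the function names and file names below are illustrative, not part of the original answer:

```python
import cv2
import numpy as np

def multiplicative_normalize(img):
    """Divide each pixel by the image's mean brightness."""
    img = img.astype(np.float64)
    return img / (img.mean() + 1e-9)

def log_subtractive_normalize(img):
    """Apply a logarithmic LUT, then subtract the mean of the log image."""
    img = img.astype(np.float64)
    log_img = np.log1p(img)          # log(1 + pixel) avoids log(0)
    return log_img - log_img.mean()

def similarity(img_a, img_b, normalize):
    """Per-pixel Euclidean distance after normalization, mapped to (0, 1]."""
    a, b = normalize(img_a), normalize(img_b)
    dist = np.linalg.norm(a - b) / np.sqrt(a.size)
    return 1.0 / (1.0 + dist)

# Hypothetical file names for illustration only.
img1 = cv2.imread("table1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("table1_brighter.png", cv2.IMREAD_GRAYSCALE)
print(similarity(img1, img2, multiplicative_normalize))
print(similarity(img1, img2, log_subtractive_normalize))
```

The idea is that a global brightness change scales (or, after the log LUT, shifts) all pixels roughly the same way, so dividing by the mean, or subtracting the mean in log space, cancels most of it before the distance is computed.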
0
Have a look at the H.264 spec. It's designed for encoding video, so it does a lot of comparing of similar images.
The model is that image2 equals image1, translated, with some linear transformation applied to the pixel values.
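A sketch of a comparison under the "linear transformation of the pixels" part of that model (ignoring translation): zero-mean normalized cross-correlation is invariant to any gain/offset change in brightness. The file names are placeholders, not from the original answer.

```python
import cv2
import numpy as np

def zncc(img_a, img_b):
    """Zero-mean normalized cross-correlation, returns a score in [-1, 1]."""
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    a -= a.mean()                      # remove brightness offset
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)   # normalizes out gain
    return float(a @ b / denom) if denom > 0 else 0.0

img1 = cv2.imread("table1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("table1_brighter.png", cv2.IMREAD_GRAYSCALE)
print(zncc(img1, img2))   # near 1.0 for the same scene, lower for different tables
```

If the images can also be shifted relative to each other, a block-matching search (for example with OpenCV's `cv2.matchTemplate` using `cv2.TM_CCOEFF_NORMED`) applies the same correlation measure over candidate translations, which is essentially what a video encoder's motion search does.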
gnasher729