The masked normalized cross-correlation and its application to image registration

Posted on 2019-04-30. Last updated on 2022-02-20.

Estimated reading time of 12 min.

Image registration consists of determining the most likely transformation between two images. The most important case, and the one I am concerned with here, is translation.

How can we detect the translation between two otherwise similar images? This is an application of cross-correlation. The cross-correlation of two images measures their degree of similarity for every possible translation between them. Mathematically, given grayscale images as discrete functions $I_1(i,j)$ and $I_2(i,j)$, their cross-correlation $I_1 \star I_2$ is defined as:

$$(I_1 \star I_2)(u, v) \equiv \sum_{i,j} I_1(i, j) \cdot I_2(i - u, j - v)$$
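
To make the definition concrete, here is a minimal sketch using NumPy and SciPy (not the code behind the figures in this post): the peak of the full cross-correlation, converted to a lag, recovers a known shift between two toy images.

```python
import numpy as np
from scipy.signal import correlate2d

# Two small test images: im2 is im1 shifted down by 2 rows and right by 1 column
im1 = np.zeros((16, 16))
im1[4:8, 4:8] = 1.0
im2 = np.roll(im1, shift=(2, 1), axis=(0, 1))

# Full cross-correlation: the location of the maximum encodes the relative shift
xcorr = correlate2d(im1, im2, mode="full")
peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)

# Convert the peak index to a lag; this is the translation that moves
# im2 back onto im1
lag = np.asarray(peak) - (np.asarray(im2.shape) - 1)
print(lag)  # [-2 -1]
```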

For example, if $I_1 = I_2$, then $I_1 \star I_2$ has its maximum at $(u, v) = (0, 0)$. What happens if $I_1$ and $I_2$ are shifted from each other? Let’s see:

The cross-correlation between shifted images exhibits a global maximum at the location corresponding to the relative translation between them. (Source code)

In the above example, the cross-correlation is maximal at (50, 0), which is exactly the translation required to shift the second image back so that it matches the first. Finding the translation between images is then a simple matter of locating the global maximum of the cross-correlation. This operation is so useful that it is implemented in the Python library scikit-image as skimage.registration.phase_cross_correlation.
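
As a quick illustration, here is a sketch of how one might recover a known whole-pixel shift with that function, using the scikit-image sample data rather than my diffraction data:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import data
from skimage.registration import phase_cross_correlation

reference = data.camera().astype(float)
# Shift the reference by a known amount to create a "moving" image
moving = ndi.shift(reference, (-22, 13))

# The returned vector is the translation needed to register `moving` with
# `reference`; the other values are an error metric and the global phase
# difference between the images
shift, error, diffphase = phase_cross_correlation(reference, moving)
print(shift)

# Applying the detected shift brings the images back into registration
registered = ndi.shift(moving, shift)
```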

In my field of research, image registration can be crucial for correcting experimental data. My primary research tool is ultrafast electron diffraction. Without going into the details, you can think of this technique as a kind of microscope. A single image from one of our experiments looks like this:

An electron diffraction pattern of polycrystalline chromium.

Most of the electron beam is unperturbed by the sample; this is why we use a metal beam-block (seen as a black rod in the image above) to prevent the electrons from damaging our apparatus.

Our experiments are synthesized from hundreds of gigabytes of images like the one above, and it may take up to 72h (!) to acquire all the images we need. Over the course of this time, the electron beam may shift in a way that moves the image, but not the beam-block1. Here’s what I mean:

Here is the difference between two equivalent images, acquired a few hours apart. The shift between them is evident in the third panel. (Source code)

This does not fly. We need to be able to compare images with each other, and shifts of more than 1 px are problematic. We need to correct for this shift, for every image, with respect to the first one. However, we are also in a bind: unlike the example above, the images are not shifted as a whole; one part of them, the beam-block, is static, while the image behind it shifts.

The crux of the problem is this: the cross-correlation between images gives us the shift between them. However, it is not immediately obvious how to tell the cross-correlation operation to ignore certain parts of the image. Is there some kind of operation, similar to the cross-correlation, that allows us to mask out the parts of the images we want to ignore?

Thanks to the work of Dr. Dirk Padfield2 3, we now know that such an operation exists: the masked normalized cross-correlation. In his 2012 article, he explains the procedure and performance of this method for registering images with masks. One such example is the registration of ultrasound images; unfortunately, showing you the figure from the article would cost me US$450, so you’ll have to go look at it yourselves.
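
The full derivation is in Padfield's papers. To convey the core idea only, here is a simplified sketch: zero out the pixels to be ignored, and account for how many valid pixels overlap at each translation. To be clear, this is not Padfield's complete normalized formula, which also corrects for overlap-dependent means and standard deviations.

```python
import numpy as np
from scipy.signal import fftconvolve

def masked_cross_correlation(im1, im2, mask1, mask2):
    """Cross-correlate im1 and im2 while ignoring pixels where the masks are False.

    Simplified sketch: pixels outside the masks are zeroed, and each lag is
    normalized by the number of valid overlapping pixels. Padfield's masked
    *normalized* cross-correlation additionally subtracts overlap-dependent
    means and divides by overlap-dependent standard deviations.
    """
    m1 = mask1.astype(float)
    m2 = mask2.astype(float)

    # Zero out the pixels we want to ignore
    f1 = im1 * m1
    f2 = im2 * m2

    # Correlation is convolution with the second input flipped
    xcorr = fftconvolve(f1, f2[::-1, ::-1], mode="full")
    # Correlating the masks counts the valid overlapping pixels at each lag
    overlap = fftconvolve(m1, m2[::-1, ::-1], mode="full")

    # Normalize by the overlap, guarding against (near-)empty overlaps
    return np.where(overlap > 0.5, xcorr / np.maximum(overlap, 0.5), 0.0)
```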


In order to fix our registration problem, then, I implemented the masked normalized cross-correlation operation — and its associated registration function — in our ultrafast electron diffraction toolkit, scikit-ued4. Here’s an example of it in action:

Using the masked normalized cross-correlation to align two diffraction patterns of polycrystalline chromium. The mask shown tells the algorithm to ignore the beam-block in both images. (Source code)

Contributing to scikit-image

However, since this tool could see use in a more general setting, I decided to contribute it to scikit-image:

  1. My contribution started with bringing up the subject in a GitHub issue (issue #3330).
  2. I then forked scikit-image and integrated the code and tests from scikit-ued into scikit-image. The changes are visible in pull request #3334.
  3. Finally, some documentation improvements and an additional gallery example were added in pull request #3528.

In the end, a new function was added: skimage.registration.phase_cross_correlation (previously skimage.feature.masked_register_translation).
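
For completeness, here is a short usage sketch of the masked registration in scikit-image, using sample data and a made-up rectangular mask standing in for the beam-block. It assumes a recent scikit-image release, in which phase_cross_correlation returns a (shift, error, phasediff) tuple even when masks are supplied.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import data
from skimage.registration import phase_cross_correlation

reference = data.camera().astype(float)
moving = ndi.shift(reference, (-22, 13))

# Hypothetical masks: True where pixels are valid. Here we ignore a vertical
# band in both images, standing in for the static beam-block shadow.
reference_mask = np.ones(reference.shape, dtype=bool)
reference_mask[:, 100:160] = False
moving_mask = reference_mask.copy()

shift, error, phasediff = phase_cross_correlation(
    reference,
    moving,
    reference_mask=reference_mask,
    moving_mask=moving_mask,
)
print(shift)  # the translation that registers `moving` with `reference`
```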


  1. Technically, the rotation of the electron beam about its source will also move the shadow of the beam-block. However, because the beam-block is much closer to the electron source, the effect is imperceptible.↩︎

  2. Dirk Padfield. Masked object registration in the Fourier domain. IEEE Transactions on Image Processing, 21(5):2706–2718, 2012. DOI: 10.1109/TIP.2011.2181402↩︎

  3. Dirk Padfield. Masked FFT registration. Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2918–2925 (2010). DOI: 10.1109/CVPR.2010.5540032↩︎

  4. L. P. René de Cotret et al., An open-source software ecosystem for the interactive exploration of ultrafast electron scattering data, Advanced Structural and Chemical Imaging 4:11 (2018). DOI: 10.1186/s40679-018-0060-y. This publication is open-access.↩︎