A typical implementation of NL-means

Semi-nonlocal implementations

As can be inferred from the description, NL-means is a computationally greedy algorithm: for each pixel to denoise, the search can in principle explore the whole image. Thus, it is common practice to restrict the nonlocal exploration stage to a limited window around each pixel1. However, since nonlocality is only achieved inside a subwindow around each pixel, I refer to these implementations as semi-nonlocal.

It is common for an implementation of NL-means to include an additional parameter for the size of the learning window. Patches are then computed and compared only in a subset of the image close to the pixel to denoise, which makes these methods, strictly speaking, only semi-nonlocal.

This choice was originally made to lighten the computations; otherwise, denoising each single pixel would require exploring the whole image. However, as we will see in the next post, it also helps achieve better denoising results.
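To make the idea concrete, here is a minimal, unoptimized sketch of such a semi-nonlocal implementation in Python/NumPy. The function and parameter names (`nl_means_semi_nonlocal`, `patch_radius`, `window_radius`, `h`) are my own choices rather than any standard API, and the explicit double loop is written for readability, not speed.

```python
import numpy as np

def nl_means_semi_nonlocal(image, patch_radius=3, window_radius=10, h=0.1):
    """Naive NL-means restricted to a (2*window_radius+1)^2 learning window.

    image: 2D float array (grayscale, e.g. values in [0, 1]).
    patch_radius: half-size of the square patches being compared.
    window_radius: half-size of the learning window explored around each pixel.
    h: filtering parameter controlling the decay of the weights.
    """
    pad = patch_radius
    padded = np.pad(image, pad, mode="reflect")
    rows, cols = image.shape
    denoised = np.zeros_like(image)

    def patch(i, j):
        # Patch centered on original pixel (i, j), read from the padded image.
        return padded[i:i + 2 * pad + 1, j:j + 2 * pad + 1]

    for i in range(rows):
        for j in range(cols):
            p_ref = patch(i, j)
            # Learning window: only pixels within window_radius of (i, j)
            # are explored, hence "semi-nonlocal".
            i_min, i_max = max(0, i - window_radius), min(rows, i + window_radius + 1)
            j_min, j_max = max(0, j - window_radius), min(cols, j + window_radius + 1)

            weights_sum = 0.0
            value = 0.0
            for k in range(i_min, i_max):
                for l in range(j_min, j_max):
                    # Patch similarity -> weight of the candidate pixel (k, l).
                    d2 = np.mean((p_ref - patch(k, l)) ** 2)
                    w = np.exp(-d2 / (h * h))
                    weights_sum += w
                    value += w * image[k, l]
            denoised[i, j] = value / weights_sum
    return denoised
```

Setting `window_radius` large enough to cover the whole image would recover a fully nonlocal method, at a much higher computational cost.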

Central pixel handling

During the exploration of the learning window, there is always a point at which the patch centered on the pixel of interest is compared with itself. It will obviously be assigned the maximum weight of 1, which biases the average toward the noisy value and disturbs the noise removal process.

Hence, it is common practice to replace this weight by the maximum weight found elsewhere during the exploration of the learning window. This allows for more effective denoising.
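As an illustration, here is one possible way to encode this rule, again with names of my own choosing; it assumes the raw weights gathered for a given pixel are stored in a flat array before the weighted average is computed.

```python
import numpy as np

def apply_central_pixel_rule(weights, center_index):
    """Replace the trivial self-comparison weight (always 1) with the largest
    weight found among the other patches in the learning window.

    weights: 1D array of raw NL-means weights gathered for one pixel.
    center_index: position of the self-comparison weight inside `weights`.
    """
    adjusted = weights.copy()
    others = np.delete(weights, center_index)
    if others.size > 0:
        # The central pixel no longer dominates the weighted average; it is
        # treated like the best of its competitors instead.
        adjusted[center_index] = others.max()
    return adjusted


# Toy example: the self weight of 1.0 is capped at the runner-up value 0.4.
w = np.array([0.1, 0.4, 1.0, 0.25])
print(apply_central_pixel_rule(w, center_index=2))  # [0.1  0.4  0.4  0.25]
```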

Show me the code!

Apart from the small sketches above, there are no examples or full code in this post, in order to keep its length reasonable. The next post will contain some examples to support a somewhat disturbing claim: more non-locality in NL-means can degrade the result. The full code used to generate the examples will be posted on the blog’s github at the end of the series.


  1. This window can sometimes be really small o_0.
