Image gradients
Example
This example compares the quality of the gradient estimation methods in terms of the accuracy with which the orientation of the gradient is estimated.
using Images
using Statistics  # for `mean`

values = LinRange(-1, 1, 128);
w = 1.6*pi;

## Define a sinusoidal grating, f(x,y) = sin( (w*x)^2 + (w*y)^2 ),
## together with its exact partial derivatives.
I = [sin((w*x)^2 + (w*y)^2) for y in values, x in values];
Ix = [2*w*x*cos((w*x)^2 + (w*y)^2) for y in values, x in values];
Iy = [2*w*y*cos((w*x)^2 + (w*y)^2) for y in values, x in values];

## Determine the exact orientation of the gradients.
direction_true = atan.(Iy ./ Ix);

for kernelfunc in (KernelFactors.prewitt, KernelFactors.sobel,
                   KernelFactors.ando3, KernelFactors.scharr,
                   KernelFactors.bickley)
    ## Estimate the gradients and their orientations.
    Gy, Gx = imgradients(I, kernelfunc, "replicate");
    direction_estimated = atan.(Gy ./ Gx);
    ## Determine the mean absolute deviation between the estimated and true
    ## orientation. Ignore the values at the border since we expect them to be
    ## erroneous.
    error = mean(abs.(direction_true[2:end-1, 2:end-1] -
                      direction_estimated[2:end-1, 2:end-1]));
    error = round(error, digits=5);
    println("Using $kernelfunc results in a mean absolute deviation of $error")
end
The output of this code is:
Using ImageFiltering.KernelFactors.prewitt results in a mean absolute deviation of 0.01069
Using ImageFiltering.KernelFactors.sobel results in a mean absolute deviation of 0.00522
Using ImageFiltering.KernelFactors.ando3 results in a mean absolute deviation of 0.00365
Using ImageFiltering.KernelFactors.scharr results in a mean absolute deviation of 0.00126
Using ImageFiltering.KernelFactors.bickley results in a mean absolute deviation of 0.00038
kernelfun options
You can specify your choice of the finite-difference scheme via the kernelfun parameter. You can also indicate how to deal with the pixels on the border of the image with the border parameter.
Choices for kernelfun
In general kernelfun can be any function which satisfies the following interface:

    kernelfun(extended::NTuple{N,Bool}, d) -> kern_d,

where kern_d is the kernel for producing the derivative with respect to the d-th dimension of an N-dimensional array. The parameter extended[i] is true if the image is of size > 1 along dimension i. The parameter kern_d may be provided as a dense or factored kernel, with factored representations recommended when the kernel is separable.
Some valid kernelfun options are described below.
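For instance, a plain central-difference scheme with no smoothing in the orthogonal directions can be expressed through this interface. The following is a hypothetical sketch (the function centraldiff is not part of ImageFiltering); it assembles a factored kernel with kernelfactors and OffsetArrays:

using ImageFiltering, OffsetArrays

## Hypothetical kernelfun implementing plain central differences, for
## illustration of the interface only; a production version would also
## consult `extended` to handle singleton dimensions.
function centraldiff(extended::NTuple{N,Bool}, d) where N
    deriv = OffsetVector([-0.5, 0.0, 0.5], -1:1)  # central difference along dimension d
    ident = OffsetVector([1.0], 0:0)              # pass-through along every other dimension
    return kernelfactors(ntuple(i -> i == d ? deriv : ident, N))
end

Gy, Gx = imgradients(rand(64, 64), centraldiff, "replicate");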
KernelFactors.prewitt
With the prewitt option [3] the computation of the gradient is based on the kernels

G_y = \frac{1}{6} \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix} \qquad G_x = \frac{1}{6} \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}.

See also: KernelFactors.prewitt and Kernel.prewitt.
KernelFactors.sobel
The sobel option [4] designates the kernels

G_y = \frac{1}{8} \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \qquad G_x = \frac{1}{8} \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}.

See also: KernelFactors.sobel and Kernel.sobel.
KernelFactors.ando3
The ando3 option [5] specifies the numerically optimized "consistent" kernels of Ando,

G_y \approx \begin{bmatrix} -0.1127 & -0.2745 & -0.1127 \\ 0 & 0 & 0 \\ 0.1127 & 0.2745 & 0.1127 \end{bmatrix} \qquad G_x \approx \begin{bmatrix} -0.1127 & 0 & 0.1127 \\ -0.2745 & 0 & 0.2745 \\ -0.1127 & 0 & 0.1127 \end{bmatrix}.

See also: KernelFactors.ando3 and Kernel.ando3; KernelFactors.ando4 and Kernel.ando4; KernelFactors.ando5 and Kernel.ando5.
KernelFactors.scharr
The scharr option [6] designates the kernels

G_y = \frac{1}{32} \begin{bmatrix} -3 & -10 & -3 \\ 0 & 0 & 0 \\ 3 & 10 & 3 \end{bmatrix} \qquad G_x = \frac{1}{32} \begin{bmatrix} -3 & 0 & 3 \\ -10 & 0 & 10 \\ -3 & 0 & 3 \end{bmatrix}.

See also: KernelFactors.scharr and Kernel.scharr.
KernelFactors.bickley
The bickley option [7,8] designates the kernels

G_y = \frac{1}{12} \begin{bmatrix} -1 & -4 & -1 \\ 0 & 0 & 0 \\ 1 & 4 & 1 \end{bmatrix} \qquad G_x = \frac{1}{12} \begin{bmatrix} -1 & 0 & 1 \\ -4 & 0 & 4 \\ -1 & 0 & 1 \end{bmatrix}.

See also: KernelFactors.bickley and Kernel.bickley.
Choices for border
At the image edge, border is used to specify the padding which will be used to extrapolate the image beyond its original bounds. As an indicative example of each option, the results of the padding are illustrated on an image consisting of a row of six pixels which are specified alphabetically: a b c d e f.

"replicate" (default)
The border pixels extend beyond the image boundaries, repeating the values at the edge:

    a a │ a b c d e f │ f f

"circular"
The border pixels wrap around. For instance, indexing beyond the left border returns values starting from the right border:

    e f │ a b c d e f │ a b
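This behaviour can be previewed with padarray from ImageFiltering. The sketch below applies circular padding of two pixels to a numeric stand-in for the six-pixel row above; the tuple arguments of Pad give the amount of padding at the low and high edge of each dimension:

using ImageFiltering

a = [1, 2, 3, 4, 5, 6];
## Wrap-around padding of two elements on each side; the result is an
## OffsetArray indexed -1:8 containing 5 6 | 1 2 3 4 5 6 | 1 2.
padarray(a, Pad(:circular, (2,), (2,)))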
Details
To appreciate the difference between various gradient estimation methods it is helpful to distinguish between: (1) a continuous scalar-valued analogue image f(x, y), where x and y vary continuously over the real numbers, and (2) its discrete digital realization f[x, y], which is sampled only at integer pixel coordinates.
Analogue image
The gradient of a continuous analogue image f(x, y) at a location (x, y) is defined as the vector

\nabla f(x, y) = \begin{bmatrix} \frac{\partial f}{\partial x}(x, y) \\ \frac{\partial f}{\partial y}(x, y) \end{bmatrix},

where \frac{\partial f}{\partial x} and \frac{\partial f}{\partial y} denote the partial derivatives of f along the x and y directions, respectively.
Digital image
In practice, we acquire a digital image f[x, y] whose intensities are known only at integer pixel locations. The partial derivatives therefore cannot be evaluated directly and must be approximated from the samples.
A straightforward way to approximate the partial derivatives is to use the central-difference formulae

\frac{\partial f}{\partial x}[x, y] \approx \frac{f[x+1, y] - f[x-1, y]}{2}

and

\frac{\partial f}{\partial y}[x, y] \approx \frac{f[x, y+1] - f[x, y-1]}{2}.
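As a quick sanity check of these formulae, the following snippet (illustrative only, plain Julia) applies central differences to samples of sin and compares the result against the exact derivative cos:

xs = 0:0.1:1;
f = sin.(xs);
## Central differences at the interior sample points.
df = (f[3:end] - f[1:end-2]) / (2 * 0.1);
## The residual is the O(h^2) truncation error, about 1.7e-3 for h = 0.1.
maximum(abs.(df - cos.(xs[2:end-1])))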
However, the central-difference formulae are very sensitive to noise. When working with noisy image data, one can obtain a better approximation of the partial derivatives by using a suitable weighted combination of the neighboring image intensities. The weighted combination can be represented as a discrete convolution operation between the image and a kernel which characterizes the requisite weights. In particular, if H denotes the kernel of weights, the derivative estimate G is obtained as

G[x, y] = \sum_{i} \sum_{j} H[i, j] \, f[x - i, y - j].

The kernel is frequently also called a mask or convolution matrix.
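In ImageFiltering this filtering step is performed by imfilter. The sketch below checks that filtering with the dense Sobel kernel from Kernel.sobel agrees with the corresponding imgradients call; the two code paths may differ by floating-point round-off, hence the ≈ comparison:

using ImageFiltering

img = rand(32, 32);
ky, kx = Kernel.sobel();              # dense 3×3 Sobel kernels (y- and x-derivative)
Gx = imfilter(img, kx, "replicate");  # filter with the x-derivative kernel
Gy2, Gx2 = imgradients(img, KernelFactors.sobel, "replicate");
Gx ≈ Gx2                              # same estimate via the factored kernels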
Weighting schemes and approximation error
The choice of weights determines the magnitude of the approximation error and whether the finite-difference scheme is isotropic. A finite-difference scheme is isotropic if the approximation error does not depend on the orientation of the coordinate system, and anisotropic if the approximation error has a directional bias [2]. With a continuous analogue image the magnitude of the gradient would be invariant under rotation of the coordinate system, but in practice one cannot obtain perfect isotropy with a finite set of discrete points. Hence a finite-difference scheme is typically considered isotropic if the leading error term in the approximation does not have preferred directions.
Most finite-difference schemes that are used in image processing are based on 3 × 3 kernels of the form

H_y = \frac{1}{4a + 2b} \begin{bmatrix} -a & -b & -a \\ 0 & 0 & 0 \\ a & b & a \end{bmatrix},

with H_x its transpose, where the weights a and b select the particular scheme: (a, b) = (1, 1) yields the Prewitt kernel, (1, 2) the Sobel kernel, (3, 10) the Scharr kernel, and (1, 4) the Bickley kernel listed above.
Separable kernels
A kernel is called separable if it can be expressed as the convolution of two one-dimensional filters. With a matrix representation of the kernel, separability means that the kernel matrix can be written as an outer product of two vectors. Separable kernels offer computational advantages since instead of performing a two-dimensional convolution one can perform a sequence of one-dimensional convolutions.
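To illustrate, each of the 3 × 3 kernels above factors into a 1-D smoothing filter and a 1-D derivative filter, which is exactly the representation the KernelFactors variants provide. The brief sketch below (standard-library LinearAlgebra only) reconstructs the dense Sobel x-kernel as an outer product and confirms that it has rank one:

using LinearAlgebra

v = [1, 2, 1] / 4   # 1-D smoothing filter
d = [-1, 0, 1] / 2  # 1-D central-difference filter
K = v * d'          # outer product reproduces the dense 3×3 Sobel x-kernel
rank(K) == 1        # separable kernels have unit rank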
References
1. B. Jähne, Digital Image Processing, 5th ed. Springer, 2005. doi:10.1007/3-540-27563-0
2. M. Patra and M. Karttunen, "Stencils with isotropic discretization error for differential operators," Numer. Methods Partial Differential Eq., vol. 22, pp. 936–953, 2006. doi:10.1002/num.20129
3. J. M. Prewitt, "Object enhancement and extraction," Picture Processing and Psychopictorics, vol. 10, no. 1, pp. 15–19, 1970.
4. P.-E. Danielsson and O. Seger, "Generalized and separable Sobel operators," in Machine Vision for Three-Dimensional Scenes, H. Freeman, Ed. Academic Press, 1990, pp. 347–379. doi:10.1016/b978-0-12-266722-0.50016-6
5. S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 3, pp. 252–265, 2000. doi:10.1109/34.841757
6. H. Scharr and J. Weickert, "An anisotropic diffusion algorithm with optimized rotation invariance," Mustererkennung 2000, pp. 460–467, 2000. doi:10.1007/978-3-642-59802-9_58
7. A. Belyaev, "Implicit image differentiation and filtering with applications to image sharpening," SIAM Journal on Imaging Sciences, vol. 6, no. 1, pp. 660–679, 2013. doi:10.1137/12087092x
8. W. G. Bickley, "Finite difference formulae for the square lattice," The Quarterly Journal of Mechanics and Applied Mathematics, vol. 1, no. 1, pp. 35–42, 1948. doi:10.1093/qjmam/1.1.35