Unformatted text preview: Medical Image Processing 1 (MIP1)
Image Signal Analysis and Processing (ISAP)
SS’09
Image Enhancement
Dr. Pierre Elbischger
School of Medical Information Technology
Carinthia University of Applied Sciences

Sharpening using the Laplacian (1)
The principle of sharpening is to highlight fine detail in an image, or to enhance detail that has
been blurred, either in error or as a natural effect of a particular method of image acquisition. The
∇² (Laplacian) operation can be replaced by a convolution of the image with a Laplacian-of-Gaussian
filter kernel.
Sharpening using the Laplacian
19.05.2009 Dr. Pierre Elbischger  MIP1/ISAP'SS09

Sharpening using the Laplacian (2)

Sharpening using the Laplacian (3)

Sharpening using unsharp masking (1)
Subtract a smoothed version of the image from the original image to obtain a mask that contains the
high-frequency content (edge information) of the image; the mask therefore represents a sort of
highpass image. Adding a multiple a of the mask to the original image sharpens the image:

g = f + a · (f − flp)

For the lowpass filter, the Gaussian filter is often used. The method is more robust to noise than
sharpening using the Laplacian.
Tunable parameters: σ (spatial spread, 1…20) and a (sharpening intensity, 0.2…4.0)
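A minimal sketch of unsharp masking in Python/NumPy (an illustration choice; the slides use Matlab). A separable binomial [1,2,1]/4 kernel stands in for the Gaussian lowpass; the function names are hypothetical:

```python
import numpy as np

def blur(img, passes=3):
    """Approximate a Gaussian lowpass with repeated separable [1,2,1]/4 smoothing."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    out = img.astype(float)
    smooth_1d = lambda r: np.convolve(np.pad(r, 1, mode='edge'), k, 'valid')
    for _ in range(passes):
        out = np.apply_along_axis(smooth_1d, 0, out)
        out = np.apply_along_axis(smooth_1d, 1, out)
    return out

def unsharp_mask(img, a=0.8):
    """g = f + a * (f - lowpass(f)); the mask holds the high-frequency content."""
    img = img.astype(float)
    mask = img - blur(img)      # highpass image (the unsharp mask)
    return img + a * mask

# A vertical step edge gains over- and undershoot, which reads as sharpening
step = np.tile(np.concatenate([np.zeros(8), np.full(8, 100.0)]), (16, 1))
sharp = unsharp_mask(step, a=0.8)
```

Flat regions far from the edge are left untouched, while the transition is steepened by overshoot on the bright side and undershoot on the dark side.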
Sharpening using unsharp masking (2)

[Figure: original and sharpened images, with a line profile.]

Highboost filtering (1)
Unsharp masking. Subtract from an image f(x,y) a lowpass-filtered version flp(x,y) of itself.
This results in a highpass-filtered version fhp(x,y) of the image (the unsharp mask). In order to
enhance the edges, the unsharp mask fhp(x,y) can be added to the image.
Highboost filtering. The constant factor 2 in the unsharp-masking operation (f + fhp = 2f − flp) is
replaced by the parameter A, which can be used to adjust the contribution made by the image to the
overall enhanced result:

fhb(x,y) = A·f(x,y) − flp(x,y) = (A−1)·f(x,y) + fhp(x,y)
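A numeric sketch of highboost filtering (Python/NumPy as an illustration; a binomial kernel again stands in for the Gaussian lowpass). It also checks the special case that A = 2 reproduces unsharp masking with a = 1:

```python
import numpy as np

def blur(img, passes=3):
    """Separable [1,2,1]/4 smoothing as a stand-in for the Gaussian lowpass."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    out = img.astype(float)
    smooth_1d = lambda r: np.convolve(np.pad(r, 1, mode='edge'), k, 'valid')
    for _ in range(passes):
        out = np.apply_along_axis(smooth_1d, 0, out)
        out = np.apply_along_axis(smooth_1d, 1, out)
    return out

def highboost(img, A=2.0):
    """f_hb = A*f - lowpass(f) = (A-1)*f + highpass(f)."""
    img = img.astype(float)
    return A * img - blur(img)

img = np.outer(np.linspace(0.0, 255.0, 32), np.ones(32))  # a simple gradient image
boosted = highboost(img, A=2.0)
unsharp = img + (img - blur(img))   # unsharp masking with a = 1
```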
Highboost filtering (2)

For the lowpass filter, the Gaussian filter is often used; it is more robust to noise than
sharpening with the Laplacian.
Tunable parameters: σ (spatial spread) and A (sharpening intensity). For A = 2 the expression is
identical to unsharp masking (a = 1, see previous slides).
In general an arbitrary highpass filter can be used. If we use the Laplacian for the highpass
filter, then for:
A = 1 the expression is identical to the (negative) Laplacian of the image;
A = 2 the expression becomes identical to sharpening using a Laplacian (see previous slides).

Probability – Classical definition (Laplace 1812)
A two-dimensional representation of the outcomes of two dice, and the subspaces
associated with the events corresponding to the sum of the dice being greater than 8,
or less than or equal to 8.

Empirical probability (von Mises 1936)
The experiment is repeatable n times, whereby the event A occurs nA times.
The probability is defined by the relative frequency:

P(A) := lim_{n→∞} nA / n

Due to the fact that n is always finite, P(A) can only be an estimate.
Law of large numbers → for an increasing number of experiments, the relative frequency
approaches a limit.
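The die-throwing example can be simulated directly (Python/NumPy as an illustration; the seed is arbitrary). The relative frequency of "the die shows a six" settles toward 1/6 as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)
# Event A: "the die shows a six"; the true probability is 1/6
estimates = []
for n in (100, 10_000, 1_000_000):
    throws = rng.integers(1, 7, size=n)     # n throws of a fair die
    n_A = int(np.count_nonzero(throws == 6))
    estimates.append(n_A / n)               # relative frequency n_A / n
```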
Example: relative frequency when throwing a die n times.

Distribution function
FX(α) := P(x ≤ α)

Example: dice.

Properties:
FX(−∞) = 0 and FX(+∞) = 1
P(α1 < x ≤ α2) = FX(α2) − FX(α1) ≥ 0
P(x = α) = 0, if FX is continuous.

Probability density function
The first derivative of the distribution function FX(α) is defined as the probability
density function (PDF) fX(α) of the random variable x:

fX(α) := dFX(α) / dα

Properties:
fX(α) ≥ 0  ⇐ monotonicity of the distribution function
FX(α) = ∫_{−∞}^{α} fX(γ) dγ, in particular FX(+∞) = ∫_{−∞}^{+∞} fX(γ) dγ = 1

Example: uniform distribution.

Continuous vs discrete random variable
Discrete random variable
A discrete random variable is one which may take on only a countable number of
distinct values such as 0, 1, 2, 3, 4, … Discrete random variables are usually (but not
necessarily) counts. Examples of discrete random variables include the number of
children in a family, the Friday-night attendance at a cinema, the number of patients in
a doctor's surgery, etc. (e.g., binomial distribution; Poisson distribution – count of the
number of events that occur in a certain time interval or spatial area → CCD element)
Example: binomial distribution

p(X = k) = C(n, k) · p^k · (1 − p)^(n−k)

Example: bimodal distribution
X has value 1 with probability p and −1 with probability (1 − p).

Mixed distribution
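The binomial probability mass function can be written out in a few lines (Python as an illustration; `binom_pmf` is a hypothetical helper name):

```python
import math

def binom_pmf(k, n, p):
    """p(X = k) = C(n, k) * p**k * (1 - p)**(n - k)"""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# Example: 10 fair coin flips; the pmf over k = 0..10 must sum to 1
pmf = [binom_pmf(k, 10, 0.5) for k in range(11)]
```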
Continuous random variable
A continuous random variable is one which takes
an infinite number of possible values. Continuous
random variables are usually measurements.
Examples include height, weight, the amount of
sugar in an orange, the time required to run a
mile. (e.g., Normal distribution).
Uniform / Impulse distribution

[Figure: uniform and impulse (delta) distributions.]

Gaussian distribution (normal distribution)

x ~ N(µ, σ²), where µ is the mean and σ² the variance.

P(µ − σ < x ≤ µ + σ) ≈ 0.68
P(µ − 2σ < x ≤ µ + 2σ) ≈ 0.95

Mathematically tractable in both the spatial and frequency domain.
Is appropriate in many realistic cases → central limit theorem.

Rayleigh distribution

Gamma / Exponential distribution

Histogramming
Partition the measurement space into a finite number n of disjoint regions ℛi with
equal volume V (1-D: width of the bins; 2-D: area; 3-D: volume; n-D: hypervolume), called bins,
and count the number of samples k (often called the frequency) that fall into each of these bins.
The estimated probability is proportional to that count.

[Figure: histogram estimates of an underlying Gaussian distribution, for n=1000 samples with 100 bins and n=100 samples with 10 bins.]
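A sketch of the bin-counting estimate for the Gaussian case (Python/NumPy as an illustration; sample count and bin count are arbitrary). Dividing the count k by n·V turns the histogram into a density estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=100_000)       # samples of a standard normal
counts, edges = np.histogram(samples, bins=100, range=(-4.0, 4.0))
bin_width = edges[1] - edges[0]                    # the "volume" V of each 1-D bin
density = counts / (samples.size * bin_width)      # k / (n * V) estimates the PDF
```

The peak of the estimate approaches the true normal density 1/√(2π) ≈ 0.399 at zero, and the estimate integrates to (approximately) one.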
Histograms of noisy images I
The original image had only three distinct gray levels.

Histograms of noisy images II

Sample moments
Assuming a sample {x1, x2, …, xN} with N observations, the sample moments are defined as:

r-th sample raw moment: m_r = (1/N) Σ_{i=1..N} x_i^r  (m1 represents the mean of the distribution)
r-th sample central moment: s_r = (1/N) Σ_{i=1..N} (x_i − m1)^r  (s2 represents the variance of the distribution)

Other statistical parameters
Order statistics
The sample set is ordered. Order statistics are often used as robust estimates for:
• the mean (e.g., the median) and
• the standard deviation (3σ > 99th percentile).

p-th percentile

Graylevel transformation (1)
Changes the gray value independent of the pixel location in the image.
Often used to improve the visual appearance to a human observer.
The gray value of each pixel in the input image f(x,y) is replaced by the gray value determined by
the graylevel transformation function T(.), resulting in the output image g(x,y). T(.) relates the
gray level r of an input pixel to the gray level s of an output pixel. In the case where the image
consists of discrete values, the transformation can efficiently be implemented using a lookup table.

[Figure: transformation curve s = T(r); values mapped above the identity appear brighter.]
Graylevel transformation (2)

[Figure: transformation curves for a negative image, contrast enhancement, and binary threshold.]

Graylevel transformation (3)

[Figure: negative image of a mammogram.]

Gamma correction (1)
A pixel value may represent the amount of light falling onto a sensor element in a
camera, the photographic density of film, the amount of light to be emitted by a
monitor, the number of toner particles to be deposited by a printer, or any other
relevant physical magnitude. In practice, the relationship between a pixel value and
the corresponding physical quantity is usually complex and almost always nonlinear.
The exposure function specifying the
relationship between the logarithmic light
intensity B and the resulting film density D is
almost linear within a certain range. The slope
of the linear part is traditionally referred to as
the “gamma” value of the photographic
material. The term was adopted later to
characterize other devices such as the
cathode ray tube (CRT), which has a nonlinear
relation between the amplitude (voltage) of
the video signal and the emitted light. To
compensate for this nonlinearity, a gamma
correction can be applied to the video signal
before visualizing it with a CRT.

Gamma correction (2)
The gamma correction is based on the gamma function b = a^γ, where γ is called the gamma value.
If a is limited to the interval [0, 1], then – independent of γ – the value of the gamma function
also stays within [0, 1], and the function always runs through the points (0,0) and (1,1).
Depending on the value of γ, the function can imitate both logarithmic and exponential types of
function. Because of its monotonicity the function can easily be inverted, which again leads to a
gamma function with the new gamma value 1/γ.

Typical gamma values per device:
Receiver (TV, CRT) γ ≈ 2.5
LCD γ ≈ 1.8 to 2.8
Camera gamma correction γ ≈ 1/1.956 ≈ 0.5

The transfer characteristic of a device with gamma value γ is compensated for by a gamma
correction with 1/γ: light → camera (γ) → gamma correction (1/γ) → corrected signal.
The resulting corrected signal b = s^(1/γ) is proportional to the original light intensity B,
where s denotes the output signal of a certain device (e.g., a camera).
Gamma correction (3)
Gamma correction denotes a simple point operation to compensate for the transfer characteristics
of different input and output devices and to map them to a unified intensity space ("calibrated
intensity space") in which the data is stored. When working with 8-bit digital images, the pixel
value range is [0, 255] ([0, amax]) rather than [0, 1], so we additionally have to perform proper
scaling: b = amax · (a/amax)^γ.
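A sketch of gamma correction with 8-bit scaling (Python/NumPy as an illustration; `gamma_correct` is a hypothetical name). Applying γ after 1/γ recovers the original values up to rounding:

```python
import numpy as np

def gamma_correct(img, gamma, a_max=255):
    """Scale to [0,1], apply b = a**gamma, then scale back to [0, a_max]."""
    a = img.astype(float) / a_max
    b = a ** gamma
    return np.round(b * a_max).astype(np.uint8)

ramp = np.arange(256, dtype=np.uint8).reshape(16, 16)
corrected = gamma_correct(ramp, 1 / 2.2)   # gamma < 1 brightens mid-tones
restored = gamma_correct(corrected, 2.2)   # applying gamma undoes 1/gamma (up to rounding)
```

The endpoints 0 and amax are fixed points of the transform, as the text requires of the points (0,0) and (1,1).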
Automatic Contrast Adjustment
Contrast enhancement is achieved by mapping the current darkest and brightest pixel values to the
lowest and highest possible values. alow and ahigh are the lowest and highest pixel values in the
current image, whose full intensity range is [amin, amax].
In order to make the algorithm more robust to outliers (extremely high/low pixel values), a fixed
percentage (slow, shigh) of pixels is saturated to the lower and upper ends of the target
intensity range.

[Figure: original vs. autocontrast result.] Matlab: imadjust
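A minimal sketch of the percentile-based stretch (Python/NumPy standing in for Matlab's imadjust; parameter names are hypothetical, slow = shigh = 1% here):

```python
import numpy as np

def auto_contrast(img, s_low=1.0, s_high=1.0, a_min=0, a_max=255):
    """Stretch [a_low, a_high] to [a_min, a_max]; percentiles saturate outliers."""
    img = img.astype(float)
    a_low = np.percentile(img, s_low)            # robust darkest value
    a_high = np.percentile(img, 100.0 - s_high)  # robust brightest value
    out = (img - a_low) / (a_high - a_low) * (a_max - a_min) + a_min
    return np.clip(out, a_min, a_max).astype(np.uint8)

lowcon = np.linspace(80.0, 160.0, 256).reshape(16, 16)  # image occupying only [80, 160]
stretched = auto_contrast(lowcon)
```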
Histogram Equalization (1)
Histogram equalization is a method for automatic contrast improvement.
Idea: improve the contrast by applying a monotonic graylevel transform T(.) to an input image with
histogram H(r), so that the output image has a uniformly distributed histogram G(q) and thus uses
the entire dynamic range.
For our considerations the histogram can be treated as a continuous function (similar to a PDF).
The monotonically increasing property of the transform implies that probability mass is preserved
between corresponding intervals of the input and output histograms.
Histogram Equalization (2)

Assuming an image of size N×N with input brightness range p0 to pu, the equalized histogram G(q)
corresponds to the uniform density with constant value over the entire output brightness range
q0 to qu (all N² input pixels are equally distributed over the entire output grayscale range).
Substitution into the first equation and solving for q yields the desired transformation:

q = T(p) = q0 + (qu − q0)/N² · ∫_{p0}^{p} H(s) ds

The integral in the transform is the distribution function and can be replaced by the cumulative
histogram for digital images.
Matlab: histeq
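A discrete sketch of the cumulative-histogram mapping (Python/NumPy standing in for Matlab's histeq; the test image is synthetic):

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Use the normalized cumulative histogram as the graylevel transform."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    cdf = hist.cumsum() / img.size               # cumulative histogram in [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]                              # apply as a lookup table

rng = np.random.default_rng(2)
dark = np.clip(rng.normal(60.0, 15.0, (64, 64)), 0, 255).astype(np.uint8)  # low contrast
eq = hist_equalize(dark)
```

The equalized image occupies a wider gray-value range than the low-contrast input.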
Histogram Equalization (3)

The variance of summed random variables
Assuming N arbitrarily distributed and statistically independent random variables xk with means
µk = E{xk} and variances σk² = E{(xk − µk)²}, the variance σy² := E{(y − µy)²} of their sum
y = Σ xk is given by:

σy² = Σ_{k=1..N} σk²

For the sum of N independent RVs xk with equal variances σk = σx, k = 1…N, it follows that
σy² = N·σx². For the mean of N independent RVs xk with equal variances σk = σx, k = 1…N, it
follows that σy² = σx²/N. The variance of the mean decreases as the number of samples used to
estimate the mean increases.
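Both results can be checked by simulation (Python/NumPy as an illustration; uniform RVs on [0,1] have variance 1/12, and the trial count is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials = 16, 200_000
x = rng.uniform(0.0, 1.0, size=(trials, N))   # N i.i.d. RVs per trial, var = 1/12 each

var_sum = x.sum(axis=1).var()    # variance of the sum  -> N * sigma_x^2
var_mean = x.mean(axis=1).var()  # variance of the mean -> sigma_x^2 / N
```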
Noise sources
• Counting statistics → due to a small number of incident particles (photons, electrons, etc.)
• Instability in the light source or detector during the time required to scan or digitize an image.
• Thermal noise that increases the dark current – cooling can reduce this effect. Infrared cameras
must be cooled more than a visible-light camera.
• Quantization noise
• Salt & pepper noise (defect pixels)
• Readout noise
• Additional sources of noise from the other associated electronics: clock signals, wiring from
the camera to the digitizer, pickup of electrical signals from the computer itself, …

Visualize the noise by image differencing
[Figure: two images of the same view, acquired as sequential video frames, and the difference between the two images.]
Poisson distribution
The Poisson distribution expresses the probability of a number of events k occurring in a
fixed period of time if these events occur with a known average rate and are independent
of the time since the last event:

f(k; λ) = e^(−λ) λ^k / k!,  with µ = λ and σ² = λ

λ is a positive real number, equal to the expected number of occurrences during the given interval.

Example: if the events occur on average every 4 minutes, and you are interested in the number of
events occurring in a 10-minute interval, you would use as model a Poisson distribution with
λ = 10/4 = 2.5 events/interval.
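The pmf and the mean/variance property can be verified numerically for the λ = 2.5 example (Python as an illustration; the sum is truncated at k = 50, where the tail is negligible):

```python
import math

def poisson_pmf(k, lam):
    """f(k; lambda) = exp(-lambda) * lambda**k / k!"""
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 10 / 4                      # one event per 4 min -> 2.5 events per 10-min interval
pmf = [poisson_pmf(k, lam) for k in range(50)]
mean = sum(k * p for k, p in enumerate(pmf))
var = sum((k - mean) ** 2 * p for k, p in enumerate(pmf))
```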
Signal-to-noise ratio (SNR)

The SNR is the ratio between a signal that contains meaningful information and the noise that is
also present in the signal and obscures the meaningful information. Because many signals have a
very wide dynamic range, SNRs are usually expressed on the logarithmic decibel scale:

SNR = 10 · log10(Ps / Pn) dB

Ps … signal power
Pn … noise power

Signal-to-noise ratio (SNR) – Quantization noise

[Figure: sine wave and sawtooth signal.] The quantization error of the sine wave
is approximately uniformly distributed.

Signal: the power of a sine wave with amplitude A is Ps = A²/2.

Quantization error: assuming a uniform distribution of input signal values, the quantization noise
is a uniformly distributed random signal with a peak-to-peak amplitude of one quantization level.
The equidistant quantization step for an n-bit resolution over the range [−A, A] is Δ = 2A/2^n.
The resulting quantization error is uniformly distributed within [−Δ/2, Δ/2], and its power
corresponds to the variance of that distribution, Pn = Δ²/12.

Contrast in images
An object is visible in an image because it has a different brightness than its surroundings.
The human eye can detect a minimum contrast c of about 0.5 to 5%, depending on the observation
conditions. 100% contrast is the difference between pure black and pure white. In other words,
humans can distinguish about 20 to 200 shades of gray between the blackest black and the whitest
white (c = 0: pure black; c = 1: pure white).

[Figure: squares with different contrast to the background (c is given in percent).]
SNR/CNR in images

An object is visible in an image only if its contrast is large enough to overcome the random image
noise. In general, trouble begins when the SNR falls below about 1.0.
Sometimes the signal power in the SNR calculation is replaced by the contrast c of the image,
resulting in the so-called contrast-to-noise ratio (CNR). Note that in this case the standard
deviation of the noise is used rather than the variance (power). On a log scale this differs only
by a multiplicative factor of 2.

Constant vs. signal-dependent noise
Video preamplifier noise originates from the random motion of electrons in the
transistors. This makes the noise level dependent on how the electronics are designed,
but not on the level of the signal being amplified. A typical CCD camera will have an
SNR of about 300 to 1000 (40 to 60 dB) caused by constant-amplitude noise.

Constant vs. signal-dependent noise (2)
Noise that increases with the signal level results when the image has been represented by a
small number of individual particles. For example, this might be the xrays passing through a
patient, the light photons entering a camera, or the electrons in the well of a CCD. The
mathematics governing these variations are called counting statistics or Poisson statistics.

Example: CCD element
The CCD is uniformly illuminated such that an average of N electrons is generated in each well.
By sheer chance, some wells will have more electrons, while some will have fewer. To be more
exact, the number of electrons will be Poisson distributed with a mean of μ = N (signal level)
and a variance σ² = N that describes how much variation there is from well to well. With
increasing illumination (number of electrons per well), the signal grows faster than the noise,
resulting in an overall improvement in the SNR.
Consider a typical CCD camera with an SNR of 300; that is, the noise from the CCD preamplifier is
1/300th of the full-scale signal. An equivalent noise would be produced if the quantum sink of the
system contained 90,000 particles per pixel. If the quantum sink has a larger number of particles,
the preamplifier noise will be predominant. Accordingly, most CCDs are designed with a full well
capacity of 100,000 to 1,000,000 electrons, minimizing the Poisson noise.
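The square-root law behind these numbers can be computed directly (Python/NumPy as an illustration); note that 90,000 electrons give exactly the SNR of 300 quoted above:

```python
import numpy as np

# Poisson statistics in a CCD well: mean N electrons, variance N,
# so SNR = N / sqrt(N) = sqrt(N): the signal grows faster than the noise.
wells = np.array([10_000.0, 90_000.0, 1_000_000.0])
snr = wells / np.sqrt(wells)
snr_db = 20.0 * np.log10(snr)     # amplitude ratio expressed in decibels
```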
Increasing the SNR by low-pass filtering
Major assumption: neighboring pixels have the same gray value, or at least differ only slightly.
An N×N uniform filter kernel improves the SNR by a factor of N (the square root of N²).
Due to the low-pass filtering, the resolution of the image is decreased. The strategy can be
continued until the filter kernel is equal to the size of the object being detected. This means
the ability to detect an object is proportional to the square root of its area: if an object's
diameter is doubled, it can be detected in twice as much noise.

[Figure: original image, 3×3 low-pass filtered, 11×11 low-pass filtered.]

Increase the SNR by ensemble averaging
Major assumption: pixel readings at different times represent the same structure in the viewed scene.

[Figure: fluorescence light microscopy images, original size and close-up; one image vs. ensemble average.]

Increase the SNR by ensemble averaging – Integration over time

[Figure: SEM images of a scratched metal surface, 1-second scan vs. 2-second scan. The gray-value distribution becomes narrower → decrease in the variance of the noise.]

Lowpass filters vs. order filters
Lowpass filtering works fine in the case of Gaussian-like noise but is not effective in removing
salt & pepper noise: single pixels with a very different gray value influence the mean of their
neighborhood.

[Figure: original, salt-and-pepper noise, mean filtering.]

Solution: rank the pixels in a certain neighborhood according to their gray value. Then, for
example, the median, minimum or maximum value can be used as the gray value for the center pixel.
Benefits
• Produces no new gray values
• Retains edges in the image
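A numpy-only sketch comparing a 3×3 mean and a 3×3 median on impulse noise (an illustration; the window-gathering helper and the test image are hypothetical):

```python
import numpy as np

def window_stack(img, k=3):
    """Collect the k x k neighborhood of every pixel (edge-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    return np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                     for dy in range(k) for dx in range(k)])

rng = np.random.default_rng(4)
clean = np.full((32, 32), 100.0)
clean[:, 16:] = 200.0                        # a step edge
noisy = clean.copy()
spots = rng.random(clean.shape) < 0.05       # 5% impulse ("salt & pepper") pixels
noisy[spots] = rng.choice([0.0, 255.0], size=int(spots.sum()))

win = window_stack(noisy, 3)
mean_out = win.mean(axis=0)                  # 3x3 mean: impulses smear into neighbors
median_out = np.median(win, axis=0)          # 3x3 median: impulses removed, edge kept
```

Because the window holds an odd number of samples, the median is always one of the existing gray values, matching the "produces no new gray values" property above.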
Order filters – Median filter
• Retains edges in the image
• Slow because of its nonlinearity

[Figure: original with salt & pepper (impulse) noise; 3×3 median filtered; 5×5 median filtered.]

Rotating mask filter
Reduces the blurring at edges that occurs with lowpass filtering.
Idea: search for the most homogeneous neighborhood and use this region to compute the gray value
for the pixel of interest. Use the variance of the gray values in the mask as the homogeneity
criterion.
Algorithm
1) Place the mask in every possible position over the pixel of interest
2) Compute the variance of the current neighborhood
3) Choose the mask position that gives the smallest variance
4) Assign the pixel of interest the mean gray value of the neighborhood determined by that mask

Adaptive mean filter (1)
Make the filtering dependent on the local image structure in such a way that discontinuities are
better preserved. The output blends between the mean under the mask and the current gray value,
controlled by the ratio of the variance under the mask to the variance of the noise:

g(i,j) = m(i,j) + max(0, (σ²_local − σ²_noise) / σ²_local) · (f(i,j) − m(i,j))

If no a-priori information about the noise is available, use the average local variance over the
whole image as an estimate (Nb, Mb … size of the neighborhood; N, M … size of the image).
In the case of a local variance that is below the expected noise variance (e.g., in quite
homogeneous regions), the quotient is approximately '0' and the pixel value in the output image is
replaced by the mean of the neighborhood. At locations with large local variance, an important
object structure is assumed (e.g., edges); the quotient becomes '1' and, thus, the original image
data is preserved (blending between the two extreme states).
Matlab: wiener2
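A numpy-only sketch in the spirit of wiener2 (an illustration with hypothetical helper names; the known noise variance is passed in explicitly here):

```python
import numpy as np

def local_stats(img, k=7):
    """Local mean and variance in a k x k neighborhood (edge-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    win = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                    for dy in range(k) for dx in range(k)])
    return win.mean(axis=0), win.var(axis=0)

def adaptive_mean(img, k=7, noise_var=None):
    """Blend between the local mean (flat areas) and the original pixel (edges)."""
    img = img.astype(float)
    m, v = local_stats(img, k)
    if noise_var is None:
        noise_var = v.mean()                 # estimate the noise from the image itself
    gain = np.clip((v - noise_var) / np.maximum(v, 1e-12), 0.0, 1.0)
    return m + gain * (img - m)

rng = np.random.default_rng(5)
clean = np.full((64, 64), 100.0)
clean[:, 32:] = 200.0                        # a sharp edge to preserve
noisy = clean + rng.normal(0.0, 10.0, clean.shape)
out = adaptive_mean(noisy, k=7, noise_var=100.0)
mean_out = local_stats(noisy, 7)[0]          # plain 7x7 mean filter for comparison
```

Flat regions are smoothed almost as strongly as by the plain mean filter, while pixels at the edge stay close to their original values.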
Adaptive mean filter (2)

[Figure: original; image corrupted by Gaussian noise with variance 1000; mean filter (7×7); adaptive filter (7×7 window).]

Degradation functions
Motion blur (a = V·T: moved distance; V: moving velocity; T: moving duration):

H(u,v) = sin(π(Vx·T·u + Vy·T·v)) / (π(Vx·u + Vy·v)) · e^(−jπ(Vx·T·u + Vy·T·v))

Out of focus (J1: first-order Bessel function; a: out-of-focus distance; r² = u² + v²):

H(u,v) = J1(a·r) / (a·r)

Atmospheric turbulence (c: degree of turbulence):

H(u,v) = e^(−c·(u² + v²)^(5/6))

Wiener filter
[Diagram: original signal u(t) → degradation filter H(f) → s(t), plus additive noise n(t) → corrupted signal → reconstruction filter W(f) → reconstructed signal.]

The optimal reconstruction filter in the least-squares sense is the Wiener filter. It copes with
images that are distorted by the linear system H(f) and corrupted by additive noise:

W(f) = H(f)^(−1) · |H(f)|² S(f) / (|H(f)|² S(f) + N(f))

For vanishing noise this reduces to the inverse filter H(f)^(−1); an increased noise level pulls
it away from pure inversion. S(f) and N(f) denote the signal power density spectrum and the noise
power density spectrum, respectively.
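A 1-D Wiener deconvolution sketch (Python/NumPy as an illustration; the blur kernel, noise level, and the assumption that S(f) is known are all hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 4 * t / n)            # original signal s(t)

# Degradation: circular 9-tap moving-average blur H(f) plus additive noise n(t)
h = np.zeros(n)
h[:9] = 1.0 / 9.0
H = np.fft.fft(h)
noisy = np.fft.ifft(np.fft.fft(signal) * H).real + rng.normal(0.0, 0.05, n)

# Wiener reconstruction filter: W = conj(H) S / (|H|^2 S + N)
S = np.abs(np.fft.fft(signal)) ** 2               # signal power spectrum (assumed known)
N = np.full(n, n * 0.05 ** 2)                     # white-noise power per FFT bin
W = np.conj(H) * S / (np.abs(H) ** 2 * S + N)
restored = np.fft.ifft(np.fft.fft(noisy) * W).real
```

Where the signal spectrum dominates, W approaches the inverse filter; where noise dominates (S ≈ 0), W goes to zero instead of amplifying the noise.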
Example: Wiener filter – 3D widefield microscopy
When taking a Z-series of images using a widefield fluorescence microscope, the microscope does
not discriminate between light emanating from inside or outside your plane of focus. In fact, you
need only focus the object near the center of the region of interest, and then tell the microscope
the upper and lower limits of Z-travel, the number of images to take in this range, and how far
apart the images should be (with a resolution capability of 0.1 micrometers in Z). You can then
let the microscope take the series of images, many of which will be completely out of focus. A
proper deconvolution software then does its magic and reallocates light to the correct pixels,
and lo and behold, the images suddenly make sense again – or most of them will make much more
sense than they did before, anyway.

[Figure: raw image vs. deconvolution result.]

Extended depth-of-field (microscopy)

[Figure: image series of out-of-focus images (large microscopy magnification) and the extended depth-of-field image.]
Solution: Exploiting the local contrast / variance

Strong texture information → sharp image → large standard deviation / variance of the local
histogram in a region of interest (ROI).
Weak texture information → unsharp image → small standard deviation / variance.
Extended depth-of-field exploiting the local contrast

1. For each pixel in the image the local
variance in a certain neighborhood is
computed.
2. An image of extended depth-of-field is created by combining the pixel values corresponding to
sharp regions in the input images, i.e., regions of large local variance.

Shading correction – Correction of nonuniform illumination
Multiplicative illumination model: f(i,j) = e(i,j)·g(i,j)
g(i,j) … undisturbed image
e(i,j) … disturbance
f(i,j) … corrupted image

Use a reference image c(i,j) to compute the disturbance. In the simplest case, c may be a constant
(e.g., for images that are expected to have a constant gray value – homogeneous background, no
objects). Acquiring such a reference scene yields fc(i,j) = e(i,j)·c, so the disturbance is
estimated as e(i,j) = fc(i,j)/c.

Illumination correction: g(i,j) = f(i,j)/e(i,j) = c·f(i,j)/fc(i,j)
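A minimal synthetic sketch of the constant-reference case (Python/NumPy as an illustration; the shading profile and the constant c = 120 are made-up values):

```python
import numpy as np

g = np.full((64, 64), 120.0)              # undisturbed image: homogeneous background
e = 0.5 + 0.5 * np.arange(64.0) / 63.0    # multiplicative shading, brighter to the right
f = g * e                                 # corrupted image f = e * g (broadcast per column)

c = 120.0                                 # the scene is expected to be this constant
f_c = e * c                               # reference image acquired under the same shading
e_est = f_c / c                           # estimated disturbance e = f_c / c
g_est = f / e_est                         # corrected image: g = c * f / f_c
```

The corrected image recovers the flat background exactly, because the same multiplicative disturbance appears in both the corrupted image and the reference.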
View
Full Document
 Spring '09
 PIERREELSCHBINGER
 Image processing, Normal Distribution, Probability theory, probability density function, Dr. Pierre Elbischger

Click to edit the document details