


Computational Biology Part 22: Basics of Image Acquisition and Processing
E. Glory, G. Steven Vanni, Meel Velliste, Robert F. Murphy
Copyright © 1998, 2000-2009. All rights reserved.

Biological imaging
- Significant advances in the fields of optics and electronics in the past two decades have greatly increased the utility of imaging for addressing biological questions.
- These advances permit:
  - more diverse types of information to be extracted from biological specimens
  - with greater accuracy
  - and under more demanding conditions.

Image Formation and Acquisition
- A digital image plane is acquired by recording a digital value proportional to the intensity of light (or other form of energy) impinging on each pixel of a detector.
- This intensity usually corresponds to the amount of light emitted by or reflected from a corresponding point on a specimen.
[Figure: projection of a specimen onto a detector grid yields an image, shown as a small grid of pixel values with intensities from 0 to 8.]

Image Formation
- Biological images may be acquired via a variety of imaging modes or modalities.
- Each mode is a combination of an image formation system and a detector.
[Diagram: Sample -> Image formation system -> Detector]

While the examples so far have dealt with light microscope images, we will now back up for a few minutes to consider many different types of images before concentrating on light microscopy.
Detector and image types
- In general, images may be classified according to what is being detected:
  - (Visible) light transmission, scattering or emission (single wavelength, 3-color, or full spectrum)
  - Electron transmission or scattering
  - X-ray transmission
  - Radioactive particle emission
  - Magnetic field perturbation
  - Physical displacement from "atomic force"

Fluorescence Microscope
[Diagram: fluorescence microscope components: arc lamp, excitation diaphragm, excitation filter, objective, emission filter, ocular.]

Light sources in the object
- Consider a fluorescent specimen made of individual molecules of fluorescent dye.
  - Each molecule can emit light.
  - Each dye molecule may be seen as a vanishingly small emitter.
  - Such an emitter is called a point source.
  - The concept of a point source is useful because a point is simple to model, and if we know how a point source is imaged, then we can easily model a complex specimen as a combination of many points and predict how it will be imaged.

Light sources in the object
- A specific example might be a microscope slide containing cells stained with fluorescent dye.
- In an ideal image, a point source would show intensity in only one pixel.

Point-spread function
- In reality, the light from each point in the specimen is seen to spread out and affect many pixels in the image.
- The mathematical description of this spreading or blurring process is called a point-spread function (PSF).
- The PSF is determined by the optics of the image formation system, including factors such as the refractive index, diameter and magnification of its components.

Realistic Image of a Point Source
- The resulting blurred region in the image can be approximated by a 2D Gaussian distribution.
[Figure: surface plot showing intensity on the z-axis for a PSF defined in the X-Y plane.]
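The 2D Gaussian approximation above can be sketched numerically. This is an illustrative sketch only: the grid size and the width sigma are arbitrary values chosen for the example, not properties of any real optical system.

```python
import math

def gaussian_psf_2d(size=7, sigma=1.5):
    """Sample a normalized 2D Gaussian on a size-by-size grid as a
    simple stand-in for an in-focus PSF. Both parameters are
    illustrative assumptions, not measured optical values."""
    c = size // 2                      # center of the grid
    psf = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))
            for x in range(size)] for y in range(size)]
    total = sum(sum(row) for row in psf)
    return [[v / total for v in row] for row in psf]   # normalize to sum 1

psf = gaussian_psf_2d()
# Intensity peaks at the center pixel and falls off symmetrically,
# matching the surface plot described above.
print(round(psf[3][3], 4), round(psf[3][0], 4))
```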
- Later we will consider a PSF defined in three dimensions.

Light sources in the object
- Thus, when a 2D image is acquired, each point in the specimen will be blurred in all directions and will contribute to the recording in many pixels around the pixel to which it directly corresponds.

Introduction to 3D Microscopy
- The spreading of light from a point source actually occurs in three dimensions, as will be shown.
- First, however, it is necessary to understand the three-dimensional (3D) nature of the object and image as acquired via 3D microscopy.

3D Microscopy
- When a microscope is focused on a specimen, the detector records an image from a plane.
  - This is the focal plane.
  - Parts of the specimen in the focal plane are in the best focus.
[Diagram: detector and focal plane.]
- 3D data is acquired by combining data from several different focal planes into a stack of images.
- This is accomplished by changing the distance between the specimen and the microscope's objective lens from one image acquisition to the next.
[Diagram: image stack and objective.]

Real 3D image of a point source
- Now, with a better understanding of what makes up a 3D image stack, we can consider how light from a point source spreads out and is imaged in three dimensions.

3D Reconstruction of the Point Spread Function (PSF) from a 0.2 Micron Bead
[Figure: views of a measured 3D PSF along the x, y and z axes, with a color scale indicating increasing intensity. Courtesy of Image & Graphics Inc.: http://www.imagepro.co.kr/]
- NOTE: Spreading along the Z-axis is more pronounced.

Image Formation
- Image formation can be described as:
  - the convolution of an array describing the original specimen or object
  - with a function describing the image formation system
  - to yield an acquired image.
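The three steps above can be sketched directly in 2D (the 3D case just adds a z loop). The direct triple-loop form below is for illustration; practical implementations typically use FFT-based convolution for speed, and the toy PSF values are arbitrary.

```python
def convolve2d(obj, psf):
    """Direct convolution: every point in the object spreads its
    intensity into nearby image pixels according to the PSF.
    Light falling outside the image bounds is simply discarded."""
    rows, cols = len(obj), len(obj[0])
    k = len(psf) // 2                    # PSF assumed square, odd-sized
    image = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):                # each object point...
        for x in range(cols):
            if obj[y][x] == 0:
                continue
            for dy in range(-k, k + 1):  # ...contributes to many pixels
                for dx in range(-k, k + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols:
                        image[ny][nx] += obj[y][x] * psf[dy + k][dx + k]
    return image

# A single point source is imaged as a PSF-shaped blur
obj = [[0.0] * 5 for _ in range(5)]
obj[2][2] = 1.0                          # one point source at the center
psf = [[0.05, 0.10, 0.05],
       [0.10, 0.40, 0.10],
       [0.05, 0.10, 0.05]]               # toy PSF; entries sum to 1
img = convolve2d(obj, psf)
print(img[2][2], img[2][1])              # 0.4 0.1
```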
The concept of a convolution
- A convolution may be written in somewhat simplified mathematical form as follows:

  i(x,y,z) = Σ_(x',y',z') o(x',y',z') · PSF(x-x', y-y', z-z')

- i(x,y,z) defines the image in its 3D space according to the form of the equation above.
- PSF(x-x', y-y', z-z') defines the amount of light from a point source at x',y',z' that will be observed at x,y,z.
- o(x',y',z') describes the specimen or object.

Image Formation
- The mathematical view of convolution emphasizes that each point in the sample can contribute to each point in the image.
- Widefield fluorescence microscopy collects light emitted from all points in the specimen (with varying efficiencies depending on position relative to the focal plane).
- The result, for specimens that are thick relative to the depth of focus of the objective, is a blurred image.

Dealing with out-of-focus fluorescence
- We can computationally deconvolve an image to correct for effects of the point spread function (more later).
- An alternative, to obtain images that better represent the fluorescence distribution just in the focal plane, is to use a confocal microscope.

Confocal Microscope Principle
[Diagram: confocal light path components: laser, excitation pinhole, excitation filter, objective, emission filter, emission pinhole, PMT. See http://micro.magnet.fsu.edu/primer/confocal/index.html]

Image Display
- Operations that change the display without changing the image:
  - LUT: grayscale or color
  - Contrast stretching
- Operations that change the image:
  - reversible
  - non-reversible (the majority)

Image Display
- A pixel value is just a number in the data set representing a digital image.
- Pixel values may be displayed in different ways, determined by a look-up table (LUT).
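A LUT can be sketched as a plain mapping from pixel value to display color. The value ranges and color names below are arbitrary illustrative choices; the key point is that the underlying pixel values never change.

```python
# Map each possible pixel value (0-8 here) to a display color.
# The ranges and colors are arbitrary, in the spirit of a simple
# "hot to cold" LUT; only the display changes, never the data.
LUT = {v: ("black" if v == 0 else
           "blue" if v <= 3 else
           "green" if v <= 6 else
           "red")
       for v in range(9)}

def apply_lut(image, lut):
    """Return a display version of the image; pixel values untouched."""
    return [[lut[p] for p in row] for row in image]

image = [[0, 3, 6, 2],
         [1, 7, 8, 4]]
display = apply_lut(image, LUT)
print(display[1][2])   # the pixel value 8 is displayed as "red"
```

Swapping in a different `lut` dictionary re-colors the same data, which is exactly why LUT changes are reversible display operations rather than image operations.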
[Figure: the pixel-value grid from earlier displayed with two different LUTs: a "hot to cold" color LUT mapping the value ranges 0, 1-3, 4-6 and 7-8 to different colors, and an arbitrary binary LUT.]

Image Display - LUT change
[Figure: the same image displayed with different LUTs.]

Image Display - Enhance contrast
[Figure: original (before contrast enhancement) and the result after enhancement, which uses the full intensity range.]

Thresholding
- Thresholding refers to the division of the pixels of an image into two classes: those below a certain value (the threshold) and those at or above it. The two classes are often shown in white and black, respectively. [Figure: gray-level image and the resulting binary image.]
- Thresholding serves as a means to consider only a subset of the pixels of an image.

Thresholding
- The choice of threshold must be made empirically by considering the nature of the sample, the type and number of objects expected in the image, and/or a histogram of pixel values.
- The threshold can be specified as a multiple of the background value (normally the most common value) for partial automation.

Ridler-Calvard Method
- Find the threshold that is equidistant from the average intensities of the pixels below and above it.
- Ridler, T.W. and Calvard, S. (1978) Picture thresholding using an iterative selection method. IEEE Transactions on Systems, Man, and Cybernetics 8:630-632.
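The iterative selection rule above can be sketched in a few lines. This is a minimal pure-Python sketch operating on a flat list of pixel values; the example values form a small illustrative grid like the one shown earlier in these notes.

```python
def ridler_calvard_threshold(pixels, tol=1e-6, max_iter=100):
    """Iterative selection thresholding (Ridler & Calvard, 1978):
    repeatedly move the threshold to the midpoint of the mean
    intensities below and at-or-above it, until it stabilizes."""
    t = sum(pixels) / len(pixels)          # start at the global mean
    for _ in range(max_iter):
        below = [p for p in pixels if p < t]
        above = [p for p in pixels if p >= t]
        if not below or not above:         # degenerate histogram
            break
        midpoint = (sum(below) / len(below) + sum(above) / len(above)) / 2
        if abs(midpoint - t) < tol:        # converged
            return midpoint
        t = midpoint
    return t

# Small illustrative grid of pixel values (flattened)
pixels = [0, 0, 0, 0, 0, 0,
          0, 3, 6, 2, 0, 0,
          0, 7, 8, 8, 4, 0,
          1, 8, 8, 8, 8, 2,
          0, 3, 8, 8, 8, 1,
          0, 0, 2, 4, 3, 0]
t = ridler_calvard_threshold(pixels)
binary = [p >= t for p in pixels]
print(round(t, 3), sum(binary))
```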
Ridler-Calvard Illustration
[Figure: histogram of pixel values (0-80) against frequency (0-0.25). The blue line shows the histogram of intensities, the green lines show the average intensity to the left and to the right of the red line, and the red line shows the midpoint between them, i.e. the RC threshold.]

Ridler-Calvard Method
[Figure: original image and the thresholded result.]

Thresholding
- Once a threshold has been applied, the resulting image may be:
  - displayed in black and white
  - displayed with above-threshold pixels at their original intensities and below-threshold pixels in blue

Thresholding
- In ImageJ, Process/Binary/Threshold does an auto threshold and applies it to make a binary image.

Binary image operations
[Figure: a binary image is transformed into a cleaned binary image using a structuring element.]
- Erosion
  - Remove pixels from the edges of objects
  - Set an "on" pixel to "off" if at least one of its eight neighbors is white, given this structuring element
  [Figure: original (before erosion) and result (after erosion).]
- Dilation
  - Add pixels to the edges of objects
  - Set an "off" pixel to "on" if at least one of its neighbors is black, given this structuring element
  [Figure: original (before dilation) and result (after dilation).]
- Open
  - Smooth objects and remove small objects and protrusions
  - Erosion followed by dilation
- Close
  - Smooth objects and fill in small holes
  - Dilation followed by erosion
[Figure: the same image after erode, dilate, open and close.]
- Outline
  - Find an "on" pixel, then trace around the outside until returning to the first "on" pixel
- Skeletonize
  - Remove pixels from the edges of objects until the objects are one pixel wide

Basic Image Processing Operations
- Image Math
- Kernel/Filter Operations
- Image Calculator

Arithmetic Operations
- Two cases:
  - Perform a single-operand operation (e.g., logarithm, square root) on each pixel of an image
  - Perform a dual-operand operation (e.g., add, multiply) on each pixel of an image, using a constant as the second operand
- In both cases, the result is usually stored in the same pixel location ("storing in place")

Arithmetic Operations
[Figure: example of an arithmetic operation.]

Kernel/Filter Operations
- Basic idea: use a matrix (usually square and of odd dimension, e.g., 3x3) in combination with an image to generate a new image
- Algorithm:
  - For each pixel in the image (the current pixel):
    - Align the matrix to center it on that pixel
    - For each position in the matrix, multiply the corresponding pixel value in the image by the value in the matrix, and sum the results
    - Store the result in the current pixel

Kernel/Filter Operations
- A matrix used in this fashion is called a kernel or filter
- Note that the operation is different from matrix multiplication of the kernel by the image, because:
  - the dimensions don't match, and
  - all elements of the matrix are combined to give one result

Common Kernel Operations used in Image Processing
- Smoothing:
     1  1  1
     1  4  1   x 1/12 (normalization)
     1  1  1
- Sharpening:
    -1 -1 -1
    -1 12 -1
    -1 -1 -1
- Edge Finding (example: horizontal Sobel):
     1  2  1
     0  0  0
    -1 -2 -1
[Figure: a small example image convolved (⊗) with the smoothing kernel.]

Examples of Kernel Operations using NIH Image
- Smooth: [Figures: original image, result of one Smooth, result of a second Smooth.]
- Close the smoothed image, reopen the original image, then Sharpen: [Figures: original image, image after one Sharpen, image after a second Sharpen.]
- Close the sharpened image, reopen the original image, then Find Edges: [Figure: image after Find Edges.]

Install a new plugin in ImageJ
- Exit ImageJ if open
- A list of available plugins is at
  http://rsb.info.nih.gov/ij/plugins/index.html
- Download .class files or .jar libraries from the plugin list
- Save them in the "plugins" directory of ImageJ (/ImageJ/plugins/)
- Launch ImageJ; the new plugin appears in the "Plugins" menu

Kernel Operation: Deconvolution
- Install the plugin Diffraction_PSF_3D
- Install the plugin Iterative_Deconvolve_3D
[Figure: original and deconvolved images.]

Image Math
- Basic idea: combine two images using a dual-operand operator to generate a new image
- Algorithm:
  - For each pixel in the first image, operate on it using the corresponding pixel in the second image, and store the result in the corresponding pixel in a new (output) image

Image Math
- Any operator can be used
- Most common operators:
  - division: generate a ratio image
  - logical AND: mask one image with another (usually binary) image

Examples of Image Math using ImageJ
- Open the original image and sharpen it once (save as Abdomen.sharpen1), then reopen the original image
- The ratio of the sharpened image to the original image shows the regions affected by the sharpen

Image Math vs.
Arithmetic Operations
- Note the difference between Image Math, which does an operation on two images, and Arithmetic, which does an operation on a single image and a constant.

Summary: Basic Image Processing Operations
- Arithmetic Operations
  - Inputs: Image, Constant (optional)
  - Common use: subtract background
- Kernel Operations
  - Inputs: Image, Kernel
  - Common use: smoothing
- Image Math
  - Inputs: two images
  - Common use: generate ratio image

Image Processing
- Basic example of image processing: find objects in an image and describe them numerically

Object finding (Particle analysis)
- Principle: identify a contiguous set of pixels that are all above some threshold
- Implementation:
  - Start with a binary (thresholded) image
  - Find a pixel that is "on" and start a list or map
  - Recursively search all nearest neighbors for additional pixels that are "on" and add them to the list or map

Object finding (Particle analysis)
- Start with a thresholded image (Image/Adjust/Threshold)
- Save the results as an Excel file using Save As...
- Uses:
  - Counting objects
  - Obtaining area measurements for objects
  - Obtaining integrated intensity
  - Isolating objects for other processing ...
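The implementation outlined above can be sketched as follows. It follows the same neighbor-search idea but uses an explicit queue (breadth-first flood fill) rather than recursion, which is equivalent and avoids recursion-depth limits on large objects; the small binary image is an arbitrary example.

```python
from collections import deque

def find_objects(binary):
    """Particle analysis on a binary image: group contiguous 'on'
    pixels (8-connected) into objects via breadth-first flood fill."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    objects = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                pixels = []                  # start a list for a new object
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    # search all eight nearest neighbors
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and binary[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                objects.append(pixels)
    return objects

# Count objects and measure their areas in a small thresholded image
binary = [[0, 1, 1, 0, 0],
          [0, 1, 1, 0, 1],
          [0, 0, 0, 0, 1],
          [1, 0, 0, 0, 0]]
objects = find_objects(binary)
areas = [len(obj) for obj in objects]
print(len(objects), sorted(areas))   # 3 [1, 2, 4]
```

With the object pixel lists in hand, the uses listed above follow directly: the count is `len(objects)`, each area is a list length, and integrated intensity is the sum of the original (pre-threshold) pixel values at those coordinates.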

This note was uploaded on 12/03/2011 for the course BIO 118 taught by Professor Staff during the Fall '08 term at Rutgers.
