Computational Biology, Part 25
Automated Interpretation of Subcellular Patterns in Microscope Images II
Robert F. Murphy
Copyright © 1996, 1999, 2000-2006. All rights reserved.

Preliminaries

Acquisition considerations
- Ensure Nyquist sampling at the Rayleigh limit
- Maintain low cell density if making single-cell measurements
- Control acquisition variables:
  - Select the (initial) focal plane consistently
  - Select fields consistently (at least one full cell per field)
  - Maintain constant camera gain, exposure time, and number of slices
  - Select interphase cells, or ensure sampling of the cell cycle

Acquisition considerations (continued)
- Collect sufficient images per condition:
  - For classifier training or set comparison, more than the number of features
  - For classification or clustering, base the number on the desired confidence level
- Collect reference images if possible (DNA, membrane)

Annotation considerations
- Maintain adequate records of all experimental settings
- Organize images by cell type/probe/condition

Preprocessing
- Correction for/removal of camera defects
- Background correction
- Autofluorescence correction
- Illumination correction
- Deconvolution

Preprocessing (continued)
- Registration
  - Not critical if only using DNA or membrane references
- Intensity scaling (constant scale, or contrast stretched for each cell)
- Single-cell segmentation
  - Manual, semi-automated, or automated

Feature Extraction

Goal
Assign proteins to major subcellular structures using fluorescence microscopy. [Figure: example image labeled "This is a microtubule pattern".]

The Challenge
- The problem is hard because different cells have different shapes, sizes, and orientations
- Organelles/structures within cells are not found in fixed locations
- Therefore, describe each image numerically and use the descriptors

Feature-Based, Supervised Learning Approach
1. Create sets of images showing the location of many different proteins (each set defines one class of pattern)
2. Reduce each image to a set of numerical values ("features") that are insensitive to position and rotation of the cell
3. Use statistical classification methods to "learn" how to distinguish each class using the features

Subcellular Location Features (SLF)
- Combinations of features of different types that describe different aspects of patterns in fluorescence microscope images have been created
- Motivated in part by descriptions used by biologists (e.g., punctate, perinuclear)
- To ensure that the specific features used for a given experiment can be identified, they are referred to as Subcellular Location Features (SLF) and defined in sets (e.g., SLF1)

Thresholding
- The first type of feature is morphological
- Morphological features require some method for defining objects
- The most common approach is global thresholding
- Methods exist for automatically choosing a global threshold (e.g., the Ridler-Calvard method)

Ridler-Calvard Method
- Find the threshold that is equidistant from the average intensity of the pixels below and above it
- Ridler, T.W. and Calvard, S. (1978) Picture thresholding using an iterative selection method. IEEE Transactions on Systems, Man, and Cybernetics 8:630-632.
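In practice the Ridler-Calvard threshold is found by simple iteration: start from the global mean, then repeatedly move the threshold to the midpoint of the means of the two classes it induces. A minimal numpy sketch (not the authors' code; the starting guess and stopping tolerance are arbitrary choices):

```python
import numpy as np

def ridler_calvard(image, tol=0.5):
    """Iteratively find the threshold equidistant from the mean
    intensities of the pixels below and above it."""
    pixels = np.asarray(image, dtype=float).ravel()
    t = pixels.mean()  # initial guess: global mean intensity
    while True:
        below = pixels[pixels <= t]
        above = pixels[pixels > t]
        if below.size == 0 or above.size == 0:
            return t  # degenerate image: one class is empty
        t_new = (below.mean() + above.mean()) / 2.0
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```

On a clearly bimodal image the iteration typically converges in a handful of steps.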
Ridler-Calvard Illustration
[Figure: histogram of pixel intensities (blue line); green lines show the average intensity to the left and right of the red line; the red line shows the midpoint between them, i.e., the RC threshold. Axes: Pixel Value vs. Frequency.]
[Figure: an original image and its thresholded result.]

Otsu Method
- Find the threshold that minimizes the variances of the pixels below and above it
- Otsu, N. (1979) A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics 9:62-66.

Adaptive Thresholding
- Various approaches are available
- The basic principle is to use automated methods over small regions and then interpolate to form a smooth threshold surface

Suitability of Automated Thresholding for Classification
- For the task of subcellular pattern analysis, automated thresholding methods perform quite well in most cases, especially for patterns with well-separated objects
- They do not work well for images with very low signal-to-noise ratio
- Poor behavior on a fraction of the images for a given pattern can be tolerated while still achieving good classification accuracy

Object Finding
- After choosing a threshold, define objects as sets of touching pixels that are above the threshold
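Finding such objects is a connected-components labeling problem. A minimal sketch using an explicit flood fill (4-connectivity is assumed here; the slides do not say which connectivity the original work used):

```python
import numpy as np

def find_objects(binary):
    """Label connected components (4-connectivity) in a thresholded image.
    Returns a label array (0 = background) and the number of objects."""
    binary = np.asarray(binary, dtype=bool)
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(binary)):
        if labels[start]:
            continue  # pixel already belongs to an object
        current += 1
        labels[start] = current
        stack = [start]
        while stack:  # flood fill from the seed pixel
            r, c = stack.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < binary.shape[0] and 0 <= nc < binary.shape[1]
                        and binary[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    stack.append((nr, nc))
    return labels, current
```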
2D Features: Morphological Features

SLF1.1  The number of fluorescent objects in the image
SLF1.2  The Euler number of the image
SLF1.3  The average number of above-threshold pixels per object
SLF1.4  The variance of the number of above-threshold pixels per object
SLF1.5  The ratio of the size of the largest object to the smallest
SLF1.6  The average object distance to the cellular center of fluorescence (COF)
SLF1.7  The variance of object distances from the COF
SLF1.8  The ratio of the largest to the smallest object-to-COF distance

Example: distinguishing ER from nucleoli

                              ER    Nucleoli
  Number of objects          108        6
  Average size of objects     83      232
  Average distance to COF     31        4

Any of these features could be used to distinguish these two classes.

Suitability of Morphological Features for Classification
- Images for some subcellular patterns, such as those for cytoskeletal proteins, are not well segmented by automated thresholding
- When combined with non-morphological features, classifiers can learn to "ignore" morphological features for those classes

2D Features: DNA Features (objects relative to a DNA reference image)

SLF2.17  The average object distance from the COF of the DNA image
SLF2.18  The variance of object distances from the DNA COF
SLF2.19  The ratio of the largest to the smallest object-to-DNA-COF distance
SLF2.20  The distance between the protein COF and the DNA COF
SLF2.21  The ratio of the area occupied by protein to that occupied by DNA
SLF2.22  The fraction of the protein fluorescence that co-localizes with DNA
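Given a label image from a connected-components step, several of the SLF1-style morphological features above can be computed directly. A sketch (the function name and dictionary keys are hypothetical, and the exact SLF definitions may differ in detail):

```python
import numpy as np

def morphological_features(image, labels):
    """Compute a few SLF1-style features from an intensity image and a
    label array (0 = background): object count, object-size statistics,
    and mean object distance to the center of fluorescence (COF)."""
    image = np.asarray(image, dtype=float)
    n = int(labels.max())
    rows, cols = np.indices(image.shape)
    # COF = intensity-weighted centroid of the whole image
    total = image.sum()
    cof = np.array([(rows * image).sum(), (cols * image).sum()]) / total
    sizes, dists = [], []
    for k in range(1, n + 1):
        mask = labels == k
        sizes.append(mask.sum())  # above-threshold pixels in this object
        centroid = np.array([rows[mask].mean(), cols[mask].mean()])
        dists.append(np.linalg.norm(centroid - cof))
    return {
        "n_objects": n,                             # cf. SLF1.1
        "mean_object_size": float(np.mean(sizes)),  # cf. SLF1.3
        "var_object_size": float(np.var(sizes)),    # cf. SLF1.4
        "mean_dist_to_COF": float(np.mean(dists)),  # cf. SLF1.6
    }
```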
2D Features: Skeleton Features

SLF7.80  The average length of the morphological skeleton of objects
SLF7.81  The ratio of object skeleton length to the area of the convex hull of the skeleton, averaged over all objects
SLF7.82  The fraction of object pixels contained within the skeleton
SLF7.83  The fraction of object fluorescence contained within the skeleton
SLF7.84  The ratio of the number of branch points in the skeleton to the length of the skeleton

[Figure: illustration of a morphological skeleton.]

2D Features: Edge Features

SLF1.9   The fraction of the non-zero pixels that are along an edge
SLF1.10  Measure of edge gradient intensity homogeneity
SLF1.11  Measure of edge direction homogeneity 1
SLF1.12  Measure of edge direction homogeneity 2
SLF1.13  Measure of edge direction difference

2D Features: Convex Hull (Geometrical) Features

SLF1.14  The fraction of the convex hull area occupied by protein fluorescence
SLF1.15  The roundness of the convex hull
SLF1.16  The eccentricity of the convex hull

Zernike Moment Features (SLF3.17-3.65)
- Shape similarity of the protein image to Zernike polynomials Z(n, l)
- 49 polynomials and 49 features
- [Figure: left, Zernike polynomials A: Z(2,0), B: Z(4,4), C: Z(10,6); right, a lamp2 image.]

Haralick Texture Features (SLF7.66-7.78)
- Correlations of adjacent pixels in gray-level images
- Start by calculating the co-occurrence matrix P: an N-by-N matrix, where N is the number of gray levels.
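The co-occurrence matrix P for one adjacency direction can be sketched as follows (pixel values are assumed to be already quantized to 0..N-1, and counts are accumulated symmetrically, a common convention the slides do not specify):

```python
import numpy as np

def cooccurrence(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one adjacency direction.
    P[i, j] estimates the probability that a pixel with value i is
    adjacent (at the given row/column offset) to a pixel with value j."""
    image = np.asarray(image, dtype=int)
    dr, dc = offset
    P = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                P[image[r, c], image[nr, nc]] += 1
                P[image[nr, nc], image[r, c]] += 1  # symmetric counting
    if P.sum():
        P /= P.sum()  # normalize counts to probabilities
    return P
```

The Haralick statistics (contrast, correlation, entropy, and so on) are then computed from P.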
- Element P(i, j) is the probability that a pixel with value i is adjacent to a pixel with value j
- There are four directions in which a pixel can be adjacent [Figure: the four adjacency directions, numbered 1-4.]

[Figure: example 4-gray-level images and their co-occurrence matrices.]

Pixel Resolution and Gray Levels
- Texture features are influenced by the number of gray levels and the pixel resolution of the image
- Optimization for each image dataset is required
- Alternatively, features can be calculated at many resolutions

Wavelet Transformation - 1D
- A: approximation (low frequency); D: detail (high frequency)
- X = A3 + D3 + D2 + D1

2D Wavelets - intuition
- Apply some filter to detect edges (horizontal; vertical; diagonal)
- Recurse on the low-frequency part
- See http://www331.jpl.nasa.gov/public/wave.html
- (Slides courtesy of Christos Faloutsos)

Wavelets
- Many wavelet basis functions (filters):
  - Haar
  - Daubechies (-4, -6, -20)
  - Gabor
  - ...
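The X = A3 + D3 + D2 + D1 idea can be illustrated with the simplest basis, the Haar filter, where each level splits the current approximation into pairwise averages (A, low frequency) and pairwise differences (D, high frequency). The feature calculation described later uses Daubechies 4; Haar is chosen here purely for clarity:

```python
import numpy as np

def haar_1d(signal, levels):
    """1-D Haar wavelet decomposition. Returns the final approximation
    and the detail coefficients from each level (finest first)."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        # pairwise averages (approximation) and differences (detail)
        a, d = (a[0::2] + a[1::2]) / 2.0, (a[0::2] - a[1::2]) / 2.0
        details.append(d)
    return a, details
```

Texture-style wavelet features are then derived from the detail bands, e.g. the average energy `np.mean(d ** 2)` of each high-frequency component.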
[Figure: Daubechies D4 decomposition of an original image.]

Wavelet Feature Calculation
- Preprocessing
  - Background subtraction and thresholding
  - Translation and rotation
- Wavelet transformation
  - The Daubechies 4 wavelet
  - 10th-level decomposition
  - The average energy of the three high-frequency components

Gabor Function
- The function can be extended to generate Gabor filters by rotating and dilating it

Gabor Feature Calculation
- Preprocessing
- 30 Gabor filters were generated using five different scales and six different orientations
- Convolve an input image with a Gabor filter
- Take the mean and standard deviation of the convolved image
- 60 Gabor texture features
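A sketch of the filter-bank idea: build real Gabor kernels over a grid of frequencies (scales) and orientations, convolve the image with each, and keep the mean and standard deviation of every response. The kernel size, sigma, and the particular frequencies below are illustrative assumptions, not the parameters of the original work (which used five scales and six orientations for 30 filters and 60 features):

```python
import numpy as np

def gabor_kernel(frequency, theta, sigma=2.0, size=9):
    """Real-valued Gabor kernel: a Gaussian envelope modulating a
    cosine carrier, rotated by theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * frequency * xr))

def gabor_features(image, frequencies, thetas):
    """Mean and standard deviation of each filter response
    (2 features per filter)."""
    image = np.asarray(image, dtype=float)
    feats = []
    for f in frequencies:
        for t in thetas:
            k = gabor_kernel(f, t)
            kh, kw = k.shape
            H, W = image.shape
            # 'valid' cross-correlation; the kernel is even-symmetric,
            # so this equals convolution
            resp = np.array([
                [(image[r:r + kh, c:c + kw] * k).sum()
                 for c in range(W - kw + 1)]
                for r in range(H - kh + 1)])
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```

With five frequencies and six orientations this yields the 30 filters and 60 features described above.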