
13 Pages

### Classification

Course: REU 06, Fall 2009
School: Utah State

Word Count: 3944


#### Outline

Methods (Xiaojun Qi):

- Cluster seeking: K-means algorithm
- Feature selection: Karhunen-Loève expansion; Principal Components Analysis (PCA)
- Classification: Linear Discriminant Analysis (LDA)
- Statistical classification: Quadratic Discriminant Analysis (QDA) -- the Bayes classifier

#### Cluster Seeking: K-Means Algorithm

The K-means algorithm is based on the minimization of a performance index defined as the sum of the squared distances from all points in a cluster domain to the cluster center.

Step 1: Choose $K$ initial cluster centers $\mathbf{z}_1(1), \mathbf{z}_2(1), \ldots, \mathbf{z}_K(1)$. These are arbitrary and are usually selected as the first $K$ samples of the given sample set.

Step 2: At the $k$th iterative step, distribute the samples $\{\mathbf{x}\}$ among the $K$ cluster domains using the relation

$$\mathbf{x} \in S_j(k) \quad \text{if} \quad \|\mathbf{x} - \mathbf{z}_j(k)\| < \|\mathbf{x} - \mathbf{z}_i(k)\| \tag{1}$$

for all $i = 1, 2, \ldots, K$, $i \neq j$, where $S_j(k)$ denotes the set of samples whose cluster center is $\mathbf{z}_j(k)$. Ties in expression (1) are resolved arbitrarily.

Step 3: From the results of Step 2, compute the new cluster centers $\mathbf{z}_j(k+1)$, $j = 1, 2, \ldots, K$, such that the sum of the squared distances from all points in $S_j(k)$ to the new cluster center is minimized. In other words, the new cluster center $\mathbf{z}_j(k+1)$ is computed so that the performance index

$$J_j = \sum_{\mathbf{x} \in S_j(k)} \|\mathbf{x} - \mathbf{z}_j(k+1)\|^2, \qquad j = 1, 2, \ldots, K$$

is minimized. The $\mathbf{z}_j(k+1)$ that minimizes this performance index is simply the sample mean of $S_j(k)$. Therefore, the new cluster center is given by

$$\mathbf{z}_j(k+1) = \frac{1}{N_j} \sum_{\mathbf{x} \in S_j(k)} \mathbf{x}, \qquad j = 1, 2, \ldots, K$$

where $N_j$ is the number of samples in $S_j(k)$.

Step 4: If $\mathbf{z}_j(k+1) = \mathbf{z}_j(k)$ for $j = 1, 2, \ldots, K$, the algorithm has converged and the procedure is terminated. Otherwise, go to Step 2.

The behavior of the K-means algorithm is influenced by:

- the number of cluster centers specified;
- the choice of initial cluster centers;
- the order in which the samples are taken;
- the geometrical properties of the data.
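The four steps above can be sketched in a few lines of Python; this is a minimal illustration (the toy data and the choice $K = 2$ are ours, not from the notes), and it assumes no cluster ever becomes empty:

```python
import numpy as np

def k_means(samples, K, max_iter=100):
    """Minimal K-means following Steps 1-4: initialize with the first K
    samples, assign each sample to its nearest center, recompute each
    center as the sample mean of its cluster, stop when centers repeat."""
    X = np.asarray(samples, dtype=float)
    centers = X[:K].copy()                       # Step 1: first K samples
    for _ in range(max_iter):
        # Step 2: distribute samples among the K cluster domains.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                # ties resolved arbitrarily
        # Step 3: new centers are the sample means of each S_j(k).
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(K)])
        # Step 4: converged when the centers no longer change.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Two well-separated blobs; K = 2 recovers their means.
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, labels = k_means(data, K=2)
```

On this data the algorithm converges in a few iterations to the blob means $(1/3, 1/3)$ and $(31/3, 31/3)$, illustrating how strongly the result depends on the initial centers being drawn from both clusters.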
The name "K-means" is derived from the manner in which the cluster centers are sequentially updated. In most practical cases, applying this algorithm requires experimenting with various values of $K$ as well as with different choices of starting configuration.

#### Feature Selection/Reduction: Karhunen-Loève Expansion

The motivation for using the discrete Karhunen-Loève (K-L) expansion as a tool for feature selection rests on the optimum properties of the discrete K-L expansion. In the discrete case, the least-mean-square-error property implies that the K-L expansion minimizes the approximation error when fewer than $n$ basis vectors are used in the expansion. The minimum-entropy property has desirable clustering effects.

The Karhunen-Loève expansion algorithm:

1. Compute the correlation matrix $R$ from the patterns of the training set:
   $$R = \sum_{i=1}^{M} p(\omega_i)\, E\{\mathbf{x}_i \mathbf{x}_i'\}$$
2. Obtain the eigenvalues and corresponding eigenvectors of $R$. Normalize the eigenvectors.
3. Form the transformation matrix $\Phi$ from the $m$ eigenvectors corresponding to the largest eigenvalues of $R$.
4. Compute the coefficients of the expansion from $\mathbf{c}_i = \Phi' \mathbf{x}_i$. These coefficients represent the reduced image patterns.

Example: As a simple illustration of the use of the discrete K-L expansion, consider the patterns shown in the figure.

Step 1: The two classes of patterns are

$$\omega_1:\quad \mathbf{x}_{11} = \begin{pmatrix}0\\0\\0\end{pmatrix},\; \mathbf{x}_{12} = \begin{pmatrix}1\\0\\0\end{pmatrix},\; \mathbf{x}_{13} = \begin{pmatrix}1\\0\\1\end{pmatrix},\; \mathbf{x}_{14} = \begin{pmatrix}1\\1\\0\end{pmatrix}$$

$$\omega_2:\quad \mathbf{x}_{21} = \begin{pmatrix}0\\0\\1\end{pmatrix},\; \mathbf{x}_{22} = \begin{pmatrix}0\\1\\0\end{pmatrix},\; \mathbf{x}_{23} = \begin{pmatrix}0\\1\\1\end{pmatrix},\; \mathbf{x}_{24} = \begin{pmatrix}1\\1\\1\end{pmatrix}$$

where the first subscript identifies the class and the second the pattern number.

Assuming $p(\omega_1) = p(\omega_2) = 1/2$, we have

$$R = \sum_{i=1}^{2} p(\omega_i)\, E\{\mathbf{x}_i \mathbf{x}_i'\} = \frac{1}{2} E\{\mathbf{x}_1 \mathbf{x}_1'\} + \frac{1}{2} E\{\mathbf{x}_2 \mathbf{x}_2'\}$$

where $E\{\mathbf{x}_1 \mathbf{x}_1'\}$ and $E\{\mathbf{x}_2 \mathbf{x}_2'\}$ indicate expectations over all patterns of classes $\omega_1$ and $\omega_2$, respectively.
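Before carrying the computation out by hand, the whole procedure can be checked numerically. This is our own illustrative sketch of Steps 1–4 for the eight patterns above, not part of the original notes:

```python
import numpy as np

# The eight binary patterns from the example (classes omega_1 and omega_2).
class1 = np.array([[0, 0, 0], [1, 0, 0], [1, 0, 1], [1, 1, 0]], dtype=float)
class2 = np.array([[0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 1, 1]], dtype=float)

# Step 1: correlation matrix R = (1/2) E{x1 x1'} + (1/2) E{x2 x2'}.
R = 0.5 * (class1.T @ class1) / 4 + 0.5 * (class2.T @ class2) / 4

# Step 2: eigenvalues and orthonormal eigenvectors of the symmetric R.
eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order

# Step 3: transformation matrix Phi from the m = 2 dominant eigenvectors.
Phi = eigvecs[:, ::-1][:, :2]             # columns: largest eigenvalue first

# Step 4: reduced (image) patterns c = Phi' x, one row per pattern.
c1 = class1 @ Phi
c2 = class2 @ Phi
```

Because the smaller eigenvalue is repeated, the second eigenvector returned by `eigh` is only determined up to a rotation within that eigenspace, so the second coordinates of `c1`/`c2` may differ from the hand computation below by an orthogonal change of basis.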
Step 2: Carrying out the computation,

$$R = \frac{1}{8}\sum_{j=1}^{4} \mathbf{x}_{1j}\mathbf{x}_{1j}' + \frac{1}{8}\sum_{j=1}^{4} \mathbf{x}_{2j}\mathbf{x}_{2j}' = \frac{1}{4}\begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}$$

The eigenvalues and corresponding normalized eigenvectors of $R$ are

$$\lambda_1 = 1,\;\ \mathbf{e}_1 = \frac{1}{\sqrt{3}}\begin{pmatrix}1\\1\\1\end{pmatrix};\qquad \lambda_2 = \frac{1}{4},\;\ \mathbf{e}_2 = \frac{1}{\sqrt{6}}\begin{pmatrix}2\\-1\\-1\end{pmatrix};\qquad \lambda_3 = \frac{1}{4},\;\ \mathbf{e}_3 = \frac{1}{\sqrt{2}}\begin{pmatrix}0\\1\\-1\end{pmatrix}$$

Step 3: Choosing $\mathbf{e}_1$ and $\mathbf{e}_2$, which correspond to the largest eigenvalues, results in the transformation matrix $\Phi = [\mathbf{e}_1 \;\ \mathbf{e}_2]$.

Step 4: Using the transformation $\mathbf{c} = \Phi'\mathbf{x}$, we obtain the image patterns

$$\mathbf{c}_{11} = \begin{pmatrix}0\\0\end{pmatrix},\quad \mathbf{c}_{12} = \begin{pmatrix}1/\sqrt{3}\\2/\sqrt{6}\end{pmatrix},\quad \mathbf{c}_{13} = \mathbf{c}_{14} = \begin{pmatrix}2/\sqrt{3}\\1/\sqrt{6}\end{pmatrix}$$

$$\mathbf{c}_{21} = \mathbf{c}_{22} = \begin{pmatrix}1/\sqrt{3}\\-1/\sqrt{6}\end{pmatrix},\quad \mathbf{c}_{23} = \begin{pmatrix}2/\sqrt{3}\\-2/\sqrt{6}\end{pmatrix},\quad \mathbf{c}_{24} = \begin{pmatrix}3/\sqrt{3}\\0\end{pmatrix}$$

These image patterns are shown in Fig. 7.2(b). Observe the clustering effect, and also the fact that the linear separability of the patterns has not been affected.

If the patterns are instead reduced to one dimension by using the transformation matrix formed from the single eigenvector corresponding to the largest eigenvalue, $\Phi = \frac{1}{\sqrt{3}}(1,\,1,\,1)'$, we obtain the image points

$$c_{11} = 0,\quad c_{12} = \frac{1}{\sqrt{3}},\quad c_{13} = c_{14} = \frac{2}{\sqrt{3}};\qquad c_{21} = c_{22} = \frac{1}{\sqrt{3}},\quad c_{23} = \frac{2}{\sqrt{3}},\quad c_{24} = \frac{3}{\sqrt{3}}$$

These image points are shown in Fig. 7.2(c). Observe that several patterns of different classes now overlap, a condition that makes this last transformation undesirable.

#### Principal Components Analysis (PCA) Algorithm

Consider a data set $D = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_M\}$ of $N$-dimensional vectors; this data set can be, for example, a set of $M$ face images. The mean and the covariance matrix are given by

$$\boldsymbol{\mu} = \frac{1}{M}\sum_{m=1}^{M} \mathbf{x}_m \qquad\text{and}\qquad \Sigma = \frac{1}{M}\sum_{m=1}^{M} [\mathbf{x}_m - \boldsymbol{\mu}][\mathbf{x}_m - \boldsymbol{\mu}]^T$$

where the covariance matrix $\Sigma$ is an $N \times N$ symmetric matrix. This matrix characterizes the scatter of the data set.

Example: Let the data set be the columns of

$$A = \begin{pmatrix} 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

Here the dimension is $N = 3$ and $M = 4$; that is, $A$ (i.e., a data set) contains a set of 4 vectors, each of which has 3 elements:

$$\mathbf{X}_1 = (0,\,0,\,0)^T,\quad \mathbf{X}_2 = (1,\,0,\,0)^T,\quad \mathbf{X}_3 = (1,\,1,\,0)^T,\quad \mathbf{X}_4 = (1,\,0,\,1)^T$$

The mean is $\boldsymbol{\mu} = (0.75,\,0.25,\,0.25)^T$, and the centered vectors are

$$\mathbf{X}_1 - \boldsymbol{\mu} = (-0.75,\,-0.25,\,-0.25)^T$$
$$\mathbf{X}_2 - \boldsymbol{\mu} = (0.25,\,-0.25,\,-0.25)^T$$
$$\mathbf{X}_3 - \boldsymbol{\mu} = (0.25,\,0.75,\,-0.25)^T$$
$$\mathbf{X}_4 - \boldsymbol{\mu} = (0.25,\,-0.25,\,0.75)^T$$
Example (cont.): The outer products of the centered vectors are

$$(\mathbf{X}_1-\boldsymbol{\mu})(\mathbf{X}_1-\boldsymbol{\mu})^T = \begin{pmatrix} 0.5625 & 0.1875 & 0.1875 \\ 0.1875 & 0.0625 & 0.0625 \\ 0.1875 & 0.0625 & 0.0625 \end{pmatrix},\qquad (\mathbf{X}_2-\boldsymbol{\mu})(\mathbf{X}_2-\boldsymbol{\mu})^T = \begin{pmatrix} 0.0625 & -0.0625 & -0.0625 \\ -0.0625 & 0.0625 & 0.0625 \\ -0.0625 & 0.0625 & 0.0625 \end{pmatrix}$$

$$(\mathbf{X}_3-\boldsymbol{\mu})(\mathbf{X}_3-\boldsymbol{\mu})^T = \begin{pmatrix} 0.0625 & 0.1875 & -0.0625 \\ 0.1875 & 0.5625 & -0.1875 \\ -0.0625 & -0.1875 & 0.0625 \end{pmatrix},\qquad (\mathbf{X}_4-\boldsymbol{\mu})(\mathbf{X}_4-\boldsymbol{\mu})^T = \begin{pmatrix} 0.0625 & -0.0625 & 0.1875 \\ -0.0625 & 0.0625 & -0.1875 \\ 0.1875 & -0.1875 & 0.5625 \end{pmatrix}$$

Averaging the four outer products gives the covariance matrix

$$\Sigma = \begin{pmatrix} 0.1875 & 0.0625 & 0.0625 \\ 0.0625 & 0.1875 & -0.0625 \\ 0.0625 & -0.0625 & 0.1875 \end{pmatrix}$$

This covariance matrix is a symmetric matrix. Each diagonal value $\Sigma_{i,i}$ indicates the variance of the $i$th element of the data set; each off-diagonal element $\Sigma_{i,j}$ indicates the covariance between the $i$th and $j$th elements of the data set.

A non-zero vector $\mathbf{u}_k$ is an eigenvector of the covariance matrix if

$$\Sigma \mathbf{u}_k = \lambda_k \mathbf{u}_k$$

with corresponding eigenvalue $\lambda_k$. If $\lambda_1, \lambda_2, \ldots, \lambda_K$ are the $K$ largest and distinct eigenvalues, the matrix $U = [\mathbf{u}_1 \;\ \mathbf{u}_2 \;\ \ldots \;\ \mathbf{u}_K]$ represents the $K$ dominant eigenvectors.

The eigenvectors are mutually orthogonal and span a $K$-dimensional subspace called the principal subspace. When the data are face images, these eigenvectors are often referred to as eigenfaces.

If $U$ is the matrix of dominant eigenvectors, an $N$-dimensional input $\mathbf{x}$ can be linearly transformed into a $K$-dimensional vector by

$$\boldsymbol{\omega} = U^T(\mathbf{x} - \boldsymbol{\mu})$$

After applying the linear transform $U^T$, the set of transformed vectors $\{\boldsymbol{\omega}_1, \boldsymbol{\omega}_2, \ldots, \boldsymbol{\omega}_M\}$ has scatter

$$U^T \Sigma U = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_K \end{pmatrix}$$

PCA chooses $U$ so as to maximize the determinant of this scatter matrix. An original vector $\mathbf{x}$ can then be approximately reconstructed from its transform, as shown below.

Geometrically, PCA consists of a projection onto $K$ orthonormal axes. These principal axes maximize the retained variance of the data after projection. In practice, the covariance matrix is often singular, particularly if $M < N$.
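The numbers in this worked example can be reproduced directly. The following sketch (our code, not the author's) computes the mean, the covariance with the $1/M$ normalization used above, the dominant eigenvectors, and the reconstruction:

```python
import numpy as np

# Data set from the example: M = 4 vectors of dimension N = 3.
X = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 0, 1]], dtype=float)

mu = X.mean(axis=0)                  # mean vector
D = X - mu                           # centered data
Sigma = (D.T @ D) / len(X)           # covariance with 1/M normalization

eigvals, U = np.linalg.eigh(Sigma)   # ascending eigenvalues, orthonormal U
U = U[:, ::-1]                       # reorder: dominant eigenvector first
eigvals = eigvals[::-1]

K = 2
omegas = D @ U[:, :K]                # omega = U^T (x - mu) for each sample
X_rec = omegas @ U[:, :K].T + mu     # approximate reconstruction x~
```

A useful sanity check on the least-squares property: the mean squared residual per sample equals exactly the discarded eigenvalue, since the residual of each sample is its component along the dropped eigenvector.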
However, the $K < M$ principal eigenvectors can still be estimated using Singular Value Decomposition (SVD) or simultaneous diagonalization.

An original vector $\mathbf{x}$ can be approximately reconstructed from its transform $\boldsymbol{\omega}$ as

$$\tilde{\mathbf{x}} = \sum_{k=1}^{K} \omega_k \mathbf{u}_k + \boldsymbol{\mu}$$

In fact, PCA enables the training data to be reconstructed in a way that minimizes the squared reconstruction error over the data set. This error is

$$\varepsilon = \frac{1}{2}\sum_{m=1}^{M} \|\mathbf{x}_m - \tilde{\mathbf{x}}_m\|^2$$

#### Matlab Example

The data set has 10 rows and 2 columns:

```matlab
data = [2.5 2.4; 0.5 0.7; 2.2 2.9; 1.9 2.2; 3.1 3.0;
        2.3 2.7; 2.0 1.6; 1.0 1.1; 1.5 1.6; 1.1 0.9];
boxplot(data, 0)   % variability in the two columns
[pc, newdata, variances, t2] = princomp(data);
```

The principal component coefficients, transformed data, and component variances come back as

```
variances =
    1.2840
    0.0491

pc =
   -0.6779    0.7352
   -0.7352   -0.6779

newdata =
   -0.8280    0.1751
    1.7776   -0.1429
   -0.9922   -0.3844
   -0.2742   -0.1304
   -1.6758    0.2095
   -0.9129   -0.1753
    0.0991    0.3498
    1.1446   -0.0464
    0.4380   -0.0178
    1.2238    0.1627
```

The results can be plotted as follows:

```matlab
figure(1); plot(data(:,1), data(:,2), '+')
xlabel('1st Dimension'); ylabel('2nd Dimension'); gname;
figure(2); plot(newdata(:,1), '+')
ylabel('1st Principal Component'); gname;
figure(3); plot(newdata(:,1), newdata(:,2), '+')
xlabel('1st Principal Component'); ylabel('2nd Principal Component'); gname;
```

(Figures: the 1st vs. 2nd element of the original data, the 1st principal component alone, the 1st vs. the 2nd principal component, and a PCA illustration.)

#### PCA Summary

The PCA method generates a new set of variables, called principal components. Each principal component is a linear combination of the original variables. All the principal components are orthogonal to each other, so there is no redundant information; the principal components as a whole form an orthogonal basis for the space of the data.

The first principal component is a single axis in space. When you project each observation onto that axis, the resulting values form a new variable, and the variance of this variable is the maximum among all possible choices of the first axis.

The second principal component is another axis in space, perpendicular to the first.
Projecting the observations onto this second axis generates another new variable, whose variance is the maximum among all possible choices of this second axis.

The full set of principal components is as large as the original set of variables, but it is commonplace for the sum of the variances of the first few principal components to exceed 80% of the total variance of the original data. By examining plots of these few new variables, researchers often develop a deeper understanding of the driving forces that generated the original data.

#### PCA Illustration Explanation

Given an $n \times n$ matrix that does have eigenvectors, there are $n$ of them. If you scale a vector by some amount before multiplying it by the matrix, you obtain the same multiple of the result. All the eigenvectors of a symmetric matrix, such as a covariance matrix, are perpendicular to each other (i.e., at right angles), no matter how many dimensions you have, so you can express the data in terms of these perpendicular eigenvectors instead of in terms of the $x$ and $y$ axes.

#### Possible Uses of PCA

- Dimensionality reduction
- Determination of linear combinations of variables
- Feature selection: the choice of the most useful variables
- Visualization of multidimensional data
- Identification of underlying variables
- Identification of groups of objects or of outliers

#### Linear Discriminant Analysis (LDA) -- Face Identification

Suppose a data set $X$ exists, which might consist of face images, each of which is labeled with an identity. All data points with the same identity form a class; in total there are $C$ classes, so $X = \{X_1, X_2, \ldots, X_C\}$.

The sample covariance matrix for the entire data set is then the $N \times N$ symmetric matrix

$$\Sigma = \frac{1}{M}\sum_{\mathbf{x} \in X} [\mathbf{x} - \bar{\boldsymbol{\mu}}][\mathbf{x} - \bar{\boldsymbol{\mu}}]^T$$

where $M$ is the total number of faces and $\bar{\boldsymbol{\mu}}$ is the mean of the entire data set. This matrix characterizes the scatter of the entire data set, irrespective of class membership. However, a within-classes scatter matrix $W$ and a between-classes scatter matrix $B$ are also estimated.
$$W = \frac{1}{C}\sum_{c=1}^{C} \frac{1}{M_c} \sum_{\mathbf{x} \in X_c} [\mathbf{x} - \boldsymbol{\mu}_c][\mathbf{x} - \boldsymbol{\mu}_c]^T$$

$$B = \frac{1}{C}\sum_{c=1}^{C} [\boldsymbol{\mu}_c - \bar{\boldsymbol{\mu}}][\boldsymbol{\mu}_c - \bar{\boldsymbol{\mu}}]^T$$

where $M_c$ is the number of samples of class $c$, $\boldsymbol{\mu}_c$ is the sample mean for class $c$, and $\bar{\boldsymbol{\mu}}$ is the sample mean for the entire data set $X$.

The goal is to find a linear transform $U$ which maximizes the between-class scatter while minimizing the within-class scatter. Such a transformation should retain class separability while reducing the variation due to sources other than identity, for example, illumination and facial expression.

An appropriate transformation is given by the matrix $U = [\mathbf{u}_1 \;\ \mathbf{u}_2 \;\ \ldots \;\ \mathbf{u}_K]$ whose columns are the eigenvectors of $W^{-1}B$; in other words, the generalized eigenvectors corresponding to the $K$ largest eigenvalues of

$$B\mathbf{u}_k = \lambda_k W \mathbf{u}_k$$

There are at most $C - 1$ non-zero generalized eigenvalues, so $K \le C - 1$. The data are transformed as

$$\boldsymbol{\omega} = U^T(\mathbf{x} - \bar{\boldsymbol{\mu}})$$

After this transformation, the data have between-class scatter matrix $U^T B U$ and within-class scatter matrix $U^T W U$. The matrix $U$ is such that the determinant of the new between-class scatter is maximized while the determinant of the new within-class scatter is minimized. This implies that the following ratio is to be maximized:

$$\frac{|U^T B U|}{|U^T W U|}$$

In practice, the within-class scatter matrix $W$ is often singular. This is nearly always the case when the data are image vectors of large dimensionality, since the size of the data set is usually small in comparison ($M < N$). For this reason, PCA is first applied to the data set to reduce its dimensionality; the discriminant transformation is then applied to further reduce the dimensionality to $C - 1$.

#### PCA vs. LDA

PCA seeks directions that are efficient for representation; LDA seeks directions that are efficient for discrimination. To obtain good separation of the projected data, we really want the difference between the class means to be large relative to some measure of the standard deviation of each class.
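The scatter matrices and the generalized eigenproblem above can be sketched as follows; the two 2-D classes are our own toy example (not from the notes), and $W^{-1}B$ is assumed to be invertible here rather than going through the PCA pre-step:

```python
import numpy as np

# Toy data: C = 2 classes of 2-D points.
X1 = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0], [4.0, 5.0]])
X2 = np.array([[4.0, 2.0], [5.0, 0.0], [5.0, 2.0], [6.0, 3.0]])
classes = [X1, X2]
C = len(classes)

mu_c = [Xc.mean(axis=0) for Xc in classes]   # class means
mu = np.vstack(classes).mean(axis=0)         # overall mean

# Within- and between-class scatter with the 1/C, 1/M_c normalizations above.
W = sum(((Xc - m).T @ (Xc - m)) / len(Xc) for Xc, m in zip(classes, mu_c)) / C
B = sum(np.outer(m - mu, m - mu) for m in mu_c) / C

# Generalized eigenproblem B u = lambda W u, solved via the eigenvectors
# of W^{-1} B; at most C - 1 eigenvalues are non-zero.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(W) @ B)
order = np.argsort(eigvals.real)[::-1]
U = eigvecs[:, order[:C - 1]].real           # K <= C - 1 discriminant directions

omega1 = (X1 - mu) @ U                       # projected classes
omega2 = (X2 - mu) @ U
```

With $C = 2$ the projection is one-dimensional, and the projected class means end up well separated relative to the within-class spread, which is exactly the ratio LDA maximizes.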
(Figures: LDA illustrations of a bad separation and a good separation — two class-A/class-B point clouds projected onto a direction $w$ — and a scatter plot of the two-class case.)

#### Variants of LDA

If the LDA is of the class-dependent type, then for an $L$-class problem $L$ separate optimizing criteria are required, one for each class. The optimizing factors for the class-dependent type are computed as

$$\text{criterion}_j = \text{cov}_j^{-1} B$$

For the class-independent transform, the optimizing criterion is computed as

$$\text{criterion} = W^{-1} B$$

#### LDA on an Expanded Basis

If the input has a 2-D feature vector $(X_1, X_2)$, expand the input space to include $X_1X_2$, $X_1X_1$, and $X_2X_2$. In this case, the input is 5-D instead of 2-D; that is, $X = (X_1, X_2, X_1X_2, X_1X_1, X_2X_2)$.

#### Quadratic Discriminant Analysis (QDA): Bayes Classifier

Probability considerations become important in pattern recognition because of the randomness under which pattern classes normally are generated. It is possible to derive a classification approach that is optimal in the sense that, on average, its use yields the lowest probability of committing classification errors.

#### Bayes Rule -- Conditional Probability

The conditional probability of an event $B$ in relation to an event $A$ is the probability that event $B$ occurs given that event $A$ has already occurred. The notation for conditional probability is $P(B|A)$.

#### Foundation

The probability that a particular pattern $\mathbf{x}$ comes from class $\omega_i$ is denoted $p(\omega_i|\mathbf{x})$. If the pattern classifier decides that $\mathbf{x}$ came from $\omega_j$ when it actually came from $\omega_i$, it incurs a loss, denoted $L_{ij}$.
As pattern $\mathbf{x}$ may belong to any one of $W$ classes under consideration, the average loss incurred in assigning $\mathbf{x}$ to class $\omega_j$ is

$$r_j(\mathbf{x}) = \sum_{k=1}^{W} L_{kj}\, p(\omega_k|\mathbf{x}) \tag{1}$$

This equation often is called the conditional average risk or loss in decision-theory terminology.

From basic probability theory, we know that

$$P(B|A) = \frac{P(A|B)\,P(B)}{P(A)}, \qquad\text{so that}\qquad p(\omega_k|\mathbf{x}) = \frac{p(\mathbf{x}|\omega_k)\,P(\omega_k)}{p(\mathbf{x})}$$

Using this expression, we write Equation 1 in the form

$$r_j(\mathbf{x}) = \frac{1}{p(\mathbf{x})}\sum_{k=1}^{W} L_{kj}\, p(\mathbf{x}|\omega_k)\, P(\omega_k) \tag{2}$$

where $p(\mathbf{x}|\omega_k)$ is the probability density function of the patterns from class $\omega_k$ and $P(\omega_k)$ is the probability of occurrence of class $\omega_k$.

Because $1/p(\mathbf{x})$ is positive and common to all the $r_j(\mathbf{x})$, $j = 1, 2, \ldots, W$, it can be dropped from Equation 2 without affecting the relative order of these functions from the smallest to the largest value. The expression for the average loss then reduces to

$$r_j(\mathbf{x}) = \sum_{k=1}^{W} L_{kj}\, p(\mathbf{x}|\omega_k)\, P(\omega_k) \tag{3}$$

The classifier has $W$ possible classes to choose from for any given unknown pattern. If it computes $r_1(\mathbf{x}), r_2(\mathbf{x}), \ldots, r_W(\mathbf{x})$ for each pattern $\mathbf{x}$ and assigns the pattern to the class with the smallest loss, the total average loss with respect to all decisions will be minimum. The classifier that minimizes the total average loss is called the Bayes classifier. Thus the Bayes classifier assigns an unknown pattern $\mathbf{x}$ to class $\omega_i$ if $r_i(\mathbf{x}) < r_j(\mathbf{x})$ for $j = 1, 2, \ldots, W$; $j \neq i$. In other words, $\mathbf{x}$ is assigned to class $\omega_i$ if

$$\sum_{k=1}^{W} L_{ki}\, p(\mathbf{x}|\omega_k)\, P(\omega_k) < \sum_{q=1}^{W} L_{qj}\, p(\mathbf{x}|\omega_q)\, P(\omega_q) \tag{4}$$

for $j = 1, 2, \ldots, W$; $j \neq i$.

The "loss" for a correct decision generally is assigned a value of zero, and the loss for any incorrect decision usually is assigned the same nonzero value (say, 1). Under these conditions, the loss function becomes

$$L_{ij} = 1 - \delta_{ij} \tag{5}$$

where $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ if $i \neq j$. Equation 5 indicates a loss of unity for incorrect decisions and a loss of zero for correct decisions.
Substituting Equation 5 into Equation 3 yields

$$r_j(\mathbf{x}) = \sum_{k=1}^{W} (1 - \delta_{kj})\, p(\mathbf{x}|\omega_k)\, P(\omega_k) = p(\mathbf{x}) - p(\mathbf{x}|\omega_j)\, P(\omega_j) \tag{6}$$

The Bayes classifier then assigns a pattern $\mathbf{x}$ to class $\omega_i$ if, for all $j \neq i$,

$$p(\mathbf{x}) - p(\mathbf{x}|\omega_i)\, P(\omega_i) < p(\mathbf{x}) - p(\mathbf{x}|\omega_j)\, P(\omega_j) \tag{7}$$

or, equivalently, if

$$p(\mathbf{x}|\omega_i)\, P(\omega_i) > p(\mathbf{x}|\omega_j)\, P(\omega_j), \qquad j = 1, 2, \ldots, W;\; j \neq i \tag{8}$$

The Bayes classifier for a 0-1 loss function is therefore nothing more than the computation of decision functions of the form

$$d_j(\mathbf{x}) = p(\mathbf{x}|\omega_j)\, P(\omega_j), \qquad j = 1, 2, \ldots, W \tag{9}$$

where a pattern vector $\mathbf{x}$ is assigned to the class whose decision function yields the largest numerical value.

The decision functions in Equation 9 are optimal in the sense that they minimize the average loss in misclassification. For this optimality to hold, the probability density functions of the patterns in each class, as well as the probability of occurrence of each class, must be known. The latter requirement usually is not a problem, since it can generally be inferred from knowledge of the problem. Estimation of the probability density function $p(\mathbf{x}|\omega_j)$ is another matter: if the pattern vectors $\mathbf{x}$ are $n$-dimensional, then $p(\mathbf{x}|\omega_j)$ is a function of $n$ variables, which requires methods from multivariate probability theory for its estimation.

These methods are difficult to apply in practice, especially if the number of representative patterns from each class is not large, or if the underlying form of the probability density functions is not well behaved. Use of the Bayes classifier generally is based on the assumption of an analytic expression for the various density functions, followed by an estimation of the necessary parameters from sample patterns of each class. By far, the most prevalent form assumed for $p(\mathbf{x}|\omega_j)$ is the Gaussian probability density function. The closer this assumption is to reality, the closer the Bayes classifier approaches the minimum average loss.
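Equations 8 and 9 with Gaussian class densities give a minimal Bayes classifier. The two one-dimensional Gaussian classes below are our own illustrative parameters, not from the notes:

```python
import math

# Assumed class-conditional Gaussian densities p(x|w_j) and priors P(w_j)
# (the means, standard deviations, and priors here are hypothetical).
classes = {
    "w1": {"mean": 0.0, "std": 1.0, "prior": 0.5},
    "w2": {"mean": 3.0, "std": 1.0, "prior": 0.5},
}

def gaussian_pdf(x, mean, std):
    """Gaussian probability density function."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def bayes_classify(x):
    """Equation 9: d_j(x) = p(x|w_j) P(w_j); assign x to the class whose
    decision function yields the largest numerical value."""
    scores = {name: gaussian_pdf(x, c["mean"], c["std"]) * c["prior"]
              for name, c in classes.items()}
    return max(scores, key=scores.get)
```

With equal priors and equal variances, the decision boundary sits at the midpoint between the class means ($x = 1.5$ here); unequal priors or variances shift or curve it, which is the quadratic-discriminant case.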

Utah State - GIS - 4930
Sept 16 - 18Data entry (digitizing, scanning) Editing geodata Quality control and error checking Tiles Edgematching Georeferencing and transformationsEditing geodataOnce you have completed initial data entry, you will still need to clean
Utah State - GIS - 4930
Sep 30 - Oct 2Geographic objects Lowlevel vs. highlevel objects Spatial measurement Calculating area, length, shape, distanceFunctional distance ReclassificationFunctional distanceTuesday, we discussed conceptually the idea of functional
Utah State - GIS - 4930
Goals for this week (Sept. 9 - 11)Projections Data storage formats Database Management Systems (tabular) Graphic Database Structure raster, vector, surficial Data compressionProjectionsIn projecting a map, you are attempting to represen
Utah State - GIS - 4930
Goals for this weekUnderstanding geographicacy Understanding cartographic communication Relationship of scale to cartography Map symbolism Know different families of map projections Familiarity with some geographic grid systems Understand the
Utah State - GISCLASS - 2005
ArcGIS CustomizationsAny customizations added to the normal.mxt template will always be available from ArcGIS when the same user is logged onto the machine (this file is saved in &quot;\Documents and settings\&lt;profile_name&gt;\Application Data\ESRI\ArcMap\T
Utah State - GISCLASS - 2005
Customizing ArcGISWhy learn customization? Make the software suit your preferences Automate repetitive tasks Use tools from other sources Increase the power of the software Looks great on a resumCustomization levels Project-specific saved
Utah State - STREAMREST - 08
Version 4.0.0 March 2008Section - Arrays SizesPlan 01 1 1 0 27 90 F 7 1 0 0 9 9 0 0 0 0
Utah State - STREAMREST - 08
Proj Title=Design Exercise - Generic HEC RAS ModelCurrent Plan=p01Default Exp/Contr=0.3,0.1English UnitsGeom File=g01Flow File=f01Plan File=p01Y Axis Title=ElevationX Axis Title(PF)=Main Channel DistanceX Axis Title(XS)=StationBEGIN DESCRIP
Utah State - STREAMREST - 08
1.0591257542558040.0000000000000000.000000000000000-1.059287684282477460409.9029325585000004484271.707987529200000
Utah State - FRWS - 3800
MOJAVE CREOSOTEBUSH DESERTLOCATION Creosotebush represents the bottom of the vegetation zone distribution that we are studying.DISTRIBUTIONDISTRIBUTION Barely gets into Utah at the Dixie Corridor, but occurs throughout the lowlands of the Moj
Utah State - FRWS - 3800
Wildland Ecosystems FRWS 3800Syllabus, Important Dates, and Class Information Spring 2006_Lectures: M, W, F Instructors:8:30-9:20 BNR 314 Office Hours:MWF 9:30am 11:00am or by appointmentDoug Ramsey NR 355A 797-3783 doug.ramsey@usu.edu htt