Faculty of Engineering Robotics Technology MECH 4041 B. Eng (Hons.) Mechatronics S. Venkannah Mechanical and Production Engineering Department

Substituting this relation into the expression for the cross-ratio gives

In summary, the cross-ratio is an invariant of any set of four collinear points in projective
correspondence. It is unaffected by
the relative position of the line or the position of the optical centre, as shown in Figure 7.

Figure (below): The cross-ratio of every set of four collinear points shown in this figure has the
same value.

Recognition using invariants
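For reference, the invariant in question is the cross-ratio; for four collinear points A, B, C, D (with AC etc. denoting signed distances along the line, and with the usual caveat that conventions for the ordering of the points vary between texts) a standard definition is:

```latex
% Cross-ratio of four collinear points A, B, C, D; AC etc. are signed distances.
% It is preserved under any projective transformation of the line.
\operatorname{Cr}(A,B;C,D) = \frac{AC \cdot BD}{BC \cdot AD}
```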
There are two stages to model-based recognition using invariants:
1. Model acquisition. Models of objects to be recognized are acquired directly from images. For
planar objects, this involves computing their plane projective invariants. Their outline is also
stored for the verification process.
2. Recognition. Invariants are computed for geometric configurations found in the target image. If
an invariant value corresponds to one in the model library, a recognition hypothesis is generated.
This hypothesis is either confirmed or denied by verification: The model outline from the
acquisition image is projected onto the target image. If the projected edges overlap image edges
sufficiently then the hypothesis is verified.
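The two stages above can be sketched as follows. This is a minimal illustration, not the full method: a one-dimensional cross-ratio stands in for a real plane projective invariant, and the edge-overlap verification step is only stubbed out. All names (`acquire`, `recognise`, `model_library`) are hypothetical.

```python
# Sketch of invariant-based model acquisition and recognition.
# cross_ratio is a 1-D stand-in for a real plane projective invariant.

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by scalar coordinates."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

# Stage 1: model acquisition -- compute and store invariants per model.
model_library = {}

def acquire(name, points):
    model_library[name] = cross_ratio(*points)

# Stage 2: recognition -- match invariants against the library.
def recognise(points, tol=1e-6):
    value = cross_ratio(*points)
    hypotheses = [n for n, v in model_library.items() if abs(v - value) < tol]
    return hypotheses  # each hypothesis would then be verified by edge overlap

acquire("widget", (0.0, 1.0, 2.0, 4.0))
# A projective map x -> (2x + 1)/(x + 3) moves the points
# but leaves their cross-ratio unchanged.
f = lambda x: (2 * x + 1) / (x + 3)
hyps = recognise(tuple(f(x) for x in (0.0, 1.0, 2.0, 4.0)))
```

The point of the usage lines is the invariance itself: the transformed points are quite different numerically, yet they index the same model.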
Invariants:
They provide a simple means of comparison
They can provide position and orientation information (e.g., moments)
Too simplistic for some applications
Do not give a unique means of identification

Neural nets:
Neural nets have seen an explosion of interest since their re-discovery as a pattern recognition
paradigm in the early 1980s. The value of some of the applications for which they are used may
be arguable, but there is no doubt that they represent a tool of great value in various areas
generally regarded as “difficult”, particularly speech and visual pattern recognition.
Most neural approaches are based on combinations of elementary processors (neurons), each of
which takes a number of inputs and generates a single output. Associated with each input is a
weight, and the output (in most cases) is then a function of the weighted sum of inputs; this
function may be discrete or continuous, depending on the variety of network in use. A simple
neuron is shown in the model below. The inputs are denoted by v1, v2, … and the weights by
w1, w2, …; the total input to the neuron is then the sum of the weighted inputs.
[Figure: model of a simple neuron. Inputs v1 … vn, each scaled by its weight w1 … wn, feed a summing junction Σ; the sum passes through f(v, w) to give the output y.]

Here θ is a threshold associated with this neuron, so the output is y = f(Σi wi vi − θ). Also associated with the neuron is a transfer function f(x) which provides the output; common examples are the step function

f(x) = 0 if x ≤ 0
f(x) = 1 if x > 0

and the sigmoid

f(x) = 1 / (1 + e^(−x))
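The neuron model just described can be sketched in a few lines, assuming the weighted-sum-minus-threshold form above; the input values, weights, and threshold below are arbitrary illustrations:

```python
import math

def step(x):
    """Discrete transfer function: 0 if x <= 0, else 1."""
    return 0 if x <= 0 else 1

def sigmoid(x):
    """Continuous transfer function: 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, theta, f):
    """Output y = f(sum_i w_i * v_i - theta)."""
    total = sum(w * v for w, v in zip(weights, inputs)) - theta
    return f(total)

# Weighted sum is 0.8*1.0 + 0.4*0.5 = 1.0; minus theta = 0.5, which is > 0.
y_step = neuron([1.0, 0.5], [0.8, 0.4], theta=0.5, f=step)
y_sig = neuron([1.0, 0.5], [0.8, 0.4], theta=0.5, f=sigmoid)
```

Swapping the transfer function is all it takes to move between the discrete and continuous varieties of network mentioned above.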
This model saw a lot of enthusiastic use during an early phase, culminating in Rosenblatt's
perceptron.

The general idea of collections of these neurons is that they are interconnected (so that the output
of one becomes the input of another, or others); this idea mimics the high level of
interconnection of elementary neurons found in brains, which is thought to explain the damage
resistance and recall capabilities of humans. Such an interconnection may then take some
number of external inputs and deliver up some number of external outputs. What lies between
them specifies the network: this may mean a large number of heavily interconnected neurons, or
some highly structured (e.g., layered) interconnection, or, pathologically, nothing. Typical uses
of such a structure are:
Classification: if the output vector (m-dimensional) is binary and contains only a single one,
the position of the one classifies the input pattern into one of m categories.
Auto-association: Some uses of neural networks cause them to re-generate the input pattern at
the outputs (so m = n and vi = yi); the purpose of this may be to derive a more compact vector
representation from within the network internals.
General association: At their most interesting, the vectors v and y represent patterns in
different domains, and the network is forming a correspondence between them.
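For the classification case described above, reading off the category is just locating the position of the single one in the m-dimensional output vector; a minimal illustration:

```python
# Decode an m-dimensional one-hot output vector into a category index.
def classify(output_vector):
    assert sum(output_vector) == 1, "expected exactly one active output"
    return output_vector.index(1)

category = classify([0, 0, 1, 0])  # third of m = 4 categories, i.e. index 2
```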
Feed forward networks:
The standard approach to use such networks is to obtain a training set of data – a set of vectors
for which the 'answer' is already known. This is used to teach a network with some training
algorithm, such that the network can perform the association accurately. Then, in classification
mode, unknown patterns are fed into the net and it produces answers based on generalizing what
it has learned.
Back propagation proceeds by comparing the output of the network to that expected, and
computing an error measure based on the sum of squared differences. Back propagation trains strictly
layered networks in which it is assumed that at least one layer exists between input and output.
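A compact sketch of such training: a 2-input, 2-hidden, 1-output sigmoid network trained by gradient descent on the sum-of-squares error for a single input/target pair. The architecture, initial weights, learning rate, and target are arbitrary choices for illustration, not values from the text:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Small network: 2 inputs -> 2 hidden (sigmoid) -> 1 output (sigmoid).
w_hidden = [[0.1, -0.2], [0.4, 0.3]]   # w_hidden[j][i]: input i -> hidden j
w_out = [0.2, -0.1]                    # hidden j -> output
v, target, lr = [1.0, 0.0], 0.9, 0.5

def forward(v):
    h = [sigmoid(sum(w * x for w, x in zip(row, v))) for row in w_hidden]
    y = sigmoid(sum(w * hj for w, hj in zip(w_out, h)))
    return h, y

def sse(y):
    """Error measure: half the squared output-target difference."""
    return 0.5 * (y - target) ** 2

_, y0 = forward(v)
err_before = sse(y0)

for _ in range(200):
    h, y = forward(v)
    # Output-layer delta: dE/dnet = (y - t) * y * (1 - y) for a sigmoid unit.
    delta_o = (y - target) * y * (1 - y)
    # Hidden-layer deltas, propagated back through the output weights.
    delta_h = [delta_o * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]
    # Gradient-descent weight updates.
    for j in range(2):
        w_out[j] -= lr * delta_o * h[j]
        for i in range(2):
            w_hidden[j][i] -= lr * delta_h[j] * v[i]

_, y1 = forward(v)
err_after = sse(y1)
```

The loop makes the back-and-forth visible: a forward pass, an error comparison at the output, and deltas propagated backwards through the one hidden layer that sits between input and output.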
Back propagation algorithm (Fr...