Maximum Likelihood Estimation
We have a probabilistic model, M, of some
phenomenon. We know the structure of M exactly,
but not the values of its probabilistic parameters, θ.
Each execution of M produces an
observation, x[i], according to the distribution that M defines.
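The setup above can be sketched concretely. In this hypothetical example (the data and the grid search are illustrative assumptions, not from the slides), M is a biased coin: we know its structure (Bernoulli) but not its parameter θ, and we pick the θ that maximizes the log-likelihood of the observed sample.

```python
import math

# Hypothetical model M: a biased coin. Structure known (Bernoulli),
# parameter theta unknown. The observations below are assumed sample data.
observations = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

def log_likelihood(theta, xs):
    """Log-probability of the observed sample under Bernoulli(theta)."""
    return sum(math.log(theta if x == 1 else 1.0 - theta) for x in xs)

# Grid search over candidate parameter values; the maximizer is the MLE.
candidates = [i / 100 for i in range(1, 100)]
mle = max(candidates, key=lambda t: log_likelihood(t, observations))
print(mle)  # for Bernoulli data the MLE equals the sample mean: 0.7 here
```

For the Bernoulli case the maximum can also be found in closed form (it is the sample mean); the grid search is just the most transparent way to show "pick the parameters that make the data most probable."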
2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
Systematic Analysis of Machine Learning Algorithms
on EEG Data for Brain State Intelligence
Alexander Chan, Christopher E. Early, Sishir Subedi, Yuezhe Li, Hong Lin
Decision tree induction
Evaluation of classifiers
Naive Bayesian classification
Naive Bayes for text classification
Support vector machines
Supervised vs. unsupervised learning
Supervised: (input, output) pairs of the function to be
learned are given or can be observed.
Example: back-propagation in neural nets.
Unsupervised: no information about the desired outcomes is given.
Curse of Dimensionality
A major problem is the curse of dimensionality.
If the data x lies in high dimensional space, then
an enormous amount of data is required to learn
distributions or decision rules.
Example: with 50 dimensions, even a crude grid of 10 bins
per axis has 10^50 cells, far more than any dataset can fill.
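The blow-up is easy to see numerically. The sketch below (an illustration, with 10 bins per axis as an assumed discretization) counts the grid cells a histogram-style density estimate would need at each dimensionality: the count grows as 10**d, so the data requirement is exponential in the dimension.

```python
# Cells needed for a grid-based density estimate with 10 bins per axis.
# We would need at least one sample per cell to say anything about it.
for d in (1, 2, 3, 10, 50):
    cells = 10 ** d
    print(d, cells)
# At d = 50 there are 10**50 cells -- no realistic dataset covers that.
```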
A random variable x takes on a defined set of
values, each with some probability.
For example, if you roll a fair die, the outcome is random (not
fixed) and there are 6 possible outcomes, each of which occurs
with probability 1/6.
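The die example can be checked by simulation (an illustrative sketch; the sample size and seed are arbitrary choices): the empirical frequency of each face approaches the probability 1/6 as the number of rolls grows.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the illustration is reproducible
rolls = [random.randint(1, 6) for _ in range(60000)]  # simulate a fair die
freq = Counter(rolls)

# Each outcome has probability 1/6, so each frequency should be near 0.1667.
for face in range(1, 7):
    print(face, freq[face] / len(rolls))
```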
Lab 10 Report
1. The Xilinx Design Suite is opened and a project named Lab_10 is created.
2. The VHDL code for the program is written.
3. The UCF (user constraints) file for the design is also written.
4. The programming file is then generated.