Lecture 13: 03/07/2007
Recall:
Simulated Annealing. Given S and φ : S → R, find x ∈ S minimizing φ(x). SA solution: for each of a bunch of T values, "run Metropolis" at each T using some symmetric Q(i,j) for proposals, with acceptance probability min{1, e^(−(φ(j)−φ(i))/T)}
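The recipe above (a schedule of T values, Metropolis moves at each T) can be sketched in a few lines of Python. Everything concrete below — the quadratic φ, the state set {0, …, 20}, the ±1 neighbor proposal, and the geometric cooling schedule — is a hypothetical stand-in chosen to make the sketch runnable, not anything specified in the lecture.

```python
import math
import random

def simulated_annealing(phi, states, neighbors, temps, steps_per_temp, seed=0):
    """Minimize phi over a discrete state set by running Metropolis at each
    temperature T: propose a neighbor j of the current state i (symmetric Q),
    accept with probability min(1, exp(-(phi(j) - phi(i)) / T))."""
    rng = random.Random(seed)
    x = rng.choice(states)
    best = x
    for T in temps:
        for _ in range(steps_per_temp):
            j = rng.choice(neighbors(x))          # symmetric proposal Q(i, j)
            delta = phi(j) - phi(x)
            if delta <= 0 or rng.random() < math.exp(-delta / T):
                x = j
            if phi(x) < phi(best):                # track the best state seen
                best = x
    return best

# Toy example: minimize phi(x) = (x - 7)**2 on S = {0, ..., 20},
# proposing moves to x-1 or x+1 (clipped to S).
S = list(range(21))
phi = lambda x: (x - 7) ** 2
nbrs = lambda x: [max(x - 1, 0), min(x + 1, 20)]
temps = [10.0 * (0.8 ** k) for k in range(30)]    # geometric cooling schedule
print(simulated_annealing(phi, S, nbrs, temps, 50))   # best state found
```

Lowering T slowly makes uphill moves progressively rarer, so the chain settles into low-φ states while still being able to escape shallow local minima early on.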
01/24/2007: Lecture 2
Evolutionary Process:
What is (or, what will we mean by) an "evolutionary process?" The word "evolution" comes up a lot in science and engineering (e.g., an RLC circuit with sources). From a vector x whose entries are capacitor voltages
ECE 496
SOLUTIONS TO HOMEWORK ASSIGNMENT I Spring 2007
The numbers correspond with the numbering of the Thought Problems at the end of Chapter 1 of the book.
2. In parts (a) through (c) we list the outcomes of the games in the standard way assuming that P
Lecture 15: 03/14/2007
Recall:
Done with evolution strategies: (1+1)-ES, (μ+λ)-ES, (μ,λ)-ES. Wrap up the EP + ES discussion: 1. EP a. b. c. d. e. 2. ES a. Also constant-sized populations (μ) b. Early (and bulk of later) work: mutation as on
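As a concrete illustration of the simplest scheme named above, here is a minimal (1+1)-ES sketch in Python. The sphere objective, the fixed mutation scale sigma, and the generation count are illustrative assumptions only (real ES work adapts sigma, e.g. via the 1/5-success rule).

```python
import random

def one_plus_one_es(f, x0, sigma, generations, seed=0):
    """Minimal (1+1)-ES: a single parent produces a single offspring by
    adding Gaussian noise to every coordinate; the offspring replaces the
    parent only if it is no worse (we minimize f)."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(generations):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        if f(child) <= f(x):          # elitist: keep the better of the two
            x = child
    return x

sphere = lambda v: sum(vi * vi for vi in v)    # classic ES test objective
x = one_plus_one_es(sphere, [5.0, -3.0], sigma=0.5, generations=2000)
print(sphere(x))    # much smaller than the starting value of 34
```

Because the parent is only ever replaced by a no-worse offspring, the objective value is monotonically non-increasing over generations.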
Lecture 10: 02/26/2007
Recall:
Segueing into discussing "Monte Carlo stuff." Theoretical Tool: Strong Law of Large Numbers (SLLN). If x1, x2, x3, … are iid random variables with finite (common) mean / expected value xbar, we have with probability 1
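A quick numerical illustration of the SLLN in the Monte Carlo spirit (the integrand and target below are hypothetical choices): the sample mean of iid draws settles near the true expected value.

```python
import random

def mc_estimate(sample, n, seed=0):
    """SLLN in action: the running average of n iid draws converges
    (almost surely) to the common mean as n grows."""
    rng = random.Random(seed)
    return sum(sample(rng) for _ in range(n)) / n

# Estimate E[U^2] = 1/3 for U uniform on [0, 1].
est = mc_estimate(lambda rng: rng.random() ** 2, 100_000)
print(est)   # near 1/3
```

With 100,000 samples the standard error here is below 0.001, so the estimate sits very close to the exact value 1/3.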
Lecture 14: 03/12/2007
Recall:
Discussing early work on evolutionary computation. L. Fogel in the 1960's (evolutionary programming). Goal: evolve "machine intelligence." Focus: evolving finite-state machines for sequence prediction. Sequ
Lecture ?+1: 04/11/2007 Recall: No Free Lunch theorems. Two views (with warning that it's not totally settled how they apply to evolutionary algorithms). Warning to exercise caution when applying these theorems. Bottom Line: the behavior of GAs
Lecture ?: 04/23/2007 Recall: General comments / criticisms about "classical" game theory. 1. Requires hyperrational players (over the years, people have tried a variety of approaches to bounded rationality). 2. Games can have multiple Nash equilibria
ECE 220 Multimedia Signal Processing
October 12, 2006 Fall 2006
Topics & Concepts for Exam 2 (Oct. 19 7:30 pm Phillips 101)
The following is a list of topics and concepts in the material which Exam 2 will cover. This is a comprehensive listing of w
ECE 493
HOMEWORK ASSIGNMENT II
Spring 2007
1. For the Markov chain in Figure 1, determine the following items. (a) P^(2)(i, j) for every i and j. (b) f_13^(4), the probability that you reach state 3 for the first time after four steps given that you start
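Since Figure 1 is not reproduced here, the sketch below uses a made-up 3×3 transition matrix just to show the mechanics: P^(2) comes from squaring P (Chapman–Kolmogorov: sum over the intermediate state), and f_ij^(n) comes from propagating probability mass for n−1 steps while treating state j as taboo, then stepping into j.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical 3-state transition matrix (Figure 1 is not reproduced here);
# states are 0-indexed, so f_13^(4) corresponds to first_passage(P, 0, 2, 4).
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

# Two-step transition probabilities: P^(2)(i, j) = (P * P)[i][j].
P2 = mat_mul(P, P)

def first_passage(P, i, j, n):
    """f_ij^(n): probability of hitting j for the first time on step n,
    starting from i; keep j taboo for the first n-1 steps, then step in."""
    probs = [0.0] * len(P)
    probs[i] = 1.0
    for _ in range(n - 1):
        nxt = [0.0] * len(P)
        for a, pa in enumerate(probs):
            if a == j or pa == 0.0:
                continue
            for b in range(len(P)):
                if b != j:            # never pass through j early
                    nxt[b] += pa * P[a][b]
        probs = nxt
    return sum(pa * P[a][j] for a, pa in enumerate(probs) if a != j)

print(P2)
print(first_passage(P, 0, 2, 4))
```

For n = 1 the taboo loop is skipped and the function returns P(i, j), as it should; summing f_ij^(n) over all n gives the overall probability of ever reaching j.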
Lecture 17: 03/28/2007
Recall:
<Missed>
Specialize this to the case where Q = {representatives in P(t) of some schema H}, so |Q| = m(H, t). True that Prob{some element of Q selected as a parent on any one pick} = (sum of f(x) over x in Q) / (sum of f(y) over y in P(t))
Question: given that H-representatives
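Assuming fitness-proportional selection (as in the SGA lectures), the probability above is just the fitness mass of the H-representatives over the total population fitness. A sketch with a hypothetical population, schema, and fitness function:

```python
def schema_pick_prob(population, fitness, matches):
    """Probability that a single fitness-proportional pick selects a
    representative of schema H: total fitness of the H-representatives
    divided by total population fitness."""
    total = sum(fitness(x) for x in population)
    in_h = sum(fitness(x) for x in population if matches(x))
    return in_h / total

# Schema H = 1*0* over length-4 strings (fixed bits at positions 0 and 2).
matches_h = lambda x: x[0] == "1" and x[2] == "0"
pop = ["1100", "1001", "0110", "1111"]
f = lambda x: 1 + x.count("1")      # hypothetical positive fitness
print(schema_pick_prob(pop, f, matches_h))   # 6/14 = 3/7 here
```

Here "1100" and "1001" match H with fitness 3 each, so the pick probability is (3+3)/(3+3+3+5) = 3/7.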
Lecture 9: 02/21/2007
Recall:
Limit theorems for Markov chains. First two were about limiting behavior of N_ij(n) = # of visits to j, starting from i, in time <= n. Introduced P(i,j) notation: P(i,j) = prob(1-step i-to-j transition) = the number next to
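The quantity N_ij(n) is easy to watch empirically. The two-state chain below is hypothetical; for a nice (irreducible, aperiodic) chain the visit fractions N_ij(n)/n settle near the stationary probabilities, which is the flavor of the limit theorems being recalled.

```python
import random

def visit_counts(P, i, n, seed=0):
    """Simulate n steps of a finite Markov chain from state i and count
    N_ij(n), the number of visits to each state j in time <= n."""
    rng = random.Random(seed)
    counts = [0] * len(P)
    state = i
    for _ in range(n):
        state = rng.choices(range(len(P)), weights=P[state])[0]
        counts[state] += 1
    return counts

# Hypothetical two-state chain; balancing 0.6 * 0.2 = 0.4 * 0.3 shows the
# stationary distribution is pi = (0.6, 0.4).
P = [[0.8, 0.2],
     [0.3, 0.7]]
N = visit_counts(P, 0, 100_000)
print([c / 100_000 for c in N])   # near [0.6, 0.4]
```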
ECE 496
SOLUTIONS TO HOMEWORK ASSIGNMENT V Spring 2007
1. Suppose the players play a total of T stages. If you are Player 1, and Player 2 plays the proposed strategy, can you do better by playing a different strategy? Any such different strategy would result
ECE 496
SOLUTIONS TO HOMEWORK ASSIGNMENT IV Spring 2007
1. Start with the complicated one and work on it. First note that the k = 0 term is zero in the sum, so

$$\sum_{k=0}^{n} k \binom{n}{k} p_{\mathrm{sel}}(x)^k (1 - p_{\mathrm{sel}}(x))^{n-k} = \sum_{k=1}^{n} k \binom{n}{k} p_{\mathrm{sel}}(x)^k (1 - p_{\mathrm{sel}}(x))^{n-k} = n\, p_{\mathrm{sel}}(x) \sum_{k=1}^{n} \binom{n-1}{k-1} p_{\mathrm{sel}}(x)^{k-1} (1 - p_{\mathrm{sel}}(x))^{n-k}$$
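A quick numerical check that the sum collapses to n·p_sel(x), the mean of a binomial; the values of n and p_sel below are arbitrary.

```python
from math import comb

def expected_picks(n, p):
    """Left-hand side of the identity: sum_k k * C(n,k) p^k (1-p)^(n-k)."""
    return sum(k * comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1))

n, p = 10, 0.3
print(expected_picks(n, p), n * p)   # both equal 3.0 up to float rounding
```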
ECE 496
SOLUTIONS TO HOMEWORK ASSIGNMENT III Spring 2007
1. Markov chains are not only extraordinarily useful but also fun to play with. (a) The transition matrix P is defined by [P]_ij = Prob{i → j in one step}, so 1/2 P = 4 1/3 2/3 2 1/6 1/2 1/6 3 1/3 1/
ECE 496
SOLUTIONS TO HOMEWORK ASSIGNMENT II Spring 2007
1. Since this Markov chain has finitely many states, it's convenient to solve parts of the problem using the transition matrix P defined by [P]_ij = P(i, j), where P(i, j) is the probability of making
Lecture 12: 03/05/2007
Recall:
Metropolis Algorithm
S: big discrete set of states
π(·): a probability distribution on S
Algorithm constructs a Markov chain using "proposal followed by accept/reject." Start with symmetric Qij; Qij = prob
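The proposal / accept-reject loop can be sketched as follows. The three-state target below is a hypothetical example, and the uniform proposal plays the role of the symmetric Q; note the algorithm only ever needs ratios π(j)/π(i), never the normalizing constant.

```python
import random

def metropolis_chain(pi, states, steps, seed=0):
    """Metropolis sketch on a finite state set: propose j uniformly
    (a symmetric Q), accept with probability min(1, pi(j)/pi(i));
    on rejection the chain stays put. Returns visit counts."""
    rng = random.Random(seed)
    x = rng.choice(states)
    counts = {s: 0 for s in states}
    for _ in range(steps):
        j = rng.choice(states)                     # symmetric proposal
        if rng.random() < min(1.0, pi(j) / pi(x)):
            x = j                                  # accept the proposal
        counts[x] += 1                             # rejected => stay at x
    return counts

# Unnormalized target weights; the chain's visit fractions should approach
# the normalized distribution (1/7, 2/7, 4/7).
weights = {"a": 1.0, "b": 2.0, "c": 4.0}
counts = metropolis_chain(lambda s: weights[s], list(weights), 200_000)
print({s: c / 200_000 for s, c in counts.items()})   # near 1/7, 2/7, 4/7
```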
Lecture 16: 03/26/2007
Recall:
Standard (Simple) Genetic Algorithm (SGA). Fixed-size populations (n). Individuals in the population are fixed-length (L) bit strings. Exogenous fitness function: each bit string x has a fitness f(x); assume that f(x) > 0
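One generation of the SGA as described (fitness-proportional selection of n parents, one-point crossover, bitwise mutation) might be sketched like this. The OneMax-style fitness and the parameter values are illustrative assumptions, and n is assumed even so parents pair off cleanly.

```python
import random

def sga_step(pop, f, pc, pm, rng):
    """One SGA generation: fitness-proportional selection of n parents,
    one-point crossover with prob pc, bitwise mutation with prob pm."""
    weights = [f(x) for x in pop]                 # requires f(x) > 0
    parents = rng.choices(pop, weights=weights, k=len(pop))
    nxt = []
    for i in range(0, len(parents) - 1, 2):
        a, b = parents[i], parents[i + 1]
        if rng.random() < pc:                     # one-point crossover
            cut = rng.randrange(1, len(a))
            a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
        for child in (a, b):                      # bitwise mutation
            nxt.append("".join(
                bit if rng.random() >= pm else "10"[int(bit)]
                for bit in child))
    return nxt

rng = random.Random(0)
f = lambda x: 1 + x.count("1")                    # OneMax-style, always > 0
pop = ["".join(rng.choice("01") for _ in range(8)) for _ in range(20)]
for _ in range(40):
    pop = sga_step(pop, f, pc=0.7, pm=0.01, rng=rng)
print(max(f(x) for x in pop))                     # best fitness after 40 gens
```

Selection pushes the population toward high-ones strings while mutation keeps injecting variation; the population size stays fixed at n = 20 every generation.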
Lecture ?: 04/09/2007
Recall:
Talking about the SGA as a Markov chain on population space (fitness-proportional selection). Early examples (without mutation): had some absorbing states; these are the monomorphic populations (all individuals the same), al
ECE 496
AMINO ACIDS AND THE GENETIC CODE
Spring 2007
Here are the 20 amino acids in alphabetical order (along with a three-letter abbreviation for each): ala alanine arg arginine asn asparagine asp aspartic acid cys cysteine gln glutamine glu glutamic acid
ECE 496
HOMEWORK ASSIGNMENT III
Spring 2007
1. Consider the three-state Markov chain in Diagram 1. (a) Find the transition matrix P and the stationary distribution for this Markov chain. (b) Find labels for the arrows in Diagram 2 so that the resulting Markov
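Diagram 1 isn't reproduced here, so the sketch below works part (a)'s second half for a made-up three-state chain: the stationary distribution is the probability row vector fixed by pi = pi·P, found here by power iteration.

```python
def stationary(P, iters=500):
    """Approximate the stationary distribution pi = pi * P by repeatedly
    multiplying an initial distribution by the transition matrix."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical three-state chain (irreducible and aperiodic, so the
# iteration converges); Diagram 1 is not reproduced here.
P = [[0.0, 0.7, 0.3],
     [0.4, 0.0, 0.6],
     [0.5, 0.5, 0.0]]
pi = stationary(P)
print(pi)
# Verify the defining property pi = pi * P:
print(all(abs(pi[j] - sum(pi[i] * P[i][j] for i in range(3))) < 1e-9
          for j in range(3)))   # True
```

For a three-state chain this could equally be solved exactly as a small linear system (pi·P = pi together with the entries of pi summing to 1).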
ECE 493
HOMEWORK ASSIGNMENT IV
Spring 2007
1. Reconcile the two formulas for the expected number of times a specific string x from the current population P (t) is picked to be a parent for the next generation. Recall that |P (t)| = n and that we pick n parents
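Assuming fitness-proportional selection with replacement, both formulas should reduce to n · p_sel(x) = n · f(x) / Σ_y f(y). A quick empirical check with a hypothetical population and fitness (the strings are distinct, so counting occurrences of x is unambiguous):

```python
import random

def avg_times_picked(pop, f, x, trials, seed=0):
    """Empirical expected number of times string x is picked when n = |P(t)|
    parents are drawn fitness-proportionally (with replacement)."""
    rng = random.Random(seed)
    n = len(pop)
    weights = [f(y) for y in pop]
    total = 0
    for _ in range(trials):
        total += rng.choices(pop, weights=weights, k=n).count(x)
    return total / trials

pop = ["00", "01", "10", "11"]
f = lambda s: 1 + s.count("1")     # hypothetical fitnesses 1, 2, 2, 3
x = "11"
# Formula: n * psel(x) = n * f(x) / sum_y f(y) = 4 * 3/8 = 1.5
avg = avg_times_picked(pop, f, x, trials=50_000)
print(avg)   # near 1.5
```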
ECE 496
HOMEWORK ASSIGNMENT V
Spring 2007
1. In the finitely repeated (with summed payoffs) Battle of the Sexes/Brothers game

         C        B
    C  (3, 2)  (1, 1)
    B  (1, 1)  (2, 3)

show that a pure-strategy Nash equilibrium profile is for the players both to play C on odd-numbered stages
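A brute-force check of the stage game underlying the repeated-game argument: enumerate the four pure profiles and test each for profitable unilateral deviations. Both (C, C) and (B, B) survive, and playing a stage-game equilibrium at every stage means no single-stage deviation can pay, which is the backbone of the requested alternating profile.

```python
# Payoff bimatrix for the stage game: payoffs[r][c] = (u1, u2) when
# Player 1 plays row r and Player 2 plays column c (0 = C, 1 = B).
payoffs = [[(3, 2), (1, 1)],
           [(1, 1), (2, 3)]]

def is_pure_nash(payoffs, r, c):
    """(r, c) is a pure Nash equilibrium iff neither player can gain by a
    unilateral deviation to their other action."""
    u1, u2 = payoffs[r][c]
    best1 = max(payoffs[rr][c][0] for rr in range(2))   # P1 deviations
    best2 = max(payoffs[r][cc][1] for cc in range(2))   # P2 deviations
    return u1 >= best1 and u2 >= best2

print([(r, c) for r in range(2) for c in range(2)
       if is_pure_nash(payoffs, r, c)])
# -> [(0, 0), (1, 1)]  i.e. (C, C) and (B, B)
```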
ECE 496
SUPPLEMENTARY HANDOUT
Spring 2007
COUNTABLE-STATE DISCRETE-TIME HOMOGENEOUS MARKOV CHAINS 1. Basic definitions, transition probabilities, etc. Picture a huge panel of colored lights. The panel might feature finitely many lights or countably infinitely many