# FunctionApproximationd


```matlab
% Mini Project 1, Part 2
clear all
n = 10;                              % number of neurons in the hidden layer
V1 = rand(n,2); V2 = -rand(n,2);
V = 0.5*(V1+V2);                     % initializing 1st layer's weights in [-0.5, 0.5]
W1 = rand(1,n+1); W2 = -rand(1,n+1);
W = 0.5*(W1+W2);                     % initializing output layer's weights
y = zeros(n,1);
% y(11,1) = -1;
o = 0; E = 0;
N = 20;                              % number of samples between -pi and pi
eta = .1;                            % learning coefficient
epoch = 0;
Hii = W                              % print the initial output weights
for epoch = 1:1000
    E = 0;
    % epoch = epoch+1;
    for i = 1:8*N+1
        x(i) = -pi + .25*pi*(i-1)/N;             % inputs to the network
        d(i) = sin(x(i))*cos(2*x(i));            % desired outputs
        y(:,1) = bivariateexp(V*([x(i) -1]'));   % 1st (hidden) layer's outputs
        o(i) = bivariateexp(W*([y' -1]'));       % output layer's output
```
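The activation `bivariateexp` is not defined in this preview. From the gradient terms `.5*(1-o^2)` and `fy = .5*(1-y(k,1)^2)` used below, it is presumably the bipolar sigmoid, whose derivative is `0.5*(1 - f(net)^2)`. A small Python check of that assumed identity (the function name and sample point are illustrative):

```python
import numpy as np

def bivariateexp(net):
    """Assumed activation: bipolar sigmoid
    f(net) = (1 - e^-net) / (1 + e^-net), equivalently tanh(net/2).
    Its derivative is 0.5 * (1 - f(net)**2), which matches the
    .5*(1-o^2) and .5*(1-y^2) factors in the gradient code."""
    return (1.0 - np.exp(-net)) / (1.0 + np.exp(-net))

# numerically verify f'(net) == 0.5 * (1 - f(net)^2) at a sample point
net = 0.7
f = bivariateexp(net)
h = 1e-6
numeric = (bivariateexp(net + h) - bivariateexp(net - h)) / (2 * h)
analytic = 0.5 * (1 - f**2)
```

If `bivariateexp` is actually a different bipolar activation, the `.5*(1-f^2)` derivative factors in the script would need to change accordingly.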


```matlab
        % output layer's gradient
        delta_o = .5*((d(i)-o(i))*(1-o(i)^2));
        % per-neuron gradients through the augmented hidden vector
        % (a dummy entry is added for the augmented bias weight)
        ya = [y' -1]';
        for nrn = 1:n+1
            deltaY(nrn) = (1-ya(nrn)^2)*W(1,nrn)*delta_o;
        end
        % step 3: error evaluation
        E = E + .5*((d(i)-o(i))^2);
        for k = 1:n
            fy(k,1) = .5*(1-y(k,1)^2);   % hidden activation derivatives
            if (k==n)
                fy(n+1,1) = 0;           % bias entry propagates no error
            end
        end
        % hidden layer's gradient: each neuron uses its own output weight
        delta_y = delta_o*(W(1,:)'.*fy(:,1));
        % step 5: adjusting weights of the output layer
        W = W + eta*delta_o*[y' -1];
        % step 6: adjusting weights of the hidden layer
        V = V + eta*delta_y(1:n,1)*[x(i) -1];
    end
end
m = 1:8*N;
plot(d,'r')   % desired outputs in red
hold on
plot(o)       % network outputs
epoch
Ea = E/(8*N+1)
```
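As a cross-check of the update rules and array shapes, here is the same training loop sketched in NumPy, under the assumption that `bivariateexp` is the bipolar sigmoid; the seed, epoch count, and variable names are illustrative, not from the original script:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, eta = 10, 20, 0.1            # hidden neurons, sample density, learning rate

def act(net):
    # assumed bipolar sigmoid; its derivative is 0.5 * (1 - act(net)**2)
    return (1.0 - np.exp(-net)) / (1.0 + np.exp(-net))

V = rng.uniform(-0.5, 0.5, (n, 2))       # hidden weights (2nd column: bias)
W = rng.uniform(-0.5, 0.5, (1, n + 1))   # output weights (last entry: bias)

xs = -np.pi + 0.25 * np.pi * np.arange(8 * N + 1) / N   # inputs on [-pi, pi]
ds = np.sin(xs) * np.cos(2 * xs)                        # desired outputs

errors = []
for epoch in range(200):
    E = 0.0
    for x, d in zip(xs, ds):
        xa = np.array([x, -1.0])             # augmented input
        y = act(V @ xa)                      # hidden outputs, shape (n,)
        ya = np.append(y, -1.0)              # augmented hidden vector
        o = act(W @ ya)[0]                   # network output (scalar)
        delta_o = 0.5 * (d - o) * (1 - o**2)       # output gradient
        fy = 0.5 * (1 - y**2)                      # hidden derivatives
        delta_y = delta_o * W[0, :n] * fy          # hidden gradients
        W += eta * delta_o * ya                    # step 5: output weights
        V += eta * np.outer(delta_y, xa)           # step 6: hidden weights
        E += 0.5 * (d - o) ** 2
    errors.append(E / (8 * N + 1))               # average epoch error Ea
```

Tracking `errors` per epoch plays the same role as `Ea` in the MATLAB script: it should decrease as the network fits sin(x)·cos(2x).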