Chapter 9 – Neural Nets
Data Mining for Business Intelligence
Shmueli, Patel & Bruce
© Galit Shmueli and Peter Bruce 2008
Basic Idea
- Combine input information in a complex & flexible neural net “model”
- Model “coefficients” are continually tweaked in an iterative process
- The network’s interim performance in classification and prediction informs successive tweaks
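A minimal sketch of this tweak-and-repeat loop, assuming a single weight, one record, and a simple error-driven update rule (the learning rate and the rule itself are illustrative choices, not the book's exact training algorithm):

```python
import numpy as np

# Toy illustration: one weight is nudged repeatedly so the prediction
# for a single record moves toward its observed outcome.
rng = np.random.default_rng(0)
w = rng.uniform(-0.05, 0.05)      # start essentially at random: no predictive value
learning_rate = 0.1               # step size for each tweak (illustrative choice)

x, y = 0.2, 1.0                   # one input value and its observed outcome
for step in range(100):
    prediction = 1 / (1 + np.exp(-w * x))   # logistic output for this record
    error = y - prediction                   # interim performance on this record
    w += learning_rate * error * x           # tweak the "coefficient"
```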
Network Structure
- Multiple layers
  - Input layer (raw observations)
  - Hidden layers
  - Output layer
- Nodes
- Weights (like coefficients, subject to iterative adjustment)
- Bias values (also like coefficients, but not subject to iterative adjustment)
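To make the structure concrete, here is a sketch of the pieces as arrays, assuming the 2-input / 3-hidden-node / 2-output layout used for the cheese example that follows (variable names and the choice of NumPy are mine; the zeros are placeholders until the weights are initialized):

```python
import numpy as np

n_input, n_hidden, n_output = 2, 3, 2     # fat & salt in; e.g. like / dislike out

# Weights: one per connection between consecutive layers,
# subject to iterative adjustment during training.
W_input_to_hidden  = np.zeros((n_input, n_hidden))
W_hidden_to_output = np.zeros((n_hidden, n_output))

# Bias values: one per hidden node and one per output node.
bias_hidden = np.zeros(n_hidden)
bias_output = np.zeros(n_output)
```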
Schematic Diagram
Example – Using fat & salt content to predict consumer acceptance of cheese
Example - Data
Moving Through the Network
The Input Layer
- For the input layer, input = output
- E.g., for record #1:
  - Fat: input = output = 0.2
  - Salt: input = output = 0.9
- Output of the input layer = input into the hidden layer
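A sketch of the "input = output" step for record #1, using the fat and salt values from the slide (the array layout is my own):

```python
import numpy as np

record_1 = np.array([0.2, 0.9])        # record #1: fat = 0.2, salt = 0.9

# Input-layer nodes apply no transformation, so output equals input.
input_layer_output = record_1           # this is what the hidden layer receives
```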
The Hidden Layer
- In this example, it has 3 nodes
- Each node receives as input the output of all input nodes
- Output of each hidden node is a function of the weighted sum of inputs
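A sketch of the hidden-node computation, assuming the logistic function g introduced two slides later and small random initial weights (variable names and the random seed are mine):

```python
import numpy as np

def g(s):
    """Logistic (sigmoid) function, used here as the node's activation."""
    return 1 / (1 + np.exp(-s))

rng = np.random.default_rng(1)
x = np.array([0.2, 0.9])                       # outputs of the input layer (fat, salt)
W = rng.uniform(-0.05, 0.05, size=(2, 3))      # W[i, j]: weight from input i to hidden node j
theta = rng.uniform(-0.05, 0.05, size=3)       # bias value for each hidden node

# Each hidden node j outputs g(theta_j + sum_i w_ij * x_i):
hidden_output = g(theta + x @ W)               # three values, one per hidden node
```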
The Weights
- The weights θ (theta) and w are typically initialized to random values in the range -0.05 to +0.05
- Equivalent to a model with random prediction (in other words, no predictive value)
- These initial weights are used in the first round of training
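A sketch of that initialization for the 2-3-2 network, assuming NumPy's uniform sampler (any random draw in the stated range would serve):

```python
import numpy as np

rng = np.random.default_rng(2)

# All theta and w values start as small random numbers in [-0.05, +0.05],
# so the untrained network makes essentially random predictions.
W_input_to_hidden  = rng.uniform(-0.05, 0.05, size=(2, 3))
theta_hidden       = rng.uniform(-0.05, 0.05, size=3)
W_hidden_to_output = rng.uniform(-0.05, 0.05, size=(3, 2))
theta_output       = rng.uniform(-0.05, 0.05, size=2)
```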
Output of Node 3 if g is a Logistic Function
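The formula on this slide is an image in the original. A reconstruction consistent with the surrounding slides (node 3 is one of the three hidden nodes, x_1 = fat and x_2 = salt are its inputs, and θ_3, w_13, w_23 are its bias and weights) would be:

$$
\text{Output}_3 = g\!\left(\theta_3 + \sum_{i=1}^{2} w_{i3}\,x_i\right)
= \frac{1}{1 + e^{-\left(\theta_3 + w_{13}(0.2) + w_{23}(0.9)\right)}}
$$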
Initial Pass of the Network
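A minimal end-to-end sketch of that first, untrained pass for record #1, assuming the 2-3-2 layout, logistic activations at the hidden and output nodes, and random initial weights (all names and the seed are illustrative):

```python
import numpy as np

def g(s):
    return 1 / (1 + np.exp(-s))                  # logistic function

rng = np.random.default_rng(3)

# Random initial weights and bias values in [-0.05, +0.05].
W_h     = rng.uniform(-0.05, 0.05, size=(2, 3))  # input -> hidden
theta_h = rng.uniform(-0.05, 0.05, size=3)
W_o     = rng.uniform(-0.05, 0.05, size=(3, 2))  # hidden -> output
theta_o = rng.uniform(-0.05, 0.05, size=2)

x = np.array([0.2, 0.9])                         # record #1: fat, salt
hidden = g(theta_h + x @ W_h)                    # outputs of the 3 hidden nodes
output = g(theta_o + hidden @ W_o)               # outputs of the 2 output nodes

print(output)   # both values come out near 0.5: no predictive value yet
```

Training would then compare these outputs with the observed acceptance for record #1 and tweak the weights accordingly, as described in the earlier slides.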