dependent variable, whereas the other
variables would be referred to as the independent variables. The next step is
to estimate, statistically, the functional relationship between the dependent
variable and the independent variable(s). Having determined this, and also
that the relationship is statistically significant, the predicted future values of
the independent variables can be used to predict the future values of the
dependent variable.
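As an illustration, this two-step procedure can be sketched in a few lines. The billing and load figures below are invented for illustration, and the fit is an ordinary least-squares line, one common choice for the first step.

```python
# Hypothetical sketch of the two-step procedure: (1) estimate the
# relationship between the dependent variable (annual load) and an
# independent variable (billing data), (2) feed predicted future values
# of the independent variable into the fitted relationship.
# All figures are invented for illustration.

billing = [10.0, 11.2, 12.1, 13.5, 14.8]     # independent variable
load = [100.0, 108.0, 119.0, 131.0, 144.0]   # dependent variable (MW)

# Step 1: least-squares fit of load = a + b * billing.
n = len(billing)
mean_x = sum(billing) / n
mean_y = sum(load) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(billing, load))
     / sum((x - mean_x) ** 2 for x in billing))
a = mean_y - b * mean_x

# Step 2: predict future loads from forecast billing values.
future_billing = [16.0, 17.5]
future_load = [a + b * x for x in future_billing]
```

Whether such a fit is statistically significant, and therefore usable for prediction, is exactly the question the shortcomings below address.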
In this dissertation, billing data were used as independent variables. The
conclusion was that regression and neural networks are not appropriate
techniques for area and transmission substation load forecasts. For more
details, refer to the dissertation; some of the major shortcomings were:
1) Only ten data points (annual loads) are available, while a twenty-year
load forecast is required.
2) The assumptions for the properties of least squares estimators are
violated, i.e.:
i) The model error variance is homogeneous (constant variance)
with mean zero.
ii) The model errors are uncorrelated from observation to
observation.
iii) The model errors are assumed to be normally distributed.
3) Multiple regression models are short-term forecasting techniques
(at most 5 years), not long-term.
4) Some of the independent variables are highly correlated. That leads
to the problem of multicollinearity. In such cases the variance is
overestimated, the confidence intervals for the coefficients become
very wide and sometimes the coefficients have the wrong sign.
5) The model errors (actual minus forecast) are sometimes correlated,
a problem called autocorrelation. With autocorrelation the variance
is underestimated, the t-statistics are inflated and the confidence
intervals for the coefficients become very narrow.
6) Heteroscedasticity: the variances are not constant over time, giving
forecast results that are either too low or too high (funnel effect).

There are techniques such as ridge regression or principal component
regression to solve the multicollinearity problem, but only spreadsheet
software is available.

Artificial neural networks are another technique to consider for forecasting
purposes. In this chapter, a brief overview is given of neural nets, frequently
used terminology, and appropriate neural net structures for predicting future
loads.
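As a numerical aside, the multicollinearity problem (shortcoming 4) and the ridge-regression remedy mentioned above can be illustrated with a small sketch. The data are invented: two regressors are almost perfectly correlated, so the ordinary least-squares coefficients explode with opposite signs, while the ridge penalty (here λ = 1, an arbitrary choice) pulls them back toward sensible values.

```python
# Invented data: x2 is almost identical to x1, and y = x1 + x2 plus a
# small alternating disturbance, so the true coefficients are (1, 1).
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.0, 2.0, 3.0, 4.0, 5.001]
noise = [0.1, -0.1, 0.1, -0.1, 0.1]
y = [u + v + e for u, v, e in zip(x1, x2, noise)]

def solve(lam):
    # Solve the 2x2 normal equations (X'X + lam*I) beta = X'y by hand;
    # lam = 0 gives ordinary least squares, lam > 0 gives ridge regression.
    a11 = sum(v * v for v in x1) + lam
    a12 = sum(u * v for u, v in zip(x1, x2))
    a22 = sum(v * v for v in x2) + lam
    b1 = sum(u * v for u, v in zip(x1, y))
    b2 = sum(u * v for u, v in zip(x2, y))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

ols = solve(0.0)    # unstable: huge coefficients with opposite signs
ridge = solve(1.0)  # both coefficients close to the true value of 1
```

The instability comes from the near-zero determinant of X'X; adding λ to the diagonal makes the system well-conditioned at the cost of a small bias.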
Artificial neural networks are sometimes considered simplified models of the
human brain. Some authors feel that this is misleading, because the human
brain is too complex and not well understood. There are, nevertheless, a
number of similarities that can be examined. Human nerve cells, called neurons, consist of three parts: the cell body, the
dendrites and the axon. The body (soma) is the large, relatively round central body in which almost all
the logical functions of the neuron are performed. It carries out the biochemical transformations required to synthesise the enzymes and other
molecules necessary to the life of the neuron.

Each neuron has a hair-like structure of dendrites (inputs) around it. The
dendrites are the principal receptors of the neuron and serve to collect its
incoming signals. The axon (output) is the outgoing connection for signals emitted by the
neuron. Synapses are speed contacts on a neuron, which are the termination 60 points for the axons from other neurons. Synapses play the role of interfaces
connecting some axons of the neurons to the spines of the input dendrites. d d Neuron = synapses
d = dendrites Axon d o = soma o d
d
Figure 3.6.1  Components of a neuron The principle of the single neuron is the following: a number of inputs (xis) are
applied, each input is multiplied by a weight (ω_ji), and then summated. The
output (y_j) of the activation function (ψ_j) is compared with the desired output
(d_j). The difference between the actual and the desired outputs is used to
adjust the weights. The ideal condition would be to have a difference of zero.

Figure 3.6.2 - Single Layer Perceptron with a neuron in its output layer
(inputs x_0 ... x_3, weights ω_j0 ... ω_j3, summation u_j, activation ψ,
output y_j, desired output d_j)

The summated output u_j is given by

    u_j = Σ_{i=0}^{n} ω_ji x_i                        (3.2.9)

    e_j = d_j − y_j = d_j − ψ(u_j)                    (3.2.10)

The following terminology is frequently used in dealing with neural nets:
• Supervised training is accomplished by presenting a sequence of training
vectors, or patterns, each with an associated target output vector. The
weights are adjusted according to a learning algorithm in order to achieve
the target output vector as closely as possible.
• Unsupervised training is, for example, used with self-organising neural
nets. A sequence of input vectors is provided, but no target vectors are
specified. The net modifies the weights so that the most similar input
vectors are assigned to the same output unit.
• Fixed weights are used for constrained optimisation problems. The Boltzmann
machine (without learning) and the continuous Hopfield net can
be used for these types of problems. When these nets are designed, the
weights are set to represent t...
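The single-neuron computation in equations (3.2.9) and (3.2.10), together with the supervised weight adjustment described above, can be sketched as follows. The step activation, the learning rate and the AND-gate training patterns are illustrative assumptions, not taken from the dissertation.

```python
# Sketch of the single-neuron training loop: compute the summated output
# u_j = sum_i w_ji * x_i (eq. 3.2.9), pass it through an activation
# function, and use the error e_j = d_j - y_j (eq. 3.2.10) to adjust the
# weights until the actual output matches the desired output.

def step(u):
    # Hard-limit activation: a common choice for a simple perceptron.
    return 1.0 if u >= 0.0 else 0.0

# Training patterns for a logical AND gate; x[0] = 1 is the bias input x_0.
patterns = [
    ([1.0, 0.0, 0.0], 0.0),
    ([1.0, 0.0, 1.0], 0.0),
    ([1.0, 1.0, 0.0], 0.0),
    ([1.0, 1.0, 1.0], 1.0),
]

weights = [0.0, 0.0, 0.0]
rate = 0.1                      # learning rate (assumed)

for _ in range(100):            # repeated passes over the training set
    for x, d in patterns:
        u = sum(w * xi for w, xi in zip(weights, x))   # eq. (3.2.9)
        e = d - step(u)                                # eq. (3.2.10)
        weights = [w + rate * e * xi for w, xi in zip(weights, x)]

outputs = [step(sum(w * xi for w, xi in zip(weights, x))) for x, _ in patterns]
```

Once the error is zero for every pattern, the weights stop changing, which is the ideal condition of zero difference mentioned above.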