E

entropy
For our purposes, the entropy measure
–Σi pi log2(pi)
gives us the average amount of information, in bits, in some attribute of an instance. The information referred to is information about what class the instance belongs to, and pi is the probability that an instance belongs to class i.

 

The rationale for this is as follows: –log2(p) is the amount of information in bits associated with an event of probability p. For example, with an event of probability ½, like flipping a fair coin, –log2(p) is –log2(½) = 1, so there is one bit of information. This should coincide with our intuition of what a bit means (if we have one). If there is a range of possible outcomes with associated probabilities, then to work out the average number of bits, we need to multiply the number of bits for each outcome (–log2(p)) by its probability p and sum over all the outcomes. This is where the formula comes from.
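
To make the formula concrete, here is a minimal Python sketch of the calculation (the function name entropy and the example distributions are ours, purely for illustration):

    import math

    def entropy(probabilities):
        # Average information, in bits, of a class distribution:
        # -sum_i p_i * log2(p_i); outcomes with p_i = 0 contribute nothing.
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # A fair coin: two outcomes of probability 1/2 each -> 1 bit.
    print(entropy([0.5, 0.5]))      # 1.0

    # Instances split 9 to 5 across two classes.
    print(entropy([9/14, 5/14]))    # about 0.940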

Entropy is used in the ID3 decision tree induction algorithm.

epoch
In training a neural net, the term epoch is used to describe a complete pass through all of the training patterns. The weights in the neural net may be updated after each pattern is presented to the net, or they may be updated just once at the end of the epoch. The term is frequently used as a measure of the speed of learning, as in "training was complete after x epochs".
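
As an illustration of counting epochs, here is a minimal, self-contained Python sketch that trains a single linear unit with the delta rule, updating the weight after each pattern; the data, learning rate and stopping criterion are made up for the example:

    # (input, target) pairs; the targets happen to satisfy target = 2 * input.
    patterns = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
    w, rate = 0.0, 0.1

    for epoch in range(1, 1001):
        for x, target in patterns:              # one pass over all patterns = one epoch
            output = w * x
            w += rate * (target - output) * x   # weight updated after each pattern
        total_error = sum((t - w * x) ** 2 for x, t in patterns)
        if total_error < 1e-6:                  # stopping criterion
            print(f"training was complete after {epoch} epochs")
            break
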
error backpropagation learning algorithm
The error backpropagation learning algorithm is a form of supervised learning used to train mainly feedforward neural networks to perform some task. In outline, the algorithm is as follows:
  1. Initialization: the weights of the network are initialized to small random values.
  2. Forward pass: The inputs of each training pattern are presented to the network. The outputs are computed using the inputs and the current weights of the network. Certain statistics are kept from this computation, and used in the next phase. The target outputs from each training pattern are compared with the actual activation levels of the output units - the difference between the two is termed the error. Training may be pattern-by-pattern or epoch-by-epoch. With pattern-by-pattern training, the pattern error is provided directly to the backward pass. With epoch-by-epoch training, the pattern errors are summed across all training patterns, and the total error is provided to the backward pass.

     

  3. Backward pass: In this phase, the weights of the net are updated. See the main article on the backward pass for some more detail.

     

  4. Go back to step 2. Continue doing forward and backward passes until the stopping criterion is satisfied.
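
As a concrete sketch of steps 1-4, the following minimal Python/numpy program trains a small feedforward network (2 inputs, 4 hidden sigmoid units, 1 output) on the XOR task with pattern-by-pattern updates; the network size, learning rate and error threshold are illustrative choices, not part of the algorithm's definition:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
    T = np.array([[0], [1], [1], [0]], dtype=float)              # target outputs (XOR)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # 1. Initialization: weights set to small random values.
    W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)
    rate = 0.5

    for epoch in range(10000):
        total_error = 0.0
        for x, t in zip(X, T):                            # pattern-by-pattern training
            # 2. Forward pass: activations from the inputs and current weights.
            h = sigmoid(x @ W1 + b1)                      # hidden activations
            y = sigmoid(h @ W2 + b2)                      # output activations
            e = t - y                                     # the pattern error
            total_error += float(e @ e)
            # 3. Backward pass: propagate the error back and update the weights.
            delta_out = e * y * (1.0 - y)                 # output-layer delta
            delta_hid = (W2 @ delta_out) * h * (1.0 - h)  # hidden-layer delta
            W2 += rate * np.outer(h, delta_out); b2 += rate * delta_out
            W1 += rate * np.outer(x, delta_hid); b1 += rate * delta_hid
        # 4. Repeat forward and backward passes until the stopping criterion holds.
        if total_error < 0.01:
            break

    print(f"stopped after {epoch + 1} epochs, total squared error {total_error:.4f}")

Note that with an unlucky initialization such a run can settle in a local minimum instead of converging.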

     

See also forward pass, backward pass, delta rule, error surface, local minimum, gradient descent and momentum.

Error backpropagation learning is often familiarly referred to just as backprop.

error surface
When the total error of a backpropagation-trained neural network is expressed as a function of the weights, and graphed (to the extent that this is possible with a large number of weights), the result is a surface termed the error surface. The course of learning can be traced on the error surface: as learning is supposed to reduce error, when the learning algorithm causes the weights to change, the current point on the error surface should descend into a valley.

The "point" defined by the current set of weights is termed a point in weight space. Thus weight space is the set of all possible values of the weights.

See also local minimum and gradient descent.

excitatory connection
See weight.
expected error estimate
In pruning a decision tree, one needs to be able to estimate the expected error at any node (branch or leaf). This can be done using the Laplace error estimate, which is given by the formula

 

E(S) = (N – n + k – 1) / (N + k)
where
S is the set of instances in a node
k is the number of classes (e.g. 2 if instances are just being classified into 2 classes: say positive and negative)
N is the number of instances in S
C is the majority class in S
n out of N examples in S belong to C
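
The formula is easy to check with a one-line implementation (the function name and the worked values are ours):

    def laplace_error(N, n, k):
        # Laplace (expected) error estimate for a node with N instances,
        # n of which belong to the majority class, out of k classes.
        return (N - n + k - 1) / (N + k)

    # A pure leaf: all 10 of 10 instances in the majority class, 2 classes.
    print(laplace_error(10, 10, 2))   # (0 + 1) / 12, about 0.083

    # A mixed node: 6 of 10 instances in the majority class, 2 classes.
    print(laplace_error(10, 6, 2))    # (4 + 1) / 12, about 0.417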

