H

 

hidden layer
Neurons or units in a feedforward net are usually structured into two or more layers. The input units constitute the input layer. The output units constitute the output layer. Layers in between the input and output layers (that is, layers that consist of hidden units) are termed hidden layers.

In layered nets, each neuron in a given layer is connected by trainable weights to each neuron in the next layer.
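
As an illustration, the short Python sketch below sets up such a layered net; the layer sizes, the random weights, and the tanh activation are assumptions made for the example, not part of the definition above.

    import numpy as np

    # A minimal sketch of a layered feedforward net with one hidden layer.
    rng = np.random.default_rng(0)
    n_input, n_hidden, n_output = 4, 3, 2

    # Each unit in one layer is connected by a trainable weight to each
    # unit in the next layer, so the weights form full matrices.
    W_hidden = rng.normal(size=(n_input, n_hidden))   # input layer -> hidden layer
    W_output = rng.normal(size=(n_hidden, n_output))  # hidden layer -> output layer

    def forward(x):
        """Propagate an input vector through the net, layer by layer."""
        hidden = np.tanh(x @ W_hidden)       # activations of the hidden units
        output = np.tanh(hidden @ W_output)  # activations of the output units
        return output

    print(forward(np.array([0.5, -1.0, 0.2, 0.8])))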

hidden unit / node
A hidden unit in a neural network is a neuron which is neither an input unit nor an output unit.
hypothesis language
Term used in analysing machine learning methods. The hypothesis language refers to the notation used by the learning method to represent what it has learned so far. For example, in ID3, the hypothesis language would be the notation used to represent the decision tree (including partial descriptions of incomplete decision trees). In backprop, the hypothesis language would be the notation used to represent the current set of weights. In Aq, the hypothesis language would be the notation used to represent the class descriptions (e.g.
class1 ← size=large and colour in {red, orange}).

See also observation language.
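
Purely as an illustration of what a hypothesis language can look like, the sketch below encodes the Aq-style class description above as a Python data structure; the field names and the covers function are invented for the example and are not the notation used by any particular learner.

    # An assumed machine-readable form of the rule
    #   class1 <- size=large and colour in {red, orange}
    hypothesis = {
        "class": "class1",
        "conditions": [
            {"attribute": "size",   "allowed_values": {"large"}},
            {"attribute": "colour", "allowed_values": {"red", "orange"}},
        ],
    }

    def covers(hypothesis, example):
        """True if every condition in the hypothesis is satisfied by the example."""
        return all(example.get(c["attribute"]) in c["allowed_values"]
                   for c in hypothesis["conditions"])

    print(covers(hypothesis, {"size": "large", "colour": "red"}))   # True
    print(covers(hypothesis, {"size": "small", "colour": "red"}))   # False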

 

I

 

ID3
A decision tree induction algorithm, developed by Quinlan. ID3 stands for "Iterative Dichotomizer (version) 3". Later versions include C4.5 and C5.
inhibitory connection
see weight.
input unit
An input unit in a neural network is a neuron with no input connections of its own. Its activation thus comes from outside the neural net. The input unit is said to have its value clamped to the external value.
instance
This term has two, only distantly related, uses:

  1. in machine learning, particularly with symbolic learning algorithms, to describe a single training or test item, usually in the form of a description of the item in terms of its attributes, along with its intended classification, as in supervised learning. With connectionist learning algorithms, it is more usual to speak of (training or test) patterns.
  2. In general AI parlance, an instance frame is a frame representing a particular individual, as opposed to a generic frame.

     

J

 

K

 

L

 

Laplace error estimate
This is described in the article on expected error estimates.
layer in a neural network
see article on feedforward networks.

 

learning program
Normal programs P produce the same output y each time they receive a particular input x. Learning programs are capable of improving their performance so that they may produce different (better) results on the second or subsequent times that they receive the same input x.

They achieve this by being able to alter their internal state, q. In effect, they are computing a function of two arguments, P(x | q) = y. When the program is in learning mode, the program computes a new state q' as well as the output y, as it executes.

In the case of supervised learning, in order to construct q', one needs a set of inputs xi and corresponding target outputs zi (i.e. you want P(xi | q) = zi when learning is complete). The new state function L is computed as:

L(P, q, ((x1,z1), ..., (xn, zn))) = q'
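
A minimal sketch of these definitions, assuming a one-parameter program whose state q is a single weight and whose new-state function takes small least-mean-squares steps; both choices are assumptions made only for the example.

    def P(x, q):
        """The program: output y computed from input x and internal state q."""
        return q * x

    def L(P, q, examples, step=0.1):
        """New-state function: returns q' given the program, the current state q
        and the (input, target) pairs (x_i, z_i)."""
        for x, z in examples:
            y = P(x, q)
            q = q + step * (z - y) * x   # nudge q so that P(x, q) moves towards z
        return q

    examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # targets z_i = 2 * x_i
    q = 0.0
    for _ in range(50):
        q = L(P, q, examples)
    print(q)   # close to 2.0, so P(xi | q) is close to zi for the training inputs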

See also unsupervised learning, observation language, and hypothesis language.

learning rate
A constant used in error backpropagation learning and other artificial neural network learning algorithms to affect the speed of learning. The mathematics of e.g. backprop are based on small changes being made to the weights at each step: if the changes made to the weights are too large, the algorithm may "bounce around" the error surface in a counter-productive fashion. In this case, it is necessary to reduce the learning rate. On the other hand, the smaller the learning rate, the more steps it takes to reach the stopping criterion. See also momentum.
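
As an illustration of this trade-off, the sketch below applies gradient descent with different learning rates to an assumed one-dimensional error surface E(w) = (w - 3)^2; the surface and the rates are made up for the example, and this is not backprop itself.

    def gradient(w):
        """Gradient of the assumed error surface E(w) = (w - 3)^2."""
        return 2.0 * (w - 3.0)

    def train(learning_rate, steps=20, w=0.0):
        for _ in range(steps):
            w = w - learning_rate * gradient(w)   # small step against the gradient
        return w

    print(train(0.1))    # about 2.97: converges smoothly towards the minimum at w = 3
    print(train(1.05))   # diverges: the steps are too large and the weight bounces away
    print(train(0.001))  # about 0.12: the rate is so small that many more steps are needed
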
linear threshold unit (LTU)
A linear threshold unit is a simple artificial neuron whose output is its thresholded total net input. That is, an LTU with threshold T calculates the weighted sum of its inputs, and then outputs 0 if this sum is less than T, and 1 if the sum is greater than T. LTUs form the basis of perceptrons.
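
A minimal sketch, with assumed weights and an assumed threshold; the definition above leaves the case where the sum exactly equals T unspecified, so the sketch outputs 1 in that case.

    def ltu(inputs, weights, threshold):
        """Output 0 if the weighted sum of the inputs is below the threshold, else 1."""
        total = sum(w * x for w, x in zip(weights, inputs))
        return 0 if total < threshold else 1

    weights = [0.5, -0.3, 0.8]   # illustrative trainable weights
    T = 0.4                      # illustrative threshold

    print(ltu([1, 0, 1], weights, T))   # 0.5 + 0.8 = 1.3, which is above 0.4, so output 1
    print(ltu([0, 1, 0], weights, T))   # -0.3 is below 0.4, so output 0
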
local minimum
Understanding this term depends to some extent on the error surface metaphor.

When an artificial neural network learning algorithm causes the total error of the net to descend into a valley of the error surface, that valley may or may not lead to the lowest point on the entire error surface. If it does not, the minimum into which the total error will eventually fall is termed a local minimum. The learning algorithm is sometimes referred to in this case as "trapped in a local minimum."

In such cases, it usually helps to restart the algorithm with a new, randomly chosen initial set of weights - i.e. at a new random point in weight space. As this means a new starting point on the error surface, it is likely to lead into a different valley, and hopefully this one will lead to the true (absolute) minimum error, or at least a better minimum error.
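
A sketch of this restart strategy, using an assumed one-dimensional error surface with two valleys and plain gradient descent standing in for the network learning algorithm; runs that end in the shallower valley are discarded in favour of the best run found.

    import random

    def error(w):
        return (w * w - 1.0) ** 2 + 0.3 * w    # two valleys; the one near w = -1 is lower

    def grad(w):
        return 4.0 * w * (w * w - 1.0) + 0.3

    def descend(w, rate=0.01, steps=2000):
        """Gradient descent from a given starting weight; it may end in either valley."""
        for _ in range(steps):
            w -= rate * grad(w)
        return w

    random.seed(1)
    best_w, best_e = None, float("inf")
    for _ in range(5):                          # restart from 5 random points in weight space
        start = random.uniform(-2.0, 2.0)
        w = descend(start)
        if error(w) < best_e:
            best_w, best_e = w, error(w)
    print(best_w, best_e)                       # the lowest minimum found across the restarts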


