simple example in terms of the error ek(n):

E(n) = ½ ek²(n)

The learning process based on a minimization of the cost function is referred to as error-correction learning. In particular, minimization of E(n) leads to a learning rule commonly referred to as the delta rule or Widrow-Hoff rule. Let wkj(n) denote the value of the weight factor for neuron k excited by input xj(n) at time step n. According to the delta rule, the adjustment Δwkj(n) is defined by

Δwkj(n) = η · ek(n) · xj(n)

where Ξ· is a positive constant that determines the rate of learning. Therefore, the delta rule may be stated as: The adjustment made to a weight factor of an input neuron connection is proportional to the product of the error signal and the input value of the connection in question.

Having computed the adjustment Δwkj(n), the updated value of the synaptic weight is determined by

wkj(n + 1) = wkj(n) + Δwkj(n)

In effect, wkj(n) and wkj(n + 1) may be viewed as the old and new values of the synaptic weight wkj, respectively. From Figure 7.6 we recognize that error-correction learning is an example of a closed-loop feedback system. Control theory explains that the stability of such a system is determined by the parameters that constitute its feedback loop. One parameter of particular interest is the learning rate η. It has to be carefully selected to ensure the stability and convergence of the iterative learning process. Therefore, in practice, this parameter plays a key role in determining the performance of error-correction learning.

Figure 7.6. Error-correction learning performed through weight adjustments.
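To make the update rule concrete, here is a minimal Python sketch of a single delta-rule step for one linear neuron with zero bias; the function name, signature, and return values are our own choices for illustration, not the book's.

```python
def delta_rule_step(weights, x, d, eta):
    """One error-correction (delta rule) step for a single linear neuron.

    weights -- current weight factors wkj(n)
    x       -- input values xj(n)
    d       -- desired output d(n)
    eta     -- learning rate (a small positive constant)
    """
    y = sum(w * xi for w, xi in zip(weights, x))        # linear activation, zero bias
    e = d - y                                           # error signal e(n) = d(n) - y(n)
    deltas = [eta * e * xi for xi in x]                 # delta rule: eta * e(n) * xj(n)
    new_weights = [w + dw for w, dw in zip(weights, deltas)]
    return new_weights, y, e, deltas
```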

Let us analyze one simple example of the learning process performed on the single artificial neuron in Figure 7.7a, with the set of three training (learning) examples given in Figure 7.7b.

Figure 7.7. Initialization of the error-correction learning process for a single neuron. (a) Artificial neuron with feedback; (b) training data set for the learning process.

The process of adjusting the weight factors for a given neuron will be performed with the learning rate η = 0.1. The bias value for the neuron is equal to 0, and the activation function is linear. With the initial weights w1(1) = 0.5, w2(1) = −0.3, and w3(1) = 0.8, and the first training sample {x1 = 1, x2 = 1, x3 = 0.5, d = 0.7} from Figure 7.7b, the first iteration of the learning process, for the first training example only (n = 1), is performed with the following steps:

y(1) = w1(1) · x1 + w2(1) · x2 + w3(1) · x3 = 0.5 · 1 + (−0.3) · 1 + 0.8 · 0.5 = 0.6
e(1) = d(1) − y(1) = 0.7 − 0.6 = 0.1
Δw1(1) = η · e(1) · x1 = 0.1 · 0.1 · 1 = 0.01  ⇒  w1(2) = w1(1) + Δw1(1) = 0.51
Δw2(1) = η · e(1) · x2 = 0.1 · 0.1 · 1 = 0.01  ⇒  w2(2) = −0.29
Δw3(1) = η · e(1) · x3 = 0.1 · 0.1 · 0.5 = 0.005  ⇒  w3(2) = 0.805

Similarly, it is possible to continue with the second and third examples (n = 2 and n = 3). The results of the learning corrections Ξ”w together with new weight factors w are given in Table 7.2.

TABLE 7.2. Adjustment of Weight Factors with Training Examples in Figure 7.7b

Parameter      n = 2      n = 3
x1             −1         0.3
x2             0.7        0.3
x3             −0.5       −0.3
y              −1.1555    −0.18
d              0.2        0.5
e              1.3555     0.68
Δw1(n)         −0.14      0.02
Δw2(n)         0.098      0.02
Δw3(n)         −0.07      −0.02
w1(n + 1)      0.37       0.39
w2(n + 1)      −0.19      −0.17
w3(n + 1)      0.735      0.715
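Reusing the delta_rule_step sketch above, steps n = 2 and n = 3 of the example can be replayed in code. Note that Table 7.2 rounds intermediate values (the error at n = 2 is rounded to about 1.4 before the corrections are computed), so the printed numbers agree with the table only up to rounding:

```python
# Replay steps n = 2 and n = 3 from Table 7.2 (requires delta_rule_step above).
# The weights entering n = 2 follow from the first training step: (0.51, -0.29, 0.805).
samples = [
    ((-1.0, 0.7, -0.5), 0.2),   # n = 2: inputs x1, x2, x3 and desired output d
    (( 0.3, 0.3, -0.3), 0.5),   # n = 3
]
weights = [0.51, -0.29, 0.805]

for n, (x, d) in enumerate(samples, start=2):
    weights, y, e, deltas = delta_rule_step(weights, x, d, eta=0.1)
    print(f"n={n}: y={y:.4f}  e={e:.4f}  "
          f"dw={[round(dw, 3) for dw in deltas]}  w={[round(w, 3) for w in weights]}")
```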

Error-correction learning can be applied to much more complex ANN architectures; its implementation is discussed in Section 7.5, where the basic principles of multilayer feedforward ANNs with backpropagation are introduced. This example only shows how weight factors change with every training (learning) sample, and we gave the results only for the first iteration. The weight-correction process will continue either by using new training samples or by reusing the same data samples in subsequent iterations. When to finish the iterative process is defined by a special parameter, or set of parameters, called the stopping criteria. A learning algorithm may have different stopping criteria, such as a maximum number of iterations, or a threshold on how much the weight factors may change between two consecutive iterations. This parameter of learning is very important for the final learning results, and it will be discussed in later sections.
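The stopping criteria can be made explicit in the training loop. The sketch below is one possible arrangement rather than the book's algorithm; it reuses delta_rule_step and the samples list from the sketches above, and stops either after a maximum number of iterations or when the largest weight change in a full pass falls below a threshold (both limits are illustrative values):

```python
def train(weights, samples, eta, max_iterations=100, change_threshold=1e-4):
    """Repeat delta-rule passes over the data until a stopping criterion fires."""
    for iteration in range(1, max_iterations + 1):
        largest_change = 0.0
        for x, d in samples:
            weights, _, _, deltas = delta_rule_step(weights, x, d, eta)
            largest_change = max(largest_change, max(abs(dw) for dw in deltas))
        if largest_change < change_threshold:   # weights have (almost) stopped moving
            break
    return weights, iteration

final_w, used = train([0.51, -0.29, 0.805], samples, eta=0.1)
print(f"stopped after {used} iterations, w = {[round(w, 3) for w in final_w]}")
```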

7.4 LEARNING TASKS USING ANNS

The choice of a particular learning algorithm is influenced by the learning task that an ANN is required to perform. We identify six basic learning tasks that apply to the use of different ANNs. These tasks are subtypes of general learning tasks introduced in Chapter 4.

7.4.1 Pattern Association

Association has been known to be a prominent feature of human memory since Aristotle, and all models of cognition use association in one form or another as the basic operation. Association takes one of two forms: autoassociation or heteroassociation. In autoassociation, an ANN is required to store a set of patterns by repeatedly presenting them to the network. The network is subsequently presented with a partial description or a distorted, noisy version of an original pattern, and the task is to retrieve and recall that particular pattern. Heteroassociation differs from autoassociation in that an arbitrary set of input patterns is paired with another arbitrary set of output patterns. Autoassociation involves the use of unsupervised learning, whereas heteroassociation learning is supervised. For both autoassociation and heteroassociation, there are two main phases in the application of an ANN to pattern-association problems (a small code sketch after the list illustrates both phases):

1. the storage phase, which refers to the training of the network in accordance with given patterns, and

2. the recall phase, which involves the retrieval of a memorized pattern in response to the presentation of a noisy or distorted version of a key pattern to the network.
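The section does not commit to a particular network model for these two phases. Purely as an illustration, the sketch below uses a Hopfield-style autoassociative memory over bipolar (±1) patterns: the storage phase applies a Hebbian outer-product rule, and the recall phase updates units one at a time until a stored pattern is recovered from a noisy key.

```python
import numpy as np

def storage_phase(patterns):
    """Storage phase: store bipolar (+/-1) patterns with a Hebbian outer-product rule."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)                 # no self-connections
    return W / len(patterns)

def recall_phase(W, key, sweeps=5):
    """Recall phase: asynchronous (unit-by-unit) updates from a noisy key pattern."""
    s = np.asarray(key, dtype=float)
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0.0 else -1.0
    return s

# Hypothetical patterns, invented for illustration.
W = storage_phase([[1, 1, -1, -1, 1, -1], [-1, 1, 1, -1, -1, 1]])
noisy = [1, -1, -1, -1, 1, -1]               # first pattern with one flipped bit
print(recall_phase(W, noisy))                # recovers [ 1.  1. -1. -1.  1. -1.]
```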

7.4.2 Pattern Recognition

Pattern recognition is also a task that is performed much better by humans than by the most powerful computers. We receive data from the world around us via our senses and are able to recognize the source of the data. We are often able to do so almost immediately and with practically no effort. Humans perform pattern recognition through a learning process, and so it is with ANNs.

Pattern recognition is formally defined as the process whereby a received pattern is assigned to one of a prescribed number of classes. An ANN performs pattern recognition by first undergoing a training session, during which the network is repeatedly presented with a set of input patterns along with the category to which each particular pattern belongs. Later, in a testing phase, a new pattern is presented to the network that it has not seen before, but which belongs to the same population of patterns used during training. The network is able to identify the class of that particular pattern because of the information it has extracted from the training data. Graphically, patterns are represented by points in a multidimensional space.
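Staying with the single linear neuron and the delta rule from Section 7.3, the training and testing phases might be sketched as follows; the two-dimensional patterns, their class labels, and the appended constant input (which lets the neuron learn a bias term, unlike the zero-bias example above) are all invented for illustration.

```python
# Hypothetical training session: 2-D patterns with class labels d = +1 or d = -1.
# A constant 1.0 is appended to each input so the neuron can learn a bias term.
training_set = [((0.0, 0.2, 1.0), 1.0), ((0.2, 0.1, 1.0), 1.0),
                ((0.9, 0.8, 1.0), -1.0), ((1.0, 1.0, 1.0), -1.0)]

weights = [0.0, 0.0, 0.0]
for _ in range(200):                               # repeated presentation of patterns
    for x, d in training_set:
        y = sum(w * xi for w, xi in zip(weights, x))
        e = d - y                                  # delta rule, as in Section 7.3
        weights = [w + 0.1 * e * xi for w, xi in zip(weights, x)]

def classify(point):
    """Testing phase: assign an unseen pattern to class +1 or -1."""
    x = (*point, 1.0)
    y = sum(w * xi for w, xi in zip(weights, x))
    return 1.0 if y >= 0.0 else -1.0

print(classify((0.1, 0.1)))                        # near the +1 training points -> 1.0
print(classify((0.95, 0.9)))                       # near the -1 training points -> -1.0
```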
