


and maybe not the only one, of people being older in Florida than elsewhere. A similar misinterpretation could be based on a data analysis of life expectancy for married versus single men. Statistics show that married men live longer than single men. But do not rush to the sensational causal conclusion that marriage is good for one's health and increases life expectancy. It can be argued that males with physical problems and/or socially deviant patterns of behavior are less likely to get married, and this could be one of the possible explanations for why married men live longer. Unobservable factors such as a person's health and social behavior are more likely the cause of the difference in life expectancy than the observed variable, marital status. These illustrations should lead us to understand that inductive-learning processes build models of dependencies, but the discovered dependencies should not automatically be interpreted as causal relations. Only experts in the domain where the data are collected can suggest additional, deeper semantics of the discovered dependencies.

Let us return again to the learning machine and its task of system modeling. The problem encountered by the learning machine is to select a function, from the set of functions this machine supports, that best approximates the system's responses. The learning machine is limited to observing a finite number of samples n in order to make this selection. This finite set of samples, which we call the training data set, is denoted by (Xi, yi), where i = 1, … , n. The quality of an approximation produced by the learning machine is measured by the loss function L(y, f(X, w)), where

y is the output produced by the system,

X is a set of inputs,

f(X, w) is the output produced by the learning machine for a selected approximating function, and

w is the set of parameters in the approximating functions.

L measures the difference between the output produced by the system, yi, and that produced by the learning machine, f(Xi, w), for every input point Xi. By convention, the loss function is nonnegative, so that large positive values correspond to a poor approximation and values close to 0 indicate a good approximation. The expected value of the loss is called the risk functional R(w):

R(w) = ∫ L(y, f(X, w)) p(X, y) dX dy

where L(y, f(X, w)) is the loss function and p(X, y) is the probability distribution of samples. The value of R(w), for a selected approximating function, depends only on the set of parameters w. Inductive learning can now be defined as the process of estimating the function f(X, wopt) that minimizes the risk functional R(w) over the set of functions supported by the learning machine, using only the training data set and without knowing the probability distribution p(X, y). With finite data we cannot expect to find f(X, wopt) exactly, so we denote by wopt* the estimate of the parameters of the optimal solution wopt, obtained with finite training data using some learning procedure.
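Because p(X, y) is unknown, a learning procedure can only work with the average loss over the training data. The following minimal Python sketch illustrates this idea of empirical risk minimization; the linear form of f(X, w), the crude random search, and all names are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

# Minimal sketch of empirical risk minimization (ERM).
# Since p(X, y) is unknown, the risk functional R(w) is approximated by the
# average loss over the n training samples (X_i, y_i).

def f(X, w):
    # Illustrative approximating function: a linear model X.w (an assumption,
    # not a form prescribed by the text).
    return X @ w

def squared_error(y, y_hat):
    return (y - y_hat) ** 2

def empirical_risk(w, X, y, loss):
    # R_emp(w) = (1/n) * sum_i L(y_i, f(X_i, w))
    return np.mean(loss(y, f(X, w)))

# Hypothetical training data: n = 100 samples with 3 input features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3))
y_train = X_train @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

# Crude search over random candidate parameter vectors w; a real learning
# method would use a constructive optimization procedure instead.
candidates = [rng.normal(size=3) for _ in range(1000)]
w_opt_star = min(candidates,
                 key=lambda w: empirical_risk(w, X_train, y_train, squared_error))
print(empirical_risk(w_opt_star, X_train, y_train, squared_error))
```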

For common learning problems such as classification or regression, the nature of the loss function and the interpretation of the risk functional are different. In a two-class classification problem, where the output of the system takes on only two symbolic values, y = {0, 1}, corresponding to the two classes, a commonly used loss function measures the classification error:

L(y, f(X, w)) = 0 if y = f(X, w), and 1 if y ≠ f(X, w)

Using this loss function, the risk functional quantifies the probability of misclassification. Inductive learning becomes the problem of finding the classifier function f(X, w) that minimizes the probability of misclassification using only the training data set.
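A small Python sketch of this idea: with the zero-one loss, the empirical risk is simply the fraction of misclassified training samples. The linear threshold classifier and the toy data set below are assumptions chosen only for illustration.

```python
import numpy as np

# Zero-one loss and the resulting empirical risk, i.e., the fraction of
# misclassified training samples.

def zero_one_loss(y, y_hat):
    return (y != y_hat).astype(float)

def classifier(X, w):
    # Hypothetical classifier: predict class 1 when the linear score exceeds 0.
    return (X @ w > 0).astype(int)

def misclassification_rate(w, X, y):
    return np.mean(zero_one_loss(y, classifier(X, w)))

# Toy two-class data set.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

print(misclassification_rate(np.array([1.0, 1.0]), X, y))   # low error
print(misclassification_rate(np.array([-1.0, 0.2]), X, y))  # higher error
```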

Regression is the process of estimating a real-valued function based on a finite data set of noisy samples. A common loss function for regression is the squared error measure:

L(y, f(X, w)) = (y − f(X, w))²

The corresponding risk functional measures the accuracy of the learning machine's predictions of the system output. Maximum accuracy is obtained by minimizing the risk functional because, in that case, the approximating function describes the given set of samples best. Classification and regression are only two of many typical learning tasks. For other data-mining tasks, different loss functions may be selected, each supported by a different interpretation of the risk functional.
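As a brief illustration, for the squared-error loss and a linear approximating function, minimizing the empirical risk reduces to ordinary least squares; the data and model below are hypothetical and stand in for the system being modeled.

```python
import numpy as np

# Sketch: minimizing the empirical squared-error risk for a linear
# approximating function f(X, w) = X.w via least squares.

rng = np.random.default_rng(2)
n = 50
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, size=n)])  # bias + one input
true_w = np.array([0.5, 2.0])
y = X @ true_w + 0.2 * rng.normal(size=n)                      # noisy system output

# w* = argmin_w (1/n) * sum_i (y_i - X_i.w)^2, computed with lstsq.
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)

empirical_risk = np.mean((y - X @ w_star) ** 2)
print(w_star, empirical_risk)
```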

What is a learning procedure? Or, how should a learning machine use the training data? The answer is given by the concept known as the inductive principle. An inductive principle is a general prescription for obtaining an estimate f(X, wopt*) in the class of approximating functions from the available finite training data. An inductive principle tells us what to do with the data, whereas the learning method specifies how to obtain the estimate. Hence, a learning method or learning algorithm is a constructive implementation of an inductive principle. For a given inductive principle, there are many learning methods, each corresponding to a different set of functions supported by the learning machine. The important issue here is to choose candidate models (approximating functions of a learning machine) of the right complexity to describe the training data.
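The following sketch illustrates the complexity issue with a hypothetical example: polynomial models of increasing degree fit the training data ever more closely, while a held-out portion of the data indicates which complexity actually generalizes. The choice of polynomials, the split sizes, and the target function are assumptions made for illustration only.

```python
import numpy as np

# Candidate models of increasing complexity: polynomials of growing degree.
# Training error keeps decreasing with degree, while error on held-out data
# reveals when the model complexity is "right" for the data.

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=60)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=60)   # hypothetical noisy system

x_train, y_train = x[:40], y[:40]
x_val, y_val = x[40:], y[40:]

for degree in (1, 3, 5, 9):
    w = np.polyfit(x_train, y_train, degree)             # least-squares fit
    train_err = np.mean((y_train - np.polyval(w, x_train)) ** 2)
    val_err = np.mean((y_val - np.polyval(w, x_val)) ** 2)
    print(f"degree {degree}: train {train_err:.3f}  validation {val_err:.3f}")
```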

The mathematical formulation and formalization of the learning problem explained in this section may give the unintended impression that learning algorithms do not require human intervention, but this is clearly not the case. Even though the available literature is concerned mostly with the formal description of learning methods, there is an equally important, informal part of any practical learning system. This part involves such practical, human-oriented issues as the selection of input and output variables, data encoding and representation, and the incorporation of a priori domain knowledge into the design of a learning system. In many cases, the user also has some influence over the generator in terms of the sampling rate or distribution. The user very often selects the most suitable set of functions for the learning machine based on his/her knowledge of the system. This part is often more critical to overall success than the design of the learning machine itself. Therefore, all formalizations in a learning theory are useful only if we keep in mind that inductive learning is a process in which there is some overlap between activities that can be formalized and others that are


