samples, respectively. Output from the clustering system is a partition Λ = {G1, G2, … , GN}, where Gk, k = 1, … , N is a crisp subset of X such that

G1 ∪ G2 ∪ … ∪ GN = X and Gi ∩ Gj = ∅ for i ≠ j

The members G1, G2, … , GN of Λ are called clusters. Every cluster may be described by some characteristics. In discovery-based clustering, both the cluster (a separate set of points in X) and its descriptions or characterizations are generated as a result of the clustering procedure. There are several schemata for a formal description of discovered clusters:

1. Represent a cluster of points in an n-dimensional space (samples) by their centroid or by a set of distant (border) points in a cluster.

2. Represent a cluster graphically using nodes in a clustering tree.

3. Represent clusters by using logical expressions on sample attributes.

Figure 9.2 illustrates these ideas. Using the centroid to represent a cluster is the most popular schema. It works well when the clusters are compact or isotropic. When the clusters are elongated or non-isotropic, however, this schema fails to represent them properly.

Figure 9.2. Different schemata for cluster representation. (a) Centroid; (b) clustering tree; (c) logical expressions.
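
As a minimal illustration of the first schema, the short Python sketch below computes the centroid of a small, made-up two-dimensional cluster; the data values are purely illustrative:

import numpy as np

# Schema 1: represent a cluster by its centroid, the component-wise
# mean of its member samples. Data values are illustrative only.
cluster = np.array([[1.0, 2.0],
                    [2.0, 3.0],
                    [3.0, 4.0]])   # three samples in a 2-D feature space

centroid = cluster.mean(axis=0)
print(centroid)   # [2. 3.]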

The availability of a vast collection of clustering algorithms in the literature and also in different software environments can easily confound a user attempting to select an approach suitable for the problem at hand. It is important to mention that there is no clustering technique that is universally applicable in uncovering the variety of structures present in multidimensional data sets. The user's understanding of the problem and the corresponding data types are the best criteria for selecting the appropriate method. Most clustering algorithms are based on the following two popular approaches:

1. hierarchical clustering, and

2. iterative square-error partitional clustering.

Hierarchical techniques organize data in a nested sequence of groups, which can be displayed in the form of a dendrogram or a tree structure. Square-error partitional algorithms attempt to obtain the partition that minimizes the within-cluster scatter or maximizes the between-cluster scatter. These methods are nonhierarchical because all resulting clusters are groups of samples at the same level of partition. To guarantee that an optimal solution has been obtained, one would have to examine all possible partitions of the N samples with n dimensions into K clusters (for a given K), but that exhaustive search is not computationally feasible. Notice that the number of all possible partitions of a set of N objects into K clusters is given by the Stirling number of the second kind:

S(N, K) = (1/K!) Σ (−1)^(K−j) · C(K, j) · j^N, where the sum runs over j = 0, 1, … , K

Therefore, various heuristics are used to reduce the search space, but then there is no guarantee that the optimal solution will be found.
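
To see how quickly this count explodes, here is a minimal Python sketch that evaluates the formula above directly (the function name is ours, not the book's):

from math import comb, factorial

def num_partitions(N: int, K: int) -> int:
    # Stirling number of the second kind S(N, K): the number of ways to
    # partition N objects into exactly K non-empty clusters.
    return sum((-1) ** (K - j) * comb(K, j) * j ** N
               for j in range(K + 1)) // factorial(K)

print(num_partitions(10, 3))   # 9330
print(num_partitions(25, 5))   # already on the order of 10**15 partitions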

Hierarchical methods that produce a nested series of partitions are explained in Section 9.3, while partitional methods that produce only one level of data grouping are covered in more detail in Section 9.4. The next section introduces different measures of similarity between samples; these measures are the core component of every clustering algorithm.

9.2 SIMILARITY MEASURES

To formalize the concept of a similarity measure, the following terms and notation are used throughout this chapter. A sample x (or feature vector, observation) is a single data vector used by the clustering algorithm in a space of samples X. Many other texts use the term pattern; we avoid it here because it collides with its use in pattern-association analysis, where it has an entirely different meaning. Most data samples for clustering take the form of finite-dimensional vectors, so it is unnecessary to distinguish between an object or sample xi and the corresponding vector. Accordingly, we assume that each sample xi ∈ X, i = 1, … , n is represented by a vector xi = {xi1, xi2, … , xim}. The value m is the number of dimensions (features) of a sample, while n is the total number of samples prepared for the clustering process, all belonging to the sample domain X.

A sample can describe either a physical object (a chair) or an abstract object (a style of writing). Samples, represented conventionally as multidimensional vectors, have each dimension as a single feature. These features can be either quantitative or qualitative descriptions of the object. If the individual scalar component xij of a sample xi is a feature or attribute value, then each component xij, j = 1, … , m is an element of a domain Pj, where Pj could belong to different types of data such as binary (Pj = {0,1}), integer (Pj ⊆ Z), real number (Pj ⊆ R), or a categorical set of symbols. In the last case, for example, Pj may be a set of colors: Pj = {white, black, red, blue, green}. If weight and color are two features used to describe samples, then the sample (20, black) is the representation of a black object with 20 units of weight. The first feature is quantitative and the second one is qualitative. In general, both feature types can be further subdivided, and details of this taxonomy are already given in Chapter 1; a short encoding sketch combining both types follows the two lists below.

Quantitative features can be subdivided as

1. continuous values (e.g., real numbers where Pj ⊆ R),

2. discrete values (e.g., binary numbers Pj = {0,1}, or integers Pj ⊆ Z), and

3. interval values (e.g., Pj = {xij ≤ 20, 20 < xij < 40, xij ≥ 40}).

Qualitative features can be

1. nominal or unordered (e.g., color is "blue" or "red"), and

2. ordinal (e.g., military rank with values "general" and "colonel").
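
To make this taxonomy concrete, here is a minimal Python sketch that encodes one sample with one feature of each type; every feature name and value in it is hypothetical, chosen only for illustration:

# One hypothetical sample combining the feature types listed above.
sample = {
    "weight": 20.0,        # quantitative, continuous (Pj ⊆ R)
    "defective": 0,        # quantitative, discrete/binary (Pj = {0, 1})
    "age_band": "20-40",   # quantitative, interval-coded value
    "color": "black",      # qualitative, nominal (no inherent ordering)
    "rank": "colonel",     # qualitative, ordinal (an ordering exists)
}

# Ordinal features admit an order-preserving numeric coding;
# nominal features such as color do not.
RANK_ORDER = {"colonel": 1, "general": 2}
print(RANK_ORDER[sample["rank"]])   # prints 1

Note that only the ordinal coding preserves information a distance measure can use; treating nominal codes as numbers would impose an ordering that does not exist.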

Since similarity is fundamental to the definition of a cluster, a measure of the similarity between two samples drawn from the same feature space is essential to most clustering algorithms. This measure must be chosen very carefully because the quality of the clustering process depends on this decision. Instead of a similarity measure, it is most common to calculate the dissimilarity between two samples using a distance measure defined on the feature space. A distance measure may be a metric or a quasi-metric on the sample space, and it is used to quantify the dissimilarity of samples.
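
As one common concrete instance of such a distance measure, the minimal Python sketch below computes the Euclidean metric between two samples; it is only one of the measures discussed in this chapter, and the sample values are illustrative:

import math

def euclidean_distance(x, y):
    # Euclidean distance between two m-dimensional samples: a metric
    # frequently used to quantify their dissimilarity.
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

x1 = (1.0, 2.0)
x2 = (4.0, 6.0)
print(euclidean_distance(x1, x2))   # 5.0; a larger distance means less similar samples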

The word "similarity" in

