Applying Clustering
Description
This section continues the example presented in the previous section on K-means. In addition to discussing code for implementing agglomerative clustering, it also includes applications of various accuracy measures useful for analyzing clustering performance.
Clustering
The main idea behind clustering is pretty straightforward. Basically, we say to ourselves, "I have these points here, and I can see that they organize into groups. It would be nice to describe these things more concretely, and, when a new point comes in, assign it to the correct group". This general idea encourages exploration and opens up a variety of algorithms for clustering.
*Examples of the outcomes from different clustering algorithms in scikit-learn*
The algorithms listed below do not cover all the clustering methods out there, but they are the most commonly used ones.
Source: Yury Kashnitsky, https://www.kaggle.com/code/kashnitsky/topic-7-unsupervised-learning-pca-and-clustering/notebook This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License.
K-means
The K-means algorithm is the most popular and yet simplest of all the clustering algorithms. Here is how it works:
1. Select the number of clusters $k$ that you think is the optimal number.
2. Initialize $k$ points as "centroids" randomly within the space of our data.
3. Attribute each observation to its closest centroid.
4. Update the centroids to the center of all the attributed sets of observations.
5. Repeat steps 3 and 4 a fixed number of times or until all of the centroids are stable (i.e., no longer change in step 4).
This algorithm is easy to describe and visualize. Let's take a look.
import numpy as np
import matplotlib.pyplot as plt

# Let's begin by allocating points for 3 clusters
X = np.zeros((150, 2))

np.random.seed(seed=42)
X[:50, 0] = np.random.normal(loc=0.0, scale=.3, size=50)
X[:50, 1] = np.random.normal(loc=0.0, scale=.3, size=50)
X[50:100, 0] = np.random.normal(loc=2.0, scale=.5, size=50)
X[50:100, 1] = np.random.normal(loc=-1.0, scale=.2, size=50)
X[100:150, 0] = np.random.normal(loc=-1.0, scale=.2, size=50)
X[100:150, 1] = np.random.normal(loc=2.0, scale=.5, size=50)

plt.figure(figsize=(5, 5))
plt.plot(X[:, 0], X[:, 1], 'bo');

# SciPy's cdist takes two sets of points and returns
# the matrix of pairwise distances between them
from scipy.spatial.distance import cdist

# Randomly allocate the 3 centroids
np.random.seed(seed=42)
centroids = np.random.normal(loc=0.0, scale=1., size=6)
centroids = centroids.reshape((3, 2))

cent_history = []
cent_history.append(centroids)

for i in range(3):
    # Calculate the distance from each point to each centroid
    distances = cdist(X, centroids)

    # Label each point with the index of its closest centroid
    labels = distances.argmin(axis=1)

    # Move each centroid to the mean of the points assigned to it
    centroids = centroids.copy()
    centroids[0, :] = np.mean(X[labels == 0, :], axis=0)
    centroids[1, :] = np.mean(X[labels == 1, :], axis=0)
    centroids[2, :] = np.mean(X[labels == 2, :], axis=0)
    cent_history.append(centroids)
# Let's plot the K-means steps
plt.figure(figsize=(8, 8))
for i in range(4):
    distances = cdist(X, cent_history[i])
    labels = distances.argmin(axis=1)

    plt.subplot(2, 2, i + 1)
    plt.plot(X[labels == 0, 0], X[labels == 0, 1], 'bo', label='cluster #1')
    plt.plot(X[labels == 1, 0], X[labels == 1, 1], 'co', label='cluster #2')
    plt.plot(X[labels == 2, 0], X[labels == 2, 1], 'mo', label='cluster #3')
    plt.plot(cent_history[i][:, 0], cent_history[i][:, 1], 'rX')
    plt.legend(loc=0)
    plt.title('Step {:}'.format(i + 1));

Here, we used Euclidean distance, but the algorithm will converge with any other metric. You can vary not only the number of steps or the convergence criteria but also the distance measure between the points and the cluster centroids.
Another "feature" of this algorithm is its sensitivity to the initial positions of the cluster centroids. You can run the algorithm several times and then average all the centroid results.
Choosing the Number of Clusters for K-means
In contrast to supervised learning tasks such as classification and regression, clustering requires more effort to choose the optimization criterion. Usually, when working with k-means, we optimize the sum of squared distances between the observations and their centroids.
$$J(C) = \sum_{k=1}^{K} \sum_{i \in C_k} \|x_i - \mu_k\|^2 \rightarrow \min_{C},$$

where $C$ is a set of clusters with power $K$ and $\mu_k$ is the centroid of cluster $C_k$.
This definition seems reasonable -- we want our observations to be as close to their centroids as possible. But, there is a problem -- the optimum is reached when the number of centroids is equal to the number of observations, so you would end up with every single observation as its own separate cluster.
In order to avoid that case, we should choose the number of clusters after which the function $J(C_k)$ decreases less rapidly. More formally,

$$D(k) = \frac{|J(C_k) - J(C_{k+1})|}{|J(C_{k-1}) - J(C_k)|} \rightarrow \min_{k}.$$
Let's look at an example.
from sklearn.cluster import KMeans
inertia = []
for k in range(1, 8):
    kmeans = KMeans(n_clusters=k, random_state=1).fit(X)
    inertia.append(np.sqrt(kmeans.inertia_))

plt.plot(range(1, 8), inertia, marker='s');
plt.xlabel('$k$')
plt.ylabel('$J(C_k)$');

We see that $J(C_k)$ decreases significantly until the number of clusters reaches 3 and then changes much less. This means that the optimal number of clusters is 3.
Issues
Inherently, K-means is NP-hard. For $d$ dimensions, $k$ clusters, and $n$ observations, we will find a solution in $O(n^{dk+1})$ time. There are some heuristics to deal with this; an example is MiniBatch K-means, which takes portions (batches) of the data instead of fitting the whole dataset and then moves the centroids by taking the average of the previous steps. Compare the implementations of K-means and MiniBatch K-means in the scikit-learn documentation.
The scikit-learn implementation of the algorithm has its benefits, such as the possibility to state the number of initializations with the n_init parameter, which enables us to identify more robust centroids. Moreover, these runs can be done in parallel to decrease the computation time.
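For instance, a minimal sketch of how n_init might be used on the toy data X (the parameter values are illustrative):

from sklearn.cluster import KMeans

# Run 10 random initializations and keep the one with the lowest inertia
kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
print(kmeans.cluster_centers_)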
Affinity Propagation
Affinity propagation is another example of a clustering algorithm. As opposed to K-means, this approach does not require us to set the number of clusters beforehand. The main idea here is that we would like to cluster our data based on the similarity of the observations (or how they "correspond" to each other).
Let's define a similarity metric such that $s(x_i, x_j) > s(x_i, x_k)$ if an observation $x_i$ is more similar to observation $x_j$ and less similar to observation $x_k$. A simple example of such a similarity metric is the negative square of the distance, $s(x_i, x_j) = -\|x_i - x_j\|^2$.
Now, let's describe "correspondence" by making two zero matrices. One of them, $r_{i,k}$, determines how well the $k$th observation serves as a "role model" for the $i$th observation with respect to all other possible "role models". The other matrix, $a_{i,k}$, determines how appropriate it would be for the $i$th observation to take the $k$th observation as a "role model". This may sound confusing, but it becomes more understandable with some hands-on practice.

The matrices are updated sequentially with the following rules:

$$r_{i,k} \leftarrow s(x_i, x_k) - \max_{k' \neq k} \left\{ a_{i,k'} + s(x_i, x_{k'}) \right\},$$

$$a_{i,k} \leftarrow \min\left(0, r_{k,k} + \sum_{i' \notin \{i, k\}} \max(0, r_{i',k})\right), \quad i \neq k,$$

$$a_{k,k} \leftarrow \sum_{i' \neq k} \max(0, r_{i',k}).$$
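In practice, we don't have to implement these updates by hand; scikit-learn provides AffinityPropagation. A minimal sketch on the toy data X from above (with default parameters, so the number of clusters found may differ from the 3 we generated):

from sklearn.cluster import AffinityPropagation

# Affinity propagation chooses the number of clusters itself,
# based on pairwise similarities (negative squared Euclidean distances by default)
af = AffinityPropagation().fit(X)
print(len(af.cluster_centers_indices_), 'clusters found')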
Spectral Clustering
Spectral clustering combines some of the approaches described above to create a stronger clustering method.
First of all, this algorithm requires us to define a similarity matrix for the observations, called the adjacency matrix. This can be done in a similar fashion as in the Affinity Propagation algorithm, e.g. using the negative squared distance $A_{i,j} = -\|x_i - x_j\|^2$. This matrix describes a full graph with the observations as vertices and the estimated similarity value between a pair of observations as the edge weight for that pair of vertices. For the metric defined above and two-dimensional observations, this is pretty intuitive: two observations are similar if the edge between them is short. We'd like to split up the graph into two subgraphs in such a way that each observation in each subgraph would be similar to the other observations in that subgraph. Formally, this is a Normalized cuts problem.
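scikit-learn's SpectralClustering works from such a similarity graph; here is a minimal sketch on the toy data X (the nearest-neighbors affinity is an illustrative choice, not the only option):

from sklearn.cluster import SpectralClustering

# Build a nearest-neighbors similarity graph over the points and partition it
sc = SpectralClustering(n_clusters=3, affinity='nearest_neighbors', random_state=1)
labels = sc.fit_predict(X)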
Agglomerative Clustering
The following algorithm is the simplest and easiest to understand among all the clustering algorithms without a fixed number of clusters.
The algorithm is fairly simple:
- We start by assigning each observation to its own cluster
- Then sort the pairwise distances between the centers of clusters in descending order
- Take the nearest two neighbor clusters and merge them together, and recompute the centers
- Repeat steps 2 and 3 until all the data is merged into one cluster
The process of searching for the nearest cluster can be conducted with different methods of bounding the observations (linkages):
1. Single linkage: $d(C_i, C_j) = \min_{x_i \in C_i,\, x_j \in C_j} \|x_i - x_j\|$
2. Complete linkage: $d(C_i, C_j) = \max_{x_i \in C_i,\, x_j \in C_j} \|x_i - x_j\|$
3. Average linkage: $d(C_i, C_j) = \frac{1}{n_i n_j} \sum_{x_i \in C_i} \sum_{x_j \in C_j} \|x_i - x_j\|$
4. Centroid linkage: $d(C_i, C_j) = \|\mu_i - \mu_j\|$
The 3rd one (average linkage) is the most effective in computation time since it does not require recomputing the distances every time the clusters are merged.
The results can be visualized as a beautiful cluster tree (dendrogram) to help recognize the moment the algorithm should be stopped to get optimal results. There are plenty of Python tools to build these dendrograms for agglomerative clustering.
Let's consider an example with the clusters we got from K-means:
from scipy.cluster import hierarchy
from scipy.spatial.distance import pdist
X = np.zeros((150, 2))
np.random.seed(seed=42)
X[:50, 0] = np.random.normal(loc=0.0, scale=.3, size=50)
X[:50, 1] = np.random.normal(loc=0.0, scale=.3, size=50)
X[50:100, 0] = np.random.normal(loc=2.0, scale=.5, size=50)
X[50:100, 1] = np.random.normal(loc=-1.0, scale=.2, size=50)
X[100:150, 0] = np.random.normal(loc=-1.0, scale=.2, size=50)
X[100:150, 1] = np.random.normal(loc=2.0, scale=.5, size=50)
# pdist will calculate the upper triangle of the pairwise distance matrix
distance_mat = pdist(X)
# linkage is an implementation of the agglomerative algorithm
Z = hierarchy.linkage(distance_mat, 'single')
plt.figure(figsize=(10, 5))
dn = hierarchy.dendrogram(Z, color_threshold=0.5)
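To actually extract flat clusters from the dendrogram, the tree can be cut; here is a minimal sketch using SciPy's fcluster (asking for 3 clusters, which matches the data we generated):

from scipy.cluster.hierarchy import fcluster

# Cut the dendrogram so that exactly 3 flat clusters remain
labels = fcluster(Z, t=3, criterion='maxclust')
print(np.unique(labels))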
Accuracy Metrics
As opposed to classification, it is difficult to assess the quality of results from clustering. First, a metric should not depend on the label values themselves but only on the goodness of the split. Second, we do not usually have the true labels of the observations when we use clustering.
There are internal and external goodness metrics. External metrics use the information about the known true split, while internal metrics do not use any external information and assess the goodness of clusters based only on the initial data. The optimal number of clusters is usually defined with respect to some internal metrics.
All the metrics described below are implemented in sklearn.metrics.
Adjusted Rand Index (ARI)
Here, we assume that the true labels of objects are known. This metric does not depend on the labels' values but on the data cluster split. Let $N$ be the number of observations in a sample. Let $a$ be the number of observation pairs with the same labels and located in the same cluster, and let $b$ be the number of observation pairs with different labels and located in different clusters. The Rand Index can be calculated using the following formula:

$$\text{RI} = \frac{2(a + b)}{N(N-1)}.$$

In other words, it evaluates the share of observation pairs for which these splits (initial and clustering results) are consistent. The Rand Index (RI) evaluates the similarity of the two splits of the same sample. In order for this index to be close to zero for any clustering outcome with any $N$ and number of clusters, it is essential to scale it, hence the Adjusted Rand Index:

$$\text{ARI} = \frac{\text{RI} - E[\text{RI}]}{\max(\text{RI}) - E[\text{RI}]}.$$

This metric is symmetric and does not depend on the label permutation. Therefore, this index is a measure of distance between different sample splits. $\text{ARI}$ takes on values in the $[-1, 1]$ range. Negative values indicate the independence of splits, and positive values indicate that these splits are consistent (they match when $\text{ARI} = 1$).
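A quick illustration of ARI in scikit-learn (the label vectors are made up for this example):

from sklearn.metrics import adjusted_rand_score

true_labels = [0, 0, 1, 1, 2, 2]
pred_labels = [1, 1, 0, 0, 2, 2]  # same split, different label values

# ARI ignores the label values themselves, so a relabeled identical split scores 1.0
print(adjusted_rand_score(true_labels, pred_labels))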
Adjusted Mutual Information (AMI)
This metric is similar to $\text{ARI}$. It is also symmetric and does not depend on the labels' values and permutation. It is defined by the entropy function and interprets a sample split as a discrete distribution (the likelihood of assigning to a cluster is equal to the percent of objects in it). The $\text{MI}$ index is defined as the mutual information for two distributions corresponding to the sample splits into clusters. Intuitively, the mutual information measures the share of information common to both clustering splits, i.e. how information about one of them decreases the uncertainty about the other one.

Similarly to $\text{ARI}$, $\text{AMI}$ is defined. This allows us to get rid of the $\text{MI}$ index's increase with the number of clusters. $\text{AMI}$ lies in the $[0, 1]$ range. Values close to zero mean the splits are independent, and those close to 1 mean they are similar (with a complete match at $\text{AMI} = 1$).
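In scikit-learn this is adjusted_mutual_info_score; a minimal sketch reusing the made-up labels from the ARI example above:

from sklearn.metrics import adjusted_mutual_info_score

true_labels = [0, 0, 1, 1, 2, 2]
pred_labels = [1, 1, 0, 0, 2, 2]  # same split, relabeled

# Like ARI, AMI ignores label values, so a relabeled identical split scores 1.0
print(adjusted_mutual_info_score(true_labels, pred_labels))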
Homogeneity, completeness, V-measure
Formally, these metrics are also defined based on the entropy function and the conditional entropy function, interpreting the sample splits as discrete distributions:

$$h = 1 - \frac{H(C \mid K)}{H(C)}, \quad c = 1 - \frac{H(K \mid C)}{H(K)},$$

where $K$ is the clustering result and $C$ is the initial split. Therefore, $h$ evaluates whether each cluster is composed of objects from the same class, and $c$ measures how well objects of the same class fit the clusters. These metrics are not symmetric. Both lie in the $[0, 1]$ range, and values closer to 1 indicate more accurate clustering results. These metrics' values are not scaled as the $\text{ARI}$ or $\text{AMI}$ metrics are and thus depend on the number of clusters. A random clustering result will not have metric values close to zero when the number of clusters is big enough and the number of objects is small. In such a case, it would be more reasonable to use $\text{ARI}$. However, with a large number of observations (more than 100) and a number of clusters less than 10, this issue is less critical and can be ignored.

The $V$-measure is a combination of $h$ and $c$ and is their harmonic mean:

$$v = \frac{2hc}{h + c}.$$

It is symmetric and measures how consistent two clustering results are.
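A minimal sketch with scikit-learn, again with made-up labels, to show how the three scores behave when two classes are merged into one cluster:

from sklearn.metrics import homogeneity_score, completeness_score, v_measure_score

true_labels = [0, 0, 1, 1, 2, 2]
pred_labels = [0, 0, 1, 1, 1, 1]  # classes 1 and 2 merged into one cluster

# Each class sits entirely inside one cluster -> completeness is 1.0
print(completeness_score(true_labels, pred_labels))
# The merged cluster mixes two classes -> homogeneity is below 1.0
print(homogeneity_score(true_labels, pred_labels))
# V-measure is the harmonic mean of the two
print(v_measure_score(true_labels, pred_labels))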
Silhouette
In contrast to the metrics described above, this coefficient does not imply knowledge about the true labels of the objects. It lets us estimate the quality of the clustering using only the initial, unlabeled sample and the clustering result. To start with, for each observation, the silhouette coefficient is computed. Let $a$ be the mean distance between an object and the other objects within its cluster, and let $b$ be the mean distance from the object to the objects of the nearest cluster (different from the one the object belongs to). Then the silhouette measure for this object is

$$s = \frac{b - a}{\max(a, b)}.$$

The silhouette of a sample is the mean of the silhouette values over this sample. Therefore, the silhouette distance shows to what extent the distance between the objects of the same class differs from the mean distance between the objects from different clusters. This coefficient takes values in the $[-1, 1]$ range. Values close to -1 correspond to bad clustering results, while values closer to 1 correspond to dense, well-defined clusters. Therefore, the higher the silhouette value is, the better the results from clustering.
With the help of silhouette, we can identify the optimal number of clusters (if we don't know it already from the data) by taking the number of clusters that maximizes the silhouette coefficient.
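A minimal sketch of this idea on the toy data X generated earlier (the range of candidate k values is an arbitrary choice):

from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Pick the number of clusters that maximizes the mean silhouette coefficient
scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, random_state=1).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])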
To conclude, let's take a look at how these metrics perform with the handwritten digits dataset bundled with scikit-learn:
from sklearn import metrics
from sklearn import datasets
import pandas as pd
from sklearn.cluster import KMeans, AgglomerativeClustering, AffinityPropagation, SpectralClustering

data = datasets.load_digits()
X, y = data.data, data.target

algorithms = []
algorithms.append(KMeans(n_clusters=10, random_state=1))
algorithms.append(AffinityPropagation())
algorithms.append(SpectralClustering(n_clusters=10, random_state=1,
                                     affinity='nearest_neighbors'))
algorithms.append(AgglomerativeClustering(n_clusters=10))

data = []
for algo in algorithms:
    algo.fit(X)
    data.append({
        'ARI': metrics.adjusted_rand_score(y, algo.labels_),
        'AMI': metrics.adjusted_mutual_info_score(y, algo.labels_,
                                                  average_method='arithmetic'),
        'Homogeneity': metrics.homogeneity_score(y, algo.labels_),
        'Completeness': metrics.completeness_score(y, algo.labels_),
        'V-measure': metrics.v_measure_score(y, algo.labels_),
        'Silhouette': metrics.silhouette_score(X, algo.labels_)})

results = pd.DataFrame(data=data,
                       columns=['ARI', 'AMI', 'Homogeneity', 'Completeness',
                                'V-measure', 'Silhouette'],
                       index=['K-means', 'Affinity', 'Spectral', 'Agglomerative'])

results
| | ARI | AMI | Homogeneity | Completeness | V-measure | Silhouette |
|---|---|---|---|---|---|---|
| K-means | 0.662295 | 0.736567 | 0.735448 | 0.742972 | 0.739191 | 0.182097 |
| Affinity | 0.175174 | 0.612460 | 0.958907 | 0.486901 | 0.645857 | 0.115197 |
| Spectral | 0.756461 | 0.852040 | 0.831691 | 0.876614 | 0.853562 | 0.182729 |
| Agglomerative | 0.794003 | 0.866832 | 0.857513 | 0.879096 | 0.868170 | |