In data analysis, hierarchical clustering is a widely used unsupervised learning method. It does not require the number of clusters to be specified in advance; instead, it produces a tree of nested clusters. This article examines several hierarchical clustering methods, including single linkage, average linkage, complete linkage, and the Ward method, and demonstrates their behavior on 2D datasets.
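As a minimal sketch of that nested tree structure (this uses SciPy's hierarchy routines rather than the scikit-learn code shown later; the toy points and variable names are illustrative assumptions):

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

# Six 2D points forming two obvious groups; linkage() records the merge
# history as an (n - 1) x 4 matrix: the two merged clusters, the merge
# distance, and the size of the newly formed cluster.
points = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
Z = linkage(points, method="ward")
dendrogram(Z)  # draw the nested tree of merges
plt.show()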
Single linkage, also known as the nearest-neighbor method, is the fastest of these linkage strategies. It can perform well on non-spherical data, but it performs poorly in the presence of noise. At each step it merges the two clusters whose closest members are nearest to each other; because a few noisy points can bridge otherwise separate groups, it is prone to chaining and can produce irregularly shaped clusters.
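A quick sketch of this strength (a standalone example with its own toy data, not part of the comparison script below): on the two-moons data, single linkage with two clusters typically recovers both crescents, since each moon is a chain of mutually close points.

import numpy as np
from sklearn import cluster, datasets

X, y = datasets.make_moons(n_samples=500, noise=0.05, random_state=170)
single = cluster.AgglomerativeClustering(n_clusters=2, linkage="single")
labels = single.fit_predict(X)
# Each true moon should map almost entirely to a single predicted label
print(np.bincount(labels[y == 0]), np.bincount(labels[y == 1]))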
Average and complete linkage perform well on cleanly separated spherical clusters, but give mixed results otherwise. Average linkage merges the pair of clusters with the smallest average distance over all pairs of points drawn from the two clusters, while complete linkage merges the pair with the smallest maximum distance between points from the two clusters. Both are less sensitive to noise than single linkage, but they may handle non-spherical data less well.
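To make the two criteria concrete, here is a small NumPy sketch (toy clusters chosen for illustration) that computes both inter-cluster distances directly:

import numpy as np
from scipy.spatial.distance import cdist

# Two small clusters; cdist returns all pairwise distances between them
A = np.array([[0.0, 0.0], [0.0, 1.0]])
B = np.array([[4.0, 0.0], [5.0, 1.0]])
D = cdist(A, B)

print("average linkage distance:", D.mean())  # mean over all cross-cluster pairs
print("complete linkage distance:", D.max())  # largest cross-cluster distance

Average linkage merges the pair of clusters minimizing the first quantity; complete linkage minimizes the second.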
The Ward method is the most effective on noisy data. It merges the pair of clusters whose union yields the smallest increase in within-cluster variance (the sum of squared deviations from the centroid), which makes it comparatively robust to noise. Ward may not perform as well on high-dimensional data as on low-dimensional data, but on 2D datasets it usually produces good clusterings.
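A rough sketch of the Ward criterion (toy points chosen for illustration): the cost of merging two clusters is the increase in total within-cluster sum of squares, and at each step Ward merges the pair with the smallest such increase.

import numpy as np

def within_ss(X):
    # sum of squared deviations from the cluster centroid
    return ((X - X.mean(axis=0)) ** 2).sum()

A = np.array([[0.0, 0.0], [0.0, 1.0]])
B = np.array([[4.0, 0.0], [5.0, 1.0]])

# Ward merge cost: increase in total within-cluster sum of squares
cost = within_ss(np.vstack([A, B])) - (within_ss(A) + within_ss(B))
print(cost)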
To compare these clustering methods, we generate datasets with different characteristics: noisy circles, noisy moons, blobs with varied variances, and anisotropically distributed data. matplotlib is used to plot the points, and the clustering algorithms come from scikit-learn.
import time
import warnings
from itertools import cycle, islice
import matplotlib.pyplot as plt
import numpy as np
from sklearn import cluster, datasets
from sklearn.preprocessing import StandardScaler
# Generate datasets with different characteristics
n_samples = 1500
noisy_circles = datasets.make_circles(n_samples=n_samples, factor=0.5, noise=0.05, random_state=170)
noisy_moons = datasets.make_moons(n_samples=n_samples, noise=0.05, random_state=170)
blobs = datasets.make_blobs(n_samples=n_samples, random_state=170)

# Uniform random points with no cluster structure
rng = np.random.RandomState(170)
no_structure = rng.rand(n_samples, 2), None

# Anisotropically distributed blobs: stretch the data with a linear map
X, y = datasets.make_blobs(n_samples=n_samples, random_state=170)
transformation = [[0.6, -0.6], [-0.4, 0.8]]
X_aniso = np.dot(X, transformation)
aniso = (X_aniso, y)

# Blobs with varied variances
varied = datasets.make_blobs(n_samples=n_samples, cluster_std=[1.0, 2.5, 0.5], random_state=170)
# Set up the figure and clustering parameters
plt.figure(figsize=(9 * 1.3 + 2, 14.5))
plt.subplots_adjust(left=0.02, right=0.98, bottom=0.001, top=0.96, wspace=0.05, hspace=0.01)
plot_num = 1

# Defaults that individual datasets may override below
default_base = {"n_neighbors": 10, "n_clusters": 3}
# Pair each dataset with its parameter overrides (note: this list shadows
# the sklearn `datasets` module, which is no longer needed past this point)
datasets = [
    (noisy_circles, {"n_clusters": 2}),
    (noisy_moons, {"n_clusters": 2}),
    (varied, {"n_neighbors": 2}),
    (aniso, {"n_neighbors": 2}),
    (blobs, {}),
    (no_structure, {}),
]
# Fit each linkage variant on each dataset and plot the resulting assignments
for i_dataset, (dataset, algo_params) in enumerate(datasets):
    params = default_base.copy()
    params.update(algo_params)

    X, y = dataset
    # Normalize the features for easier parameter selection
    X = StandardScaler().fit_transform(X)

    # Create the four agglomerative clustering variants
    ward = cluster.AgglomerativeClustering(n_clusters=params["n_clusters"], linkage="ward")
    complete = cluster.AgglomerativeClustering(n_clusters=params["n_clusters"], linkage="complete")
    average = cluster.AgglomerativeClustering(n_clusters=params["n_clusters"], linkage="average")
    single = cluster.AgglomerativeClustering(n_clusters=params["n_clusters"], linkage="single")

    clustering_algorithms = (
        ("Single Linkage", single),
        ("Average Linkage", average),
        ("Complete Linkage", complete),
        ("Ward Linkage", ward),
    )

    for name, algorithm in clustering_algorithms:
        t0 = time.time()
        # Silence the warning about connectivity matrices with several
        # connected components
        with warnings.catch_warnings():
            warnings.filterwarnings(
                "ignore",
                message="the number of connected components of the "
                "connectivity matrix is [0-9]{1,2} > 1. Completing it to "
                "avoid stopping the tree early.",
                category=UserWarning,
            )
            algorithm.fit(X)
        t1 = time.time()

        if hasattr(algorithm, "labels_"):
            y_pred = algorithm.labels_.astype(int)
        else:
            y_pred = algorithm.predict(X)

        plt.subplot(len(datasets), len(clustering_algorithms), plot_num)
        if i_dataset == 0:
            plt.title(name, size=18)

        # One color per cluster, cycling if there are more clusters than colors
        colors = np.array(
            list(
                islice(
                    cycle(
                        ["#377eb8", "#ff7f00", "#4daf4a", "#f781bf",
                         "#a65628", "#984ea3", "#999999", "#e41a1c", "#dede00"]
                    ),
                    int(max(y_pred) + 1),
                )
            )
        )
        plt.scatter(X[:, 0], X[:, 1], s=10, color=colors[y_pred])

        plt.xlim(-2.5, 2.5)
        plt.ylim(-2.5, 2.5)
        plt.xticks(())
        plt.yticks(())
        # Label each panel with the fit time, dropping the leading zero
        plt.text(
            0.99, 0.01,
            ("%.2fs" % (t1 - t0)).lstrip("0"),
            transform=plt.gca().transAxes,
            size=15,
            horizontalalignment="right",
        )
        plot_num += 1
plt.show()