This example demonstrates the characteristics of different hierarchical clustering linkage methods on datasets that are "interesting" but still two-dimensional. The main observations are: single linkage is fast and performs well on non-globular data, but degrades in the presence of noise; average and complete linkage perform well on cleanly separated globular clusters, but give mixed results otherwise; Ward is the most effective method for noisy data. While these examples provide some intuition about the algorithms, that intuition may not carry over to very high-dimensional data.
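To make the first observation concrete before walking through the full example, here is a minimal standalone sketch (not part of the original example; the toy dataset, the number of added noise points, and the use of adjusted_rand_score are illustrative choices) that contrasts single linkage and Ward on blobs with a handful of uniform noise points added:
# Illustrative sketch (assumption, not the original example): single vs. Ward linkage on noisy blobs
import numpy as np
from sklearn import cluster, datasets
from sklearn.metrics import adjusted_rand_score

X, y = datasets.make_blobs(n_samples=300, centers=3, random_state=0)
rng = np.random.RandomState(0)
noise = rng.uniform(X.min(), X.max(), size=(15, 2))  # 15 scattered noise points
X_noisy = np.vstack([X, noise])

for linkage in ("single", "ward"):
    labels = cluster.AgglomerativeClustering(n_clusters=3, linkage=linkage).fit_predict(X_noisy)
    # Score only the original points; with noise present, single linkage tends to
    # chain clusters together, while Ward usually still recovers the three blobs.
    print(linkage, adjusted_rand_score(y, labels[:300]))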
The dataset size is chosen to be large enough to observe the scalability of the algorithms, yet not so large that the run times become excessive. The generated datasets include noisy circles, noisy moons, blobs, data with no structure, anisotropically distributed data, and blobs with varied variances.
# Import the required libraries
import time
from itertools import cycle, islice
import numpy as np
from sklearn import datasets, cluster
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
# Set a random seed so the results are reproducible
rng = np.random.RandomState(170)
# Generate the datasets
n_samples = 1500
noisy_circles = datasets.make_circles(n_samples=n_samples, factor=0.5, noise=0.05, random_state=170)
noisy_moons = datasets.make_moons(n_samples=n_samples, noise=0.05, random_state=170)
blobs = datasets.make_blobs(n_samples=n_samples, random_state=170)
no_structure = rng.rand(n_samples, 2), None
X, y = datasets.make_blobs(n_samples=n_samples, random_state=170)
transformation = [[0.6, -0.6], [-0.4, 0.8]]
X_aniso = np.dot(X, transformation)
aniso = (X_aniso, y)
varied = datasets.make_blobs(n_samples=n_samples, cluster_std=[1.0, 2.5, 0.5], random_state=170)
Next, set up the clustering parameters and the subplot layout, and create the clustering objects for each dataset. For each linkage method, the fit is timed and the resulting clustering is plotted.
# Set up the clustering parameters and figure layout
plt.figure(figsize=(9*1.3+2, 14.5))
plt.subplots_adjust(left=0.02, right=0.98, bottom=0.001, top=0.96, wspace=0.05, hspace=0.01)
plot_num = 1
default_base = {"n_neighbors": 10, "n_clusters": 3}
datasets = [
    (noisy_circles, {"n_clusters": 2}),
    (noisy_moons, {"n_clusters": 2}),
    (varied, {"n_neighbors": 2}),
    (aniso, {"n_neighbors": 2}),
    (blobs, {}),
    (no_structure, {}),
]
for i_dataset, (dataset, algo_params) in enumerate(datasets):
    # Update the default parameters with any dataset-specific values
    params = default_base.copy()
    params.update(algo_params)
    X, y = dataset
    # Normalize the dataset for easier parameter selection
    X = StandardScaler().fit_transform(X)
    # Create the clustering objects
    ward = cluster.AgglomerativeClustering(n_clusters=params["n_clusters"], linkage="ward")
    complete = cluster.AgglomerativeClustering(n_clusters=params["n_clusters"], linkage="complete")
    average = cluster.AgglomerativeClustering(n_clusters=params["n_clusters"], linkage="average")
    single = cluster.AgglomerativeClustering(n_clusters=params["n_clusters"], linkage="single")
    clustering_algorithms = (
        ("Single Linkage", single),
        ("Average Linkage", average),
        ("Complete Linkage", complete),
        ("Ward Linkage", ward),
    )
    for name, algorithm in clustering_algorithms:
        # Time the fit for this linkage method
        t0 = time.time()
        algorithm.fit(X)
        t1 = time.time()
        if hasattr(algorithm, "labels_"):
            y_pred = algorithm.labels_.astype(int)
        else:
            y_pred = algorithm.predict(X)
        plt.subplot(len(datasets), len(clustering_algorithms), plot_num)
        if i_dataset == 0:
            plt.title(name, size=18)
        colors = np.array(list(islice(cycle(["#377eb8", "#ff7f00", "#4daf4a", "#f781bf", "#a65628", "#984ea3", "#999999", "#e41a1c", "#dede00"]), int(max(y_pred) + 1))))
        plt.scatter(X[:, 0], X[:, 1], s=10, color=colors[y_pred])
        plt.xlim(-2.5, 2.5)
        plt.ylim(-2.5, 2.5)
        plt.xticks(())
        plt.yticks(())
        # Show the fit time in the lower-right corner of each subplot
        plt.text(0.99, 0.01, ("%.2fs" % (t1 - t0)).lstrip("0"), transform=plt.gca().transAxes, size=15, horizontalalignment="right")
        plot_num += 1
plt.show()