Technology sharing

Python: Extreme Learning Machine (ELM)

2024-07-12


The Extreme Learning Machine (ELM) is a learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). Proposed by Huang et al., ELM is designed to achieve good generalization performance at an extremely fast learning speed, and that speed is its defining feature. Unlike traditional gradient-descent methods (such as BP neural networks), ELM requires no iterative training. The core idea is to choose the weights and biases of the hidden layer at random, and then learn the output weights in closed form by minimizing the output error with a least-squares fit.

Main steps of the ELM algorithm

  1. Randomly initialize the input-to-hidden-layer weights and biases

    • The hidden-layer weights and biases are generated at random and remain fixed throughout training.

  2. Compute the hidden-layer output matrix (i.e., the output of the activation function)

    • Compute the hidden-layer output using an activation function (such as sigmoid, ReLU, etc.).

  3. Compute the output weights

    • The weights from the hidden layer to the output layer are computed with the least-squares method.

The mathematics of ELM is as follows (illustrated in the short numpy sketch below):

  • Given $N$ training samples $(\mathbf{x}_i, \mathbf{t}_i)$, where $\mathbf{x}_i \in \mathbb{R}^{d}$ and $\mathbf{t}_i \in \mathbb{R}^{m}$.

  • The hidden-layer output matrix is computed as $H = g(XW + \mathbf{b})$

    • where $X$ is the input matrix, $W$ is the input-to-hidden-layer weight matrix, $\mathbf{b}$ is the bias vector, and $g(\cdot)$ is the activation function.

  • The output weights are computed as $\beta = H^{\dagger} T$

    • where $H^{\dagger}$ is the Moore–Penrose generalized inverse of the hidden-layer output matrix $H$, and $T$ is the target (output) matrix.
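
To make these two formulas concrete, here is a minimal numpy sketch of the closed-form training step. This is our illustration, not code from the original post; the sigmoid activation and all variable names are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: N = 6 samples, d = 3 input features, m = 1 output
X = rng.standard_normal((6, 3))
T = rng.standard_normal((6, 1))

L = 5                                    # number of hidden neurons
W = rng.standard_normal((3, L))          # random input-to-hidden weights (fixed, never trained)
b = rng.standard_normal(L)               # random hidden biases (fixed, never trained)

H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer output matrix: H = g(XW + b), sigmoid g
beta = np.linalg.pinv(H) @ T             # output weights: beta = H^+ T (least-squares solution)

print(np.linalg.norm(H @ beta - T))      # training residual of the closed-form fit

No loop appears anywhere: the only "training" is a single linear solve, which is where ELM's speed comes from.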

Application scenarios of the ELM algorithm

  1. Large-scale dataset processing: ELM performs well on large datasets because its training speed is extremely fast, which makes it suitable for tasks that demand rapid model training, such as large-scale image classification, natural language processing, and similar work.

  2. Industrial data: ELM is widely applied in industrial forecasting, for example quality control and equipment-failure prediction in production processes. It can build predictive models quickly and respond rapidly to real-time data.

  3. Financial sector: ELM can be used for financial analysis and forecasting, such as market prediction, risk management, credit scoring, etc. Since financial data is usually high-dimensional, ELM's fast training makes it well suited to processing such data.

  4. Medical diagnosis: In medicine, ELM can be used for disease-prediction tasks and medical-image analysis. Because it trains models quickly and applies them to patient data right away, it can help doctors make faster and more accurate diagnoses.

  5. Intelligent control systems: ELM can be used in intelligent control systems such as smart homes, intelligent transportation systems, etc. By learning the features and patterns of the environment, it can help the system make smart decisions and improve its efficiency and performance.

Python implementation of the ELM algorithm

We use the make_moons dataset, a toy dataset commonly used for classification tasks in machine learning and deep learning. It generates points distributed in two interleaving half-moon shapes, which makes it ideal for demonstrating the performance and decision boundaries of classification algorithms.
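
For orientation, a quick look at what make_moons returns; this inspection snippet is our addition, not part of the original walkthrough.

from sklearn.datasets import make_moons

X, y = make_moons(n_samples=1000, noise=0.2, random_state=42)
print(X.shape)         # (1000, 2): each point has two coordinates
print(sorted(set(y)))  # [0, 1]: binary class labels, one per half-moon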

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

# Define the Extreme Learning Machine (ELM) class
class ELM:
    def __init__(self, n_hidden_units):
        # Number of hidden-layer neurons
        self.n_hidden_units = n_hidden_units

    def _sigmoid(self, x):
        # Sigmoid activation function
        return 1 / (1 + np.exp(-x))

    def fit(self, X, y):
        # Randomly initialize the input weights
        self.input_weights = np.random.randn(X.shape[1], self.n_hidden_units)
        # Randomly initialize the biases
        self.biases = np.random.randn(self.n_hidden_units)
        # Compute the hidden-layer output matrix H
        H = self._sigmoid(np.dot(X, self.input_weights) + self.biases)
        # Compute the output weights via the pseudoinverse (least squares)
        self.output_weights = np.dot(np.linalg.pinv(H), y)

    def predict(self, X):
        # Compute the hidden-layer output matrix H
        H = self._sigmoid(np.dot(X, self.input_weights) + self.biases)
        # Return the raw predictions
        return np.dot(H, self.output_weights)

# Create the dataset and preprocess it
X, y = make_moons(n_samples=1000, noise=0.2, random_state=42)
# Reshape the labels into a 2-D array (ELM expects 2-D targets)
y = y.reshape(-1, 1)
# Standardize the data
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.3, random_state=42)

# Train and compare ELM and MLP
# Train the ELM
elm = ELM(n_hidden_units=10)
elm.fit(X_train, y_train)
y_pred_elm = elm.predict(X_test)
# Convert the raw predictions to class labels
y_pred_elm_class = (y_pred_elm > 0.5).astype(int)
# Compute the ELM accuracy
accuracy_elm = accuracy_score(y_test, y_pred_elm_class)

# Train the MLP
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=42)
mlp.fit(X_train, y_train.ravel())
# Predict on the test set
y_pred_mlp = mlp.predict(X_test)
# Compute the MLP accuracy
accuracy_mlp = accuracy_score(y_test, y_pred_mlp)

# Print both accuracies
print(f"ELM Accuracy: {accuracy_elm}")
print(f"MLP Accuracy: {accuracy_mlp}")

# Visualize the results
def plot_decision_boundary(model, X, y, ax, title):
    # Plotting range
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    # Build a grid
    xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01),
                         np.arange(y_min, y_max, 0.01))
    # Predict every point on the grid
    Z = model(np.c_[xx.ravel(), yy.ravel()])
    Z = (Z > 0.5).astype(int)
    Z = Z.reshape(xx.shape)
    # Draw the decision regions
    ax.contourf(xx, yy, Z, alpha=0.8)
    # Draw the data points
    ax.scatter(X[:, 0], X[:, 1], c=y.ravel(), edgecolors='k', marker='o')
    ax.set_title(title)

# Create the figure
fig, axs = plt.subplots(1, 2, figsize=(12, 5))
# Decision boundary of the ELM
plot_decision_boundary(lambda x: elm.predict(x), X_test, y_test, axs[0], "ELM Decision Boundary")
# Decision boundary of the MLP
plot_decision_boundary(lambda x: mlp.predict(x), X_test, y_test, axs[1], "MLP Decision Boundary")
# Show the figure
plt.show()

# Output:
'''
ELM Accuracy: 0.9666666666666667
MLP Accuracy: 0.9766666666666667
'''

Visual output:

[Figure: decision boundaries of ELM (left, "ELM Decision Boundary") and MLP (right, "MLP Decision Boundary") on the make_moons test set]
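
A design note on the closed-form step in the code above: np.linalg.pinv computes the Moore–Penrose pseudoinverse, so the learned output weights are the minimum-norm least-squares solution of H·β = y. An equivalent alternative, often cheaper and more numerically stable, is np.linalg.lstsq. The sketch below is our comparison with stand-in matrices, not part of the original post; it checks that both routes agree.

import numpy as np

rng = np.random.default_rng(42)
H = rng.standard_normal((100, 10))   # stand-in for a hidden-layer output matrix
y = rng.standard_normal((100, 1))    # stand-in targets

beta_pinv = np.linalg.pinv(H) @ y                   # pseudoinverse route used above
beta_lstsq, *_ = np.linalg.lstsq(H, y, rcond=None)  # direct least-squares solve
print(np.allclose(beta_pinv, beta_lstsq))           # True: same solution when H has full column rank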

The above content is a summary gathered from the internet. If you found it useful, see you next time.