
Machine Learning (5) -- Supervised Learning (6) -- Logistic Regression

2024-07-12


Table of contents and links to the article series

Previous article: Machine Learning (5) -- Supervised Learning (5)
Next article: Machine Learning (5) -- Supervised Learning (7) -- SVM1


Preface

Tips: content whose heading is preceded by "***" is supplementary material for the especially curious and can be skipped. Content marked with strikethrough can likewise be skipped. "!!!" marks points that need special attention or are easy to get wrong.

This series is a summary written while the author is still learning; please point out anything that is wrong. The series will keep being improved, and each article may be revised from time to time.

Since the author's time is limited, the "Algorithm Implementation" part of some articles is not yet complete; it will be filled in when time allows. Apologies!

For ease of understanding, interfaces are imported at the point where they are used; in practice, all imports should be placed together at the beginning of the file.


1. Understanding and Definition

1. What is logistic regression (What)

Logistic regression = linear regression + sigmoid function

Simply put, logistic regression means finding a straight line that separates two classes of data.

2. Why logistic regression (Why)

It solves binary classification problems: compute the probability that a sample belongs to a certain class, then use that probability value to decide whether the sample belongs to the class. By default that class is labeled 1 (the positive class) and the other class is labeled 0 (the negative class).

3. How to find this line (How)

In fact, the steps are similar to those of linear regression. The difference lies in the methods used to "check how well the model fits" and to "adjust the position and angle of the line".

  1. Draw a line at random as the initial line
  2. Check how well it fits;
  3. If it is not yet optimal (the preset threshold has not been reached), adjust the position and angle of the line
  4. Repeat steps 2 and 3 until the fit is optimal (the preset threshold is reached); the final line is the model we want.

We use a function (the sigmoid) to map the input data to a value between 0 and 1; if the function value is greater than 0.5 the sample is judged to be 1, otherwise 0. This value can be interpreted as a probability.

2. Understanding the Principle and Formulas

1. Perceptron

1.1. Finding a separating line

Take image classification as an example: classify images as vertical or horizontal.

Plotting this data on a graph, we draw a line that distinguishes the points of different colors (different classes). The goal of classification is to find such a line.

This is the line that has the weight vector as its normal vector (the weight vector is perpendicular to the line):

w is the weight vector;
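In the two-feature case this line can be written in standard notation (the original showed the formula as a figure) as

$$ w \cdot x = w_1 x_1 + w_2 x_2 = 0 $$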

1.2. The weighted sum

The model takes multiple input values, multiplies each value by its corresponding weight, and finally sums them.
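In standard notation, the weighted sum and the discriminant function that classifies by its sign are

$$ w \cdot x = \sum_i w_i x_i, \qquad f_w(x) = \begin{cases} 1 & w \cdot x \ge 0 \\ -1 & w \cdot x < 0 \end{cases} $$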

1.3. The inner product

The inner product is a measure of similarity between vectors: a positive result indicates similarity, a value of 0 indicates perpendicularity, and a negative result indicates dissimilarity.

This is easier to understand in the angle form $w \cdot x = |w||x|\cos\theta$: since $|w|$ and $|x|$ are always positive, the sign of the inner product is determined by $\cos\theta$, and when the angle exceeds 90 degrees the vectors are dissimilar.

1.4. Updating the weight vector

If the discriminant value equals the original label value, the weight vector is not updated.

As shown in the figure, if it does not equal the original label, the weight vector is updated and the line rotates accordingly; after the update, the prediction matches the label.

Steps: first determine a line at random (that is, determine the weight vector at random); substitute the real data x into the inner product and obtain a value (1 or -1) through the discriminant function. If it equals the original label value, the weight vector is not updated; if it differs from the original label value, the labeled data vector is added to the weight vector to update it.

!!! Note: the perceptron can only solve linearly separable problems
Linearly separable: cases in which a straight line can be used for classification
Linearly inseparable: cases that cannot be separated by a straight line
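To make the steps concrete, here is a minimal perceptron sketch (not from the original article; the toy data, the labels in {1, -1}, and the bias-free setup are assumptions for illustration):

  import numpy as np

  # Made-up toy data: rows are (x1, x2); label 1 = "wide" image, -1 = "tall" image
  X = np.array([[200.0, 100.0], [120.0, 300.0], [400.0, 150.0], [60.0, 310.0]])
  y = np.array([1, -1, 1, -1])

  w = np.random.randn(2)   # step 1: pick a line at random (random weight vector)

  def discriminant(w, x):
      # Classify by the sign of the inner product w . x
      return 1 if np.dot(w, x) >= 0 else -1

  # Steps 2-4: check each sample and update until everything is classified correctly
  # (the loop terminates only because this toy data is linearly separable)
  updated = True
  while updated:
      updated = False
      for x_i, y_i in zip(X, y):
          if discriminant(w, x_i) != y_i:   # prediction differs from the label
              w = w + y_i * x_i             # add the labeled vector to update w
              updated = True

  print("learned weight vector:", w)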

2. Sigmoid function

The black curve is the sigmoid function; the red curve is the step function (which is discontinuous).

Purpose: the input of logistic regression is the output of linear regression; we pass the predicted value of the linear model through the sigmoid. The sigmoid function maps any input to the interval [0, 1], completing the conversion from a value to a probability; this is what makes it a classification function.
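In standard notation the sigmoid is

$$ \sigma(z) = \frac{1}{1 + e^{-z}}, $$

which satisfies $\sigma(0) = 0.5$ and tends to 0 and 1 as $z$ goes to minus and plus infinity, respectively.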

3. Logistic regression

3.1. Model definition

Logistic regression = linear regression + sigmoid function

Linear regression:

$$ f_\theta(x) = \theta^{T} x $$

Composing it with the sigmoid function gives logistic regression:

$$ f_\theta(x) = \frac{1}{1 + e^{-\theta^{T} x}} $$

Letting y denote the label, the output is used as a probability:

$$ P(y = 1 \mid x) = f_\theta(x) $$

3.2. Classifying by probability

The classes can then be distinguished by this probability:
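The rule, reconstructed here in standard form (it matches the `classify` function in the implementation below), is

$$ y = \begin{cases} 1 & \text{if } f_\theta(x) \ge 0.5 \\ 0 & \text{if } f_\theta(x) < 0.5 \end{cases} $$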

3.3. The decision boundary

This condition can be rewritten: since the sigmoid equals 0.5 exactly when its argument is 0,

$$ f_\theta(x) \ge 0.5 \quad\Longleftrightarrow\quad \theta^{T} x \ge 0 $$

Substituting the data (two features plus the bias term) and solving $\theta_0 + \theta_1 x_1 + \theta_2 x_2 = 0$ for $x_2$ gives

$$ x_2 = -\frac{\theta_0 + \theta_1 x_1}{\theta_2}, $$

which appears as a straight line when plotted. This straight line that partitions the data is the decision boundary.

3.4. Maximum likelihood

What we want is:
when y = 1, P(y = 1 | x) is maximal;
when y = 0, P(y = 0 | x) is maximal.

Likelihood function (the joint probability): this is the probability we want to maximize.
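Assuming n independent training samples, the joint probability takes the standard form

$$ L(\theta) = \prod_{i=1}^{n} P\big(y^{(i)}=1 \mid x^{(i)}\big)^{y^{(i)}} \, P\big(y^{(i)}=0 \mid x^{(i)}\big)^{1-y^{(i)}} $$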

Log-likelihood function: the likelihood function is difficult to differentiate directly, so we first take its logarithm.

After the transformation it becomes

$$ \log L(\theta) = \sum_{i=1}^{n} \Big( y^{(i)} \log f_\theta(x^{(i)}) + \big(1 - y^{(i)}\big) \log\big(1 - f_\theta(x^{(i)})\big) \Big) $$

3.5. Differentiating the log-likelihood

Differentiating the log-likelihood function with respect to each parameter gives

$$ \frac{\partial \log L(\theta)}{\partial \theta_j} = \sum_{i=1}^{n} \big( y^{(i)} - f_\theta(x^{(i)}) \big)\, x_j^{(i)}, $$

and maximizing it by gradient ascent (equivalently, gradient descent on its negative) yields the parameter update rule

$$ \theta_j := \theta_j - \eta \sum_{i=1}^{n} \big( f_\theta(x^{(i)}) - y^{(i)} \big)\, x_j^{(i)} $$

where $\eta$ is the learning rate.

3. Advantages and Disadvantages

3.1. Advantages

1. Simple to implement: logistic regression is a simple algorithm that is easy to understand and implement.
2. Computationally efficient: logistic regression requires relatively little computation and is suitable for large data sets.
3. Highly interpretable: logistic regression outputs probability values, which give an intuitive explanation of the model's output.

3.2. Disadvantages

1. Requires linear separability: logistic regression is a linear model and performs poorly on nonlinearly separable problems.
2. Feature-correlation problem: logistic regression is sensitive to relationships among the input features; when features are strongly correlated, model performance can degrade.
3. Overfitting problem: when there are too many features or too few samples, logistic regression is prone to overfitting.

*** Algorithm Implementation

1. Getting the data

  import numpy as np
  import pandas as pd
  import matplotlib.pyplot as plt
  %matplotlib notebook
  # Read the data
  train = pd.read_csv('csv/images2.csv')
  train_x = train.iloc[:, 0:2]
  train_y = train.iloc[:, 2]
  # print(train_x)
  # print(train_y)
  # Plot the data
  plt.figure()
  plt.plot(train_x[train_y == 1].iloc[:, 0], train_x[train_y == 1].iloc[:, 1], 'o')
  plt.plot(train_x[train_y == 0].iloc[:, 0], train_x[train_y == 0].iloc[:, 1], 'x')
  plt.axis('scaled')
  # plt.axis([0, 500, 0, 500])
  plt.show()

2. Initialization and standardization

  # Initialize the parameters
  theta = np.random.randn(3)
  # Standardization
  mu = train_x.mean(axis=0)
  sigma = train_x.std(axis=0)
  # print(mu, sigma)
  def standardize(x):
      return (x - mu) / sigma
  train_z = standardize(train_x)
  # print(train_z)
  # Add the x0 (bias) column
  def to_matrix(x):
      x0 = np.ones([x.shape[0], 1])
      return np.hstack([x0, x])
  X = to_matrix(train_z)
  # Plot the standardized data
  plt.figure()
  plt.plot(train_z[train_y == 1].iloc[:, 0], train_z[train_y == 1].iloc[:, 1], 'o')
  plt.plot(train_z[train_y == 0].iloc[:, 0], train_z[train_y == 0].iloc[:, 1], 'x')
  plt.axis('scaled')
  # plt.axis([0, 500, 0, 500])
  plt.show()

3. Sigmoid function and classification function

  # Sigmoid function
  def f(x):
      return 1 / (1 + np.exp(-np.dot(x, theta)))
  # Classification function
  def classify(x):
      return (f(x) >= 0.5).astype(int)   # np.int was removed in recent NumPy; use int

4. Parameter settings and training

  # Learning rate
  ETA = 1e-3
  # Number of iterations
  epoch = 5000
  # Update counter
  count = 0
  print(f(X))
  # Repeat the learning step
  for _ in range(epoch):
      theta = theta - ETA * np.dot(f(X) - train_y, X)
      # Log output
      count += 1
      print('Iteration {}: theta = {}'.format(count, theta))
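For reference, the update line in this loop is just the vectorized form of the gradient rule derived in the formula section above ($\eta$ is the learning rate ETA):

$$ \theta := \theta - \eta \, X^{T} \big( f_\theta(X) - y \big) $$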

5. Plotting to confirm

  # Plot to confirm the result
  plt.figure()
  x0 = np.linspace(-2, 2, 100)
  plt.plot(train_z[train_y == 1].iloc[:, 0], train_z[train_y == 1].iloc[:, 1], 'o')
  plt.plot(train_z[train_y == 0].iloc[:, 0], train_z[train_y == 0].iloc[:, 1], 'x')
  # Decision boundary: theta0 + theta1*x1 + theta2*x2 = 0, solved for x2
  plt.plot(x0, -(theta[0] + theta[1] * x0) / theta[2], linestyle='dashed')
  plt.show()

 

6. Verification

  # Verification
  text = [[200, 100], [500, 400], [150, 170]]
  tt = pd.DataFrame(text, columns=['x1', 'x2'])
  # text = pd.DataFrame({'x1': [200, 400, 150], 'x2': [100, 50, 170]})
  x = to_matrix(standardize(tt))
  print(x)
  a = f(x)         # predicted probabilities
  print(a)
  b = classify(x)  # predicted classes
  print(b)
  plt.plot(x[:, 1], x[:, 2], 'ro')

 

4. Interface implementation

1. The breast cancer dataset

1.1. API

from sklearn.datasets import load_breast_cancer

1.2. Exploring the dataset

  # Load the dataset (needed before the calls below)
  breast_cancer = load_breast_cancer()
  # Keys
  print("Keys of the breast cancer dataset:", breast_cancer.keys())
  # Names and shapes of the features and targets
  print("Shape of the feature data:", breast_cancer.data.shape)
  print("Shape of the target data:", breast_cancer.target.shape)
  print("Feature names:", breast_cancer.feature_names)
  print("Target names:", breast_cancer.target_names)
  # print("Feature values:", breast_cancer.data)
  # print("Target values:", breast_cancer.target)
  # Return value
  # print("Return value of the dataset:\n", breast_cancer)
  # The return type is a Bunch -- a dictionary-like type
  # Description
  # print("Description of the dataset:", breast_cancer.DESCR)
  # Statistics of each feature
  print("Minimum:", breast_cancer.data.min(axis=0))
  print("Maximum:", breast_cancer.data.max(axis=0))
  print("Mean:", breast_cancer.data.mean(axis=0))
  print("Standard deviation:", breast_cancer.data.std(axis=0))

  # Take the first two feature columns (mean radius and mean texture)
  x = breast_cancer.data[0:569, 0:2]
  y = breast_cancer.target[0:569]
  samples_0 = x[y == 0, :]
  samples_1 = x[y == 1, :]
  # Visualization
  plt.figure()
  plt.scatter(samples_0[:, 0], samples_0[:, 1], marker='o', color='r')
  plt.scatter(samples_1[:, 0], samples_1[:, 1], marker='x', color='y')
  plt.xlabel('mean radius')
  plt.ylabel('mean texture')
  plt.show()

  # Plot a histogram for each feature to show the distribution of its values.
  import seaborn as sns
  for i, feature_name in enumerate(breast_cancer.feature_names):
      plt.figure(figsize=(6, 4))
      sns.histplot(breast_cancer.data[:, i], kde=True)
      plt.xlabel(feature_name)
      plt.ylabel("Count")
      plt.title("Histogram of {}".format(feature_name))
      plt.show()

  # Draw box plots summarizing each feature's minimum, first quartile, median, third quartile and maximum.
  plt.figure(figsize=(10, 6))
  sns.boxplot(data=breast_cancer.data, orient="v")
  plt.xticks(range(len(breast_cancer.feature_names)), breast_cancer.feature_names, rotation=90)
  plt.xlabel("Feature")
  plt.ylabel("Value")
  plt.title("Box plot of the features")
  plt.show()

1.3. Missing values and outliers

  # Create a DataFrame
  df = pd.DataFrame(breast_cancer.data, columns=breast_cancer.feature_names)
  # Detect missing values
  print("Number of missing values:")
  print(df.isnull().sum())
  # Look for outliers
  print("Summary statistics:")
  print(df.describe())
  # The .describe() method returns statistics for the dataset: count, mean, standard deviation, minimum, 25% quantile, median, 75% quantile and maximum.

1.4. Feature correlation

  # Create a DataFrame
  df = pd.DataFrame(breast_cancer.data, columns=breast_cancer.feature_names)
  # Compute the correlation matrix
  correlation_matrix = df.corr()
  # Visualize it as a heat map
  plt.figure(figsize=(10, 8))
  sns.heatmap(correlation_matrix, annot=True, cmap="coolwarm")
  plt.title("Correlation Heatmap")
  plt.show()

2. API

  sklearn.linear_model.LogisticRegression
  Import:
  from sklearn.linear_model import LogisticRegression
  Syntax:
  LogisticRegression(solver='liblinear', penalty='l2', C=1.0)
  solver: optional, one of {'liblinear', 'sag', 'saga', 'newton-cg', 'lbfgs'};
  default 'liblinear' in older scikit-learn versions ('lbfgs' in newer ones); the algorithm used for the optimization problem.
  For small data sets 'liblinear' is a good choice, while 'sag' and 'saga' are faster on large ones.
  For multiclass problems, only 'newton-cg', 'sag', 'saga' and 'lbfgs' can handle the multinomial loss; 'liblinear' is limited to one-versus-rest classification.
  penalty: the type of regularization
  C: the regularization strength (note that in scikit-learn, C is the inverse of the regularization strength: smaller values mean stronger regularization)
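As a quick illustration of these parameters, a minimal sketch with made-up toy data (the solver and C values are arbitrary choices for demonstration, not recommendations from the original):

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  # Made-up toy data for illustration
  X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0], [3.0, 3.0]])
  y = np.array([0, 0, 1, 1])

  # 'liblinear' suits small data sets; smaller C means stronger L2 regularization
  clf = LogisticRegression(solver='liblinear', penalty='l2', C=0.5)
  clf.fit(X, y)
  print(clf.predict([[2.5, 2.5]]))   # expected: class 1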

2. Procedure

2.1. Getting the data

  from sklearn.datasets import load_breast_cancer
  from sklearn.model_selection import train_test_split
  from sklearn.linear_model import LogisticRegression
  # Get the data
  breast_cancer = load_breast_cancer()

2.2. Splitting the dataset

  # Split the dataset into training and test sets
  x_train, x_test, y_train, y_test = train_test_split(breast_cancer.data, breast_cancer.target, test_size=0.2, random_state=1473)

2.3.

2.4. Model training

  # Instantiate the learner
  lr = LogisticRegression(max_iter=10000)
  # Train the model
  lr.fit(x_train, y_train)
  print("The fitted logistic regression model is:\n", lr)

 

2.5. Model evaluation

  # Apply the model to the test set to obtain predictions
  y_pred = lr.predict(x_test)
  print('First 20 predictions:\n', y_pred[:20])
  # Accuracy, confusion matrix, precision and recall of the predictions
  from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score
  print("Accuracy:", accuracy_score(y_test, y_pred))
  print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))
  print("Precision:", precision_score(y_test, y_pred))
  print("Recall:", recall_score(y_test, y_pred))

  from sklearn.metrics import roc_curve, roc_auc_score
  # Note: hard 0/1 predictions give a coarse three-point ROC curve;
  # lr.predict_proba(x_test)[:, 1] would provide continuous scores instead.
  fpr, tpr, thresholds = roc_curve(y_test, y_pred)
  plt.plot(fpr, tpr)
  plt.axis("square")
  plt.xlabel("False positive rate")
  plt.ylabel("True positive rate")
  plt.title("ROC curve")
  plt.show()
  print("AUC:", roc_auc_score(y_test, y_pred))

 

  # Count the predictions that match the true values
  num_accu = np.sum(y_test == y_pred)
  print('Number of correct predictions:', num_accu)
  print('Number of wrong predictions:', y_test.shape[0] - num_accu)
  print('Accuracy:', num_accu / y_test.shape[0])

2.6. Prediction

Once the model passes evaluation, it can be applied to real data to make predictions.
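A minimal sketch of this final step, reusing the fitted model `lr` from above (the "new" samples here are just the first rows of the dataset, standing in for genuinely unseen measurements):

  # Predict classes and probabilities for new samples
  new_samples = breast_cancer.data[:3]   # stand-in for real new data
  print("Predicted classes:", lr.predict(new_samples))
  print("Predicted probabilities:\n", lr.predict_proba(new_samples))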


To revisit the previous episode, see: Machine Learning (5) -- Supervised Learning (5)
To find out what happens next, see: Machine Learning (5) -- Supervised Learning (7) -- SVM1