
Introduction to Model Pruning

2024-07-12


Ref: https://www.cnblogs.com/the-art-of-ai/p/17500399.html

1. Background

Deep learning models have achieved remarkable results in image recognition, natural language processing, speech recognition, and other fields, but they often demand large amounts of computing power and storage space. Especially in resource-constrained environments such as mobile devices and embedded systems, the size and computational complexity of these models frequently become the bottleneck that limits their application. How to reduce a model's size and computational complexity while preserving its accuracy as much as possible has therefore become an important research direction.

Model pruning is an effective technique for solving this problem. By optimizing the structure and reducing the parameters of a deep learning model, pruning yields a model that is smaller and faster at inference while maintaining accuracy, so it adapts better to different tasks and deployment environments.
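To make the size reduction concrete, here is a minimal sketch (an added illustration, not from the original article): it compares the parameter count of a convolutional layer before and after half of its output channels are pruned away.

    import torch.nn as nn

    # A layer before pruning and the same layer with half of its
    # output channels removed (shapes chosen only for illustration).
    full = nn.Conv2d(16, 32, kernel_size=3)
    pruned = nn.Conv2d(16, 16, kernel_size=3)

    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(full), count(pruned))  # 4640 vs. 2320: half the channels, half the parameters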

2. Basic concepts

Model pruning refers to techniques that optimize the structure and reduce the parameters of deep learning models. Pruning can be divided into two forms: structured pruning and parameter pruning.

Structured pruning removes non-essential structural units such as neurons, convolution kernels, and entire layers to reduce the computational complexity and storage footprint of the model. Common structured pruning methods include channel pruning, layer pruning, node pruning, and sparse pruning.
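Several of these structured methods are available out of the box in PyTorch's `torch.nn.utils.prune` module. As a minimal sketch of channel pruning with it (the layer shape here is an arbitrary choice), `ln_structured` zeroes out entire output channels of a convolution by their L2 norm:

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    conv = nn.Conv2d(3, 16, kernel_size=3)
    # Structured pruning: zero out the 50% of output channels (dim=0)
    # with the smallest L2 norm (n=2).
    prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)
    # The effective weight of each pruned channel is now all zeros.
    print(conv.weight.abs().sum(dim=[1, 2, 3]))  # half the channels sum to 0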

Parameter pruning removes weight parameters from a deep learning model to reduce its storage footprint and computational complexity while preserving accuracy as much as possible. Common parameter pruning methods include L1 regularization, L2 regularization, random pruning, and locality-sensitive hashing pruning.
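The same module also implements unstructured parameter pruning. The sketch below (again with an arbitrary layer shape) zeroes out individual weights by L1 magnitude, the same criterion the channel-pruning example later in this article uses:

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    fc = nn.Linear(64, 10)
    # Parameter pruning: zero out the 30% of individual weights with
    # the smallest absolute value (L1 magnitude).
    prune.l1_unstructured(fc, name="weight", amount=0.3)
    sparsity = (fc.weight == 0).float().mean().item()
    print(f"sparsity: {sparsity:.0%}")  # about 30% of the weights are zero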

3. Technical principles

The core idea of model pruning is to reduce the storage footprint and computational complexity of a model while preserving its accuracy as much as possible. Because the structural units and parameters of deep learning models, such as neurons, convolution kernels, and weights, often contain redundant components, pruning can remove these redundant parts and thereby shrink the model's size and computational cost.
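This redundancy is easy to probe. The sketch below is an added illustration (the freshly initialized layer stands in for a trained one, and the 1e-2 threshold is arbitrary); it measures what fraction of a layer's weights have near-zero magnitude and are therefore candidates for removal.

    import torch.nn as nn

    layer = nn.Linear(256, 128)  # stand-in for a trained layer
    w = layer.weight.data
    # Fraction of weights whose magnitude falls below a small threshold;
    # these contribute little to the output and are pruning candidates.
    near_zero = (w.abs() < 1e-2).float().mean().item()
    print(f"{near_zero:.0%} of the weights are near zero")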

Specifically, applying model pruning can be divided into the following steps:

(1) Initialize the model;

(2) Select the pruning method and strategy. The main methods are structured pruning and parameter pruning; common strategies include global pruning and iterative pruning;

(3) Prune the model;

(4) Retrain the model, since pruning may reduce its accuracy;

(5) Fine-tune the model.

Code:

    import torch
    import torch.nn as nn
    import torch.optim as optim
    import torch.nn.functional as F
    from torchvision import datasets, transforms

    # Define a simple convolutional neural network
    class SimpleCNN(nn.Module):
        def __init__(self):
            super(SimpleCNN, self).__init__()
            self.conv1 = nn.Conv2d(1, 4, kernel_size=3, padding=1)  # 4 output channels
            self.conv2 = nn.Conv2d(4, 8, kernel_size=3, padding=1)  # 8 output channels
            self.fc1 = nn.Linear(8 * 7 * 7, 64)
            self.fc2 = nn.Linear(64, 10)

        def forward(self, x):
            x = F.relu(self.conv1(x))  # conv layer 1 + ReLU activation
            x = F.max_pool2d(x, 2)     # max pooling with a 2x2 kernel
            x = F.relu(self.conv2(x))  # conv layer 2 + ReLU activation
            x = F.max_pool2d(x, 2)     # max pooling with a 2x2 kernel
            x = x.view(x.size(0), -1)  # flatten the feature maps to one dimension
            x = F.relu(self.fc1(x))    # fully connected layer 1 + ReLU activation
            x = self.fc2(x)            # fully connected layer 2, outputs 10 classes
            return x

    # Instantiate the model
    model = SimpleCNN()

    # Print the model structure before pruning
    print("Model before pruning:")
    print(model)

    # Load the data
    transform = transforms.Compose([
        transforms.ToTensor(),                      # convert to tensor
        transforms.Normalize((0.1307,), (0.3081,))  # normalize
    ])
    train_dataset = datasets.MNIST('./data', train=True, download=True, transform=transform)  # training set
    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)  # data loader

    # Define the loss function and optimizer
    criterion = nn.CrossEntropyLoss()                     # cross-entropy loss
    optimizer = optim.Adam(model.parameters(), lr=0.001)  # Adam optimizer

    # Train the model
    model.train()  # set the model to training mode
    for epoch in range(1):  # train for one epoch
        running_loss = 0.0
        for data, target in train_loader:
            optimizer.zero_grad()              # zero the gradients
            outputs = model(data)              # forward pass
            loss = criterion(outputs, target)  # compute the loss
            loss.backward()                    # backward pass
            optimizer.step()                   # update the parameters
            running_loss += loss.item() * data.size(0)  # accumulate the loss
        epoch_loss = running_loss / len(train_loader.dataset)  # average loss
        print(f'Epoch {epoch + 1}, Loss: {epoch_loss:.4f}')

    # Channel pruning
    # Get the weights of the first convolutional layer
    conv1_weights = model.conv1.weight.data.abs().sum(dim=[1, 2, 3])  # L1 norm of each output channel

    # Sort the channels by L1 norm
    sorted_channels = torch.argsort(conv1_weights)

    # Select the channels to remove
    num_prune = 2  # suppose we want to remove 2 channels
    channels_to_prune = sorted_channels[:num_prune]
    print("Channels to prune:", channels_to_prune)

    # Drop the weights and biases of the selected channels
    pruned_weights = torch.index_select(model.conv1.weight.data, 0, sorted_channels[num_prune:])  # kept weights
    pruned_bias = torch.index_select(model.conv1.bias.data, 0, sorted_channels[num_prune:])  # kept biases

    # Create a new convolutional layer and assign the pruned weights and bias to it
    model.conv1 = nn.Conv2d(in_channels=1, out_channels=4 - num_prune, kernel_size=3, padding=1)
    model.conv1.weight.data = pruned_weights
    model.conv1.bias.data = pruned_bias

    # The input channels of conv2 must be adjusted to match:
    # restrict conv2's weights to the kept input channels, and preserve its
    # trained bias (the bias is unaffected by input-channel pruning)
    conv2_weights = model.conv2.weight.data[:, sorted_channels[num_prune:], :, :]
    conv2_bias = model.conv2.bias.data.clone()

    # Create a new convolutional layer and assign the pruned weights and bias to it
    model.conv2 = nn.Conv2d(in_channels=4 - num_prune, out_channels=8, kernel_size=3, padding=1)
    model.conv2.weight.data = conv2_weights
    model.conv2.bias.data = conv2_bias

    # Print the model structure after pruning
    print("Model after pruning:")
    print(model)

    # Define a new optimizer
    optimizer = optim.Adam(model.parameters(), lr=0.001)

    # Retrain the model
    model.train()  # set the model to training mode
    for epoch in range(1):  # train for one epoch
        running_loss = 0.0
        for data, target in train_loader:
            optimizer.zero_grad()              # zero the gradients
            outputs = model(data)              # forward pass
            loss = criterion(outputs, target)  # compute the loss
            loss.backward()                    # backward pass
            optimizer.step()                   # update the parameters
            running_loss += loss.item() * data.size(0)  # accumulate the loss
        epoch_loss = running_loss / len(train_loader.dataset)  # average loss
        print(f'Epoch {epoch + 1}, Loss: {epoch_loss:.4f}')

    # Load the test data
    test_dataset = datasets.MNIST('./data', train=False, transform=transform)  # test set
    test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1000, shuffle=False)  # data loader

    # Evaluate the model
    model.eval()  # set the model to evaluation mode
    correct = 0
    total = 0
    with torch.no_grad():  # disable gradient computation
        for data, target in test_loader:
            outputs = model(data)                      # forward pass
            _, predicted = torch.max(outputs.data, 1)  # predicted classes
            total += target.size(0)                    # total number of samples
            correct += (predicted == target).sum().item()  # correctly predicted samples
    print(f'Accuracy: {100 * correct / total}%')  # print the accuracy
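The manual surgery above makes the mechanics of channel pruning explicit. For steps (2) to (4), PyTorch's built-in `torch.nn.utils.prune` module is often more convenient; the sketch below is an alternative formulation (it reuses the `model` defined above) that applies global magnitude pruning across both convolution layers and then makes it permanent:

    import torch.nn.utils.prune as prune

    # Global pruning: rank the weights of both conv layers together and
    # zero out the 20% with the smallest L1 magnitude.
    parameters_to_prune = [
        (model.conv1, "weight"),
        (model.conv2, "weight"),
    ]
    prune.global_unstructured(
        parameters_to_prune,
        pruning_method=prune.L1Unstructured,
        amount=0.2,
    )

    # Fold the masks into the weights so the pruning becomes permanent,
    # then fine-tune with the same retraining loop as above.
    for module, name in parameters_to_prune:
        prune.remove(module, name)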

To improve the effectiveness and efficiency of pruning, the following practices are worth considering:

  • Choose appropriate pruning methods and algorithms to improve the effectiveness and accuracy of pruning.

  • Fine-tune or iteratively prune the model to further recover its accuracy and performance (a sketch of this schedule follows the list).

  • Use parallel and distributed computing to speed up the pruning and training process.
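As a sketch of the second point, iterative pruning alternates small pruning steps with brief fine-tuning rounds instead of pruning everything at once. The loop below is an illustration that reuses the `model`, `train_loader`, and `criterion` from the example above; the pruning amount and number of rounds are arbitrary choices:

    import torch.optim as optim
    import torch.nn.utils.prune as prune

    # Iterative pruning: prune a little, fine-tune briefly, repeat.
    for round_idx in range(3):
        # Zero out 10% of the remaining weights in each conv layer.
        for module in (model.conv1, model.conv2):
            prune.l1_unstructured(module, name="weight", amount=0.1)

        # A short fine-tuning round to recover the lost accuracy.
        optimizer = optim.Adam(model.parameters(), lr=0.001)
        model.train()
        for data, target in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(data), target)
            loss.backward()
            optimizer.step()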