2024-07-12
#Datawhale #AIsummercamp #summercamp
The competition task is to determine whether a face image is a Deepfake and to output a probability score that it is. Participants need to develop, explain, and optimize detection models that can cope with generation techniques of varying sophistication and a wide range of application scenarios, improving both the accuracy and the robustness of deepfake image detection.
The training label file train_label.txt is used to train the model, while the validation label file val_label.txt is used only for model tuning. In both train_label.txt and val_label.txt, each line contains two comma-separated parts: the first is the file name (with the .mp4 suffix), and the second is the ground-truth label.
A target value of 1 indicates deepfaked audio/video, and a target value of 0 indicates real face audio/video.
Below are samples of train_label.txt and val_label.txt:
train_label.txt
video_name,target
96b04c80704f02cb426076b3f624b69e.mp4,0
16fe4cf5ae8b3928c968a5d11e870360.mp4,1
val_label.txt
video_name,target
f859cb3510c69513d5c57c6934bc9968.mp4,0
50ae26b3f3ea85babb2f9dde840830e2.mp4,1
Each line of the submission file contains two comma-separated parts. The first is the video file name, and the second is the model's predicted deepfake score (i.e., the predicted probability that the sample is a deepfake video). A sample submission is shown below:
prediction.csv
video_name,score
658042526e6d0c199adc7bfeb1f7c888.mp4,0.123456
a20cf2d7dea580d0affc4d85c9932479.mp4,0.123456
The second phase follows the first: a public test set is released. Participants must submit their predictions on the test set to the competition system, and can view their test scores online in real time.
After the second phase, the top 30 teams advance to the third phase. In this phase, contestants must submit a Docker image of their code together with a technical report. The Docker requirements include the original training code and a test API (a function whose input is an image path and whose output is the model's predicted deepfake score). The organizers will review and rerun the algorithm code to reproduce the training process and experimental results.
Only a single model may be submitted, and its effective network parameters must not exceed 200M (the thop tool is used to count model parameters).
Only models pre-trained on ImageNet1K are allowed. Extended samples generated from the publicly released training set (via augmentation tools / deepfake generation) may be used for training, but those tools must be submitted in the third phase so the results can be reproduced.
The evaluation metric is primarily the AUC (area under the ROC curve), which should lie between 0.5 and 1; otherwise we consider it a poor machine learning model. The closer the AUC is to 1, the better the model. If the AUC yields an ambiguous ranking, the TPR (true positive rate) is used as an auxiliary metric; the FPR (false positive rate) is handled the same way.
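As a sanity check on what AUC measures, here is a minimal sketch of its pairwise-ranking definition (in practice you would call sklearn.metrics.roc_auc_score; the toy labels and scores below are made up for illustration):

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC = probability that a random positive scores higher than a random negative (ties count half)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]  # scores of positive samples
    neg = y_score[y_true == 0]  # scores of negative samples
    wins = (pos[:, None] > neg[None, :]).sum()   # positive ranked above negative
    ties = (pos[:, None] == neg[None, :]).sum()  # equal scores count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC of 0.5 corresponds to random ranking; 1.0 means every positive outranks every negative.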
The F1-Score can also be used as a reference metric: it is the harmonic mean of precision and recall.
F1_Score = 2*TP / (2*TP + FN + FP)
Before getting into machine learning, we should review two important concepts: precision and recall.
Precision: Precision = TP / (TP + FP), which measures how reliable the model's positive calls are: the fraction of samples predicted as positive that actually are positive.
Recall: Recall = TP / (TP + FN), which measures how completely the model retrieves positives: the fraction of actually positive samples that are predicted as positive.
True positive rate (TPR):
TPR = TP / (TP + FN)
False positive rate (FPR):
FPR = FP / (FP + TN)
where:
TP: an attack (fake) sample correctly labeled as an attack;
TN: a real sample correctly labeled as real;
FP: a real sample wrongly labeled as an attack;
FN: an attack sample wrongly labeled as real.
(Definitions adapted from: Aghajan, H., Augusto, J. C., & Delgado, R. L. C. (Eds.), whole book.)
Here is my implementation of the TPR calculation:
l1 = [0, 1, 1, 1, 0, 0, 0, 1]
l2 = [0, 1, 0, 1, 0, 1, 0, 0]

def accuracy(y_true, y_pred):
    # initialize a simple counter for correct predictions
    correct_counter = 0
    # iterate over all elements of y_true and y_pred
    # zip takes multiple iterables and yields paired tuples
    for yt, yp in zip(y_true, y_pred):
        if yt == yp:
            # increment the counter when the predicted label matches the true label
            correct_counter += 1
    # return the accuracy: correct labels / total labels
    return correct_counter / len(y_true)

def false_positive(y_true, y_pred):
    # initialize the false positive counter
    fp = 0
    # iterate over all elements of y_true and y_pred
    for yt, yp in zip(y_true, y_pred):
        # true label is negative but prediction is positive
        if yt == 0 and yp == 1:
            fp += 1
    return fp

def false_negative(y_true, y_pred):
    # initialize the false negative counter
    fn = 0
    for yt, yp in zip(y_true, y_pred):
        # true label is positive but prediction is negative
        if yt == 1 and yp == 0:
            fn += 1
    return fn

def true_positive(y_true, y_pred):
    # initialize the true positive counter
    tp = 0
    for yt, yp in zip(y_true, y_pred):
        # both the true label and the prediction are positive
        if yt == 1 and yp == 1:
            tp += 1
    return tp

def true_negative(y_true, y_pred):
    # initialize the true negative counter
    tn = 0
    for yt, yp in zip(y_true, y_pred):
        # both the true label and the prediction are negative
        if yt == 0 and yp == 0:
            tn += 1
    # return the number of true negatives
    return tn

# you can try a better way to compute accuracy
def accuracy_v2(y_true, y_pred):
    # number of true positives
    tp = true_positive(y_true, y_pred)
    # number of false positives
    fp = false_positive(y_true, y_pred)
    # number of false negatives
    fn = false_negative(y_true, y_pred)
    # number of true negatives
    tn = true_negative(y_true, y_pred)
    # accuracy = correct predictions / all predictions
    accuracy_score = (tp + tn) / (tp + tn + fp + fn)
    return accuracy_score
# precision and recall helpers, built from the counters above
def precision(y_true, y_pred):
    tp = true_positive(y_true, y_pred)
    fp = false_positive(y_true, y_pred)
    return tp / (tp + fp)

def recall(y_true, y_pred):
    tp = true_positive(y_true, y_pred)
    fn = false_negative(y_true, y_pred)
    return tp / (tp + fn)

# how to compute the F1-score
def f1(y_true, y_pred):
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    score = 2 * p * r / (p + r)
    return score
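Applying these counts to the sample lists l1 and l2 gives the TPR and FPR directly. The sketch below is standalone, using generator expressions in place of the helper functions:

```python
l1 = [0, 1, 1, 1, 0, 0, 0, 1]  # ground-truth labels (same example as above)
l2 = [0, 1, 0, 1, 0, 1, 0, 0]  # predicted labels

tp = sum(1 for yt, yp in zip(l1, l2) if yt == 1 and yp == 1)  # 2
fn = sum(1 for yt, yp in zip(l1, l2) if yt == 1 and yp == 0)  # 2
fp = sum(1 for yt, yp in zip(l1, l2) if yt == 0 and yp == 1)  # 1
tn = sum(1 for yt, yp in zip(l1, l2) if yt == 0 and yp == 0)  # 3

tpr = tp / (tp + fn)
fpr = fp / (fp + tn)
print(tpr, fpr)  # 0.5 0.25
```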
If a hard class label is required, a threshold is needed. Its relationship to the predicted value is:
Prediction = Probability > Threshold
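For example, with an assumed cut-off of 0.5 (the competition itself expects raw scores, so thresholding is only needed when hard labels are wanted; the scores below are made up):

```python
import numpy as np

scores = np.array([0.12, 0.55, 0.49, 0.91])  # hypothetical deepfake scores
threshold = 0.5
labels = (scores > threshold).astype(int)    # Prediction = Probability > Threshold
print(labels)  # [0 1 0 1]
```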
After AUC, another important metric to learn is log loss. For a binary classification problem, we define log loss as:
LogLoss = -target * log(p) - (1 - target) * log(1 - p)
Here target is 0 or 1, and p is the predicted probability that the sample belongs to class 1. Log loss heavily penalizes predictions that are both confident and wrong. The lower the log loss, the closer the model's predicted probabilities are to the target values.
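A minimal NumPy sketch of this formula (the eps clipping constant is my own guard to keep log() finite; it is not part of the definition):

```python
import numpy as np

def log_loss(targets, probs, eps=1e-15):
    # clip probabilities away from 0 and 1 so log() stays finite
    targets = np.asarray(targets, dtype=float)
    probs = np.clip(np.asarray(probs, dtype=float), eps, 1 - eps)
    # LogLoss = -target*log(p) - (1-target)*log(1-p), averaged over samples
    return float(np.mean(-targets * np.log(probs) - (1 - targets) * np.log(1 - probs)))

print(round(log_loss([1, 0], [0.9, 0.1]), 4))  # 0.1054 - confident and right: small loss
print(round(log_loss([1, 0], [0.1, 0.9]), 4))  # 2.3026 - confident and wrong: large loss
```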
These metrics can also be used in other classification problems.
# "wc" is short for "word count", a Unix command used for counting; the -l flag counts lines only
!wc -l /kaggle/input/ffdv-sample-dataset/ffdv_phase1_sample/train_label.txt
!wc -l /kaggle/input/ffdv-sample-dataset/ffdv_phase1_sample/val_label.txt
We only need to count the number of lines, which gives the number of samples.
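The same count can be done in Python. The sketch below uses a throwaway file, since the Kaggle paths above only exist inside the competition notebook:

```python
import os
import tempfile

def count_lines(path):
    # Python equivalent of `wc -l`
    with open(path) as f:
        return sum(1 for _ in f)

# demo on a throwaway label file (the real ones live under /kaggle/input/...)
tmp = tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False)
tmp.write("video_name,target\na.mp4,0\nb.mp4,1\n")
tmp.close()
print(count_lines(tmp.name))  # 3 -> 1 header line + 2 samples
os.unlink(tmp.name)
```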
from IPython.display import Video
Video("/kaggle/input/ffdv-sample-dataset/ffdv_phase1_sample/valset/00882a2832edbcab1d3dfc4cc62cfbb9.mp4", embed=True)
Video(...) creates a video object; with embed=True, a video player is rendered directly in the cell output.
After running this on the Kaggle baseline, you will see the result.
!pip install moviepy librosa matplotlib numpy timm
Documentation links for the libraries used:
moviepy
librosa (a powerful third-party Python library for audio signal processing; in this baseline it is mainly used to generate mel spectrograms and perform spectrogram conversion)
matplotlib
numpy
timm (an image classification model library for quickly building various SOTA models)
What is SOTA? SOTA stands for state of the art, and refers to the best model for a given task: the one with the highest score on some benchmark dataset.
Non-end-to-end model (pipeline): first, understand what an "end" is. Traditional machine learning consists of a chain of multiple modules that are independent of one another.
End-to-end model: the prediction is generated directly from the input end to the output end. The prediction's error relative to the ground truth (remember that the core job of machine learning is still prediction) is back-propagated to each layer of the neural network, and the model's weights and parameters are adjusted until the model converges or produces the result we expect. From a control-theory perspective, this is a closed-loop control system. (e.g., a BP neural network)
Sequence to sequence (seq2seq): a general end-to-end method for sequence prediction, structured as an encoder plus a decoder. If you encode/decode question-and-answer data, you can build a question-answering bot.
Back to the baseline itself: what is a baseline? It usually refers to a simple, easy-to-implement reference model.
During algorithm iteration and parameter tuning, the baseline's role is to serve as a point of comparison, so the model keeps getting better.
Benchmark is also an important concept: it usually refers to a standard for evaluating and comparing algorithms, models, or methods, used to measure differences between models.
You can often see benchmarks on model websites, for example Si Nan.
When I ran the baseline, I ran into a CUDA configuration issue, so the following setup is applied first:
import torch
# set PyTorch's random seed
torch.manual_seed(0)
# with deterministic set to False, cuDNN may apply non-deterministic optimizations to some operations
torch.backends.cudnn.deterministic = False
# benchmark=True lets cuDNN search each forward pass for the convolution algorithm best suited to the current configuration, improving performance
torch.backends.cudnn.benchmark = True
# import the required libraries; we will need cv2, glob, os, and PIL
import torchvision.models as models
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data.dataset import Dataset
import timm
import time
import pandas as pd
import numpy as np
import cv2, glob, os
from PIL import Image
The parameters accepted by generate_mel_spectrogram are the path of the video file, the number of mel filter banks used to divide the mel frequency scale, the maximum frequency (controlling the computed spectral range), and the target image size.
import moviepy.editor as mp
import librosa
import numpy as np
import cv2

def generate_mel_spectrogram(video_path, n_mels=128, fmax=8000, target_size=(256, 256)):
    # extract the audio track
    audio_path = 'extracted_audio.wav'
    # video_path holds the path of the video file to process; create a VideoFileClip object from it
    video = mp.VideoFileClip(video_path)
    # video.audio accesses the video's audio track, and write_audiofile() writes it to a file.
    # verbose=False: do not print progress to the console. logger=None: do not use a logger.
    # We have no need for either during prediction, so we save the overhead.
    # Default signature: write_audiofile(self, filename, fps=None, nbytes=2, buffersize=2000, codec=None, bitrate=None, ffmpeg_params=None, write_logfile=False, verbose=True, logger='bar')
    video.audio.write_audiofile(audio_path, verbose=False, logger=None)
    # load the audio file along with its sampling rate
    y, sr = librosa.load(audio_path)
    # generate the mel spectrogram (as opposed to the mel cepstrum)
    # Default signature: librosa.feature.melspectrogram(y=None, sr=22050, S=None, n_fft=2048, hop_length=512, power=2.0, **kwargs)
    # Parameters: y: audio time series; sr: sampling rate; n_mels: the number of mel filters the spectrum is
    # divided into, which determines the resolution (effectively the height) of the resulting mel spectrogram.
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    # convert the spectrogram to dB. S: input power; ref: the reference value - if it is a scalar, amplitudes
    # abs(S) are scaled as 10 * log10(S / ref). Here np.max uses the spectrogram's maximum as the reference,
    # which is a common choice.
    S_dB = librosa.power_to_db(S, ref=np.max)
    # normalize to the 0-255 range. NORM_MINMAX: values are linearly shifted/scaled into the given range.
    S_dB_normalized = cv2.normalize(S_dB, None, 0, 255, cv2.NORM_MINMAX)
    # convert floats to unsigned 8-bit integers
    S_dB_normalized = S_dB_normalized.astype(np.uint8)
    # resize to the target size (256, 256)
    img_resized = cv2.resize(S_dB_normalized, target_size, interpolation=cv2.INTER_LINEAR)
    return img_resized

# usage example
video_path = '/kaggle/input/ffdv-sample-dataset/ffdv_phase1_sample/trainset/001b0680999447348bc9f89efce0f183.mp4'  # replace with your video file path
mel_spectrogram_image = generate_mel_spectrogram(video_path)
!mkdir ffdv_phase1_sample
!mkdir ffdv_phase1_sample/trainset
!mkdir ffdv_phase1_sample/valset
The dataset is too large, so no picture is shown here. But here is what a mel spectrogram looks like under normal circumstances:
Image source: Simon Fraser University
If you convert it back to audio and listen, it is a clip whose volume gradually fades out.
# use glob.glob to find the paths of the first 400 .mp4 video files in the trainset directory
for video_path in glob.glob('/kaggle/input/ffdv-sample-dataset/ffdv_phase1_sample/trainset/*.mp4')[:400]:
    # a. call generate_mel_spectrogram(video_path) and store the result in mel_spectrogram_image.
    # b. save the mel spectrogram as a JPEG with cv2.imwrite under ./ffdv_phase1_sample/trainset/,
    #    keeping the original video file name but changing the extension to .jpg.
    mel_spectrogram_image = generate_mel_spectrogram(video_path)
    cv2.imwrite('./ffdv_phase1_sample/trainset/' + video_path.split('/')[-1][:-4] + '.jpg', mel_spectrogram_image)

for video_path in glob.glob('/kaggle/input/ffdv-sample-dataset/ffdv_phase1_sample/valset/*.mp4'):
    mel_spectrogram_image = generate_mel_spectrogram(video_path)
    cv2.imwrite('./ffdv_phase1_sample/valset/' + video_path.split('/')[-1][:-4] + '.jpg', mel_spectrogram_image)
The AverageMeter class is used to compute and store the running average and the current value of a variable.
class AverageMeter(object):
    """Computes and stores the average and current value"""
    def __init__(self, name, fmt=':f'):
        self.name = name
        self.fmt = fmt
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

    def __str__(self):
        fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
        return fmtstr.format(**self.__dict__)
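A short usage example shows how the meter weights updates by batch size (the class is restated compactly here so the snippet runs standalone; the loss values are made up):

```python
class AverageMeter:
    """Compact restatement of the AverageMeter class above."""
    def __init__(self, name, fmt=':f'):
        self.name, self.fmt = name, fmt
        self.val = self.avg = self.sum = self.count = 0

    def update(self, val, n=1):
        self.val = val            # most recent value
        self.sum += val * n       # weighted by batch size n
        self.count += n
        self.avg = self.sum / self.count

    def __str__(self):
        return ('{name} {val' + self.fmt + '} ({avg' + self.fmt + '})').format(**self.__dict__)

meter = AverageMeter('Loss', ':.4f')
meter.update(0.8, n=10)  # batch of 10 with mean loss 0.8
meter.update(0.4, n=30)  # batch of 30 with mean loss 0.4
print(meter.avg)  # weighted average: (0.8*10 + 0.4*30) / 40 = 0.5
print(meter)      # Loss 0.4000 (0.5000) -> latest value, then running average
```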
The ProgressMeter class is used to print per-batch progress and statistics during training.
class ProgressMeter(object):
    def __init__(self, num_batches, *meters):
        self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
        self.meters = meters
        self.prefix = ""

    def pr2int(self, batch):
        entries = [self.prefix + self.batch_fmtstr.format(batch)]
        entries += [str(meter) for meter in self.meters]
        print('\t'.join(entries))

    def _get_batch_fmtstr(self, num_batches):
        num_digits = len(str(num_batches // 1))
        fmt = '{:' + str(num_digits) + 'd}'
        return '[' + fmt + '/' + fmt.format(num_batches) + ']'
The validate function periodically evaluates the model on the validation set during training, computing and printing the Top-1 accuracy.
def validate(val_loader, model, criterion):
    batch_time = AverageMeter('Time', ':6.3f')  # batch processing time
    losses = AverageMeter('Loss', ':.4e')       # loss
    top1 = AverageMeter('Acc@1', ':6.2f')       # Top-1 accuracy
    progress = ProgressMeter(len(val_loader), batch_time, losses, top1)  # progress printer

    # switch to evaluate mode: eval() disables training-specific layers (such as Dropout)
    # and makes Batch Normalization use its running statistics
    model.eval()

    with torch.no_grad():  # disable gradient computation to save memory
        end = time.time()
        for i, (input, target) in enumerate(val_loader):
            input = input.cuda()   # move input and target data to the GPU
            target = target.cuda()

            # compute output
            output = model(input)
            loss = criterion(output, target)  # compute the validation loss

            # measure accuracy and record loss; acc is expressed as a percentage
            acc = (output.argmax(1).view(-1) == target.float().view(-1)).float().mean() * 100
            losses.update(loss.item(), input.size(0))
            top1.update(acc, input.size(0))

            # measure elapsed time
            batch_time.update(time.time() - end)
            end = time.time()

        # TODO: this should also be done with the ProgressMeter
        print(' * Acc@1 {top1.avg:.3f}'.format(top1=top1))
        return top1
The predict function makes predictions on the test set and uses test-time augmentation (TTA) to improve the robustness of the model's predictions by predicting multiple times and aggregating the results.
def predict(test_loader, model, tta=10):
    # switch to evaluate mode
    model.eval()

    # TTA (Test Time Augmentation)
    test_pred_tta = None
    for _ in range(tta):  # run the loop tta times; each pass can see slightly different input data
        test_pred = []
        with torch.no_grad():
            end = time.time()
            for i, (input, target) in enumerate(test_loader):
                input = input.cuda()
                target = target.cuda()

                # compute output
                output = model(input)
                output = F.softmax(output, dim=1)  # softmax-normalize the outputs into class probabilities
                output = output.data.cpu().numpy()
                test_pred.append(output)

        test_pred = np.vstack(test_pred)
        # accumulate the predictions across TTA passes
        if test_pred_tta is None:
            test_pred_tta = test_pred
        else:
            test_pred_tta += test_pred

    return test_pred_tta
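Note that predict returns the sum over the tta passes; dividing by tta recovers mean probabilities (the sum preserves the ranking, so AUC is unaffected either way). A toy sketch with made-up softmax outputs:

```python
import numpy as np

# two hypothetical TTA passes over the same 2-sample batch (softmax outputs)
pass_1 = np.array([[0.2, 0.8], [0.6, 0.4]])
pass_2 = np.array([[0.3, 0.7], [0.5, 0.5]])

tta_sum = pass_1 + pass_2  # what predict() accumulates
tta_avg = tta_sum / 2      # divide by tta to get mean class probabilities
print(tta_avg[:, 1])       # per-sample P(class 1): [0.75 0.45]
```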
The train function is responsible for training the model: it updates the model parameters by computing the loss and accuracy and performing the back-propagation and optimization steps.
def train(train_loader, model, criterion, optimizer, epoch):
    batch_time = AverageMeter('Time', ':6.3f')
    losses = AverageMeter('Loss', ':.4e')
    top1 = AverageMeter('Acc@1', ':6.2f')
    progress = ProgressMeter(len(train_loader), batch_time, losses, top1)

    # switch to train mode
    model.train()

    end = time.time()
    for i, (input, target) in enumerate(train_loader):
        input = input.cuda(non_blocking=True)
        target = target.cuda(non_blocking=True)

        # compute output
        output = model(input)
        loss = criterion(output, target)

        # measure accuracy and record loss
        losses.update(loss.item(), input.size(0))
        acc = (output.argmax(1).view(-1) == target.float().view(-1)).float().mean() * 100
        top1.update(acc, input.size(0))  # update the top1 meter with the current batch accuracy

        # compute gradient and do SGD step
        optimizer.zero_grad()  # clear previously accumulated gradients
        loss.backward()        # compute gradients of the loss w.r.t. the model parameters
        optimizer.step()       # update the parameters using the gradients from backward()

        # measure elapsed time
        batch_time.update(time.time() - end)  # record the processing time of the current batch
        end = time.time()

        if i % 100 == 0:
            progress.pr2int(i)
train_label = pd.read_csv("/kaggle/input/ffdv-sample-dataset/ffdv_phase1_sample/train_label.txt")
val_label = pd.read_csv("/kaggle/input/ffdv-sample-dataset/ffdv_phase1_sample/val_label.txt")
train_label['path'] = '/kaggle/working/ffdv_phase1_sample/trainset/' + train_label['video_name'].apply(lambda x: x[:-4] + '.jpg')
val_label['path'] = '/kaggle/working/ffdv_phase1_sample/valset/' + val_label['video_name'].apply(lambda x: x[:-4] + '.jpg')
train_label = train_label[train_label['path'].apply(os.path.exists)]
val_label = val_label[val_label['path'].apply(os.path.exists)]
The transform parameter is reserved for optional data augmentation and defaults to None.
Images are converted to RGB mode.
Labels are returned as torch.Tensor.
class FFDIDataset(Dataset):
    def __init__(self, img_path, img_label, transform=None):
        self.img_path = img_path
        self.img_label = img_label
        if transform is not None:
            self.transform = transform
        else:
            self.transform = None

    def __getitem__(self, index):
        img = Image.open(self.img_path[index]).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        return img, torch.from_numpy(np.array(self.img_label[index]))

    def __len__(self):
        return len(self.img_path)
The data loaders below use the FFDIDataset class defined above.
train_loader = torch.utils.data.DataLoader(
    FFDIDataset(train_label['path'].values, train_label['target'].values,
        transforms.Compose([
            transforms.Resize((256, 256)),
            transforms.RandomHorizontalFlip(),
            transforms.RandomVerticalFlip(),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
    ), batch_size=40, shuffle=True, num_workers=12, pin_memory=True
)

val_loader = torch.utils.data.DataLoader(
    FFDIDataset(val_label['path'].values, val_label['target'].values,
        transforms.Compose([
            transforms.Resize((256, 256)),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
    ), batch_size=40, shuffle=False, num_workers=10, pin_memory=True
)
# Key point: create the resnet18 model provided by timm with 2 classes (real video / fake video).
# This can be improved later, e.g. by switching to a deeper network such as ResNet-34, ResNet-50, or another variant
model = timm.create_model('resnet18', pretrained=True, num_classes=2)
model = model.cuda()

# cross-entropy loss (works for multi-class problems)
criterion = nn.CrossEntropyLoss().cuda()
# Adam optimizer with a learning rate of 0.003
optimizer = torch.optim.Adam(model.parameters(), 0.003)
# decay the learning rate by a factor of 0.85 every 4 epochs
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.85)
# initialize the best accuracy
best_acc = 0.0
for epoch in range(10):
    scheduler.step()
    print('Epoch: ', epoch)

    # call the train function
    train(train_loader, model, criterion, optimizer, epoch)
    # call the validate function
    val_acc = validate(val_loader, model, criterion)

    if val_acc.avg.item() > best_acc:
        best_acc = round(val_acc.avg.item(), 2)
        torch.save(model.state_dict(), f'./model_{best_acc}.pt')
Output:
Epoch: 0
[ 0/10] Time 6.482 ( 6.482) Loss 7.1626e-01 (7.1626e-01) Acc@1 35.00 ( 35.00)
* Acc@1 64.000
Epoch: 1
[ 0/10] Time 0.819 ( 0.819) Loss 4.6079e-01 (4.6079e-01) Acc@1 80.00 ( 80.00)
* Acc@1 75.500
Epoch: 2
[ 0/10] Time 0.914 ( 0.914) Loss 1.4983e-01 (1.4983e-01) Acc@1 97.50 ( 97.50)
* Acc@1 88.500
Epoch: 3
[ 0/10] Time 0.884 ( 0.884) Loss 2.4681e-01 (2.4681e-01) Acc@1 87.50 ( 87.50)
* Acc@1 84.000
Epoch: 4
[ 0/10] Time 0.854 ( 0.854) Loss 5.3736e-02 (5.3736e-02) Acc@1 100.00 (100.00)
* Acc@1 90.500
Epoch: 5
[ 0/10] Time 0.849 ( 0.849) Loss 5.9881e-02 (5.9881e-02) Acc@1 97.50 ( 97.50)
* Acc@1 89.500
Epoch: 6
[ 0/10] Time 0.715 ( 0.715) Loss 1.6215e-01 (1.6215e-01) Acc@1 92.50 ( 92.50)
* Acc@1 65.000
Epoch: 7
[ 0/10] Time 0.652 ( 0.652) Loss 5.3892e-01 (5.3892e-01) Acc@1 80.00 ( 80.00)
* Acc@1 78.500
Epoch: 8
[ 0/10] Time 0.847 ( 0.847) Loss 6.6098e-02 (6.6098e-02) Acc@1 97.50 ( 97.50)
* Acc@1 81.000
Epoch: 9
[ 0/10] Time 0.844 ( 0.844) Loss 9.4254e-02 (9.4254e-02) Acc@1 97.50 ( 97.50)
* Acc@1 81.500
Deeper networks: If you need stronger performance and more complex feature extraction, consider deeper networks such as ResNet-34, ResNet-50, or even larger ResNet variants (e.g., ResNet-101 or ResNet-152).
Other pre-trained models: Besides the ResNet family, timm provides many other pre-trained models, for example:
EfficientNet: a model with excellent performance and efficiency.
DenseNet: its densely connected network structure helps reuse features more effectively.
VGG series: simple, classic architectures, suitable for use under resource constraints.
Custom models: Depending on the specific characteristics of the data and the task requirements, you can design and train your own model architecture, which may require more debugging and experimentation.
Ensemble learning: Consider ensemble methods such as bagging or boosting to combine the predictions of multiple models for better performance and stability.
Hyperparameter tuning: Beyond model choice, performance can also be optimized by adjusting the learning rate, batch size, optimizer choice, and data augmentation strategy.
Consider applying Dice Loss as a better loss function in the future; it performs well for pixel-level prediction.
Also keep an eye on Focal Loss. It is designed specifically for class imbalance: by down-weighting easy examples and focusing on hard ones, it can improve performance on minority classes.
RAdam is an improvement on Adam that provides better stability and performance by adaptively correcting the learning rate.
AdamW is a variant of Adam that introduces weight decay to address performance issues Adam can exhibit in some cases, especially when the model has a large number of parameters.
# use the model to predict on the validation set (val_loader); [:, 1] takes the probability of class 1
val_pred = predict(val_loader, model, 1)[:, 1]
# assign the predicted probabilities to the "y_pred" column of the val_label dataframe
val_label["y_pred"] = val_pred

submit = pd.read_csv("/kaggle/input/multi-ffdv/prediction.txt.csv")
# merge the submission file (submit) with the video_name and y_pred columns of the validation labels (val_label)
merged_df = submit.merge(val_label[['video_name', 'y_pred']], on='video_name', suffixes=('', '_df2'), how='left')
# fill the y_pred column with the values from y_pred_df2 (the predictions obtained on the validation set)
merged_df['y_pred'] = merged_df['y_pred_df2'].combine_first(merged_df['y_pred'])
merged_df[['video_name', 'y_pred']].to_csv('submit.csv', index=None)
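The merge + combine_first pattern above can be illustrated on a hypothetical two-row submission: rows matched in the validation predictions overwrite the placeholder score, and unmatched rows keep their old value.

```python
import pandas as pd

# hypothetical miniature of the merge above
submit = pd.DataFrame({'video_name': ['a.mp4', 'b.mp4'], 'y_pred': [0.5, 0.5]})
val    = pd.DataFrame({'video_name': ['b.mp4'], 'y_pred': [0.9]})

merged = submit.merge(val, on='video_name', suffixes=('', '_df2'), how='left')
# where y_pred_df2 is not NaN (row matched in val), it replaces the old y_pred
merged['y_pred'] = merged['y_pred_df2'].combine_first(merged['y_pred'])
print(merged[['video_name', 'y_pred']])  # a.mp4 keeps 0.5, b.mp4 becomes 0.9
```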
Running the baseline takes less than 10 minutes, but exploring it patiently took 5 hours. Here is a summary of the key points:
The job of deep learning can in fact be summarized as "back-propagation", because its core is using the back-propagation algorithm to adjust model parameters so as to minimize a defined loss function.
I think deep learning is well suited to audio-visual tasks like this one. First, there is a huge amount of audio and video data, and classifying it is complex; deep learning's machinery demands massive amounts of data, and that is in fact what it needs most. Second, the task is essentially classification, where deep learning has great advantages at large data volumes and fine-grained categories.
Also, statistically similar data can be generated in large quantities, and a model can learn huge data distributions; conversely, the same machinery can perform the generation task.
AIGC encompasses Deepfake. From a development perspective, Deepfake technology will certainly grow and spread as AIGC's processing power increases. We will be facing a vast sea of real and fake data.
One of our thoughts: combined with the current craze for short dramas, Deepfake could let the public enjoy ultra-low-cost audio-visual entertainment, but it also challenges the traditional film and television industry, what it means to be an actor, and the kind of audio-visual stimulation human eyes need. This is where the controversy lies.