
Note

You are reading the documentation for MMSelfSup 0.x, which will be gradually deprecated starting at the end of 2022. We recommend upgrading to MMSelfSup 1.0.0rc in time to enjoy the new features and better performance brought by OpenMMLab 2.0. See the MMSelfSup 1.0.0rc release notes and documentation for more information.

Source code for mmselfsup.models.heads.simmim_head

# Copyright (c) OpenMMLab. All rights reserved.
import torch
from mmcv.runner import BaseModule
from torch.nn import functional as F

from ..builder import HEADS


@HEADS.register_module()
class SimMIMHead(BaseModule):
    """Pretrain Head for SimMIM.

    Args:
        patch_size (int): Patch size of each token.
        encoder_in_channels (int): Number of input channels for encoder.
    """

    def __init__(self, patch_size: int, encoder_in_channels: int) -> None:
        super(SimMIMHead, self).__init__()
        self.patch_size = patch_size
        self.encoder_in_channels = encoder_in_channels

    def forward(self, x: torch.Tensor, x_rec: torch.Tensor,
                mask: torch.Tensor) -> dict:
        losses = dict()

        # Expand the patch-level mask (B, H/p, W/p) to pixel level
        # (B, 1, H, W) so it broadcasts against the image tensors.
        mask = mask.repeat_interleave(self.patch_size, 1).repeat_interleave(
            self.patch_size, 2).unsqueeze(1).contiguous()

        # Per-pixel L1 reconstruction loss, summed over masked pixels only,
        # then normalized by the masked-pixel count and the channel count.
        loss_rec = F.l1_loss(x, x_rec, reduction='none')
        loss = (loss_rec * mask).sum() / (mask.sum() +
                                          1e-5) / self.encoder_in_channels
        losses['loss'] = loss

        return losses
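For reference, the head can be exercised on its own with random tensors. The sketch below is not part of the source above; the shapes are illustrative assumptions (a Swin-style setup with 192x192 inputs and patch_size=4, so the patch-level mask is 48x48 and gets expanded to 192x192 inside forward). The returned loss is the masked L1 distance, divided by the number of masked pixels (plus 1e-5 for numerical safety) and by the number of channels.

# Minimal usage sketch with assumed, illustrative shapes.
import torch

head = SimMIMHead(patch_size=4, encoder_in_channels=3)

x = torch.randn(2, 3, 192, 192)                  # original images
x_rec = torch.randn(2, 3, 192, 192)              # decoder reconstructions
mask = torch.randint(0, 2, (2, 48, 48)).float()  # patch-level binary mask

losses = head.forward(x, x_rec, mask)
print(losses['loss'])  # scalar masked-L1 reconstruction loss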