
Note

You are reading the documentation for MMSelfSup 0.x, which will be deprecated by the end of 2022. We recommend you upgrade to the MMSelfSup 1.0.0rc versions to enjoy the fruitful new features and better performance brought by OpenMMLab 2.0. Check out the changelog, code, and documentation of MMSelfSup 1.0.0rc for more details.

Source code for mmselfsup.models.utils.position_embedding

# Copyright (c) OpenMMLab. All rights reserved.
import torch


def build_2d_sincos_position_embedding(patches_resolution,
                                       embed_dims,
                                       temperature=10000.,
                                       cls_token=False):
    """Build a 2D sine-cosine position embedding so that the model can
    obtain the position information of the image patches."""
    # A single int means a square grid of patches.
    if isinstance(patches_resolution, int):
        patches_resolution = (patches_resolution, patches_resolution)
    h, w = patches_resolution
    # Coordinate grids over the two spatial axes of the patch layout.
    grid_w = torch.arange(w, dtype=torch.float32)
    grid_h = torch.arange(h, dtype=torch.float32)
    grid_w, grid_h = torch.meshgrid(grid_w, grid_h)
    # Four components (sin/cos for each axis) each take embed_dims // 4 dims.
    assert embed_dims % 4 == 0, \
        'Embed dimension must be divisible by 4.'
    pos_dim = embed_dims // 4
    omega = torch.arange(pos_dim, dtype=torch.float32) / pos_dim
    omega = 1. / (temperature**omega)
    # Outer product: one row of scaled frequencies per flattened grid position.
    out_w = torch.einsum('m,d->md', [grid_w.flatten(), omega])
    out_h = torch.einsum('m,d->md', [grid_h.flatten(), omega])
    pos_emb = torch.cat(
        [
            torch.sin(out_w),
            torch.cos(out_w),
            torch.sin(out_h),
            torch.cos(out_h)
        ],
        dim=1,
    )[None, :, :]
    if cls_token:
        # The class token gets an all-zero position embedding prepended.
        cls_token_pe = torch.zeros([1, 1, embed_dims], dtype=torch.float32)
        pos_emb = torch.cat([cls_token_pe, pos_emb], dim=1)
    return pos_emb
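
Since the embedding is a fixed function of the grid size and embedding width, a quick call is enough to see its shape. The snippet below is a minimal usage sketch: the 14x14 grid and 768-dim width are illustrative assumptions (a ViT-Base-style setup, e.g. a 224x224 image split into 16x16 patches), and the import path assumes the function is re-exported from mmselfsup.models.utils as the module location above suggests.

from mmselfsup.models.utils import build_2d_sincos_position_embedding

# Illustrative values only: a 14x14 patch grid and 768 embedding dims.
pos_emb = build_2d_sincos_position_embedding(
    patches_resolution=(14, 14), embed_dims=768, cls_token=True)
print(pos_emb.shape)  # torch.Size([1, 197, 768]): 196 patches + 1 cls token

Each axis contributes sin and cos features at pos_dim = embed_dims // 4 frequencies omega_d = 1 / temperature**(d / pos_dim), so concatenating the four blocks fills exactly embed_dims channels; this is why the function asserts that embed_dims is divisible by 4. Because the result contains no learned parameters, the models that consume it usually treat it as a fixed, non-trainable tensor.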