ai-forever/sbert_large_mt_nlu_ru
BERT large multitask model (cased) for sentence embeddings in Russian.
The model is described in this article.
Russian SuperGLUE metrics
For better quality, use mean token embeddings.
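Mean pooling averages only the real-token vectors and skips padding positions, which is why the attention mask matters. A minimal sketch with made-up tensors (not actual model output):

import torch

# Toy token embeddings of dimension 2; the second position is padding.
token_embeddings = torch.tensor([[[1.0, 2.0], [9.0, 9.0]]])  # (batch=1, seq=2, dim=2)
attention_mask = torch.tensor([[1, 0]])                      # 1 = real token, 0 = padding

mask = attention_mask.unsqueeze(-1).float()                  # broadcast the mask over the embedding dim
pooled = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(pooled)  # tensor([[1., 2.]]); the padding vector is excluded from the mean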
Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
from transformers import AutoTokenizer, AutoModel
import torch
# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    return sum_embeddings / sum_mask
# Sentences we want sentence embeddings for
sentences = ['Привет! Как твои дела?',
             'А правда, что 42 твое любимое число?']
# Load the tokenizer and model from the Hugging Face model repository
tokenizer = AutoTokenizer.from_pretrained("ai-forever/sbert_large_mt_nlu_ru")
model = AutoModel.from_pretrained("ai-forever/sbert_large_mt_nlu_ru")
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=24, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling; in this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
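For a BERT-large encoder, the resulting sentence_embeddings tensor should have shape (2, 1024). As an illustrative follow-up not present in the original card, the two sentence vectors can be compared with cosine similarity:

import torch.nn.functional as F

# L2-normalize the embeddings, then take the dot product of the two sentence vectors.
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_similarity = float(normalized[0] @ normalized[1])
print(cosine_similarity)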
Authors
- SberDevices Team.
- Aleksandr Abramov: GitHub, Kaggle Competitions Master;
- Denis Antykhov: GitHub.