🤗 AutoTrain is a no-code tool for training state-of-the-art models for natural language processing (NLP) tasks, computer vision (CV) tasks, speech tasks, and even tabular tasks. It is built on the excellent tools developed by the Hugging Face team and is designed to be easy to use.
模型描述 Vision Transformer...
Vision Transformer (base-si...
DiffCSE: Difference-based C...
Model card for CLAP Model...
Releasing Hindi ELECTRA mod...
DistilBert for Dense Passag...
SimLM: Pre-training with Re...
dpr-question_encoder-single...
Erlangshen-SimCSE-110M-Chin...
rubert-base-cased-conversat...
Overview Language model: ...
https://github.com/BM-K/Sen...
KoBART-base-v1 from trans...
DRAGON+ is a BERT-base size...
Motivation This model is ...
SEW-tiny SEW by ASAPP Res...
BART (large-sized model) ...
SciNCL SciNCL is a pre-tr...
X-CLIP (base-sized model) ...
This is a Japanese sentence...
IndoBERT Base Model (phase2...
WavLM-Base-Plus Microsoft...
E5-small Text Embeddings ...
SPECTER 2.0 SPECTER 2.0 i...
Model Card for sup-simcse-r...
This is a copy of the origi...
CODER: Knowledge infused cr...
Please refer here. https://...
BART is used in the paper...
dpr-ctx_encoder-bert-base-m...
LiLT + XLM-RoBERTa-base T...
bert-base-cased-conversatio...
The Dense Prediction Transformer (DPT) model was trained on 1.4 million images for monocular depth estimation. It was introduced in the 2021 paper "Vision Transformers for Dense Prediction" by Ranftl et al. and first released in this repository. DPT uses a Vision Transformer (ViT) as its backbone and adds a neck + head on top for monocular depth estimation.
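As a hedged illustration of that architecture in use, the sketch below runs monocular depth estimation through the `transformers` DPT classes; the `Intel/dpt-large` checkpoint and the COCO sample image are assumptions for the example, not details taken from the snippet above.

```python
# Minimal sketch: monocular depth estimation with DPT via transformers.
# Assumed checkpoint "Intel/dpt-large"; any DPT depth checkpoint should work.
import torch
import requests
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation

processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed sample image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    depth = model(**inputs).predicted_depth  # relative inverse depth map, shape (1, H, W)
```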
Model Details: DPT-Large ...
Model Details: DPT-Hybrid ...
GLPN fine-tuned on KITTI ...
⚠️ This model is deprecated...
all-MiniLM-L6-v2 This is ...
GLPN fine-tuned on NYUv2 ...
glpn-nyu-finetuned-diode-22...
glpn-kitti-finetuned-diode ...
glpn-nyu-finetuned-diode-23...
MiniLM: 6 Layer Version T...
This is the General_TinyBER...
Test model To test this m...
language: multilingual tags...
glpn-nyu-finetuned-diode ...
glpn-kitti-finetuned-diode-...
Non Factoid Question Catego...
A ... built on the Vicuna13B base...
Emotion English DistilRoBER...
DistilBERT base uncased fin...
DeBERTa: Decoding-enhanced ...
Twitter-roBERTa-base for Se...
Cross-Encoder for MS Marco ...
xlm-roberta-base-language-d...
bert-base-multilingual-unca...
Twitter-roBERTa-base for Em...
roberta-large-mnli Tab...
CodeBERT fine-tuned for Ins...
Distilbert-base-uncased-emo...
distilbert-imdb This mode...
Model description This mo...
FinBERT is a pre-trained NL...
RoBERTa Base OpenAI Detecto...
Parrot THIS IS AN ANCILLARY...
Sentiment Analysis in Spani...
BERT codemixed base model f...
German Sentiment Classifica...
Model Trained Using AutoNLP...
FinBERT is a BERT model pre...
SiEBERT - English-Language ...
distilbert-base-uncased-go-...
Fine-tuned DistilRoBERTa-ba...
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: masked language modeling (MLM) and next sentence prediction (NSP).
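The MLM objective can be exercised directly. The following minimal sketch queries a BERT checkpoint through the `transformers` fill-mask pipeline; the `bert-base-uncased` checkpoint name is an assumption matching the uncased base model described here.

```python
# Minimal sketch: probing BERT's masked language modeling (MLM) objective.
# Assumes the "bert-base-uncased" checkpoint; swap in the checkpoint you need.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for pred in unmasker("Hello, I'm a [MASK] model."):
    print(pred["token_str"], round(pred["score"], 3))
```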
BERT base model (uncased) ...
twitter-XLM-roBERTa-base fo...
tts_transformer-zh-cv7_css1...
ESPnet2 TTS model laka...
ESPnet2 TTS model mio/...
The original project link is as follows: mio/amad...
This model was trained by kan-bayashi using the ljspeech/tts1 recipe in espnet.
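For context, ESPnet2 TTS checkpoints like this one are typically loaded through ESPnet2's `Text2Speech` inference API. The sketch below is a minimal, hedged example; the model tag is a placeholder, not this model's actual repo id.

```python
# Minimal sketch: ESPnet2 TTS inference. The model tag below is a placeholder
# (assumption) — replace it with the actual Hugging Face repo id of the model.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("espnet/placeholder_ljspeech_tts1")  # hypothetical tag
out = tts("Hello world")
sf.write("out.wav", out["wav"].numpy(), tts.fs)
```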
fastspeech2-en-200_speaker-...
Text-to-Speech (TTS) with T...
ESPnet2 TTS pretrained mode...
tts_transformer-ar-cv7 Tr...
tts_transformer-fr-cv7_css1...
SpeechT5 (TTS task) Speec...
unit_hifigan_mhubert_vp_en_...
unit_hifigan_HK_layer12.km2...
license: cc-by-4.0
Example ESPnet2 TTS model ...
fastspeech2-en-ljspeech F...
ESPnet JETS Text-to-Speech ...
Vocoder with HiFIGAN traine...
tts_transformer-ru-cv7_css1...
tts_transformer-es-css10 ...
Wine Quality classification...
Model description This re...
Model Trained Using AutoTra...
Model Description Kera...
Flowformer Automatic dete...
Model description [More I...
TensorFlow's Gradient Boost...
Load the data from datase...
Keras Implementation of Str...
How to use import joblib...
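Filling out that truncated snippet under stated assumptions: tabular model repos of this kind commonly ship a scikit-learn estimator serialized with joblib and load it roughly as below; the repo id and filename here are hypothetical placeholders.

```python
# Minimal sketch: loading a joblib-serialized scikit-learn model from the Hub.
# The repo id and filename are hypothetical — use the values from the model card.
import joblib
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="user/tabular-model", filename="sklearn_model.joblib")
model = joblib.load(path)
# model.predict(...) then expects features in the training schema.
```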
Titanic (Survived/Not Survi...
Decision Transformer model ...
PPO Agent playing CartPole-...
poca Agent playing SoccerTw...
PPO Agent playing LunarLand...
PPO Agent playing Pendulum-...
RL Zoo is a training framework for Stable Baselines3...
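As a hedged illustration of the Stable Baselines3 API these agents are built on, here is a minimal PPO training sketch; the environment choice and timestep budget are assumptions, not RL Zoo's tuned settings.

```python
# Minimal sketch: training and querying a PPO agent with Stable Baselines3.
# Hyperparameters are illustrative, not the tuned values RL Zoo would use.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```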
PPO Agent playing seals/Mou...
DQN Agent playing LunarLand...
PPO Agent playing PongNoFra...
PPO Agent playing BreakoutN...
DQN Agent playing CartPole-...