AutoTrain is a no-code tool for training state-of-the-art models for Natural Language Processing (NLP), Computer Vision (CV), Speech, and even Tabular tasks. It is built on top of the excellent tools developed by the Hugging Face team and is designed to be easy to use.
模型描述 Vision Transformer...
Vision Transformer (base-si...
DiffCSE: Difference-based C...
Model card for CLAP Model...
SimLM: Pre-training with Re...
Releasing Hindi ELECTRA mod...
Erlangshen-SimCSE-110M-Chin...
DistilBert for Dense Passag...
dpr-question_encoder-single...
Overview Language model: ...
rubert-base-cased-conversat...
https://github.com/BM-K/Sen...
DRAGON+ is a BERT-base size...
KoBART-base-v1 from trans...
SEW-tiny SEW by ASAPP Res...
Motivation This model is ...
X-CLIP (base-sized model) ...
BART (large-sized model) ...
SciNCL SciNCL is a pre-tr...
IndoBERT Base Model (phase2...
E5-small Text Embeddings ...
WavLM-Base-Plus Microsoft...
This is a Japanese sentence...
SPECTER 2.0 SPECTER 2.0 i...
Model Card for sup-simcse-r...
CODER: Knowledge infused cr...
This is a copy of the origi...
Please refer here. https://...
BART is used in the paper...
dpr-ctx_encoder-bert-base-m...
LiLT + XLM-RoBERTa-base T...
bert-base-cased-conversatio...
The Dense Prediction Transformer (DPT) model was trained on 1.4 million images for monocular depth estimation. It was introduced in the paper "Vision Transformers for Dense Prediction" by Ranftl et al. (2021) and first released in this repository. DPT uses a Vision Transformer (ViT) as its backbone and adds a neck + head on top of it for monocular depth estimation.
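To make that architecture description concrete, the following is a minimal sketch of running monocular depth estimation with a DPT checkpoint through the transformers library; the checkpoint name Intel/dpt-large, the example image path, and the post-processing resize are illustrative assumptions rather than details taken from this card.

```python
# Hedged sketch: monocular depth estimation with a DPT checkpoint.
# "Intel/dpt-large" and "example.jpg" are placeholder assumptions.
import torch
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation

processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

image = Image.open("example.jpg")                      # any RGB image
inputs = processor(images=image, return_tensors="pt")  # resize + normalize

with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth          # shape (batch, H, W)

# Interpolate the prediction back to the original image resolution.
depth = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],   # PIL size is (W, H); interpolate expects (H, W)
    mode="bicubic",
    align_corners=False,
).squeeze()
```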
Model Details: DPT-Hybrid ...
Model Details: DPT-Large ...
GLPN fine-tuned on KITTI ...
all-MiniLM-L6-v2 This is ...
⚠️ This model is deprecated...
GLPN fine-tuned on NYUv2 ...
glpn-nyu-finetuned-diode-22...
glpn-nyu-finetuned-diode-23...
glpn-kitti-finetuned-diode ...
MiniLM: 6 Layer Version T...
This is the General_TinyBER...
Test model To test this m...
language: multilingual tags...
glpn-nyu-finetuned-diode ...
glpn-kitti-finetuned-diode-...
Non Factoid Question Catego...
Emotion English DistilRoBER...
A model built on the Vicuna13B base...
DistilBERT base uncased fin...
DeBERTa: Decoding-enhanced ...
Cross-Encoder for MS Marco ...
Twitter-roBERTa-base for Se...
xlm-roberta-base-language-d...
bert-base-multilingual-unca...
Twitter-roBERTa-base for Em...
CodeBERT fine-tuned for Ins...
roberta-large-mnli Tab...
Model description This mo...
Distilbert-base-uncased-emo...
distilbert-imdb This mode...
FinBERT is a pre-trained NL...
RoBERTa Base OpenAI Detecto...
Parrot THIS IS AN ANCILLARY...
Sentiment Analysis in Spani...
BERT codemixed base model f...
FinBERT is a BERT model pre...
Model Trained Using AutoNLP...
German Sentiment Classifica...
distilbert-base-uncased-go-...
SiEBERT - English-Language ...
Fine-tuned DistilRoBERTa-ba...
BERT base model (uncased) ...
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
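One of those pretraining objectives, masked language modeling, can be seen directly with the transformers fill-mask pipeline; the checkpoint bert-base-uncased and the example sentence below are assumptions for illustration, not part of this card.

```python
# Hedged sketch: BERT's masked language modeling objective in action
# via the transformers fill-mask pipeline. "bert-base-uncased" and the
# example sentence are illustrative assumptions.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model predicts the most likely tokens for the [MASK] position.
for prediction in unmasker("Paris is the [MASK] of France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```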
twitter-XLM-roBERTa-base fo...
tts_transformer-zh-cv7_css1...
ESPnet2 TTS model laka...
ESPnet2 TTS model mio/...
This model was trained by kan-bayashi using the ljspeech/tts1 recipe in espnet.
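As a rough illustration of how an ESPnet2 TTS model trained with such a recipe can be used for inference, here is a hedged sketch; the model tag espnet/kan-bayashi_ljspeech_vits and the output filename are assumptions, not necessarily the exact checkpoint this card describes.

```python
# Hedged sketch: inference with an ESPnet2 text-to-speech model.
# The model tag below is a placeholder, not necessarily this card's checkpoint.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("espnet/kan-bayashi_ljspeech_vits")

# Synthesize speech; the returned dict holds the waveform under "wav".
wav = tts("Hello, this is a test of the text to speech system.")["wav"]
sf.write("output.wav", wav.numpy(), tts.fs)
```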
The original project link is as follows: mio/amad...
fastspeech2-en-200_speaker-...
Text-to-Speech (TTS) with T...
ESPnet2 TTS pretrained mode...
tts_transformer-ar-cv7 Tr...
license: cc-by-4.0
tts_transformer-fr-cv7_css1...
SpeechT5 (TTS task) Speec...
unit_hifigan_mhubert_vp_en_...
fastspeech2-en-ljspeech F...
unit_hifigan_HK_layer12.km2...
Example ESPnet2 TTS model ...
ESPnet JETS Text-to-Speech ...
tts_transformer-ru-cv7_css1...
Vocoder with HiFIGAN traine...
tts_transformer-es-css10 ...
Model Trained Using AutoTra...
Wine Quality classification...
Model description This re...
Model Description Kera...
Flowformer Automatic dete...
Model description [More I...
TensorFlow's Gradient Boost...
Load the data from datase...
Keras Implementation of Str...
How to use import joblib...
Titanic (Survived/Not Survi...
Decision Transformer model ...
PPO Agent playing CartPole-...
poca Agent playing SoccerTw...
PPO Agent playing LunarLand...
PPO Agent playing Pendulum-...
RL Zoo is a Stable Baselines3...
PPO Agent playing seals/Mou...
DQN Agent playing LunarLand...
PPO Agent playing BreakoutN...
PPO Agent playing PongNoFra...
DQN Agent playing CartPole-...