pipeline_tag
stringclasses
48 values
library_name
stringclasses
198 values
text
stringlengths
1
900k
metadata
stringlengths
2
438k
id
stringlengths
5
122
last_modified
null
tags
listlengths
1
1.84k
sha
null
created_at
stringlengths
25
25
arxiv
listlengths
0
201
languages
listlengths
0
1.83k
tags_str
stringlengths
17
9.34k
text_str
stringlengths
0
389k
text_lists
listlengths
0
722
processed_texts
listlengths
1
723
tokens_length
listlengths
1
723
input_texts
listlengths
1
1
fill-mask
transformers
# ALBERT Base v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make...
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
albert/albert-base-v1
null
[ "transformers", "pytorch", "tf", "safetensors", "albert", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1909.11942" ]
[ "en" ]
TAGS #transformers #pytorch #tf #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
ALBERT Base v1 ============== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team re...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\...
[ 69, 46, 114, 38, 135, 34 ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nH...
fill-mask
transformers
# ALBERT Base v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
albert/albert-base-v2
null
[ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1909.11942" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
ALBERT Base v2 ============== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team re...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modelin...
[ 70, 46, 114, 38, 135, 10 ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n...
fill-mask
transformers
# ALBERT Large v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not mak...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
albert/albert-large-v1
null
[ "transformers", "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1909.11942" ]
[ "en" ]
TAGS #transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
ALBERT Large v1 =============== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team ...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to u...
[ 62, 46, 114, 38, 135, 10 ]
[ "TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use thi...
fill-mask
transformers
# ALBERT Large v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not mak...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
albert/albert-large-v2
null
[ "transformers", "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1909.11942" ]
[ "en" ]
TAGS #transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
ALBERT Large v2 =============== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team ...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHer...
[ 66, 46, 114, 38, 135, 10 ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is h...
fill-mask
transformers
# ALBERT XLarge v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not ma...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
albert/albert-xlarge-v1
null
[ "transformers", "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1909.11942" ]
[ "en" ]
TAGS #transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
ALBERT XLarge v1 ================ Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The tea...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHer...
[ 66, 46, 114, 38, 135, 10 ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is h...
fill-mask
transformers
# ALBERT XLarge v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not ma...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
albert/albert-xlarge-v2
null
[ "transformers", "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1909.11942" ]
[ "en" ]
TAGS #transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
ALBERT XLarge v2 ================ Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The tea...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to u...
[ 62, 46, 114, 38, 135, 10 ]
[ "TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use thi...
fill-mask
transformers
# ALBERT XXLarge v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not m...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
albert/albert-xxlarge-v1
null
[ "transformers", "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1909.11942" ]
[ "en" ]
TAGS #transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
ALBERT XXLarge v1 ================= Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The t...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to u...
[ 62, 46, 114, 38, 135, 10 ]
[ "TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use thi...
fill-mask
transformers
# ALBERT XXLarge v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not m...
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
albert/albert-xxlarge-v2
null
[ "transformers", "pytorch", "tf", "rust", "safetensors", "albert", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1909.11942" ]
[ "en" ]
TAGS #transformers #pytorch #tf #rust #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
ALBERT XXLarge v2 ================= Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The t...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #rust #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked la...
[ 75, 46, 114, 38, 135, 34 ]
[ "TAGS\n#transformers #pytorch #tf #rust #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language...
fill-mask
transformers
# BERT base model (cased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference bet...
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
google-bert/bert-base-cased
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1810.04805" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
BERT base model (cased) ======================= Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English. Disclaimer: The team releasin...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked langu...
[ 76, 46, 114, 205, 174, 34 ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language mo...
fill-mask
transformers
# Bert-base-chinese ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [How to Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details ### Model Descri...
{"language": "zh"}
google-bert/bert-base-chinese
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "zh", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1810.04805" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #fill-mask #zh #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Bert-base-chinese ## Table of Contents - Model Details - Uses - Risks, Limitations and Biases - Training - Evaluation - How to Get Started With the Model ## Model Details ### Model Description This model has been pre-trained for Chinese, training and random input masking has been applied independently to word p...
[ "# Bert-base-chinese", "## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- How to Get Started With the Model", "## Model Details", "### Model Description\n\nThis model has been pre-trained for Chinese, training and random input masking has been applied ...
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #zh #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Bert-base-chinese", "## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- How to Get Started...
[ 54, 6, 29, 4, 86, 9, 3, 15, 71, 3, 34, 6, 3, 5, 9 ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #zh #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# Bert-base-chinese## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- How to Get Started With the Mo...
fill-mask
transformers
<a href="https://huggingface.co/exbert/?model=bert-base-german-cased"> <img width="300px" src="https://huggingface.co/proxy/cdn-media.huggingface.co/exbert/button.png"> </a> # German BERT ![bert_image](https://static.tildacdn.com/tild6438-3730-4164-b266-613634323466/german_bert.png) ## Overview **Language model:** bert-base-cased **L...
{"language": "de", "license": "mit", "tags": ["exbert"], "thumbnail": "https://static.tildacdn.com/tild6438-3730-4164-b266-613634323466/german_bert.png"}
google-bert/bert-base-german-cased
null
[ "transformers", "pytorch", "tf", "jax", "onnx", "safetensors", "bert", "fill-mask", "exbert", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #tf #jax #onnx #safetensors #bert #fill-mask #exbert #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
<a href="URL <img width="300px" src="URL </a> # German BERT !bert_image ## Overview Language model: bert-base-cased Language: German Training data: Wiki, OpenLegalData, News (~ 12GB) Eval data: Conll03 (NER), GermEval14 (NER), GermEval18 (Classification), GNAD (Classification) Infrastructure: 1x TPU v2 Pu...
[ "# German BERT\n!bert_image", "## Overview\nLanguage model: bert-base-cased \nLanguage: German \nTraining data: Wiki, OpenLegalData, News (~ 12GB) \nEval data: Conll03 (NER), GermEval14 (NER), GermEval18 (Classification), GNAD (Classification) \nInfrastructure: 1x TPU v2 \nPublished: Jun 14th, 2019\n\nUpdat...
[ "TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #bert #fill-mask #exbert #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# German BERT\n!bert_image", "## Overview\nLanguage model: bert-base-cased \nLanguage: German \nTraining data: Wiki, OpenLegalData, News (~ 12GB) \...
[ 49, 7, 158, 161, 6, 259, 70, 70 ]
[ "TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #bert #fill-mask #exbert #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# German BERT\n!bert_image## Overview\nLanguage model: bert-base-cased \nLanguage: German \nTraining data: Wiki, OpenLegalData, News (~ 12GB) \nEval data: ...
fill-mask
transformers
This model is the same as [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased). See the [dbmdz/bert-base-german-cased model card](https://huggingface.co/dbmdz/bert-base-german-cased) for details on the model.
{"language": "de", "license": "mit"}
google-bert/bert-base-german-dbmdz-cased
null
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #jax #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
This model is the same as dbmdz/bert-base-german-cased. See the dbmdz/bert-base-german-cased model card for details on the model.
[]
[ "TAGS\n#transformers #pytorch #jax #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 36 ]
[ "TAGS\n#transformers #pytorch #jax #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
This model is the same as [dbmdz/bert-base-german-uncased](https://huggingface.co/dbmdz/bert-base-german-uncased). See the [dbmdz/bert-base-german-cased model card](https://huggingface.co/dbmdz/bert-base-german-uncased) for details on the model.
{"language": "de", "license": "mit"}
google-bert/bert-base-german-dbmdz-uncased
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "fill-mask", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #jax #safetensors #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
This model is the same as dbmdz/bert-base-german-uncased. See the dbmdz/bert-base-german-cased model card for details on the model.
[]
[ "TAGS\n#transformers #pytorch #jax #safetensors #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 40 ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
# BERT multilingual base model (cased) Pretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model...
{"language": ["multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk"...
google-bert/bert-base-multilingual-cased
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl"...
null
2022-03-02T23:29:04+00:00
[ "1810.04805" ]
[ "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", ...
TAGS #transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #...
# BERT multilingual base model (cased) Pretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case sensitive: it makes a difference between english and English. Disclai...
[ "# BERT multilingual base model (cased)\n\nPretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective.\nIt was introduced in this paper and first released in\nthis repository. This model is case sensitive: it makes a difference\nbetween english and English....
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk...
[ 296, 93, 308, 110, 46, 29, 4, 260, 10 ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk...
fill-mask
transformers
# BERT multilingual base model (uncased) Pretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This mod...
{"language": ["multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk"...
google-bert/bert-base-multilingual-uncased
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl"...
null
2022-03-02T23:29:04+00:00
[ "1810.04805" ]
[ "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", ...
TAGS #transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #...
# BERT multilingual base model (uncased) Pretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English. Disc...
[ "# BERT multilingual base model (uncased)\n\nPretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective.\nIt was introduced in this paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and Engli...
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk...
[ 288, 95, 308, 110, 46, 43, 29, 4, 260, 10 ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk...
fill-mask
transformers
# BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference ...
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
google-bert/bert-base-uncased
null
[ "transformers", "pytorch", "tf", "jax", "rust", "coreml", "onnx", "safetensors", "bert", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1810.04805" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #coreml #onnx #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
BERT base model (uncased) ========================= Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English. Disclaimer: The team rel...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #jax #rust #coreml #onnx #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipel...
[ 84, 46, 114, 209, 174, 34 ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #coreml #onnx #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline fo...
question-answering
transformers
# BERT large model (cased) whole word masking finetuned on SQuAD Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is ca...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
google-bert/bert-large-cased-whole-word-masking-finetuned-squad
null
[ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "bert", "question-answering", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1810.04805" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #safetensors #bert #question-answering #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #endpoints_compatible #region-us
# BERT large model (cased) whole word masking finetuned on SQuAD Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is cased: it makes a difference between english and English. Differently to other B...
[ "# BERT large model (cased) whole word masking finetuned on SQuAD\n\nPretrained model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is cased: it makes a difference between english and English.\n\nDifferently ...
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #bert #question-answering #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #endpoints_compatible #region-us \n", "# BERT large model (cased) whole word masking finetuned on SQuAD\n\nPretrained model on English language using...
[ 66, 191, 328, 97, 4, 208, 136, 45, 10 ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #bert #question-answering #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #endpoints_compatible #region-us \n# BERT large model (cased) whole word masking finetuned on SQuAD\n\nPretrained model on English language using a mas...
fill-mask
transformers
# BERT large model (cased) whole word masking Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is cased: it makes a dif...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
google-bert/bert-large-cased-whole-word-masking
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1810.04805" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
BERT large model (cased) whole word masking =========================================== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is cased: it makes a difference between english and English. ...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how t...
[ 65, 46, 114, 208, 170, 10 ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use ...
fill-mask
transformers
# BERT large model (cased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is cased: it makes a difference between eng...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
google-bert/bert-large-cased
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1810.04805" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
BERT large model (cased) ======================== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is cased: it makes a difference between english and English. Disclaimer: The team releasing BERT ...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n...
[ 69, 46, 114, 208, 170, 10 ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere i...
question-answering
transformers
# BERT large model (uncased) whole word masking finetuned on SQuAD Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is ...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
google-bert/bert-large-uncased-whole-word-masking-finetuned-squad
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "question-answering", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1810.04805" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #question-answering #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# BERT large model (uncased) whole word masking finetuned on SQuAD Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English. Differentl...
[ "# BERT large model (uncased) whole word masking finetuned on SQuAD\n\nPretrained model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\...
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #question-answering #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# BERT large model (uncased) whole word masking finetuned on SQuAD\n\nPretrained model on English languag...
[ 68, 193, 328, 97, 4, 208, 136, 45, 11, 10 ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #question-answering #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n# BERT large model (uncased) whole word masking finetuned on SQuAD\n\nPretrained model on English language usin...
fill-mask
transformers
# BERT large model (uncased) whole word masking Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does no...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
google-bert/bert-large-uncased-whole-word-masking
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1810.04805" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
BERT large model (uncased) whole word masking ============================================= Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n...
[ 69, 46, 114, 208, 170, 10 ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere i...
fill-mask
transformers
# BERT large model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
google-bert/bert-large-uncased
null
[ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1810.04805" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
BERT large model (uncased) ========================== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English. Disclaimer: The team r...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:...
[ 71, 46, 114, 208, 170, 10 ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n...
fill-mask
transformers
# CamemBERT: a Tasty French Language Model ## Introduction [CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretrain...
{"language": "fr", "license": "mit", "datasets": ["oscar"]}
almanach/camembert-base
null
[ "transformers", "pytorch", "tf", "safetensors", "camembert", "fill-mask", "fr", "dataset:oscar", "arxiv:1911.03894", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1911.03894" ]
[ "fr" ]
TAGS #transformers #pytorch #tf #safetensors #camembert #fill-mask #fr #dataset-oscar #arxiv-1911.03894 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
CamemBERT: a Tasty French Language Model ======================================== Introduction ------------ CamemBERT is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining dat...
[ "##### Load CamemBERT and its sub-word tokenizer :", "##### Filling masks using pipeline", "##### Extract contextual embedding features from Camembert output", "##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\...
[ "TAGS\n#transformers #pytorch #tf #safetensors #camembert #fill-mask #fr #dataset-oscar #arxiv-1911.03894 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "##### Load CamemBERT and its sub-word tokenizer :", "##### Filling masks using pipeline", "##### Extract contextual emb...
[ 63, 17, 9, 17, 90 ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #camembert #fill-mask #fr #dataset-oscar #arxiv-1911.03894 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n##### Load CamemBERT and its sub-word tokenizer :##### Filling masks using pipeline##### Extract contextual embedding features fr...
text-generation
transformers
# ctrl # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#technical-specifications) 8. [Citation](...
{"language": "en", "license": "bsd-3-clause", "pipeline_tag": "text-generation"}
Salesforce/ctrl
null
[ "transformers", "pytorch", "tf", "ctrl", "text-generation", "en", "arxiv:1909.05858", "arxiv:1910.09700", "license:bsd-3-clause", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1909.05858", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #ctrl #text-generation #en #arxiv-1909.05858 #arxiv-1910.09700 #license-bsd-3-clause #endpoints_compatible #has_space #region-us
# ctrl # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model # Model Details ## Model Description The CTRL model was proposed in CTRL: A Co...
[ "# ctrl", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model", "# Model Details", "## Model Description\n\nThe CTRL mode...
[ "TAGS\n#transformers #pytorch #tf #ctrl #text-generation #en #arxiv-1909.05858 #arxiv-1910.09700 #license-bsd-3-clause #endpoints_compatible #has_space #region-us \n", "# ctrl", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impa...
[ 62, 3, 50, 3, 263, 2, 20, 163, 101, 419, 150, 2, 149, 4, 197, 98, 2, 47, 76, 310, 25, 48 ]
[ "TAGS\n#transformers #pytorch #tf #ctrl #text-generation #en #arxiv-1909.05858 #arxiv-1910.09700 #license-bsd-3-clause #endpoints_compatible #has_space #region-us \n# ctrl# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Techn...
question-answering
transformers
# DistilBERT base cased distilled SQuAD ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental...
{"language": "en", "license": "apache-2.0", "datasets": ["squad"], "metrics": ["squad"], "model-index": [{"name": "distilbert-base-cased-distilled-squad", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "va...
distilbert/distilbert-base-cased-distilled-squad
null
[ "transformers", "pytorch", "tf", "rust", "safetensors", "openvino", "distilbert", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "arxiv:1910.09700", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1910.01108", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #rust #safetensors #openvino #distilbert #question-answering #en #dataset-squad #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
# DistilBERT base cased distilled SQuAD ## Table of Contents - Model Details - How To Get Started With the Model - Uses - Risks, Limitations and Biases - Training - Evaluation - Environmental Impact - Technical Specifications - Citation Information - Model Card Authors ## Model Details Model Description: The Distil...
[ "# DistilBERT base cased distilled SQuAD", "## Table of Contents\n- Model Details\n- How To Get Started With the Model\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- Technical Specifications\n- Citation Information\n- Model Card Authors", "## Model Details\n\nModel...
[ "TAGS\n#transformers #pytorch #tf #rust #safetensors #openvino #distilbert #question-answering #en #dataset-squad #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "# DistilBERT base cased distilled SQuAD", "## Table of Contents\n- Model Detail...
[ 80, 11, 42, 254, 38, 12, 71, 127, 3, 120, 6, 25, 24, 53, 96, 91, 16 ]
[ "TAGS\n#transformers #pytorch #tf #rust #safetensors #openvino #distilbert #question-answering #en #dataset-squad #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n# DistilBERT base cased distilled SQuAD## Table of Contents\n- Model Details\n- How To ...
fill-mask
transformers
# Model Card for DistilBERT base model (cased) This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-cased). It was introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found [here](https://github.com/huggingface/transformers/...
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
distilbert/distilbert-base-cased
null
[ "transformers", "pytorch", "tf", "onnx", "safetensors", "distilbert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1910.01108" ]
[ "en" ]
TAGS #transformers #pytorch #tf #onnx #safetensors #distilbert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Model Card for DistilBERT base model (cased) ============================================ This model is a distilled version of the BERT base model. It was introduced in this paper. The code for the distillation process can be found here. This model is cased: it does make a difference between english and English. Al...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #onnx #safetensors #distilbert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked langua...
[ 76, 46, 134, 208, 72, 34 ]
[ "TAGS\n#transformers #pytorch #tf #onnx #safetensors #distilbert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language mod...
fill-mask
transformers
## distilbert-base-german-cased
{"language": "de", "license": "apache-2.0"}
distilbert/distilbert-base-german-cased
null
[ "transformers", "pytorch", "safetensors", "distilbert", "fill-mask", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #safetensors #distilbert #fill-mask #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
## distilbert-base-german-cased
[ "## distilbert-base-german-cased" ]
[ "TAGS\n#transformers #pytorch #safetensors #distilbert #fill-mask #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "## distilbert-base-german-cased" ]
[ 44, 12 ]
[ "TAGS\n#transformers #pytorch #safetensors #distilbert #fill-mask #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n## distilbert-base-german-cased" ]
fill-mask
transformers
# Model Card for DistilBERT base multilingual (cased) # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Citat...
{"language": ["multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk"...
distilbert/distilbert-base-multilingual-cased
null
[ "transformers", "pytorch", "tf", "onnx", "safetensors", "distilbert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da",...
null
2022-03-02T23:29:04+00:00
[ "1910.01108", "1910.09700" ]
[ "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", ...
TAGS #transformers #pytorch #tf #onnx #safetensors #distilbert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #m...
Model Card for DistilBERT base multilingual (cased) =================================================== Table of Contents ================= 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training Details 5. Evaluation 6. Environmental Impact 7. Citation 8. How To Get Started With the Model Model Deta...
[]
[ "TAGS\n#transformers #pytorch #tf #onnx #safetensors #distilbert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds ...
[ 305 ]
[ "TAGS\n#transformers #pytorch #tf #onnx #safetensors #distilbert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds ...
question-answering
transformers
# DistilBERT base uncased distilled SQuAD ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environment...
{"language": "en", "license": "apache-2.0", "datasets": ["squad"], "widget": [{"text": "Which name is also used to describe the Amazon rainforest in English?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; Frenc...
distilbert/distilbert-base-uncased-distilled-squad
null
[ "transformers", "pytorch", "tf", "tflite", "coreml", "safetensors", "distilbert", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "arxiv:1910.09700", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1910.01108", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #tflite #coreml #safetensors #distilbert #question-answering #en #dataset-squad #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# DistilBERT base uncased distilled SQuAD ## Table of Contents - Model Details - How To Get Started With the Model - Uses - Risks, Limitations and Biases - Training - Evaluation - Environmental Impact - Technical Specifications - Citation Information - Model Card Authors ## Model Details Model Description: The Dist...
[ "# DistilBERT base uncased distilled SQuAD", "## Table of Contents\n- Model Details\n- How To Get Started With the Model\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- Technical Specifications\n- Citation Information\n- Model Card Authors", "## Model Details\n\nMod...
[ "TAGS\n#transformers #pytorch #tf #tflite #coreml #safetensors #distilbert #question-answering #en #dataset-squad #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# DistilBERT base uncased distilled SQuAD", "## Table of Contents\n- Model Details\n- How To...
[ 78, 11, 42, 254, 38, 12, 71, 127, 3, 92, 6, 25, 24, 53, 96, 91, 16 ]
[ "TAGS\n#transformers #pytorch #tf #tflite #coreml #safetensors #distilbert #question-answering #en #dataset-squad #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n# DistilBERT base uncased distilled SQuAD## Table of Contents\n- Model Details\n- How To Get Started...
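The SQuAD row above describes extractive question answering: the model emits a start logit and an end logit for every context token, and the answer is the highest-scoring valid span. A minimal sketch of that span-selection step, using hypothetical logits rather than real model outputs:

```python
import math

def best_span(start_logits, end_logits, max_len=15):
    """Pick the (start, end) pair maximizing start_logit + end_logit,
    subject to start <= end and span length <= max_len."""
    best, best_score = (0, 0), -math.inf
    for s, sl in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = sl + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best

# Hypothetical logits over a 6-token context
start = [0.1, 2.5, 0.3, 0.2, 0.0, -1.0]
end   = [0.0, 0.4, 3.1, 0.1, 0.2, -0.5]
print(best_span(start, end))  # -> (1, 2)
```

In the real pipeline the logits come from the model's QA head and invalid spans (e.g. inside the question) are masked out first; this sketch only shows the argmax-over-spans logic.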
text-classification
transformers
# DistilBERT base uncased finetuned SST-2 ## Table of Contents - [Model Details](#model-details) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) ## Model Details **Model Description:** T...
{"language": "en", "license": "apache-2.0", "datasets": ["sst2", "glue"], "model-index": [{"name": "distilbert-base-uncased-finetuned-sst-2-english", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "sst2", "split": "validation"},...
distilbert/distilbert-base-uncased-finetuned-sst-2-english
null
[ "transformers", "pytorch", "tf", "rust", "onnx", "safetensors", "distilbert", "text-classification", "en", "dataset:sst2", "dataset:glue", "arxiv:1910.01108", "doi:10.57967/hf/0181", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", ...
null
2022-03-02T23:29:04+00:00
[ "1910.01108" ]
[ "en" ]
TAGS #transformers #pytorch #tf #rust #onnx #safetensors #distilbert #text-classification #en #dataset-sst2 #dataset-glue #arxiv-1910.01108 #doi-10.57967/hf/0181 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
# DistilBERT base uncased finetuned SST-2 ## Table of Contents - Model Details - How to Get Started With the Model - Uses - Risks, Limitations and Biases - Training ## Model Details Model Description: This model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned on SST-2. This model reaches an accuracy...
[ "# DistilBERT base uncased finetuned SST-2", "## Table of Contents\n- Model Details\n- How to Get Started With the Model\n- Uses\n- Risks, Limitations and Biases\n- Training", "## Model Details\nModel Description: This model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned on SST-2.\nThis model ...
[ "TAGS\n#transformers #pytorch #tf #rust #onnx #safetensors #distilbert #text-classification #en #dataset-sst2 #dataset-glue #arxiv-1910.01108 #doi-10.57967/hf/0181 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# DistilBERT base uncased finetuned SST-2", ...
[ 98, 14, 27, 131, 16, 3, 65, 71, 202, 2, 26, 6, 53 ]
[ "TAGS\n#transformers #pytorch #tf #rust #onnx #safetensors #distilbert #text-classification #en #dataset-sst2 #dataset-glue #arxiv-1910.01108 #doi-10.57967/hf/0181 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n# DistilBERT base uncased finetuned SST-2## Table o...
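The SST-2 row above is a two-class sentiment classifier. At inference time its raw logits are converted to a label and a confidence score with a softmax; a minimal sketch of that post-processing step (the logit values and label names here are illustrative assumptions, not actual model outputs):

```python
import math

def classify(logits, labels=("NEGATIVE", "POSITIVE")):
    """Convert raw two-class logits to a (label, score) pair via softmax."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    i = probs.index(max(probs))
    return labels[i], probs[i]

print(classify([-1.2, 3.4]))  # high-confidence POSITIVE
```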
fill-mask
transformers
# DistilBERT base model (uncased) This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found [here](https://github.com/huggingface/transformers/tree/main/e...
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
distilbert/distilbert-base-uncased
null
[ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "distilbert", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1910.01108" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #safetensors #distilbert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
DistilBERT base model (uncased) =============================== This model is a distilled version of the BERT base model. It was introduced in this paper. The code for the distillation process can be found here. This model is uncased: it does not make a difference between english and English. Model description ----...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #distilbert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for ...
[ 80, 46, 134, 208, 72, 34 ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #distilbert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked...
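The distilbert-base-uncased row above describes a masked language modeling (MLM) pretraining objective. A minimal sketch of the BERT-style corruption scheme that objective relies on (select ~15% of positions; of those, 80% become `[MASK]`, 10% a random token, 10% stay unchanged); the token strings and probabilities here follow the published recipe, but this is a toy stand-in for the real tokenizer-level implementation:

```python
import random

def mlm_mask(tokens, mask_token="[MASK]", vocab=None, p=0.15, seed=0):
    """BERT-style masking. Returns (corrupted_tokens, labels), where
    labels is None at unselected positions (ignored by the loss) and
    the original token at selected ones."""
    rng = random.Random(seed)
    vocab = vocab or tokens
    out, labels = [], []
    for tok in tokens:
        if rng.random() < p:          # position selected for prediction
            labels.append(tok)
            r = rng.random()
            if r < 0.8:
                out.append(mask_token)      # 80%: replace with [MASK]
            elif r < 0.9:
                out.append(rng.choice(vocab))  # 10%: random token
            else:
                out.append(tok)             # 10%: keep unchanged
        else:
            labels.append(None)
            out.append(tok)
    return out, labels

corrupted, labels = mlm_mask("the quick brown fox jumps over the lazy dog".split(), seed=1)
print(corrupted)
```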
text-generation
transformers
# DistilGPT2 DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, tra...
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["openwebtext"], "co2_eq_emissions": 149200, "model-index": [{"name": "distilgpt2", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "WikiText-103", "type": "wikitext"}, "metrics": [{"type": "perp...
distilbert/distilgpt2
null
[ "transformers", "pytorch", "tf", "jax", "tflite", "rust", "coreml", "safetensors", "gpt2", "text-generation", "exbert", "en", "dataset:openwebtext", "arxiv:1910.01108", "arxiv:2201.08542", "arxiv:2203.12574", "arxiv:1910.09700", "arxiv:1503.02531", "license:apache-2.0", "model-...
null
2022-03-02T23:29:04+00:00
[ "1910.01108", "2201.08542", "2203.12574", "1910.09700", "1503.02531" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #tflite #rust #coreml #safetensors #gpt2 #text-generation #exbert #en #dataset-openwebtext #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #license-apache-2.0 #model-index #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_sp...
# DistilGPT2 DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, tra...
[ "# DistilGPT2\n\nDistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the desig...
[ "TAGS\n#transformers #pytorch #tf #jax #tflite #rust #coreml #safetensors #gpt2 #text-generation #exbert #en #dataset-openwebtext #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #license-apache-2.0 #model-index #co2_eq_emissions #autotrain_compatible #endpoints_compatible #...
[ 145, 98, 166, 7, 377, 195, 117, 139, 86, 75, 67, 114, 126 ]
[ "TAGS\n#transformers #pytorch #tf #jax #tflite #rust #coreml #safetensors #gpt2 #text-generation #exbert #en #dataset-openwebtext #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #license-apache-2.0 #model-index #co2_eq_emissions #autotrain_compatible #endpoints_compatible #...
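The DistilGPT2 row above cites Hinton et al.'s knowledge-distillation paper (arXiv:1503.02531), in which the student is trained to match the teacher's temperature-softened output distribution. A minimal sketch of that soft-target KL loss, with toy logit vectors standing in for real teacher/student outputs:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax (higher T flattens the distribution)."""
    m = max(logits)
    exps = [math.exp((l - m) / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_kl(teacher_logits, student_logits, T=2.0):
    """Soft-target distillation loss: KL(teacher || student) on
    temperature-softened distributions, scaled by T^2 so gradient
    magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

t = [3.0, 1.0, 0.2]
print(distill_kl(t, t))                     # identical distributions -> 0.0
print(distill_kl(t, [0.2, 1.0, 3.0]) > 0)   # mismatched -> positive loss
```

In the actual DistilBERT/DistilGPT2 training recipe this term is combined with the ordinary language-modeling loss; the sketch shows only the distillation component.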
fill-mask
transformers
# Model Card for DistilRoBERTa base # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Citation](#citation) 8....
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["openwebtext"]}
distilbert/distilroberta-base
null
[ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "roberta", "fill-mask", "exbert", "en", "dataset:openwebtext", "arxiv:1910.01108", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1910.01108", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #safetensors #roberta #fill-mask #exbert #en #dataset-openwebtext #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Model Card for DistilRoBERTa base ================================= Table of Contents ================= 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training Details 5. Evaluation 6. Environmental Impact 7. Citation 8. How To Get Started With the Model Model Details ============= Model Descriptio...
[]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #roberta #fill-mask #exbert #en #dataset-openwebtext #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
[ 85 ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #roberta #fill-mask #exbert #en #dataset-openwebtext #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-generation
transformers
# GPT-2 Large ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-im...
{"language": "en", "license": "mit"}
openai-community/gpt2-large
null
[ "transformers", "pytorch", "tf", "jax", "rust", "onnx", "safetensors", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #onnx #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
GPT-2 Large =========== Table of Contents ----------------- * Model Details * How To Get Started With the Model * Uses * Risks, Limitations and Biases * Training * Evaluation * Environmental Impact * Technical Specifications * Citation Information * Model Card Authors Model Details ------------- Model Descripti...
[ "#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constrain...
[ "TAGS\n#transformers #pytorch #tf #jax #rust #onnx #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary ...
[ 70, 74, 98, 399, 110, 254, 271, 173 ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #onnx #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intend...
text-generation
transformers
# GPT-2 Medium ## Model Details **Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. - **Developed by:** OpenAI, see [a...
{"language": "en", "license": "mit"}
openai-community/gpt2-medium
null
[ "transformers", "pytorch", "tf", "jax", "rust", "onnx", "safetensors", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #onnx #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
GPT-2 Medium ============ Model Details ------------- Model Description: GPT-2 Medium is the 355M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. * Developed by: Ope...
[ "#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constrain...
[ "TAGS\n#transformers #pytorch #tf #jax #rust #onnx #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary ...
[ 70, 74, 98, 399, 110, 254, 271, 173 ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #onnx #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intend...
text-generation
transformers
# GPT-2 XL ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impac...
{"language": "en", "license": "mit"}
openai-community/gpt2-xl
null
[ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
GPT-2 XL ======== Table of Contents ----------------- * Model Details * How To Get Started With the Model * Uses * Risks, Limitations and Biases * Training * Evaluation * Environmental Impact * Technical Specifications * Citation Information * Model Card Authors Model Details ------------- Model Description: GP...
[ "#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constrain...
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intend...
[ 67, 74, 98, 245, 151, 198, 110, 254, 271, 196 ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended use...
text-generation
transformers
# GPT-2 Test the whole generation capabilities here: https://huggingface.co/proxy/transformer.huggingface.co/doc/gpt2-large Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_...
{"language": "en", "license": "mit", "tags": ["exbert"]}
openai-community/gpt2
null
[ "transformers", "pytorch", "tf", "jax", "tflite", "rust", "onnx", "safetensors", "gpt2", "text-generation", "exbert", "en", "doi:10.57967/hf/0039", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #tflite #rust #onnx #safetensors #gpt2 #text-generation #exbert #en #doi-10.57967/hf/0039 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
GPT-2 ===== Test the whole generation capabilities here: URL Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in this paper and first released at this page. Disclaimer: The team releasing GPT-2 also wrote a model card for their model. Content from this model...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\n...
[ "TAGS\n#transformers #pytorch #tf #jax #tflite #rust #onnx #safetensors #gpt2 #text-generation #exbert #en #doi-10.57967/hf/0039 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for t...
[ 82, 64, 390, 118, 34 ]
[ "TAGS\n#transformers #pytorch #tf #jax #tflite #rust #onnx #safetensors #gpt2 #text-generation #exbert #en #doi-10.57967/hf/0039 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for text ge...
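The GPT-2 row above describes a causal language modeling (CLM) objective: at each position the model sees only the prefix and is trained to predict the next token. A minimal sketch of the two mechanical pieces behind that objective, the shifted input/label pairing and the lower-triangular attention mask (token ids here are arbitrary placeholders):

```python
def clm_pairs(token_ids):
    """Causal LM training pairs: inputs shifted right against labels,
    so position i is supervised to predict token i+1."""
    return list(zip(token_ids[:-1], token_ids[1:]))

def causal_mask(n):
    """Lower-triangular attention mask: position i may attend to j <= i."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

print(clm_pairs([101, 7, 42, 9]))  # -> [(101, 7), (7, 42), (42, 9)]
print(causal_mask(3))              # -> [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
```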
text-generation
transformers
# OpenAI GPT 1 ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-i...
{"language": "en", "license": "mit"}
openai-community/openai-gpt
null
[ "transformers", "pytorch", "tf", "rust", "safetensors", "openai-gpt", "text-generation", "en", "arxiv:1705.11168", "arxiv:1803.02324", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1705.11168", "1803.02324", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #rust #safetensors #openai-gpt #text-generation #en #arxiv-1705.11168 #arxiv-1803.02324 #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
OpenAI GPT 1 ============ Table of Contents ----------------- * Model Details * How To Get Started With the Model * Uses * Risks, Limitations and Biases * Training * Evaluation * Environmental Impact * Technical Specifications * Citation Information * Model Card Authors Model Details ------------- Model Descrip...
[ "#### Direct Use\n\n\nThis model can be used for language modeling tasks.", "#### Downstream Use\n\n\nPotential downstream uses of this model include tasks that leverage language models. In the associated paper, the model developers discuss evaluations of the model for tasks including natural language inference (...
[ "TAGS\n#transformers #pytorch #tf #rust #safetensors #openai-gpt #text-generation #en #arxiv-1705.11168 #arxiv-1803.02324 #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "#### Direct Use\n\n\nThis model can be used for language modeling tasks.", "#### Downst...
[ 81, 16, 54, 86, 138, 395, 87, 372, 189, 304 ]
[ "TAGS\n#transformers #pytorch #tf #rust #safetensors #openai-gpt #text-generation #en #arxiv-1705.11168 #arxiv-1803.02324 #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n#### Direct Use\n\n\nThis model can be used for language modeling tasks.#### Downstream Use\n\n...
text-classification
transformers
# RoBERTa Base OpenAI Detector ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specificati...
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
openai-community/roberta-base-openai-detector
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "roberta", "text-classification", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1904.09751", "arxiv:1910.09700", "arxiv:1908.09203", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space"...
null
2022-03-02T23:29:04+00:00
[ "1904.09751", "1910.09700", "1908.09203" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #safetensors #roberta #text-classification #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1904.09751 #arxiv-1910.09700 #arxiv-1908.09203 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# RoBERTa Base OpenAI Detector ## Table of Contents - Model Details - Uses - Risks, Limitations and Biases - Training - Evaluation - Environmental Impact - Technical Specifications - Citation Information - Model Card Authors - How To Get Started With the Model ## Model Details Model Description: RoBERTa base OpenAI...
[ "# RoBERTa Base OpenAI Detector", "## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- Technical Specifications\n- Citation Information\n- Model Card Authors\n- How To Get Started With the Model", "## Model Details\n\nModel Descript...
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #roberta #text-classification #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1904.09751 #arxiv-1910.09700 #arxiv-1908.09203 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# RoBERTa Base OpenAI Detector", "## Tab...
[ 92, 6, 42, 247, 3, 77, 54, 80, 59, 211, 119, 3, 58, 112, 14, 87, 141, 54, 106, 17, 22 ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #roberta #text-classification #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1904.09751 #arxiv-1910.09700 #arxiv-1908.09203 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n# RoBERTa Base OpenAI Detector## Table of Conten...
fill-mask
transformers
# RoBERTa base model Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it mak...
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
FacebookAI/roberta-base
null
[ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "roberta", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1907.11692", "arxiv:1806.02847", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692", "1806.02847" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #safetensors #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1907.11692 #arxiv-1806.02847 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
RoBERTa base model ================== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English. Disclaimer: The team releasing RoBERTa ...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content ...
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1907.11692 #arxiv-1806.02847 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipel...
[ 84, 46, 225, 193, 162, 34 ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1907.11692 #arxiv-1806.02847 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline fo...
text-classification
transformers
# roberta-large-mnli ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation-results) - [Environmental Impact](#e...
{"language": ["en"], "license": "mit", "tags": ["autogenerated-modelcard"], "datasets": ["multi_nli", "wikipedia", "bookcorpus"]}
FacebookAI/roberta-large-mnli
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "roberta", "text-classification", "autogenerated-modelcard", "en", "dataset:multi_nli", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:1806.02847", "arxiv:1804.07461", "arxiv:1704.05426", "arxiv:1508.05326"...
null
2022-03-02T23:29:04+00:00
[ "1907.11692", "1806.02847", "1804.07461", "1704.05426", "1508.05326", "1809.05053", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #safetensors #roberta #text-classification #autogenerated-modelcard #en #dataset-multi_nli #dataset-wikipedia #dataset-bookcorpus #arxiv-1907.11692 #arxiv-1806.02847 #arxiv-1804.07461 #arxiv-1704.05426 #arxiv-1508.05326 #arxiv-1809.05053 #arxiv-1910.09700 #license-mit #autotrain_com...
roberta-large-mnli ================== Table of Contents ----------------- * Model Details * How To Get Started With the Model * Uses * Risks, Limitations and Biases * Training * Evaluation * Environmental Impact * Technical Specifications * Citation Information * Model Card Authors Model Details ------------- M...
[ "#### Direct Use\n\n\nThis fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification (see the GitHub repo for examples) and zero-shot sequence classification.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hosti...
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #roberta #text-classification #autogenerated-modelcard #en #dataset-multi_nli #dataset-wikipedia #dataset-bookcorpus #arxiv-1907.11692 #arxiv-1806.02847 #arxiv-1804.07461 #arxiv-1704.05426 #arxiv-1508.05326 #arxiv-1809.05053 #arxiv-1910.09700 #license-mit #autotra...
[ 144, 47, 267, 215, 6, 222, 170, 426, 177 ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #roberta #text-classification #autogenerated-modelcard #en #dataset-multi_nli #dataset-wikipedia #dataset-bookcorpus #arxiv-1907.11692 #arxiv-1806.02847 #arxiv-1804.07461 #arxiv-1704.05426 #arxiv-1508.05326 #arxiv-1809.05053 #arxiv-1910.09700 #license-mit #autotra...
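The roberta-large-mnli row above notes that the fine-tuned NLI model can be used for zero-shot classification: the input text is treated as the premise, each candidate label is wrapped into a hypothesis (e.g. "This example is {label}."), and the per-label entailment scores are normalized across labels. A minimal sketch of that normalization step, with hypothetical entailment logits standing in for real model outputs:

```python
import math

def zero_shot_from_entailment(entailment_logits, labels):
    """Given one NLI entailment logit per candidate label, normalize
    across labels with a softmax and return (label, prob) pairs
    sorted by descending probability."""
    m = max(entailment_logits)
    exps = [math.exp(l - m) for l in entailment_logits]
    s = sum(exps)
    return sorted(zip(labels, (e / s for e in exps)),
                  key=lambda kv: kv[1], reverse=True)

# Hypothetical logits for three candidate labels
print(zero_shot_from_entailment([2.1, -0.3, 0.4],
                                ["sports", "politics", "tech"]))
```

The real zero-shot pipeline runs one NLI forward pass per candidate label to obtain these entailment logits; this sketch covers only the aggregation across labels.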
text-classification
transformers
# RoBERTa Large OpenAI Detector ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specificat...
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
openai-community/roberta-large-openai-detector
null
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1904.09751", "arxiv:1910.09700", "arxiv:1908.09203", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1904.09751", "1910.09700", "1908.09203" ]
[ "en" ]
TAGS #transformers #pytorch #jax #roberta #text-classification #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1904.09751 #arxiv-1910.09700 #arxiv-1908.09203 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# RoBERTa Large OpenAI Detector ## Table of Contents - Model Details - Uses - Risks, Limitations and Biases - Training - Evaluation - Environmental Impact - Technical Specifications - Citation Information - Model Card Authors - How To Get Started With the Model ## Model Details Model Description: RoBERTa large Open...
[ "# RoBERTa Large OpenAI Detector", "## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- Technical Specifications\n- Citation Information\n- Model Card Authors\n- How To Get Started With the Model", "## Model Details\n\nModel Descrip...
[ "TAGS\n#transformers #pytorch #jax #roberta #text-classification #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1904.09751 #arxiv-1910.09700 #arxiv-1908.09203 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# RoBERTa Large OpenAI Detector", "## Table of Contents\n...
[ 85, 6, 42, 247, 3, 27, 54, 80, 59, 211, 119, 3, 58, 113, 14, 87, 141, 54, 106, 17, 12 ]
[ "TAGS\n#transformers #pytorch #jax #roberta #text-classification #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1904.09751 #arxiv-1910.09700 #arxiv-1908.09203 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n# RoBERTa Large OpenAI Detector## Table of Contents\n- Model Deta...
fill-mask
transformers
# RoBERTa large model Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: ...
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
FacebookAI/roberta-large
null
[ "transformers", "pytorch", "tf", "jax", "onnx", "safetensors", "roberta", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1907.11692", "arxiv:1806.02847", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692", "1806.02847" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #onnx #safetensors #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1907.11692 #arxiv-1806.02847 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
RoBERTa large model =================== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English. Disclaimer: The team releasing RoBERT...
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content ...
[ "TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1907.11692 #arxiv-1806.02847 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipel...
[ 85, 46, 226, 192, 162, 34 ]
[ "TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1907.11692 #arxiv-1806.02847 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline fo...
translation
transformers
# Model Card for T5 11B ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [...
{"language": ["en", "fr", "ro", "de", "multilingual"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["c4"], "inference": false}
google-t5/t5-11b
null
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "summarization", "translation", "en", "fr", "ro", "de", "multilingual", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12885", "arxiv:19...
null
2022-03-02T23:29:04+00:00
[ "1805.12471", "1708.00055", "1704.05426", "1606.05250", "1808.09121", "1810.12885", "1905.10044", "1910.09700" ]
[ "en", "fr", "ro", "de", "multilingual" ]
TAGS #transformers #pytorch #tf #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #has_s...
# Model Card for T5 11B !model image # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training Details 5. Evaluation 6. Environmental Impact 7. Citation 8. Model Card Authors 9. How To Get Started With the Model # Model Details ## Model Description The developers of the Text-To-Te...
[ "# Model Card for T5 11B\n\n!model image", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model", "# Model Details", "## Model Description\n\nTh...
[ "TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible ...
[ 149, 11, 47, 3, 240, 2, 110, 12, 11, 7, 3, 315, 112, 2, 26, 21, 157, 16, 8, 168 ]
[ "TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible ...
translation
transformers
# Model Card for T5-3B ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [B...
{"language": ["en", "fr", "ro", "de", "multilingual"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["c4"]}
google-t5/t5-3b
null
[ "transformers", "pytorch", "tf", "safetensors", "t5", "text2text-generation", "summarization", "translation", "en", "fr", "ro", "de", "multilingual", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12...
null
2022-03-02T23:29:04+00:00
[ "1805.12471", "1708.00055", "1704.05426", "1606.05250", "1808.09121", "1810.12885", "1905.10044", "1910.09700" ]
[ "en", "fr", "ro", "de", "multilingual" ]
TAGS #transformers #pytorch #tf #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_comp...
# Model Card for T5-3B !model image # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training Details 5. Evaluation 6. Environmental Impact 7. Citation 8. Model Card Authors 9. How To Get Started With the Model # Model Details ## Model Description The developers of the Text-To-Tex...
[ "# Model Card for T5-3B\n\n!model image", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model", "# Model Details", "## Model Description\n\nThe...
[ "TAGS\n#transformers #pytorch #tf #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrai...
[ 158, 12, 47, 3, 240, 2, 110, 12, 11, 7, 3, 315, 112, 2, 26, 21, 157, 16, 38 ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrai...
translation
transformers
# Model Card for T5 Base ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. ...
{"language": ["en", "fr", "ro", "de"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["c4"], "pipeline_tag": "translation"}
google-t5/t5-base
null
[ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "t5", "text2text-generation", "summarization", "translation", "en", "fr", "ro", "de", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.1...
null
2022-03-02T23:29:04+00:00
[ "1805.12471", "1708.00055", "1704.05426", "1606.05250", "1808.09121", "1810.12885", "1905.10044", "1910.09700" ]
[ "en", "fr", "ro", "de" ]
TAGS #transformers #pytorch #tf #jax #rust #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compati...
# Model Card for T5 Base !model image # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training Details 5. Evaluation 6. Environmental Impact 7. Citation 8. Model Card Authors 9. How To Get Started With the Model # Model Details ## Model Description The developers of the Text-To-T...
[ "# Model Card for T5 Base\n\n!model image", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model", "# Model Details", "## Model Description\n\nT...
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_c...
[ 158, 10, 47, 3, 239, 2, 110, 12, 11, 7, 3, 315, 112, 2, 26, 20, 157, 16, 58 ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_c...
translation
transformers
# Model Card for T5 Large ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3....
{"language": ["en", "fr", "ro", "de", "multilingual"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["c4"]}
google-t5/t5-large
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "t5", "text2text-generation", "summarization", "translation", "en", "fr", "ro", "de", "multilingual", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxi...
null
2022-03-02T23:29:04+00:00
[ "1805.12471", "1708.00055", "1704.05426", "1606.05250", "1808.09121", "1810.12885", "1905.10044", "1910.09700" ]
[ "en", "fr", "ro", "de", "multilingual" ]
TAGS #transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain...
# Model Card for T5 Large !model image # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training Details 5. Evaluation 6. Environmental Impact 7. Citation 8. Model Card Authors 9. How To Get Started With the Model # Model Details ## Model Description The developers of the Text-To-...
[ "# Model Card for T5 Large\n\n!model image", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model", "# Model Details", "## Model Description\n\n...
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #aut...
[ 160, 10, 47, 3, 239, 2, 110, 12, 11, 7, 3, 315, 112, 2, 26, 20, 157, 16, 58 ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #aut...
translation
transformers
# Model Card for T5 Small ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3....
{"language": ["en", "fr", "ro", "de", "multilingual"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["c4"]}
google-t5/t5-small
null
[ "transformers", "pytorch", "tf", "jax", "rust", "onnx", "safetensors", "t5", "text2text-generation", "summarization", "translation", "en", "fr", "ro", "de", "multilingual", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:...
null
2022-03-02T23:29:04+00:00
[ "1805.12471", "1708.00055", "1704.05426", "1606.05250", "1808.09121", "1810.12885", "1905.10044", "1910.09700" ]
[ "en", "fr", "ro", "de", "multilingual" ]
TAGS #transformers #pytorch #tf #jax #rust #onnx #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2....
# Model Card for T5 Small !model image # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training Details 5. Evaluation 6. Environmental Impact 7. Citation 8. Model Card Authors 9. How To Get Started With the Model # Model Details ## Model Description The developers of the Text-To-...
[ "# Model Card for T5 Small\n\n!model image", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model", "# Model Details", "## Model Description\n\n...
[ "TAGS\n#transformers #pytorch #tf #jax #rust #onnx #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apa...
[ 165, 10, 47, 3, 239, 2, 110, 12, 11, 7, 3, 315, 112, 2, 26, 20, 157, 16, 58 ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #onnx #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apa...
text-generation
transformers
# Transfo-xl-wt103 ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [How to Get Started With the Model](#how-to-get-started-with...
{"language": "en", "tags": ["text-generation"], "datasets": ["wikitext-103"], "task": {"name": "Text Generation", "type": "text-generation"}, "model-index": [{"name": "transfo-xl-wt103", "results": []}]}
transfo-xl/transfo-xl-wt103
null
[ "transformers", "pytorch", "tf", "transfo-xl", "text-generation", "en", "dataset:wikitext-103", "arxiv:1901.02860", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1901.02860" ]
[ "en" ]
TAGS #transformers #pytorch #tf #transfo-xl #text-generation #en #dataset-wikitext-103 #arxiv-1901.02860 #autotrain_compatible #endpoints_compatible #region-us
Transfo-xl-wt103 ================ Table of Contents ----------------- * Model Details * Uses * Risks, Limitations and Biases * Training * Evaluation * Citation Information * How to Get Started With the Model Model Details ------------- Model Description: The Transformer-XL model is a causal (uni-directional) tr...
[ "#### Direct Use\n\n\nThis model can be used for text generation.\nThe authors provide additionally notes about the vocabulary used, in the associated paper:\n\n\n\n> \n> We envision interesting applications of Transformer-XL in the fields of text generation, unsupervised feature learning, image and speech modeling...
[ "TAGS\n#transformers #pytorch #tf #transfo-xl #text-generation #en #dataset-wikitext-103 #arxiv-1901.02860 #autotrain_compatible #endpoints_compatible #region-us \n", "#### Direct Use\n\n\nThis model can be used for text generation.\nThe authors provide additionally notes about the vocabulary used, in the associa...
[ 56, 65, 178, 195, 6, 118, 45 ]
[ "TAGS\n#transformers #pytorch #tf #transfo-xl #text-generation #en #dataset-wikitext-103 #arxiv-1901.02860 #autotrain_compatible #endpoints_compatible #region-us \n#### Direct Use\n\n\nThis model can be used for text generation.\nThe authors provide additionally notes about the vocabulary used, in the associated pa...
fill-mask
transformers
# xlm-clm-ende-1024 # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#technical-specifications) 8...
{"language": ["multilingual", "en", "de"]}
FacebookAI/xlm-clm-ende-1024
null
[ "transformers", "pytorch", "tf", "safetensors", "xlm", "fill-mask", "multilingual", "en", "de", "arxiv:1901.07291", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1901.07291", "1910.09700" ]
[ "multilingual", "en", "de" ]
TAGS #transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #de #arxiv-1901.07291 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# xlm-clm-ende-1024 # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model # Model Details The XLM model was proposed in Cross-lingual Langua...
[ "# xlm-clm-ende-1024", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model", "# Model Details\n\nThe XLM model was proposed...
[ "TAGS\n#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #de #arxiv-1901.07291 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# xlm-clm-ende-1024", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluat...
[ 64, 12, 50, 67, 120, 2, 21, 28, 25, 43, 26, 16, 2, 25, 25, 63, 131, 16, 36 ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #de #arxiv-1901.07291 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# xlm-clm-ende-1024# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Envi...
fill-mask
transformers
# xlm-clm-enfr-1024 # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#technical-specifications) 8...
{"language": ["multilingual", "en", "fr"]}
FacebookAI/xlm-clm-enfr-1024
null
[ "transformers", "pytorch", "tf", "xlm", "fill-mask", "multilingual", "en", "fr", "arxiv:1901.07291", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1901.07291", "1910.09700" ]
[ "multilingual", "en", "fr" ]
TAGS #transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #arxiv-1901.07291 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# xlm-clm-enfr-1024 # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model # Model Details The XLM model was proposed in Cross-lingual Langua...
[ "# xlm-clm-enfr-1024", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model", "# Model Details\n\nThe XLM model was proposed...
[ "TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #arxiv-1901.07291 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# xlm-clm-enfr-1024", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Envir...
[ 60, 12, 50, 67, 120, 2, 26, 28, 25, 43, 26, 16, 2, 25, 25, 63, 131, 16, 36 ]
[ "TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #arxiv-1901.07291 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# xlm-clm-enfr-1024# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Imp...
fill-mask
transformers
# xlm-mlm-100-1280 # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#technical-specifications) 8....
{"language": ["multilingual", "en", "es", "fr", "de", "zh", "ru", "pt", "it", "ar", "ja", "id", "tr", "nl", "pl", "fa", "vi", "sv", "ko", "he", "ro", false, "hi", "uk", "cs", "fi", "hu", "th", "da", "ca", "el", "bg", "sr", "ms", "bn", "hr", "sl", "az", "sk", "eo", "ta", "sh", "lt", "et", "ml", "la", "bs", "sq", "arz", ...
FacebookAI/xlm-mlm-100-1280
null
[ "transformers", "pytorch", "tf", "xlm", "fill-mask", "multilingual", "en", "es", "fr", "de", "zh", "ru", "pt", "it", "ar", "ja", "id", "tr", "nl", "pl", "fa", "vi", "sv", "ko", "he", "ro", "no", "hi", "uk", "cs", "fi", "hu", "th", "da", "ca", "el...
null
2022-03-02T23:29:04+00:00
[ "1901.07291", "1911.02116", "1910.09700" ]
[ "multilingual", "en", "es", "fr", "de", "zh", "ru", "pt", "it", "ar", "ja", "id", "tr", "nl", "pl", "fa", "vi", "sv", "ko", "he", "ro", "no", "hi", "uk", "cs", "fi", "hu", "th", "da", "ca", "el", "bg", "sr", "ms", "bn", "hr", "sl", "az", "s...
TAGS #transformers #pytorch #tf #xlm #fill-mask #multilingual #en #es #fr #de #zh #ru #pt #it #ar #ja #id #tr #nl #pl #fa #vi #sv #ko #he #ro #no #hi #uk #cs #fi #hu #th #da #ca #el #bg #sr #ms #bn #hr #sl #az #sk #eo #ta #sh #lt #et #ml #la #bs #sq #arz #af #ka #mr #eu #tl #ang #gl #nn #ur #kk #be #hy #te #lv #mk #als...
xlm-mlm-100-1280 ================ Table of Contents ================= 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model Model Details ============= xlm-mlm...
[]
[ "TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #es #fr #de #zh #ru #pt #it #ar #ja #id #tr #nl #pl #fa #vi #sv #ko #he #ro #no #hi #uk #cs #fi #hu #th #da #ca #el #bg #sr #ms #bn #hr #sl #az #sk #eo #ta #sh #lt #et #ml #la #bs #sq #arz #af #ka #mr #eu #tl #ang #gl #nn #ur #kk #be #hy #te #lv #m...
[ 291 ]
[ "TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #es #fr #de #zh #ru #pt #it #ar #ja #id #tr #nl #pl #fa #vi #sv #ko #he #ro #no #hi #uk #cs #fi #hu #th #da #ca #el #bg #sr #ms #bn #hr #sl #az #sk #eo #ta #sh #lt #et #ml #la #bs #sq #arz #af #ka #mr #eu #tl #ang #gl #nn #ur #kk #be #hy #te #lv #m...
fill-mask
transformers
# xlm-mlm-17-1280 # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#technical-specifications) 8. ...
{"language": ["multilingual", "en", "fr", "es", "de", "it", "pt", "nl", "sv", "pl", "ru", "ar", "tr", "zh", "ja", "ko", "hi", "vi"], "license": "cc-by-nc-4.0"}
FacebookAI/xlm-mlm-17-1280
null
[ "transformers", "pytorch", "tf", "xlm", "fill-mask", "multilingual", "en", "fr", "es", "de", "it", "pt", "nl", "sv", "pl", "ru", "ar", "tr", "zh", "ja", "ko", "hi", "vi", "arxiv:1901.07291", "arxiv:1911.02116", "arxiv:1910.09700", "license:cc-by-nc-4.0", "autotr...
null
2022-03-02T23:29:04+00:00
[ "1901.07291", "1911.02116", "1910.09700" ]
[ "multilingual", "en", "fr", "es", "de", "it", "pt", "nl", "sv", "pl", "ru", "ar", "tr", "zh", "ja", "ko", "hi", "vi" ]
TAGS #transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #es #de #it #pt #nl #sv #pl #ru #ar #tr #zh #ja #ko #hi #vi #arxiv-1901.07291 #arxiv-1911.02116 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
xlm-mlm-17-1280 =============== Table of Contents ================= 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model Model Details ============= xlm-mlm-1...
[]
[ "TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #es #de #it #pt #nl #sv #pl #ru #ar #tr #zh #ja #ko #hi #vi #arxiv-1901.07291 #arxiv-1911.02116 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 113 ]
[ "TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #es #de #it #pt #nl #sv #pl #ru #ar #tr #zh #ja #ko #hi #vi #arxiv-1901.07291 #arxiv-1911.02116 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
# xlm-mlm-en-2048 # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Citation](#citation) 8. [Model Card Authors](#model-card...
{"language": "en", "license": "cc-by-nc-4.0", "tags": ["exbert"]}
FacebookAI/xlm-mlm-en-2048
null
[ "transformers", "pytorch", "tf", "xlm", "fill-mask", "exbert", "en", "arxiv:1901.07291", "arxiv:1911.02116", "arxiv:1910.09700", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1901.07291", "1911.02116", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #xlm #fill-mask #exbert #en #arxiv-1901.07291 #arxiv-1911.02116 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
# xlm-mlm-en-2048 # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Citation 8. Model Card Authors 9. How To Get Started With the Model # Model Details The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillau...
[ "# xlm-mlm-en-2048", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model", "# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model...
[ "TAGS\n#transformers #pytorch #tf #xlm #fill-mask #exbert #en #arxiv-1901.07291 #arxiv-1911.02116 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# xlm-mlm-en-2048", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Train...
[ 79, 11, 46, 108, 125, 2, 21, 42, 25, 43, 26, 15, 15, 114, 16, 59 ]
[ "TAGS\n#transformers #pytorch #tf #xlm #fill-mask #exbert #en #arxiv-1901.07291 #arxiv-1911.02116 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n# xlm-mlm-en-2048# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Eval...
fill-mask
transformers
# xlm-mlm-ende-1024 # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#technical-specifications) 8...
{"language": ["multilingual", "en", "de"], "license": "cc-by-nc-4.0"}
FacebookAI/xlm-mlm-ende-1024
null
[ "transformers", "pytorch", "tf", "safetensors", "xlm", "fill-mask", "multilingual", "en", "de", "arxiv:1901.07291", "arxiv:1910.09700", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1901.07291", "1910.09700" ]
[ "multilingual", "en", "de" ]
TAGS #transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #de #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
# xlm-mlm-ende-1024 # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model # Model Details The XLM model was proposed in Cross-lingual Langua...
[ "# xlm-mlm-ende-1024", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model", "# Model Details\n\nThe XLM model was proposed...
[ "TAGS\n#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #de #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# xlm-mlm-ende-1024", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4...
[ 76, 12, 50, 94, 128, 2, 21, 36, 25, 43, 26, 180, 2, 50, 28, 63, 131, 16, 44 ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #de #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n# xlm-mlm-ende-1024# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n...
fill-mask
transformers
# xlm-mlm-enfr-1024 # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#technical-specifications) 8...
{"language": ["multilingual", "en", "fr"], "license": "cc-by-nc-4.0"}
FacebookAI/xlm-mlm-enfr-1024
null
[ "transformers", "pytorch", "tf", "xlm", "fill-mask", "multilingual", "en", "fr", "arxiv:1901.07291", "arxiv:1910.09700", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1901.07291", "1910.09700" ]
[ "multilingual", "en", "fr" ]
TAGS #transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# xlm-mlm-enfr-1024 # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model # Model Details The XLM model was proposed in Cross-lingual Langua...
[ "# xlm-mlm-enfr-1024", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model", "# Model Details\n\nThe XLM model was proposed...
[ "TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# xlm-mlm-enfr-1024", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. ...
[ 76, 12, 50, 94, 128, 2, 21, 36, 25, 43, 26, 180, 2, 50, 28, 63, 131, 16, 44 ]
[ "TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# xlm-mlm-enfr-1024# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5....
fill-mask
transformers
# xlm-mlm-enro-1024 # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#technical-specifications) 8...
{"language": ["multilingual", "en", "ro"], "license": "cc-by-nc-4.0"}
FacebookAI/xlm-mlm-enro-1024
null
[ "transformers", "pytorch", "tf", "xlm", "fill-mask", "multilingual", "en", "ro", "arxiv:1901.07291", "arxiv:1910.09700", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1901.07291", "1910.09700" ]
[ "multilingual", "en", "ro" ]
TAGS #transformers #pytorch #tf #xlm #fill-mask #multilingual #en #ro #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
# xlm-mlm-enro-1024 # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model # Model Details The XLM model was proposed in Cross-lingual Langua...
[ "# xlm-mlm-enro-1024", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model", "# Model Details\n\nThe XLM model was proposed...
[ "TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #ro #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# xlm-mlm-enro-1024", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5...
[ 72, 12, 50, 94, 130, 2, 21, 36, 25, 43, 26, 180, 2, 50, 27, 63, 131, 16, 44 ]
[ "TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #ro #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n# xlm-mlm-enro-1024# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation...
fill-mask
transformers
# xlm-mlm-tlm-xnli15-1024 # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#techn...
{"language": ["multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur"], "license": "cc-by-nc-4.0"}
FacebookAI/xlm-mlm-tlm-xnli15-1024
null
[ "transformers", "pytorch", "tf", "safetensors", "xlm", "fill-mask", "multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur", "arxiv:1901.07291", "arxiv:1910.09700", "license:cc-by-nc-4.0", "autotrain_compatible", "e...
null
2022-03-02T23:29:04+00:00
[ "1901.07291", "1910.09700" ]
[ "multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur" ]
TAGS #transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
xlm-mlm-tlm-xnli15-1024 ======================= Table of Contents ================= 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training Details 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model Model Details =...
[ "### Preprocessing\n\n\nThe model developers write:\n\n\n\n> \n> We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1.\n> \n> \n>", "### Speeds, Sizes, Times\n\n\n...
[ "TAGS\n#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Preprocessing\n\n\nThe model developers write:\n\n\n\n...
[ 104, 63, 964, 234 ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n### Preprocessing\n\n\nThe model developers write:\n\n\n\n> \n> ...
fill-mask
transformers
# xlm-mlm-xnli15-1024 # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#technical...
{"language": ["multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur"], "license": "cc-by-nc-4.0"}
FacebookAI/xlm-mlm-xnli15-1024
null
[ "transformers", "pytorch", "tf", "safetensors", "xlm", "fill-mask", "multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur", "arxiv:1901.07291", "arxiv:1910.09700", "license:cc-by-nc-4.0", "autotrain_compatible", "e...
null
2022-03-02T23:29:04+00:00
[ "1901.07291", "1910.09700" ]
[ "multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur" ]
TAGS #transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
xlm-mlm-xnli15-1024 =================== Table of Contents ================= 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training Details 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model Model Details =========...
[ "### Preprocessing\n\n\nThe model developers write:\n\n\n\n> \n> We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1.\n> \n> \n>", "### Speeds, Sizes, Times\n\n\n...
[ "TAGS\n#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Preprocessing\n\n\nThe model developers write:\n\n\n\n...
[ 104, 63, 1082, 234 ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n### Preprocessing\n\n\nThe model developers write:\n\n\n\n> \n> ...
fill-mask
transformers
# XLM-RoBERTa (base-sized model) XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al. and first released in [this repository](https...
{"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "l...
FacebookAI/xlm-roberta-base
null
[ "transformers", "pytorch", "tf", "jax", "onnx", "safetensors", "xlm-roberta", "fill-mask", "exbert", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi...
null
2022-03-02T23:29:04+00:00
[ "1911.02116" ]
[ "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "i...
TAGS #transformers #pytorch #tf #jax #onnx #safetensors #xlm-roberta #fill-mask #exbert #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #m...
# XLM-RoBERTa (base-sized model) XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository. Disclaimer: The team releasing XLM-RoBER...
[ "# XLM-RoBERTa (base-sized model) \n\nXLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository. \n\nDisclaimer: The team releasing X...
[ "TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #xlm-roberta #fill-mask #exbert #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk ...
[ 268, 102, 274, 106, 38, 38 ]
[ "TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #xlm-roberta #fill-mask #exbert #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk ...
fill-mask
transformers
# xlm-roberta-large-finetuned-conll02-dutch # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#tec...
{"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "l...
FacebookAI/xlm-roberta-large-finetuned-conll02-dutch
null
[ "transformers", "pytorch", "rust", "xlm-roberta", "fill-mask", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "...
null
2022-03-02T23:29:04+00:00
[ "1911.02116", "1910.09700" ]
[ "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "i...
TAGS #transformers #pytorch #rust #xlm-roberta #fill-mask #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om ...
# xlm-roberta-large-finetuned-conll02-dutch # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model # Model Details ## Model Description The ...
[ "# xlm-roberta-large-finetuned-conll02-dutch", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model", "# Model Details", "...
[ "TAGS\n#transformers #pytorch #rust #xlm-roberta #fill-mask #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #n...
[ 257, 18, 50, 3, 253, 2, 39, 51, 25, 74, 26, 33, 10, 76, 102, 16, 49 ]
[ "TAGS\n#transformers #pytorch #rust #xlm-roberta #fill-mask #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #n...
fill-mask
transformers
# xlm-roberta-large-finetuned-conll02-spanish # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#t...
{"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "l...
FacebookAI/xlm-roberta-large-finetuned-conll02-spanish
null
[ "transformers", "pytorch", "rust", "safetensors", "xlm-roberta", "fill-mask", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", ...
null
2022-03-02T23:29:04+00:00
[ "1911.02116", "1910.09700" ]
[ "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "i...
TAGS #transformers #pytorch #rust #safetensors #xlm-roberta #fill-mask #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne...
# xlm-roberta-large-finetuned-conll02-spanish # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model # Model Details ## Model Description Th...
[ "# xlm-roberta-large-finetuned-conll02-spanish", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model", "# Model Details", ...
[ "TAGS\n#transformers #pytorch #rust #safetensors #xlm-roberta #fill-mask #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #...
[ 261, 18, 50, 3, 254, 2, 39, 51, 25, 74, 26, 33, 10, 76, 102, 16, 49 ]
[ "TAGS\n#transformers #pytorch #rust #safetensors #xlm-roberta #fill-mask #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #...
token-classification
transformers
# xlm-roberta-large-finetuned-conll03-english # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#t...
{"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "l...
FacebookAI/xlm-roberta-large-finetuned-conll03-english
null
[ "transformers", "pytorch", "rust", "onnx", "safetensors", "xlm-roberta", "token-classification", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr...
null
2022-03-02T23:29:04+00:00
[ "1911.02116", "2008.03415", "1910.09700" ]
[ "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "i...
TAGS #transformers #pytorch #rust #onnx #safetensors #xlm-roberta #token-classification #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #m...
# xlm-roberta-large-finetuned-conll03-english # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model # Model Details ## Model Description Th...
[ "# xlm-roberta-large-finetuned-conll03-english", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model", "# Model Details", ...
[ "TAGS\n#transformers #pytorch #rust #onnx #safetensors #xlm-roberta #token-classification #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk ...
[ 278, 18, 50, 3, 251, 2, 39, 51, 25, 154, 26, 33, 10, 76, 102, 16, 49 ]
[ "TAGS\n#transformers #pytorch #rust #onnx #safetensors #xlm-roberta #token-classification #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk ...
token-classification
transformers
# xlm-roberta-large-finetuned-conll03-german # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#te...
{"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "l...
FacebookAI/xlm-roberta-large-finetuned-conll03-german
null
[ "transformers", "pytorch", "rust", "onnx", "xlm-roberta", "token-classification", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga"...
null
2022-03-02T23:29:04+00:00
[ "1911.02116", "1910.09700" ]
[ "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "i...
TAGS #transformers #pytorch #rust #onnx #xlm-roberta #token-classification #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my...
# xlm-roberta-large-finetuned-conll03-german # Table of Contents 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Training 5. Evaluation 6. Environmental Impact 7. Technical Specifications 8. Citation 9. Model Card Authors 10. How To Get Started With the Model # Model Details ## Model Description The...
[ "# xlm-roberta-large-finetuned-conll03-german", "# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model", "# Model Details", ...
[ "TAGS\n#transformers #pytorch #rust #onnx #xlm-roberta #token-classification #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #...
[ 260, 18, 50, 3, 246, 2, 39, 51, 25, 74, 26, 33, 10, 76, 102, 16, 49 ]
[ "TAGS\n#transformers #pytorch #rust #onnx #xlm-roberta #token-classification #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #...
fill-mask
transformers
# XLM-RoBERTa (large-sized model) XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al. and first released in [this repository](http...
{"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "l...
FacebookAI/xlm-roberta-large
null
[ "transformers", "pytorch", "tf", "jax", "onnx", "safetensors", "xlm-roberta", "fill-mask", "exbert", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi...
null
2022-03-02T23:29:04+00:00
[ "1911.02116" ]
[ "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "i...
TAGS #transformers #pytorch #tf #jax #onnx #safetensors #xlm-roberta #fill-mask #exbert #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #m...
# XLM-RoBERTa (large-sized model) XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository. Disclaimer: The team releasing XLM-RoBE...
[ "# XLM-RoBERTa (large-sized model) \n\nXLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository. \n\nDisclaimer: The team releasing ...
[ "TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #xlm-roberta #fill-mask #exbert #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk ...
[ 268, 102, 274, 106, 38, 38 ]
[ "TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #xlm-roberta #fill-mask #exbert #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk ...
text-generation
transformers
# XLNet (base-sized model) XLNet model pre-trained on English language. It was introduced in the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Yang et al. and first released in [this repository](https://github.com/zihangdai/xlnet/). Disclaimer...
{"language": "en", "license": "mit", "datasets": ["bookcorpus", "wikipedia"]}
xlnet/xlnet-base-cased
null
[ "transformers", "pytorch", "tf", "rust", "xlnet", "text-generation", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1906.08237", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1906.08237" ]
[ "en" ]
TAGS #transformers #pytorch #tf #rust #xlnet #text-generation #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1906.08237 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# XLNet (base-sized model) XLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository. Disclaimer: The team releasing XLNet did not write a model card for this model so thi...
[ "# XLNet (base-sized model) \n\nXLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository. \n\nDisclaimer: The team releasing XLNet did not write a model card for this mode...
[ "TAGS\n#transformers #pytorch #tf #rust #xlnet #text-generation #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1906.08237 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# XLNet (base-sized model) \n\nXLNet model pre-trained on English language. It was introduced in the pap...
[ 66, 83, 92, 94, 24, 10 ]
[ "TAGS\n#transformers #pytorch #tf #rust #xlnet #text-generation #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1906.08237 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n# XLNet (base-sized model) \n\nXLNet model pre-trained on English language. It was introduced in the paper XLN...
text-generation
transformers
# XLNet (large-sized model) XLNet model pre-trained on English language. It was introduced in the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Yang et al. and first released in [this repository](https://github.com/zihangdai/xlnet/). Disclaime...
{"language": "en", "license": "mit", "datasets": ["bookcorpus", "wikipedia"]}
xlnet/xlnet-large-cased
null
[ "transformers", "pytorch", "tf", "xlnet", "text-generation", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1906.08237", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1906.08237" ]
[ "en" ]
TAGS #transformers #pytorch #tf #xlnet #text-generation #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1906.08237 #license-mit #autotrain_compatible #endpoints_compatible #region-us
# XLNet (large-sized model) XLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository. Disclaimer: The team releasing XLNet did not write a model card for this model so th...
[ "# XLNet (large-sized model) \n\nXLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository. \n\nDisclaimer: The team releasing XLNet did not write a model card for this mod...
[ "TAGS\n#transformers #pytorch #tf #xlnet #text-generation #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1906.08237 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# XLNet (large-sized model) \n\nXLNet model pre-trained on English language. It was introduced in the paper XLNet: Genera...
[ 60, 83, 92, 94, 24, 10 ]
[ "TAGS\n#transformers #pytorch #tf #xlnet #text-generation #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1906.08237 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# XLNet (large-sized model) \n\nXLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized ...
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di...
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "ar...
09panesara/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.7580 * Matthews Correlation: 0.5406 Model description ----------------- More informa...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning...
[ 56, 101, 5, 44 ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rat...
text2text-generation
transformers
## keyT5. Base (small) version [![0x7o - text2keywords](https://img.shields.io/static/v1?label=0x7o&message=text2keywords&color=blue&logo=github)](https://github.com/0x7o/text2keywords "Go to GitHub repo") [![stars - text2keywords](https://img.shields.io/github/stars/0x7o/text2keywords?style=social)](https://github.com...
{"language": ["ru"], "license": "mit", "inference": {"parameters": {"top_p": 0.9}}, "widget": [{"text": "\u0412 \u0420\u043e\u0441\u0441\u0438\u0438 \u043c\u043e\u0436\u0435\u0442 \u043f\u043e\u044f\u0432\u0438\u0442\u044c\u0441\u044f \u043d\u043e\u0432\u044b\u0439 \u0448\u0442\u0430\u043c\u043c \u043a\u043e\u0440\u043...
0x7o/keyt5-base
null
[ "transformers", "pytorch", "t5", "text2text-generation", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #t5 #text2text-generation #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
## keyT5. Base (small) version ![0x7o - text2keywords](URL "Go to GitHub repo") ![stars - text2keywords](URL ![forks - text2keywords](URL Supported languages: ru Github - text2keywords Pretraining Large version | Pretraining Base version # Usage Example usage (the code returns a list with keywords. duplicates are ...
[ "## keyT5. Base (small) version\n![0x7o - text2keywords](URL \"Go to GitHub repo\")\n![stars - text2keywords](URL\n![forks - text2keywords](URL\n\nSupported languages: ru\n\nGithub - text2keywords\n\n\nPretraining Large version\n|\nPretraining Base version", "# Usage\nExample usage (the code returns a list with k...
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## keyT5. Base (small) version\n![0x7o - text2keywords](URL \"Go to GitHub repo\")\n![stars - text2keywords](URL\n![forks - text2keywords](URL\n\nSupport...
[ 43, 83, 33, 26 ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n## keyT5. Base (small) version\n![0x7o - text2keywords](URL \"Go to GitHub repo\")\n![stars - text2keywords](URL\n![forks - text2keywords](URL\n\nSupported lan...
text2text-generation
transformers
## keyT5. Large version [![0x7o - text2keywords](https://img.shields.io/static/v1?label=0x7o&message=text2keywords&color=blue&logo=github)](https://github.com/0x7o/text2keywords "Go to GitHub repo") [![stars - text2keywords](https://img.shields.io/github/stars/0x7o/text2keywords?style=social)](https://github.com/0x7o/t...
{"language": ["ru"], "license": "mit", "inference": {"parameters": {"top_p": 1.0}}, "widget": [{"text": "\u0412 \u0420\u043e\u0441\u0441\u0438\u0438 \u043c\u043e\u0436\u0435\u0442 \u043f\u043e\u044f\u0432\u0438\u0442\u044c\u0441\u044f \u043d\u043e\u0432\u044b\u0439 \u0448\u0442\u0430\u043c\u043c \u043a\u043e\u0440\u043...
0x7o/keyt5-large
null
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #safetensors #t5 #text2text-generation #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
## keyT5. Large version ![0x7o - text2keywords](URL "Go to GitHub repo") ![stars - text2keywords](URL ![forks - text2keywords](URL Supported languages: ru Github - text2keywords Pretraining Large version | Pretraining Base version # Usage Example usage (the code returns a list with keywords. duplicates are possibl...
[ "## keyT5. Large version\n![0x7o - text2keywords](URL \"Go to GitHub repo\")\n![stars - text2keywords](URL\n![forks - text2keywords](URL\n\nSupported languages: ru\n\nGithub - text2keywords\n\n\nPretraining Large version\n|\nPretraining Base version", "# Usage\nExample usage (the code returns a list with keywords...
[ "TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## keyT5. Large version\n![0x7o - text2keywords](URL \"Go to GitHub repo\")\n![stars - text2keywords](URL\n![forks - text2keywords](URL\n\nS...
[ 47, 80, 33, 26 ]
[ "TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n## keyT5. Large version\n![0x7o - text2keywords](URL \"Go to GitHub repo\")\n![stars - text2keywords](URL\n![forks - text2keywords](URL\n\nSupport...
text-generation
transformers
# Rick n Morty DialoGPT Model
{"tags": ["conversational"]}
0xDEADBEA7/DialoGPT-small-rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# Rick n Morty DialoGPT Model
[ "# Rick n Morty DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# Rick n Morty DialoGPT Model" ]
[ 43, 9 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# Rick n Morty DialoGPT Model" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di...
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model_index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "ar...
123abhiALFLKFO/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.8628 * Matthews Correlation: 0.5331 Model description ----------------- More informa...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-0...
[ 52, 101, 5, 44 ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* t...
text-generation
transformers
#Jake Peralta DialoGPT Model
{"tags": ["conversational"]}
1Basco/DialoGPT-small-jake
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#Jake Peralta DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wa...
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
202015004/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-base-timit-demo-colab ============================== This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.6259 * Wer: 0.3544 Model description ----------------- More information needed Intended uses & limi...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps:...
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4...
[ 47, 128, 5, 44 ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* ev...
text-generation
transformers
# Deadpool DialoGPT Model
{"tags": ["conversational"]}
2early4coffee/DialoGPT-medium-deadpool
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Deadpool DialoGPT Model
[ "# Deadpool DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Deadpool DialoGPT Model" ]
[ 39, 7 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Deadpool DialoGPT Model" ]
text-generation
transformers
# Deadpool DialoGPT Model
{"tags": ["conversational"]}
2early4coffee/DialoGPT-small-deadpool
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Deadpool DialoGPT Model
[ "# Deadpool DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Deadpool DialoGPT Model" ]
[ 39, 7 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Deadpool DialoGPT Model" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di...
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "ar...
2umm3r/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.7816 * Matthews Correlation: 0.5156 Model description ----------------- More informa...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning...
[ 56, 101, 5, 44 ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rat...
feature-extraction
transformers
This is a fine-tuned GPT-2 text-generation model trained on a Hunter x Hunter TV anime series dataset. You can find a link to the dataset here: https://www.kaggle.com/bkoozy/hunter-x-hunter-subtitles and a Colab notebook for fine-tuning the GPT-2 model here: https://github.com/3koozy/fine-tune-gpt2-HxH/
{}
3koozy/gpt2-HxH
null
[ "transformers", "pytorch", "gpt2", "feature-extraction", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us
This is a fine-tuned GPT-2 text-generation model trained on a Hunter x Hunter TV anime series dataset. You can find a link to the dataset here: URL You can find a Colab notebook for fine-tuning the GPT-2 model here: URL
[]
[ "TAGS\n#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 31 ]
[ "TAGS\n#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n" ]
token-classification
transformers
## Model description This model is a fine-tuned version of macbert for the purpose of spell checking in medical application scenarios. We fine-tuned macbert Chinese base version on a 300M dataset including 60K+ authorized medical articles. We proposed to randomly confuse 30% sentences of these articles by adding n...
{"language": "zh", "license": "apache-2.0", "tags": ["Token Classification"], "metrics": ["precision", "recall", "f1", "accuracy"]}
9pinus/macbert-base-chinese-medical-collation
null
[ "transformers", "pytorch", "bert", "token-classification", "Token Classification", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "zh" ]
TAGS #transformers #pytorch #bert #token-classification #Token Classification #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
## Model description This model is a fine-tuned version of macbert for the purpose of spell checking in medical application scenarios. We fine-tuned macbert Chinese base version on a 300M dataset including 60K+ authorized medical articles. We proposed to randomly confuse 30% sentences of these articles by adding n...
[ "## Model description\r\n\r\nThis model is a fine-tuned version of macbert for the purpose of spell checking in medical application scenarios. We fine-tuned macbert Chinese base version on a 300M dataset including 60K+ authorized medical articles. We proposed to randomly confuse 30% sentences of these articles by a...
[ "TAGS\n#transformers #pytorch #bert #token-classification #Token Classification #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "## Model description\r\n\r\nThis model is a fine-tuned version of macbert for the purpose of spell checking in medical application scenarios. We fine...
[ 42, 93, 19, 44 ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #Token Classification #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n## Model description\r\n\r\nThis model is a fine-tuned version of macbert for the purpose of spell checking in medical application scenarios. We fine-tuned...
token-classification
transformers
## Model description This model is a fine-tuned version of bert-base-chinese for the purpose of medicine name recognition. We fine-tuned bert-base-chinese on a 500M dataset including 100K+ authorized medical articles on which we labeled all the medicine names. The model achieves 92% accuracy on our test dataset. ...
{"language": ["zh"], "license": "apache-2.0", "tags": ["Token Classification"]}
9pinus/macbert-base-chinese-medicine-recognition
null
[ "transformers", "pytorch", "bert", "token-classification", "Token Classification", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "zh" ]
TAGS #transformers #pytorch #bert #token-classification #Token Classification #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
## Model description This model is a fine-tuned version of bert-base-chinese for the purpose of medicine name recognition. We fine-tuned bert-base-chinese on a 500M dataset including 100K+ authorized medical articles on which we labeled all the medicine names. The model achieves 92% accuracy on our test dataset. ...
[ "## Model description\r\nThis model is a fine-tuned version of bert-base-chinese for the purpose of medicine name recognition. We fine-tuned bert-base-chinese on a 500M dataset including 100K+ authorized medical articles on which we labeled all the medicine names. The model achieves 92% accuracy on our test dataset...
[ "TAGS\n#transformers #pytorch #bert #token-classification #Token Classification #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "## Model description\r\nThis model is a fine-tuned version of bert-base-chinese for the purpose of medicine name recognition. We fine-tuned bert-base...
[ 42, 70, 4, 6, 44 ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #Token Classification #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n## Model description\r\nThis model is a fine-tuned version of bert-base-chinese for the purpose of medicine name recognition. We fine-tuned bert-base-chine...
text-classification
transformers
bert-base-cased model trained on the Quora Question Pairs dataset. The task is to predict whether two given sentences (or questions) are `not_duplicate` (label 0) or `duplicate` (label 1). The model achieves 89% evaluation accuracy
{"datasets": ["qqp"], "inference": false}
A-bhimany-u08/bert-base-cased-qqp
null
[ "transformers", "pytorch", "bert", "text-classification", "dataset:qqp", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #dataset-qqp #autotrain_compatible #region-us
bert-base-cased model trained on the Quora Question Pairs dataset. The task is to predict whether two given sentences (or questions) are 'not_duplicate' (label 0) or 'duplicate' (label 1). The model achieves 89% evaluation accuracy
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #dataset-qqp #autotrain_compatible #region-us \n" ]
[ 30 ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #dataset-qqp #autotrain_compatible #region-us \n" ]
text-generation
transformers
@Harry Potter DialoGPT model
{"tags": ["conversational"]}
ABBHISHEK/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
@Harry Potter DialoGPT model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
feature-extraction
transformers
Pre-trained on the clus_ chapter only.
{}
AG/pretraining
null
[ "transformers", "pytorch", "roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us
Pre-trained on the clus_ chapter only.
[]
[ "TAGS\n#transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us \n" ]
[ 23 ]
[ "TAGS\n#transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us \n" ]
sentence-similarity
sentence-transformers
# PatentSBERTa ## PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT ### Aalborg University Business School, AI: Growth-Lab https://arxiv.org/abs/2103.11933 https://github.com/AI-Growth-Lab/PatentSBERTa This is a [sentence-transformers](https://www.SBERT.ne...
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
AI-Growth-Lab/PatentSBERTa
null
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "arxiv:2103.11933", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2103.11933" ]
[]
TAGS #sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #transformers #arxiv-2103.11933 #endpoints_compatible #has_space #region-us
# PatentSBERTa ## PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT ### Aalborg University Business School, AI: Growth-Lab URL URL This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used...
[ "# PatentSBERTa", "## PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT", "### Aalborg University Business School, AI: Growth-Lab \n\nURL\n\nURL\n\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector spa...
[ "TAGS\n#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #transformers #arxiv-2103.11933 #endpoints_compatible #has_space #region-us \n", "# PatentSBERTa", "## PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT", "### Aalborg ...
[ 46, 4, 22, 54, 30, 58, 26, 69, 5, 5 ]
[ "TAGS\n#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #transformers #arxiv-2103.11933 #endpoints_compatible #has_space #region-us \n# PatentSBERTa## PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT### Aalborg University Busines...
text2text-generation
transformers
# Model Trained Using AutoNLP - Problem type: Machine Translation - Model ID: 474612462 - CO2 Emissions (in grams): 133.0219882109991 ## Validation Metrics - Loss: 1.336498737335205 - Rouge1: 52.5404 - Rouge2: 31.6639 - RougeL: 50.1696 - RougeLsum: 50.3398 - Gen Len: 39.046 ## Usage You can use cURL to access thi...
{"language": "unk", "tags": "autonlp", "datasets": ["Eric Peter/autonlp-data-EN-LUG"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 133.0219882109991}
AI-Lab-Makerere/en_lg
null
[ "transformers", "pytorch", "marian", "text2text-generation", "autonlp", "unk", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "unk" ]
TAGS #transformers #pytorch #marian #text2text-generation #autonlp #unk #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Machine Translation - Model ID: 474612462 - CO2 Emissions (in grams): 133.0219882109991 ## Validation Metrics - Loss: 1.336498737335205 - Rouge1: 52.5404 - Rouge2: 31.6639 - RougeL: 50.1696 - RougeLsum: 50.3398 - Gen Len: 39.046 ## Usage You can use cURL to access thi...
[ "# Model Trained Using AutoNLP\n\n- Problem type: Machine Translation\n- Model ID: 474612462\n- CO2 Emissions (in grams): 133.0219882109991", "## Validation Metrics\n\n- Loss: 1.336498737335205\n- Rouge1: 52.5404\n- Rouge2: 31.6639\n- RougeL: 50.1696\n- RougeLsum: 50.3398\n- Gen Len: 39.046", "## Usage\n\nYou c...
[ "TAGS\n#transformers #pytorch #marian #text2text-generation #autonlp #unk #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Machine Translation\n- Model ID: 474612462\n- CO2 Emissions (in grams): 133.0219882109991", "## Validation Met...
[ 45, 43, 60, 12 ]
[ "TAGS\n#transformers #pytorch #marian #text2text-generation #autonlp #unk #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Machine Translation\n- Model ID: 474612462\n- CO2 Emissions (in grams): 133.0219882109991## Validation Metrics\n\n- Lo...
text2text-generation
transformers
# Model Trained Using AutoNLP - Problem type: Machine Translation - Model ID: 475112539 - CO2 Emissions (in grams): 126.34446293851818 ## Validation Metrics - Loss: 1.5376628637313843 - Rouge1: 62.4613 - Rouge2: 39.4759 - RougeL: 58.183 - RougeLsum: 58.226 - Gen Len: 26.5644 ## Usage You can use cURL to access th...
{"language": "unk", "tags": "autonlp", "datasets": ["EricPeter/autonlp-data-MarianMT_lg_en"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 126.34446293851818}
AI-Lab-Makerere/lg_en
null
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "autonlp", "unk", "dataset:EricPeter/autonlp-data-MarianMT_lg_en", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "unk" ]
TAGS #transformers #pytorch #safetensors #marian #text2text-generation #autonlp #unk #dataset-EricPeter/autonlp-data-MarianMT_lg_en #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Machine Translation - Model ID: 475112539 - CO2 Emissions (in grams): 126.34446293851818 ## Validation Metrics - Loss: 1.5376628637313843 - Rouge1: 62.4613 - Rouge2: 39.4759 - RougeL: 58.183 - RougeLsum: 58.226 - Gen Len: 26.5644 ## Usage You can use cURL to access th...
[ "# Model Trained Using AutoNLP\n\n- Problem type: Machine Translation\n- Model ID: 475112539\n- CO2 Emissions (in grams): 126.34446293851818", "## Validation Metrics\n\n- Loss: 1.5376628637313843\n- Rouge1: 62.4613\n- Rouge2: 39.4759\n- RougeL: 58.183\n- RougeLsum: 58.226\n- Gen Len: 26.5644", "## Usage\n\nYou ...
[ "TAGS\n#transformers #pytorch #safetensors #marian #text2text-generation #autonlp #unk #dataset-EricPeter/autonlp-data-MarianMT_lg_en #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Machine Translation\n- Model ID: 475112539\n- CO2 Em...
[ 70, 39, 58, 12 ]
[ "TAGS\n#transformers #pytorch #safetensors #marian #text2text-generation #autonlp #unk #dataset-EricPeter/autonlp-data-MarianMT_lg_en #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Machine Translation\n- Model ID: 475112539\n- CO2 Emission...
fill-mask
transformers
# A Swedish BERT model ## Model description This model follows the BERT Large architecture as implemented in the [Megatron-LM framework](https://github.com/NVIDIA/Megatron-LM). It was trained with a batch size of 512 for 600k steps. The model contains the following parameters: <figure> | Hyperparameter | Value ...
{"language": "sv"}
AI-Nordics/bert-large-swedish-cased
null
[ "transformers", "pytorch", "megatron-bert", "fill-mask", "sv", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "sv" ]
TAGS #transformers #pytorch #megatron-bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us
A Swedish BERT model ==================== Model description ----------------- This model follows the BERT Large architecture as implemented in the Megatron-LM framework. It was trained with a batch size of 512 for 600k steps. The model contains the following parameters: Training data ------------- The model is p...
[]
[ "TAGS\n#transformers #pytorch #megatron-bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 33 ]
[ "TAGS\n#transformers #pytorch #megatron-bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us \n" ]
sentence-similarity
sentence-transformers
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when ...
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2
null
[ "sentence-transformers", "pytorch", "feature-extraction", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #sentence-transformers #pytorch #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
# {MODEL_NAME} This is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can u...
[ "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\n...
[ "TAGS\n#sentence-transformers #pytorch #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n", "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic...
[ 29, 41, 30, 58, 26, 70, 5, 5 ]
[ "TAGS\n#sentence-transformers #pytorch #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic searc...
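The semantic-search use case mentioned in the card above (dense 384-dimensional embeddings) boils down to nearest-neighbour lookup by cosine similarity. A minimal NumPy sketch of that step, using toy 3-dimensional vectors in place of real model embeddings (the vectors and corpus here are illustrative, not model output):

```python
import numpy as np

def cosine_sim(a, b):
    # Normalize rows, then a matrix product gives pairwise cosine similarities.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Toy stand-ins for model.encode(...) output.
query = np.array([[1.0, 0.0, 0.0]])
corpus = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

scores = cosine_sim(query, corpus)   # shape (1, 2)
best = int(scores.argmax())          # index of the closest corpus sentence
print(best)  # 0
```

With real embeddings the only change is producing `query` and `corpus` via the model's encode call.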
sentence-similarity
sentence-transformers
# AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) ...
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1
null
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
# AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1 This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transf...
[ "# AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sente...
[ "TAGS\n#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n", "# AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and c...
[ 34, 61, 30, 58, 26, 70, 5, 5 ]
[ "TAGS\n#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be ...
sentence-similarity
sentence-transformers
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when ...
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
AIDA-UPM/MSTSb_stsb-xlm-r-multilingual
null
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
# {MODEL_NAME} This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can u...
[ "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\n...
[ "TAGS\n#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n", "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clusterin...
[ 34, 41, 30, 58, 26, 70, 5, 5 ]
[ "TAGS\n#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or s...
text-classification
transformers
# bertweet-base-multi-mami This is a Bertweet model: It maps sentences & paragraphs to a 768 dimensional dense vector space and classifies them into 5 multi labels. # Multilabels label2id={ "misogynous": 0, "shaming": 1, "stereotype": 2, "objectification": 3, "violence": 4,...
{"language": "en", "license": "apache-2.0", "tags": ["text-classification", "misogyny"], "pipeline_tag": "text-classification", "widget": [{"text": "Women wear yoga pants because men don't stare at their personality", "example_title": "Misogyny detection"}]}
AIDA-UPM/bertweet-base-multi-mami
null
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "misogyny", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #safetensors #roberta #text-classification #misogyny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# bertweet-base-multi-mami This is a Bertweet model: It maps sentences & paragraphs to a 768 dimensional dense vector space and classifies them into 5 multi labels. # Multilabels label2id={ "misogynous": 0, "shaming": 1, "stereotype": 2, "objectification": 3, "violence": 4,...
[ "# bertweet-base-multi-mami\nThis is a Bertweet model: It maps sentences & paragraphs to a 768 dimensional dense vector space and classifies them into 5 multi labels.", "# Multilabels\n label2id={\n \"misogynous\": 0,\n \"shaming\": 1,\n \"stereotype\": 2,\n \"objectification\": 3,\...
[ "TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #misogyny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# bertweet-base-multi-mami\nThis is a Bertweet model: It maps sentences & paragraphs to a 768 dimensional dense vector space and classifies them i...
[ 46, 42, 46 ]
[ "TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #misogyny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bertweet-base-multi-mami\nThis is a Bertweet model: It maps sentences & paragraphs to a 768 dimensional dense vector space and classifies them into 5 ...
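The card above lists five labels via `label2id`. For a multilabel classifier of this kind, predictions are typically decoded by applying a sigmoid to each logit independently and thresholding; the sketch below assumes a 0.5 threshold and made-up logits (neither is taken from the card):

```python
import numpy as np

# Label mapping as given in the model card.
label2id = {"misogynous": 0, "shaming": 1, "stereotype": 2,
            "objectification": 3, "violence": 4}
id2label = {v: k for k, v in label2id.items()}

def decode(logits, threshold=0.5):
    # Independent sigmoid per label; keep every label above the threshold.
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return [id2label[i] for i, p in enumerate(probs) if p >= threshold]

labels = decode([2.0, -3.0, 1.5, -1.0, -2.0])  # hypothetical logits
print(labels)  # ['misogynous', 'stereotype']
```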
sentence-similarity
transformers
# mstsb-paraphrase-multilingual-mpnet-base-v2 This is a fine-tuned version of `paraphrase-multilingual-mpnet-base-v2` from [sentence-transformers](https://www.SBERT.net) model with [Semantic Textual Similarity Benchmark](http://ixa2.si.ehu.eus/stswiki/index.php/Main_Page) extended to 15 languages: It maps sentences &...
{"language": "multilingual", "tags": ["feature-extraction", "sentence-similarity", "transformers", "multilingual"], "pipeline_tag": "sentence-similarity"}
AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "multilingual", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "multilingual" ]
TAGS #transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #multilingual #endpoints_compatible #region-us
mstsb-paraphrase-multilingual-mpnet-base-v2 =========================================== This is a fine-tuned version of 'paraphrase-multilingual-mpnet-base-v2' from sentence-transformers model with Semantic Textual Similarity Benchmark extended to 15 languages: It maps sentences & paragraphs to a 768 dimensional dens...
[]
[ "TAGS\n#transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #multilingual #endpoints_compatible #region-us \n" ]
[ 34 ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #multilingual #endpoints_compatible #region-us \n" ]
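Sentence-transformers cards like the one above usually derive the 768-dimensional sentence vector by mean-pooling token embeddings with the attention mask. A NumPy sketch of that pooling step, with dummy all-ones embeddings standing in for real transformer output:

```python
import numpy as np

def mean_pooling(token_embeddings, attention_mask):
    # token_embeddings: (batch, seq_len, dim); attention_mask: (batch, seq_len)
    mask = attention_mask[..., None].astype(float)      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)      # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)      # avoid divide-by-zero
    return summed / counts

# Dummy batch: 2 sentences, 4 tokens, 768 dims; second sentence unpadded.
emb = np.ones((2, 4, 768))
mask = np.array([[1, 1, 0, 0],
                 [1, 1, 1, 1]])
pooled = mean_pooling(emb, mask)
print(pooled.shape)  # (2, 768)
```

Padding tokens are zeroed out by the mask, so they contribute neither to the sum nor to the token count.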
text-classification
transformers
This is a fine-tuned XLM-RoBERTA model for natural language inference. It has been trained with a massive amount of data following the ANLI pipeline training. We include data from: - [mnli](https://cims.nyu.edu/~sbowman/multinli/) {train, dev and test} - [snli](https://nlp.stanford.edu/projects/snli/) {train, dev and ...
{"language": "en", "license": "apache-2.0", "tags": ["natural-language-inference", "misogyny"], "pipeline_tag": "text-classification", "widget": [{"text": "Las mascarillas causan hipoxia. Wearing masks is harmful to human health", "example_title": "Natural Language Inference"}]}
AIDA-UPM/xlm-roberta-large-snli_mnli_xnli_fever_r1_r2_r3
null
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "natural-language-inference", "misogyny", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #xlm-roberta #text-classification #natural-language-inference #misogyny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
This is a fine-tuned XLM-RoBERTA model for natural language inference. It has been trained with a massive amount of data following the ANLI pipeline training. We include data from: * mnli {train, dev and test} * snli {train, dev and test} * xnli {train, dev and test} * fever {train, dev and test} * anli {train} The...
[]
[ "TAGS\n#transformers #pytorch #xlm-roberta #text-classification #natural-language-inference #misogyny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 51 ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #text-classification #natural-language-inference #misogyny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
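An NLI model such as the one above emits one logit per class, and the predicted relation is the argmax of the softmax over those logits. In the sketch below the label ordering (entailment / neutral / contradiction) and the logit values are assumptions for illustration, not taken from the card:

```python
import numpy as np

labels = ["entailment", "neutral", "contradiction"]  # assumed ordering

def predict(logits):
    z = np.asarray(logits, dtype=float)
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(z - z.max())
    probs = e / e.sum()
    return labels[int(probs.argmax())], probs

label, probs = predict([3.1, 0.2, -1.4])  # hypothetical logits
print(label)  # entailment
```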
text-generation
transformers
# tests
{"tags": ["conversational"]}
AIDynamics/DialoGPT-medium-MentorDealerGuy
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# tests
[ "# tests" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# tests" ]
[ 39, 2 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# tests" ]
text-generation
transformers
# Uses DialoGPT
{"tags": ["conversational"]}
AJ/DialoGPT-small-ricksanchez
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Uses DialoGPT
[ "# Uses DialoGPT" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Uses DialoGPT" ]
[ 39, 5 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Uses DialoGPT" ]
text-generation
transformers
# its rick from rick and morty
{"tags": ["conversational", "humor"]}
AJ/rick-discord-bot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "humor", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #humor #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# its rick from rick and morty
[ "# its rick from rick and morty" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #humor #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# its rick from rick and morty" ]
[ 41, 8 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #humor #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# its rick from rick and morty" ]
text-generation
null
# uses dialogpt
{"tags": ["conversational", "funny"]}
AJ/rick-sanchez-bot
null
[ "conversational", "funny", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #conversational #funny #region-us
# uses dialogpt
[ "# uses dialogpt" ]
[ "TAGS\n#conversational #funny #region-us \n", "# uses dialogpt" ]
[ 10, 5 ]
[ "TAGS\n#conversational #funny #region-us \n# uses dialogpt" ]
text-generation
transformers
# Harry Potter DialoGPT model
{"tags": ["conversational"]}
AJ-Dude/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT model
[ "# Harry Potter DialoGPT model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT model" ]
[ 39, 7 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT model" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
AK270802/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
[ 39, 7 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-epochs10 This model is a fine-tuned version of [AKulk/wav2vec2-base-timit-epochs5](https://huggingface.co/AK...
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-epochs10", "results": []}]}
AKulk/wav2vec2-base-timit-epochs10
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
# wav2vec2-base-timit-epochs10 This model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs5 on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Tr...
[ "# wav2vec2-base-timit-epochs10\n\nThis model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs5 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## ...
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "# wav2vec2-base-timit-epochs10\n\nThis model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs5 on the None dataset.", "## Model descrip...
[ 47, 50, 7, 9, 9, 4, 133, 5, 44 ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n# wav2vec2-base-timit-epochs10\n\nThis model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs5 on the None dataset.## Model description\n\nMore...