
martin-ha/toxic-comment-model


Model description

This model is a fine-tuned version of DistilBERT for classifying toxic comments.


How to use

You can use the model with the following code.

from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline

model_path = "martin-ha/toxic-comment-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

# Wrap the model and tokenizer in a text-classification pipeline.
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline('This is a test text.'))
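The pipeline returns a list with one result per input; each result is a dictionary containing the predicted label and its confidence score.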


Limitations and Bias

This model is intended to be used to classify toxic online comments. However, one limitation is that it performs poorly for some comments that mention a specific identity subgroup, such as Muslim. The following table shows evaluation scores for different identity groups (a sketch of how these metrics can be computed follows the table). You can learn the specific meaning of these metrics here. In short, these metrics measure how well the model performs for a specific group; the larger the number, the better.

subgroup                        subgroup_size   subgroup_auc   bpsn_auc   bnsp_auc
muslim                                    108          0.689      0.811      0.880
jewish                                     40          0.749      0.860      0.825
homosexual_gay_or_lesbian                  56          0.795      0.706      0.972
black                                      84          0.866      0.758      0.975
white                                     112          0.876      0.784      0.970
female                                    306          0.898      0.887      0.948
christian                                 231          0.904      0.917      0.930
male                                      225          0.922      0.862      0.967
psychiatric_or_mental_illness              26          0.924      0.907      0.950
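The following is a minimal sketch of how these per-subgroup metrics can be computed with scikit-learn. It assumes a pandas DataFrame df with a binary "toxic" label column, a model "score" column, and one boolean column per identity subgroup; it is not the evaluation code that produced the table above.

# Sketch of the unintended-bias metrics, assuming a DataFrame `df` with
# columns: "toxic" (0/1 label), "score" (model probability), and a boolean
# column per identity subgroup (e.g. "muslim"). Illustrative only.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auc(df: pd.DataFrame, group: str) -> float:
    # AUC restricted to comments that mention the subgroup.
    sub = df[df[group]]
    return roc_auc_score(sub["toxic"], sub["score"])

def bpsn_auc(df: pd.DataFrame, group: str) -> float:
    # Background Positive, Subgroup Negative: toxic comments that do NOT
    # mention the subgroup plus non-toxic comments that do.
    mask = (~df[group] & (df["toxic"] == 1)) | (df[group] & (df["toxic"] == 0))
    sel = df[mask]
    return roc_auc_score(sel["toxic"], sel["score"])

def bnsp_auc(df: pd.DataFrame, group: str) -> float:
    # Background Negative, Subgroup Positive: non-toxic comments that do NOT
    # mention the subgroup plus toxic comments that do.
    mask = (~df[group] & (df["toxic"] == 0)) | (df[group] & (df["toxic"] == 1))
    sel = df[mask]
    return roc_auc_score(sel["toxic"], sel["score"])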

The table above shows that the model performs poorly for the Muslim and Jewish groups. In fact, if you pass the sentence “Muslims are people who follow or practice Islam, an Abrahamic monotheistic religion.” into the model, it will classify it as toxic. Be mindful of this type of potential bias.
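As a quick check, you can run that sentence through the same pipeline used in the usage example above:

from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline

model_path = "martin-ha/toxic-comment-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)

# As noted above, this neutral, factual sentence is misclassified as toxic.
print(pipeline("Muslims are people who follow or practice Islam, an Abrahamic monotheistic religion."))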


Training data

The training data comes from this Kaggle competition. We use 10% of the train.csv data to train the model.
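For illustration, a 10% sample of train.csv could be drawn as follows; the file path and random seed are assumptions, not the exact preprocessing used for this model.

# Sketch of sampling 10% of the Kaggle train.csv. The path and random seed
# are illustrative; the original sampling step may differ.
import pandas as pd

train = pd.read_csv("train.csv")
train_subset = train.sample(frac=0.1, random_state=42)
train_subset.to_csv("train_subset.csv", index=False)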


Training procedure

You can see this documentation and code for how we trained the model. Training takes about 3 hours on a P100 GPU.
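The linked documentation and code are the authoritative reference. The snippet below is only a minimal sketch of fine-tuning DistilBERT for binary toxicity classification with the Hugging Face Trainer; the base checkpoint, column names, file path, and hyperparameters are assumptions chosen for illustration.

# Minimal fine-tuning sketch with the Hugging Face Trainer. The base checkpoint,
# column names, hyperparameters, and file path are illustrative assumptions,
# not the exact setup behind this model.
import pandas as pd
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# Assume a CSV with a "comment_text" column and a binary "label" column.
df = pd.read_csv("train_subset.csv")
dataset = Dataset.from_pandas(df).train_test_split(test_size=0.1)

def tokenize(batch):
    return tokenizer(batch["comment_text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="toxic-comment-model",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()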


Evaluation results

The model achieves 94% accuracy and a 0.59 F1-score on a 10,000-row held-out test set.
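An evaluation of this kind can be reproduced along the following lines. This is a sketch only: the held-out texts and labels below are placeholders, and the name of the positive label ("toxic") is an assumption that should be checked against the pipeline output.

# Sketch of computing accuracy and F1 on a held-out split with scikit-learn.
# `texts` and `labels` are placeholders, not the actual 10,000-row test set,
# and the positive label name is an assumption.
from sklearn.metrics import accuracy_score, f1_score
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline

model_path = "martin-ha/toxic-comment-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)

texts = ["This is a test text."]   # placeholder held-out comments
labels = [0]                       # placeholder ground-truth labels (1 = toxic)

preds = [1 if out["label"] == "toxic" else 0 for out in pipeline(texts)]
print("accuracy:", accuracy_score(labels, preds))
print("f1:", f1_score(labels, preds, zero_division=0))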
