
mrm8488/codebert-base-finetuned-detect-insecure-code


CodeBERT fine-tuned for Insecure Code Detection ⛔

codebert-base fine-tuned on the CodeXGLUE Defect Detection dataset for the insecure code detection downstream task.


Details of CodeBERT

We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both bimodal data of NL-PL pairs and unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.
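
To make the replaced-token-detection objective concrete, here is a toy Python sketch (not CodeBERT's actual tokenizer or generator): some positions are swapped for plausible alternative tokens, and the discriminator's targets are binary labels marking which positions were replaced.

# Toy sketch of the replaced-token-detection (RTD) idea described above.
# The token list and "generator" are hypothetical stand-ins for illustration only.
import random

original = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "+", "b"]
vocab    = ["def", "sub", "(", "x", ",", "y", ")", ":", "return", "a", "-", "b", "*"]

corrupted, labels = [], []
for tok in original:
    if random.random() < 0.15:                   # corrupt roughly 15% of positions
        replacement = random.choice(vocab)       # toy "generator": sample a plausible token
        corrupted.append(replacement)
        labels.append(int(replacement != tok))   # 1 = replaced, 0 = kept original
    else:
        corrupted.append(tok)
        labels.append(0)

print(list(zip(corrupted, labels)))              # the discriminator learns to predict these labels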


Details of the downstream task (code classification) – Dataset

Given a piece of source code, the task is to identify whether it is insecure code that may expose software systems to attacks, such as resource leaks, use-after-free vulnerabilities, and DoS attacks. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code.

The dataset used comes from the paper Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. All projects are combined and split 80%/10%/10% into training/dev/test sets.

Data statistics of the dataset are shown in the table below:

Split   #Examples
Train   21,854
Dev     2,732
Test    2,732
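
To reproduce these splits, a minimal sketch along the following lines should work, assuming the CodeXGLUE defect-detection release is available on the Hugging Face Hub as code_x_glue_cc_defect_detection with "func" (source code) and "target" (label) fields; check the dataset card for the exact schema.

# Sketch: inspect the Devign-based defect-detection data (dataset id is an assumption).
from datasets import load_dataset

dataset = load_dataset("code_x_glue_cc_defect_detection")

for split in ("train", "validation", "test"):
    print(split, len(dataset[split]))   # sizes should match the table above

example = dataset["train"][0]
print(example["target"])                # 1/True = insecure, 0/False = secure
print(example["func"][:200])            # first characters of the function's source code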


Test set metrics

Methods    Accuracy (%)
BiLSTM     59.37
TextCNN    60.69
RoBERTa    61.05
CodeBERT   62.08
Ours       65.30


Model in Action

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np

tokenizer = AutoTokenizer.from_pretrained('mrm8488/codebert-base-finetuned-detect-insecure-code')
model = AutoModelForSequenceClassification.from_pretrained('mrm8488/codebert-base-finetuned-detect-insecure-code')

inputs = tokenizer("your code here", return_tensors="pt", truncation=True, padding='max_length')
labels = torch.tensor([1]).unsqueeze(0)  # batch size 1; dummy label so the model also returns a loss

outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits

print(np.argmax(logits.detach().numpy()))  # predicted class: 1 = insecure code, 0 = secure code
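
For plain inference without a dummy label or loss, a sketch like the following may be cleaner; the code snippet passed in is a toy use-after-free example for illustration, and class index 1 is assumed to mean insecure as described above.

# Inference-only sketch: softmax probabilities instead of a raw argmax.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mrm8488/codebert-base-finetuned-detect-insecure-code"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

code_snippet = 'char *p = malloc(8); free(p); strcpy(p, "stale");'  # toy use-after-free

inputs = tokenizer(code_snippet, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
print(f"secure: {probs[0].item():.3f}  insecure: {probs[1].item():.3f}")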

Created by Manuel Romero/@mrm8488 | LinkedIn

Made with ♥ in Spain
