Keras Implementation of Structured data learning with TabTransformer
This repo contains the trained model for Structured data learning with TabTransformer.
Full credit goes to Khalid Salama.
Spaces Link:
Model summary:
- The trained model uses a self-attention-based Transformer architecture followed by multiple feed-forward layers, in order to serve both supervised and semi-supervised learning.
- The model's inputs can contain both numerical and categorical features.
- All categorical features are encoded into embedding vectors with the same number of embedding dimensions; a learned column embedding is added (point-wise) to each feature embedding before the embeddings are fed into a stack of Transformer blocks.
- The contextual embeddings of the categorical features produced by the final Transformer layer are concatenated with the input numerical features and fed into a final MLP block.
- A softmax function is applied at the end of the model (a minimal architecture sketch follows this list).
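
A minimal Keras sketch of the architecture described above. The layer sizes, vocabulary size, head counts, and block counts are illustrative assumptions, not the exact configuration of the published checkpoint; the learned column embedding is omitted for brevity.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

NUM_CATEGORICAL = 9      # categorical inputs (assumed already integer-encoded)
NUM_NUMERICAL = 5        # numerical inputs
VOCAB_SIZE = 100         # illustrative shared vocabulary size for categorical values
EMBED_DIM = 16           # shared embedding dimension
NUM_TRANSFORMER_BLOCKS = 3
NUM_HEADS = 4

categorical_inputs = keras.Input(shape=(NUM_CATEGORICAL,), dtype="int32", name="categorical")
numerical_inputs = keras.Input(shape=(NUM_NUMERICAL,), dtype="float32", name="numerical")

# Embed every categorical value with the same embedding dimension: (batch, 9, 16).
# (The full TabTransformer also adds a learned column embedding point-wise here.)
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(categorical_inputs)

# Stack of Transformer blocks producing contextual embeddings of the categorical features.
for _ in range(NUM_TRANSFORMER_BLOCKS):
    attention = layers.MultiHeadAttention(num_heads=NUM_HEADS, key_dim=EMBED_DIM)(x, x)
    x = layers.LayerNormalization()(x + attention)
    ffn = layers.Dense(EMBED_DIM * 2, activation="gelu")(x)
    ffn = layers.Dense(EMBED_DIM)(ffn)
    x = layers.LayerNormalization()(x + ffn)

# Concatenate the flattened contextual embeddings with the numerical features,
# then apply the final MLP block and a softmax head.
features = layers.Concatenate()([layers.Flatten()(x), numerical_inputs])
features = layers.Dense(64, activation="relu")(features)
outputs = layers.Dense(2, activation="softmax")(features)

model = keras.Model(inputs=[categorical_inputs, numerical_inputs], outputs=outputs)
model.summary()
```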
 
Intended uses & limitations:
- This model can be used for both supervised and semi-supervised tasks on tabular data.
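
As a usage sketch, the trained model can be loaded directly from the Hugging Face Hub with `huggingface_hub.from_pretrained_keras`; the repo id below is the one this card belongs to, and inputs must be preprocessed the same way as during training.

```python
# Usage sketch: load the published Keras model from the Hub.
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("keras-io/tab_transformer")
model.summary()
```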
 
Training and evaluation data:
- This model was trained on the United States Census Income Dataset provided by the UC Irvine Machine Learning Repository. The task is to predict whether a person makes over USD 50,000 a year (binary classification).
- The dataset consists of 14 input features: 5 numerical features and 9 categorical features (a data-loading sketch follows).
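
A minimal sketch of loading the Census Income (Adult) data with pandas. The column names and URL follow the UCI repository's documentation and are assumptions about the preprocessing, not the exact pipeline used for this checkpoint.

```python
import pandas as pd

# Column names per the UCI Adult dataset documentation (assumed, not taken from this repo).
CSV_HEADER = [
    "age", "workclass", "fnlwgt", "education", "education_num",
    "marital_status", "occupation", "relationship", "race", "gender",
    "capital_gain", "capital_loss", "hours_per_week", "native_country",
    "income_bracket",
]

# Training split hosted by the UCI Machine Learning Repository.
train_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
train_df = pd.read_csv(train_url, header=None, names=CSV_HEADER)

# Binary target: whether a person makes over USD 50,000 a year.
train_df["income_bracket"] = (train_df["income_bracket"].str.strip() == ">50K").astype(int)
print(train_df.head())
```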
 
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a compile sketch using these values follows the list):
- optimizer: 'AdamW'
- learning_rate: 0.001
- weight_decay: 1e-04
- loss: 'sparse_categorical_crossentropy'
- beta_1: 0.9
- beta_2: 0.999
- epsilon: 1e-07
- epochs: 50
- batch_size: 16
- training_precision: float32
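
A sketch of wiring these values into a Keras training setup, reusing the `model` from the architecture sketch above. `tf.keras.optimizers.AdamW` is assumed to be available (TensorFlow 2.11+); older versions would need an equivalent such as the one in `tensorflow_addons`.

```python
import tensorflow as tf

# AdamW with the listed learning rate, weight decay, and moment parameters.
optimizer = tf.keras.optimizers.AdamW(
    learning_rate=0.001,
    weight_decay=1e-04,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)

# Sparse categorical cross-entropy matches the softmax output head.
model.compile(
    optimizer=optimizer,
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# `x_train` / `y_train` stand in for the prepared Census Income inputs and labels.
# model.fit(x_train, y_train, epochs=50, batch_size=16)
```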
 
Training Metrics
Model history needed
Model Plot


