# microsoft/bloom-deepspeed-inference-fp16
This is a copy of the original BLOOM weights, repackaged for more efficient use with DeepSpeed-MII and DeepSpeed-Inference. In this repo the original tensors are split into 8 shards to target 8 GPUs, which allows the user to run the model with DeepSpeed-Inference tensor parallelism.
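To make the sharding idea concrete, here is a toy sketch of how tensor parallelism partitions a weight matrix column-wise, one shard per GPU rank. This is an illustration only, using NumPy on CPU; it is not DeepSpeed's actual sharding code, and the function name `shard_columns` is invented for this example.

```python
import numpy as np

def shard_columns(weight, world_size):
    """Split a 2-D weight matrix column-wise into one shard per rank.

    Toy illustration of tensor-parallel sharding (not DeepSpeed code):
    each of the `world_size` GPUs would hold one column slice and
    compute its part of the matmul independently.
    """
    assert weight.shape[1] % world_size == 0, "columns must divide evenly"
    return np.split(weight, world_size, axis=1)

# A (16, 64) fp16 weight split for 8 GPUs -> eight (16, 8) shards.
w = np.arange(16 * 64, dtype=np.float16).reshape(16, 64)
shards = shard_columns(w, 8)
```

Concatenating the shards back along the column axis recovers the original weight, which is why a pre-sharded checkpoint like this repo can be loaded rank-by-rank without ever materializing the full tensor on one device.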
For specific details about the BLOOM model itself, please see the original BLOOM model card.
For examples of using this repo, please see the following:
- https://github.com/huggingface/Transformers-bloom-inference
- https://github.com/microsoft/DeepSpeed-MII