arxiv:2310.18313

FP8-LM: Training FP8 Large Language Models

Published on Oct 27, 2023
· Submitted by akhaliq on Oct 30, 2023
#2 Paper of the day
Authors:
Houwen Peng, Kan Wu, Yixuan Wei, Guoshuai Zhao, Yuxiang Yang, Ze Liu, Yifan Xiong, Ziyue Yang, Bolin Ni, Jingcheng Hu, Ruihang Li, Miaosen Zhang, Chen Li, Jia Ning, Ruizhe Wang, Zheng Zhang, Shuguang Liu, Joe Chau, Han Hu, Peng Cheng

Abstract

In this paper, we explore FP8 low-bit data formats for efficient training of large language models (LLMs). Our key insight is that most variables in LLM training, such as gradients and optimizer states, can employ low-precision data formats without compromising model accuracy or requiring changes to hyper-parameters. Specifically, we propose a new FP8 automatic mixed-precision framework for training LLMs. This framework offers three levels of FP8 utilization to streamline mixed-precision and distributed parallel training for LLMs. It incrementally incorporates 8-bit gradients, optimizer states, and distributed learning. Experimental results show that, during the training of a GPT-175B model on the H100 GPU platform, our FP8 mixed-precision training framework not only achieved a remarkable 42% reduction in real memory usage but also ran 64% faster than the widely adopted BF16 framework (i.e., Megatron-LM), surpassing the speed of Nvidia Transformer Engine by 17%. This largely reduces the training costs for large foundation models. Furthermore, our FP8 mixed-precision training methodology is generic. It can be seamlessly applied to other tasks such as LLM instruction tuning and reinforcement learning with human feedback, offering savings in fine-tuning expenses. Our FP8 low-precision training framework is open-sourced at https://github.com/Azure/MS-AMP (aka.ms/MS.AMP).
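For readers who want to try the released library, the abstract's three FP8 levels correspond to MS-AMP's `opt_level` settings. The sketch below is illustrative only: the Apex-style `msamp.initialize` entry point follows the usage shown in the repository's README, but the exact signature and the O1/O2/O3 semantics should be verified against https://github.com/Azure/MS-AMP; the toy model, optimizer, and training loop are assumptions made for the example, not code from the paper.

```python
import torch
import msamp  # assumption: MS-AMP installed from https://github.com/Azure/MS-AMP

# Toy model and optimizer purely for illustration (not from the paper).
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Per the paper's three FP8 levels (verify against the repo's README):
#   O1 - FP8 weight gradients and FP8 all-reduce communication
#   O2 - additionally keeps optimizer states (e.g. Adam moments) in low precision
#   O3 - extends FP8 to distributed parallel training (Megatron-LM integration)
model, optimizer = msamp.initialize(model, optimizer, opt_level="O2")

for step in range(10):
    x = torch.randn(8, 1024, device="cuda")
    loss = model(x).float().pow(2).mean()  # dummy objective for the sketch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```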

Community

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

Unleashing FP8 Power: Efficiently Training Massive LLMs

Video: https://cdn-uploads.huggingface.co/production/uploads/6186ddf6a7717cb375090c01/oX3hPRGxH62ITBKzrgoQf.mp4

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2310.18313 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2310.18313 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2310.18313 in a Space README.md to link it from this page.

Collections including this paper 13