Xwin-LM: Strong and Scalable Alignment Practice for LLMs
In this work, we present Xwin-LM, a comprehensive suite of alignment
methodologies for large language models (LLMs). This suite encompasses several
key techniques, including supervised finetuning (SFT), reward modeling (RM),
rejection sampling finetuning (RS), and direct preference optimization (DPO).
The key components are as follows: (1) Xwin-LM-SFT, models initially finetuned
with high-quality instruction data; (2) Xwin-Pair, a large-scale, multi-turn
preference dataset meticulously annotated using GPT-4; (3) Xwin-RM, reward
models trained on Xwin-Pair, developed at scales of 7B, 13B, and 70B
parameters; (4) Xwin-Set, a multiwise preference dataset in which each prompt
is linked to 64 unique responses generated by Xwin-LM-SFT and scored by
Xwin-RM; (5) Xwin-LM-RS, models finetuned with the highest-scoring responses
from Xwin-Set; (6) Xwin-LM-DPO, models further optimized on Xwin-Set using the
DPO algorithm. Our evaluations on AlpacaEval and MT-bench show
consistent and significant improvements across the pipeline, demonstrating the
strength and scalability of Xwin-LM. The repository
https://github.com/Xwin-LM/Xwin-LM will be continually updated to foster
community research.
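
To make the rejection-sampling finetuning step (Xwin-LM-RS) concrete, the following is a minimal sketch, assuming each Xwin-Set entry pairs a prompt with its 64 Xwin-LM-SFT responses and their Xwin-RM scores, and that the highest-scoring response is kept as the finetuning target. The record layout and the select_best_responses helper are illustrative assumptions, not the authors' released code.

```python
from typing import TypedDict


class MultiwiseExample(TypedDict):
    # Hypothetical layout for one Xwin-Set entry: a prompt, its candidate
    # responses generated by Xwin-LM-SFT, and their Xwin-RM scores.
    prompt: str
    responses: list[str]
    scores: list[float]


def select_best_responses(dataset: list[MultiwiseExample]) -> list[dict[str, str]]:
    """Keep the highest-scoring response per prompt, yielding the
    (prompt, response) pairs used for rejection-sampling finetuning."""
    sft_pairs = []
    for example in dataset:
        best_idx = max(range(len(example["scores"])),
                       key=example["scores"].__getitem__)
        sft_pairs.append({"prompt": example["prompt"],
                          "response": example["responses"][best_idx]})
    return sft_pairs


if __name__ == "__main__":
    toy = [{"prompt": "Explain DPO in one sentence.",
            "responses": ["response A", "response B"],
            "scores": [0.3, 0.9]}]
    print(select_best_responses(toy))  # keeps "response B"
```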
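The final Xwin-LM-DPO stage optimizes a direct preference optimization objective over preferred/rejected response pairs. A hedged sketch of the standard DPO loss is below; it assumes per-sequence log-probabilities have already been summed over tokens, and that pairs are formed from higher- versus lower-scored responses in Xwin-Set. The beta value and pairing strategy here are placeholders, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective: -log sigmoid(beta * (chosen log-ratio
    minus rejected log-ratio)), averaged over the batch."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```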