An efficient, flexible, and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output
Aligning Large Language Models with Human: A Survey
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & Vertical Distillation of LLMs.
Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality synthetic data generation pipeline!
A curated collection of open-source SFT datasets, continuously updated
The official implementation of InstructERC
[ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning".
We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs.
[NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions"
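The entry above names a concrete technique: keeping instruction tokens in the training loss rather than masking them out, as conventional SFT does. A minimal sketch of that idea follows (the function name, tensor shapes, and masking convention are illustrative assumptions, not the paper's code):

```python
import torch
import torch.nn.functional as F

def sft_loss(logits, input_ids, prompt_lens, loss_over_instructions=True):
    """Next-token cross-entropy for SFT.

    logits:      (batch, seq_len, vocab) model outputs
    input_ids:   (batch, seq_len) prompt + response token ids
    prompt_lens: list of prompt lengths, one per example
    """
    shift_logits = logits[:, :-1, :]          # position t predicts token t+1
    shift_labels = input_ids[:, 1:].clone()
    if not loss_over_instructions:
        # Conventional SFT: exclude prompt tokens from the loss.
        for i, n in enumerate(prompt_lens):
            shift_labels[i, : n - 1] = -100   # ignored by cross_entropy below
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,
    )
```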
[EMNLP 2024] Knowledge Verification to Nip Hallucination in the Bud
Example code for fine-tuning multimodal large language models with LLaMA-Factory
Automatically generating large volumes of context-driven SFT data for LLMs across diverse granularities
Finetuning Google's Gemma Model for Translating Natural Language into SQL
Code for the paper "Entropic Distribution Matching in Supervised Fine-tuning of LLMs: Less Overfitting and Better Diversity"
A case study of fine-tuning Qwen2-VL with LLaMA-Factory for the culture and tourism domain
Various LMs/LLMs below 3B parameters (for now) trained with SFT (supervised fine-tuning) for several downstream tasks
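For readers new to the topic, a bare-bones SFT loop for a small causal LM looks roughly like the sketch below (the model name, toy data, and hyperparameters are placeholders, not any listed repo's settings):

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any sub-3B causal LM
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

pairs = [("Translate to French: hello", "bonjour")]  # toy SFT data

def collate(batch):
    texts = [p + tok.eos_token + r + tok.eos_token for p, r in batch]
    enc = tok(texts, return_tensors="pt", padding=True)
    enc["labels"] = enc["input_ids"].clone()  # mask pads with -100 for real batches
    return enc

loader = DataLoader(pairs, batch_size=1, collate_fn=collate)
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for batch in loader:
    loss = model(**batch).loss  # built-in shifted cross-entropy
    loss.backward()
    opt.step()
    opt.zero_grad()
```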
An LLM challenge to (i) fine-tune a pre-trained Hugging Face transformer model to build a code-generation language model, and (ii) build a retrieval-augmented generation (RAG) application using LangChain
Fine-tune a large language model on a mathematics dataset
This project streamlines the fine-tuning process, enabling you to leverage Llama-2's capabilities for your own projects.
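Projects like this typically lean on parameter-efficient methods to make Llama-2 fine-tuning affordable on modest hardware; one common recipe is LoRA via the peft library. A hedged sketch (hyperparameters and target modules are typical defaults, not this repository's configuration):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Llama-2 weights are gated on the Hub; any causal LM shows the same pattern.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora = LoraConfig(
    r=16,                                  # rank of the low-rank update
    lora_alpha=32,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all params
```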