llm - Large Language Models for Everyone, in Rust

llm is an ecosystem of Rust libraries for working with large language models - it's built on top of the fast, efficient GGML library for machine learning.

A llama riding a crab (AI-generated image by @darthdeus, using Stable Diffusion)


Current State

There are currently four available versions of llm (the crate and the CLI):

  • The released version 0.1.1 on crates.io. This version is several months out of date and does not include support for the most recent models.
  • The main branch of this repository. This version can reliably infer GGMLv3 models, but does not support GGUF, and uses an old version of GGML.
  • The gguf branch of this repository; this is a version of main that supports inferencing with GGUF, but does not support any models other than Llama, requires the use of a Hugging Face tokenizer, and does not support quantization. It also uses an old version of GGML.
  • The develop branch of this repository. This is a from-scratch re-port of llama.cpp to synchronize with the latest version of GGML, and to support all models and GGUF. It is currently a work in progress, and is not yet ready for use.

The plan is to finish up the work on develop (see the PR), and then merge it into main and release a new version of llm to crates.io, so that up-to-date support for the latest models and GGUF will be available. It is not yet known when this will happen.

Overview

The primary entrypoint for developers is the llm crate, which wraps llm-base and the supported model crates. Documentation for the released version is available on Docs.rs.

For end-users, there is a CLI application, llm-cli, which provides a convenient interface for interacting with supported models. Text generation can be done as a one-off based on a prompt, or interactively, through REPL or chat modes. The CLI can also be used to serialize (print) decoded models, quantize GGML files, or compute the perplexity of a model. It can be downloaded from the latest GitHub release or by installing it from crates.io.

llm is powered by the ggml tensor library, and aims to bring the robustness and ease of use of Rust to the world of large language models. Inference runs on the CPU by default, with optional GPU acceleration available through the backends described in Leverage Accelerators with llm below.

Several model architectures are currently supported; see Getting Models for more information on how to download supported models.

Using llm in a Rust Project

This project depends on Rust v1.65.0 or above and a modern C toolchain.

The llm crate exports llm-base and the model crates (e.g. bloom, gpt2, llama).

Add llm to your project by listing it as a dependency in Cargo.toml. To use the version of llm you see in the main branch of this repository, add it from GitHub (although keep in mind this is pre-release software):

[dependencies]
llm = { git = "https://github.com/rustformers/llm", branch = "main" }

To use a released version, add it from crates.io by specifying the desired version:

[dependencies]
llm = "0.1"

By default, llm builds with support for remotely fetching the tokenizer from Hugging Face's model hub. To disable this, disable the default features for the crate, and turn on the models feature to get llm without the tokenizer:

[dependencies]
llm = { version = "0.1", default-features = false, features = ["models"] }

NOTE: To improve debug performance, exclude the transitive ggml-sys dependency from being built in debug mode:

[profile.dev.package.ggml-sys]
opt-level = 3
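
With the dependency in place, a model can be loaded and prompted directly from Rust. The sketch below follows the main-branch API and also depends on the rand crate; names such as TokenizerSource, load_progress_callback_stdout, and the InferenceRequest fields are assumptions about that API and may differ in the released 0.1 version, so check the Docs.rs documentation and the crate's bundled examples for the exact signatures:

use std::io::Write;
use llm::Model;

fn main() {
    // Load a GGML LLaMA model from disk, using the tokenizer embedded in the model file.
    let llama = llm::load::<llm::models::Llama>(
        std::path::Path::new("/path/to/model.bin"), // placeholder path
        llm::TokenizerSource::Embedded,
        Default::default(), // llm::ModelParameters
        llm::load_progress_callback_stdout,
    )
    .unwrap_or_else(|err| panic!("failed to load model: {err}"));

    // Start an inference session and feed it a prompt.
    let mut session = llama.start_session(Default::default());
    let result = session.infer::<std::convert::Infallible>(
        &llama,
        &mut rand::thread_rng(),
        &llm::InferenceRequest {
            prompt: "Rust is a cool programming language because".into(),
            parameters: &llm::InferenceParameters::default(),
            play_back_previous_tokens: false,
            maximum_token_count: None,
        },
        &mut Default::default(), // llm::OutputRequest
        // Stream each generated token to stdout as it is produced.
        |response| match response {
            llm::InferenceResponse::PromptToken(t) | llm::InferenceResponse::InferredToken(t) => {
                print!("{t}");
                std::io::stdout().flush().unwrap();
                Ok(llm::InferenceFeedback::Continue)
            }
            _ => Ok(llm::InferenceFeedback::Continue),
        },
    );

    match result {
        Ok(stats) => println!("\n\n{stats}"),
        Err(err) => println!("\n{err}"),
    }
}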

Leverage Accelerators with llm

The llm library is engineered to take advantage of hardware accelerators such as CUDA and Metal for optimized performance.

To enable llm to harness these accelerators, some preliminary configuration steps are necessary, which vary based on your operating system. For comprehensive guidance, please refer to Acceleration Support in our documentation.
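
For example, when building against the main branch, an accelerated backend is enabled through a Cargo feature. The cublas feature name below is an assumption based on the current repository (clblast and metal are the other expected backends); see Acceleration Support for the authoritative list and the required system setup:

[dependencies]
llm = { git = "https://github.com/rustformers/llm", branch = "main", features = ["cublas"] }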

Using llm from Other Languages

Bindings for this library are available in other languages; see the repository for the current list.

Using the llm CLI

The easiest way to get started with llm-cli is to download a pre-built executable from a released version of llm, but the releases are currently out of date and we recommend you install from source instead.

Installing from Source

To install the main branch of llm with the most recent features to your Cargo bin directory, which rustup is likely to have added to your PATH, run:

cargo install --git https://github.com/rustformers/llm llm-cli

The CLI application can then be run through llm. See also features and acceleration support to turn features on as required. Note that GPU support (CUDA, OpenCL, Metal) will not work unless you build with the relevant feature.
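
For example, a CUDA-enabled build of the CLI might look like the following; the cublas feature name is an assumption, and the CUDA toolkit must already be installed as described in Acceleration Support:

cargo install --git https://github.com/rustformers/llm llm-cli --features cublas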

Installing with cargo

Note that the currently published version is out of date and does not include support for the most recent models. We currently recommend that you install from source.

To install the most recently released version of llm to your Cargo bin directory, which rustup is likely to have added to your PATH, run:

cargo install llm-cli

The CLI application can then be run through llm. See also features to turn features on as required.

Features

By default, llm builds with support for remotely fetching the tokenizer from Hugging Face's model hub. This adds a dependency on your system's native SSL stack, which may not be available on all systems.

To disable this, disable the default features for the build:

cargo build --release --no-default-features

To enable hardware acceleration, see the Acceleration Support for Building section, which also applies to the CLI.

Getting Models

GGML models are easy to acquire. They are primarily located on Hugging Face (see From Hugging Face), but can be obtained from elsewhere.

Models are distributed as single files, and do not need any additional files to be downloaded. However, they are quantized with different levels of precision, so you will need to choose a quantization level that is appropriate for your application.

Additionally, we support Hugging Face tokenizers to improve the quality of tokenization. These are separate files (tokenizer.json) that can be used with the CLI using the -v or -r flags, or with the llm crate by using the appropriate TokenizerSource enum variant.
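
For example, a local tokenizer.json can be passed to the CLI with -v alongside the model (a sketch; the model and tokenizer file names are placeholders):

llm infer -a llama -m ggml-alpaca-7b-q4.bin -v tokenizer.json -p "Rust is a cool programming language because"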

For a list of models that have been tested, see the known-good models.

Certain older GGML formats are not supported by this project, but the goal is to maintain feature parity with the upstream GGML project. For problems relating to loading models, or to request support for additional GGML formats or model types, please open an Issue.

From Hugging Face

Hugging Face 🤗 is a leader in open-source machine learning and hosts hundreds of GGML models. Search for GGML models on Hugging Face 🤗.

r/LocalLLaMA

This Reddit community maintains a wiki related to GGML models, including well-organized lists of links for acquiring GGML models (mostly from Hugging Face 🤗).

Usage

Once the llm executable has been built or is in a $PATH directory, try running it. Here's an example that uses the open-source RedPajama language model:

llm infer -a gptneox -m RedPajama-INCITE-Base-3B-v1-q4_0.bin -p "Rust is a cool programming language because" -r togethercomputer/RedPajama-INCITE-Base-3B-v1

In the example above, the first argument specifies the subcommand (infer), and the -a argument specifies the model architecture. The required -m argument specifies the local path to the model, and the required -p argument specifies the evaluation prompt. The optional -r argument loads the model's tokenizer from a remote Hugging Face 🤗 repository, which typically improves results compared to loading the tokenizer from the model file itself; the optional -v argument can instead be used to specify the path to a local tokenizer file. For more information about the llm CLI, use the --help parameter.

There is also a simple inference example that is helpful for debugging:

cargo run --release --example inference gptneox RedPajama-INCITE-Base-3B-v1-q4_0.bin -r $OPTIONAL_VOCAB_REPO -p $OPTIONAL_PROMPT

Q&A

Does the llm CLI support chat mode?

Yes, but certain fine-tuned models (e.g. Alpaca, Vicuna, Pygmalion) are more suited to chat use-cases than so-called "base models". Here's an example of using the llm CLI in REPL (Read-Evaluate-Print Loop) mode with an Alpaca model - note that the provided prompt format is tailored to the model that is being used:

llm repl -a llama -m ggml-alpaca-7b-q4.bin -f utils/prompts/alpaca.txt

There is also a Vicuna chat example that demonstrates how to create a custom chatbot:

cargo run --release --example vicuna-chat llama ggml-vicuna-7b-q4.bin

Can llm sessions be persisted for later use?

Sessions can be loaded (--load-session) or saved (--save-session) to file. To automatically load and save the same session, use --persist-session. This can be used to cache prompts to reduce load time, too.
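
For example, combining --persist-session with infer builds a session file on the first run and reuses it on later runs (a sketch; the session file name is arbitrary):

llm infer -a llama -m ggml-alpaca-7b-q4.bin -p "Rust is a cool programming language because" --persist-session alpaca.session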

How do I use llm to quantize a model?

llm can produce a q4_0- or q4_1-quantized model from an f16 GGML model:

cargo run --release quantize -a $MODEL_ARCHITECTURE $MODEL_IN $MODEL_OUT {q4_0,q4_1}

Do you provide support for Docker and NixOS?

The llm Dockerfile is in the utils directory; the NixOS flake manifest and lockfile are in the project root.

What's the best way to get in touch with the llm community?

GitHub Issues and Discussions are welcome, or come chat on Discord!

Do you accept contributions?

Absolutely! Please see the contributing guide.

What applications and libraries use llm?

Applications

  • llmcord: Discord bot for generating messages using llm.
  • local.ai: Desktop app for hosting an inference API on your local machine using llm.
  • secondbrain: Desktop app to download and run LLMs locally on your computer using llm.
  • floneum: A graph editor for local AI workflows.
  • poly: A versatile LLM serving back-end with tasks, streaming completion, memory retrieval, and more.

Libraries

  • llm-chain: Build chains of large language models for text summarization and the completion of more complex tasks
