Incoming assistant professor at UCSD. Systems for ML.
Pinned repositories:

- Dao-AILab/flash-attention: Fast and memory-efficient exact attention
- HazyResearch/flash-fft-conv: FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores
- HazyResearch/H3: Language Modeling with the H3 State Space Model
- HazyResearch/m2: Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture"