Publishing state-of-the-art research on efficient fine-tuning techniques.
Read our findings on memory optimization and parameter-efficient fine-tuning.
We release the code for our experimental optimizations to the open-source community.
Collaborating with leading universities to push the boundaries of accessible AI.
Our published methodologies continue to advance parameter-efficient fine-tuning and automated evals.