We are proud to host the world's largest open collection of LoRAs (Low-Rank Adaptations). This initiative is part of a broader effort to advance research and development in LoRA and PEFT (Parameter-Efficient Fine-Tuning).
Our collection currently includes over 500 LoRAs, making it the most extensive open repository of its kind. This project aims to support and drive research forward by providing a comprehensive dataset of LoRAs for analysis, experimentation, and application development.
By making these resources available to the community, we hope to encourage collaboration and innovation, helping to push the boundaries of what's possible in PEFT.
We also include the exact data that these LoRAs were trained on, with the corresponding training, validation, and test splits.
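The splits follow the standard convention of disjoint train/validation/test partitions over each task's examples. As a minimal sketch of that convention (the split ratios and helper below are illustrative, not the repository's actual code):

```python
# Illustrative sketch of a standard train/validation/test partition.
# The 80/10/10 ratios and the helper name are hypothetical, not the
# repository's actual configuration.
import random

def split_dataset(examples, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle and partition examples into disjoint train/validation/test splits."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return {
        "train": shuffled[:n_train],
        "validation": shuffled[n_train:n_train + n_val],
        "test": shuffled[n_train + n_val:],
    }

splits = split_dataset(list(range(100)))
print({k: len(v) for k, v in splits.items()})  # → {'train': 80, 'validation': 10, 'test': 10}
```

Fixing the random seed makes the partition reproducible, so the same examples always land in the same split.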
Details can be found in the paper: https://www.arxiv.org/abs/2407.00066
We are always looking for new contributions and collaborators. If you are interested in contributing or have questions, reach out to us at email.
We appreciate your interest in our project and hope you will join us in this exciting venture.