From the latest commits:
We are happy to release our final 1T token version of OpenLLaMA 3B and 7B. We’ve updated the evaluation results. We are also happy to release a 600B token preview of the 13B model, trained in collaboration with Stability AI.
Haven’t tried it yet, and the 13B model is still in the works, but hopefully this will be a better foundation than the leaked Meta AI model: not only for more reproducible research, but also because non-academics will be completely in the clear, legally, to run this stuff locally.
Nice work from these guys. I wonder how the open source reproduction compares side by side with the original LLaMA model…
Depends on the task, but it looks about the same on average.