🖼️ Available models from 1 repository

llama-salad-8x8b

This MoE merge is meant to compete with Mixtral fine-tunes, more specifically Nous-Hermes-2-Mixtral-8x7B-DPO, which I think is the best of them. I've done a bunch of side-by-side comparisons, and while I can't say it wins in every aspect, it's very close. It still falls short in multilingualism, storytelling, and roleplay, despite being merged from models that are very good at those tasks.
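As a quick way to try the model, here is a minimal sketch of chatting with it through an OpenAI-compatible endpoint such as the one LocalAI exposes; the base URL, port, and model name are assumptions and should be adjusted to match your installation.

```python
# Minimal sketch: query llama-salad-8x8b via an OpenAI-compatible chat endpoint
# (e.g. a local LocalAI instance). BASE_URL and MODEL are assumptions -- adjust
# them to your setup.
import requests

BASE_URL = "http://localhost:8080/v1"  # assumed local endpoint; change if needed
MODEL = "llama-salad-8x8b"             # assumed model name as installed

def chat(prompt: str) -> str:
    """Send a single user message and return the assistant's reply."""
    response = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.7,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Summarize what an MoE merge is in two sentences."))
```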

Repository: localai
License: llama3

Link #1 · Link #2