Nous: Hermes 2 Mixtral 8x7B DPO

nousresearch/nous-hermes-2-mixtral-8x7b-dpo

Created Jan 16, 2024 · 32,768 context
$0.60/M input tokens · $0.60/M output tokens

Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model, trained on top of the Mixtral 8x7B MoE LLM.

The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, along with other high-quality data from open datasets across the AI landscape, achieving state-of-the-art performance on a variety of tasks.

#moe
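
A minimal sketch of querying this model through OpenRouter's OpenAI-compatible chat completions endpoint, using the slug listed above. The API key, prompt, and timeout are placeholders, not values from this page.

```python
import requests

API_KEY = "YOUR_OPENROUTER_API_KEY"  # placeholder; substitute your own key

# POST a chat completion request to OpenRouter's OpenAI-compatible API.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "nousresearch/nous-hermes-2-mixtral-8x7b-dpo",
        "messages": [
            {"role": "user", "content": "Explain mixture-of-experts in one paragraph."},
        ],
    },
    timeout=60,  # assumed timeout; tune for your use case
)
response.raise_for_status()

# The response follows the OpenAI chat completions schema.
print(response.json()["choices"][0]["message"]["content"])
```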

Recent activity on Hermes 2 Mixtral 8x7B DPO

Tokens processed per day

[Chart: tokens processed per day, Apr 24 – Jul 17; y-axis 3.5M–14M tokens]