Deep Cogito, a new AI research startup based in San Francisco, officially emerged from stealth today with Cogito v1, a new line of open-source large language models (LLMs) fine-tuned from Meta’s Llama 3.2 and equipped with hybrid reasoning capabilities: the ability to answer quickly and directly, or to “self-reflect” before responding, as OpenAI’s “o” series and DeepSeek R1 do.
The company aims to push the boundaries of AI beyond current human-overseer limitations by enabling models to iteratively refine and internalize their own improved reasoning strategies. It’s ultimately on a quest toward developing superintelligence — AI smarter than all humans in all domains — yet the company says that “All models we create will be open sourced.”
Deep Cogito’s CEO and co-founder Drishan Arora, a former senior software engineer at Google who says he led LLM modeling for Google’s generative search product, also said in a post on X that the new models are “the strongest open models at their scale – including those from LLaMA, DeepSeek, and Qwen.”
The initial model lineup includes five base sizes: 3 billion, 8 billion, 14 billion, 32 billion, and 70 billion parameters. All are available now on the AI code-sharing community Hugging Face and on Ollama, and through application programming interfaces (APIs) on Fireworks AI and Together AI.
They’re available under the Llama licensing terms, which allow commercial usage, so third-party enterprises can put them to work in paid products serving up to 700 million monthly users; beyond that threshold, a paid license from Meta is required.
The company plans to release even larger models — up to 671 billion parameters — in the coming months.
Arora describes the company’s training approach, iterated distillation and amplification (IDA), as a novel alternative to traditional reinforcement learning from human feedback (RLHF) or teacher-model distillation.
The core idea behind IDA is to allocate more compute for a model to generate improved solutions, then distill the improved reasoning process into the model’s own parameters — effectively creating a feedback loop for capability growth. Arora likens this approach to Google AlphaGo’s self-play strategy, applied to natural language.
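Deep Cogito has not published its training code, so the loop can only be sketched. The snippet below is a minimal illustration of the general amplify-then-distill pattern Arora describes, not the company’s implementation; `generate`, `score`, and `finetune` are hypothetical callables standing in for candidate sampling, a verifier or reward signal, and a supervised fine-tuning step.

```python
# Minimal sketch of one iterated distillation and amplification (IDA) round.
# NOT Deep Cogito's code: generate, score, and finetune are hypothetical
# callables standing in for candidate sampling, a verifier or reward signal,
# and a supervised fine-tuning step on the model's own weights.
from typing import Callable, List, Tuple

def ida_round(
    prompts: List[str],
    generate: Callable[[str, int], List[str]],  # sample n candidate solutions per prompt
    score: Callable[[str], float],              # rank candidates (verifier / reward model)
    finetune: Callable[[List[Tuple[str, str]]], None],  # distill pairs back into the weights
    n_candidates: int = 8,
) -> None:
    distill_set: List[Tuple[str, str]] = []
    for prompt in prompts:
        # Amplification: spend extra inference compute searching for a better
        # reasoning trace than a single greedy answer would produce.
        candidates = generate(prompt, n_candidates)
        best = max(candidates, key=score)
        distill_set.append((prompt, best))
    # Distillation: push the amplified behavior into the model's parameters,
    # so the next round starts from a stronger base policy.
    finetune(distill_set)
```

Run repeatedly, each round’s distilled model becomes the generator for the next, which is the self-play-style feedback loop the AlphaGo comparison points at.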
Each model supports both a standard mode, which answers directly, and a reasoning mode, in which the model reflects internally before responding.
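The article does not describe how to switch between the two modes. As an illustration, the sketch below assumes an OpenAI-compatible endpoint like the one Together AI exposes, plus the system-prompt toggle (“Enable deep thinking subroutine.”) and model ID listed on the models’ public Hugging Face cards; treat all three as assumptions to verify against the official documentation.

```python
# Hedged example: toggling a Cogito model between standard and reasoning
# modes over an OpenAI-compatible API. The endpoint is Together AI's; the
# model ID and the "Enable deep thinking subroutine." system prompt are
# assumptions taken from the models' public cards, not from this article.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",  # Together AI's OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

question = {"role": "user", "content": "How many prime numbers are there below 100?"}

# Standard mode: no special system prompt, so the model answers directly.
direct = client.chat.completions.create(
    model="deepcogito/cogito-v1-preview-llama-70B",  # assumed model ID
    messages=[question],
)

# Reasoning mode: the system prompt switches on internal self-reflection.
reflective = client.chat.completions.create(
    model="deepcogito/cogito-v1-preview-llama-70B",
    messages=[
        {"role": "system", "content": "Enable deep thinking subroutine."},
        question,
    ],
)

print(direct.choices[0].message.content)
print(reflective.choices[0].message.content)
```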
Benchmarks and evaluations
The company shared a broad set of evaluation results comparing Cogito models to open-source peers across general knowledge, mathematical reasoning, and multilingual tasks. Highlights include:
- Cogito 3B (Standard) outperforms Llama 3.2 3B on MMLU by 6.7 percentage points (65.4% vs. 58.7%), and on HellaSwag by 18.8 points (81.1% vs. 62.3%).
- In reasoning mode, Cogito 3B scores 72.6% on MMLU and 84.2% on ARC, exceeding its own standard-mode performance and showing the effect of IDA-based self-reflection.
- Cogito 8B (Standard) scores 80.5% on MMLU, outperforming Llama 3.1 8B by 12.8 points. It also leads by over 11 points on MMLU-Pro and achieves 88.7% on ARC.
- In reasoning mode, Cogito 8B achieves 83.1% on MMLU and 92.0% on ARC. It surpasses DeepSeek R1 Distill 8B in nearly every category except the MATH benchmark, where Cogito scores significantly lower (60.2% vs. 80.6%).
- Cogito 14B and 32B models outperform Qwen2.5 counterparts by around 2–3 percentage points on aggregate benchmarks, with Cogito 32B (Reasoning) reaching 90.2% on MMLU and 91.8% on the MATH benchmark.
- Cogito 70B (Standard) outperforms Llama 3.3 70B on MMLU by 6.4 points (91.7% vs. 85.3%) and exceeds Llama 4 Scout 109B on aggregate benchmark scores (54.5% vs. 53.3%).
- Against DeepSeek R1 Distill 70B, Cogito 70B (Reasoning) posts stronger results in general and multilingual benchmarks, with a notable 91.0% on MMLU and 92.7% on MGSM.
Cogito models generally show their highest performance in reasoning mode, though some trade-offs emerge — particularly in mathematics.
For instance, while Cogito 70B (Standard) matches or slightly exceeds peers in MATH and GSM8K, Cogito 70B (Reasoning) trails DeepSeek R1 in MATH by over five percentage points (83.3% vs. 89.0%).
Tool calling built-in
In addition to general benchmarks, Deep Cogito evaluated its models on native tool-calling performance — a growing priority for agents and API-integrated systems.
- Cogito 3B supports four tool-calling tasks natively (simple, parallel, multiple, and parallel-multiple), whereas Llama 3.2 3B does not support tool calling.
- Cogito 3B scores 92.8% on simple tool calls and over 91% on multiple tool calls.
- Cogito 8B scores over 89% across all tool call types, significantly outperforming Llama 3.1 8B, which ranges between 35% and 54%.
These improvements are attributed not only to model architecture and training data, but also to task-specific post-training, which many baseline models currently lack.
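The article does not include a request example. As a hedged sketch, a native tool call against the 8B model might look like the following, using the OpenAI-style tools schema that Ollama’s local OpenAI-compatible endpoint accepts; the `cogito:8b` tag and the `get_weather` function are illustrative assumptions.

```python
# Hedged sketch of a native tool call, using the OpenAI-style tools schema
# that Ollama's local OpenAI-compatible endpoint accepts. The cogito:8b tag
# and the get_weather function are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # local Ollama server

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="cogito:8b",  # assumed Ollama tag for the 8B model
    messages=[{"role": "user", "content": "What's the weather in Tokyo right now?"}],
    tools=tools,
)

# A tool-capable model returns a structured call instead of prose:
print(response.choices[0].message.tool_calls)
```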
Looking ahead
Deep Cogito plans to release larger-scale models in the coming months, including mixture-of-experts variants at 109B, 400B, and 671B parameter scales. The company will also continue updating its current model checkpoints with extended training.
The company positions its IDA methodology as a long-term path toward scalable self-improvement, removing models’ dependence on human overseers or static teacher models.
Arora emphasizes that while performance benchmarks are important, real-world utility and adaptability are the true tests for these models — and that the company is just at the beginning of what it believes is a steep scaling curve.
Deep Cogito’s research and infrastructure partnerships include teams from Hugging Face, RunPod, Fireworks AI, Together AI, and Ollama. All released models are open source and available now.