It started with the announcement of OpenAI’s o1 model in September 2024, but really took off with the release of DeepSeek R1 in January 2025.
Now, it seems that most major AI model providers and trainers are in a new race to deliver better, faster, and cheaper “reasoning” AI language models: ones that may take a little longer to respond to a human user, but ideally do so with better, more comprehensive, more thoroughly “reasoned” answers. This class of models gets there by performing “chain-of-thought” reasoning, reflecting on its own conclusions and interrogating them for veracity before responding.
ByteDance, the Chinese web media giant and parent company of TikTok, is the latest to join the party with the announcement and publication of the technical paper behind Seed-Thinking-v1.5, an upcoming large language model (LLM) designed to advance reasoning performance across science, technology, engineering, and math (STEM) fields as well as general-purpose domains.
The model is not yet available for download or use, and its licensing terms remain unclear: it could be proprietary/closed source, open source/free for all to use and modify, or somewhere in between. But the technical paper provides noteworthy details worth going over now, ahead of whenever the model is made available.
Built atop the increasingly popular Mixture-of-Experts (MoE) architecture
Like Meta’s new Llama 4 and Mistral’s Mixtral before it, Seed-Thinking-v1.5 is built using a Mixture-of-Experts (MoE) architecture.
This architecture is designed to make models more efficient by, in essence, combining the capabilities of multiple specialized sub-networks (“experts”) into one model, with a router activating only the experts relevant to a given input.
In this case, the MoE architecture means that Seed-Thinking-v1.5 uses only 20 billion parameters at a time from a total of 200 billion.
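ByteDance has not released the model’s code, but the basic mechanics of top-k MoE routing can be sketched in a few lines of Python. Everything below (the expert count, the top-k of 2, the layer sizes) is an illustrative assumption, not Seed-Thinking-v1.5’s actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 16, 2  # hypothetical sizes, not the real config

# A router plus a bank of small feed-forward "experts"
router_w = rng.standard_normal((d_model, n_experts)) * 0.02
experts = [(rng.standard_normal((d_model, 4 * d_model)) * 0.02,
            rng.standard_normal((4 * d_model, d_model)) * 0.02)
           for _ in range(n_experts)]

def moe_forward(x):
    """Route one token vector through only its top-k experts."""
    logits = x @ router_w                      # one routing score per expert
    top = np.argsort(logits)[-top_k:]          # indices of the chosen experts
    gate = np.exp(logits[top] - logits[top].max())
    gate /= gate.sum()                         # softmax over selected experts only
    out = np.zeros_like(x)
    for g, idx in zip(gate, top):
        w1, w2 = experts[idx]
        out += g * (np.maximum(x @ w1, 0.0) @ w2)  # ReLU feed-forward expert
    return out

token = rng.standard_normal(d_model)
print(moe_forward(token).shape)  # (64,): only 2 of the 16 experts did any work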
ByteDance says in its technical paper published to GitHub that Seed-Thinking-v1.5 prioritizes structured reasoning and thoughtful response generation.
The results nearly speak for themselves: Seed-Thinking-v1.5 outperforms DeepSeek R1 and approaches Google’s newly released Gemini 2.5 Pro and OpenAI’s o3-mini-high reasoner on many third-party benchmark evaluations. It even exceeds those two on the ARC-AGI benchmark, which measures progress toward artificial general intelligence, seen as the goal or “Holy Grail” of AI: a model that outperforms humans on most economically valuable tasks, according to OpenAI’s definition.
Positioned as a compact yet capable alternative to larger state-of-the-art models, Seed-Thinking-v1.5 achieves competitive benchmark results and introduces innovations in reinforcement learning (RL), training data curation, and AI infrastructure.
Performance benchmarks and model focus
Seed-Thinking-v1.5 shows strong performance on a suite of challenging tasks, scoring 86.7% on AIME 2024, 55.0% pass@8 on Codeforces, and 77.3% on the GPQA science benchmark. These results place it close to or matching models like OpenAI’s o3-mini-high and Google’s Gemini 2.5 Pro on specific reasoning metrics.
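The paper does not spell out ByteDance’s exact evaluation protocol, but pass@k scores like the 55.0% Codeforces figure above are conventionally computed with the unbiased estimator introduced by Chen et al. (2021), shown here as a minimal sketch:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: chance that at least one of k samples is correct,
    given c correct answers among n generated samples per problem."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. 3 correct out of 16 samples per problem, scored at k=8
print(pass_at_k(16, 3, 8))  # 0.9
```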
On non-reasoning tasks, the model was evaluated through human preference comparisons and achieved a win rate 8.0% higher than DeepSeek R1’s, suggesting that its strengths generalize beyond just logic- or math-heavy challenges.
To address saturation in common benchmarks like AIME, ByteDance introduced BeyondAIME, a new, harder math benchmark with curated problems designed to resist memorization and better discriminate model performance. This and the Codeforces evaluation set are expected to be publicly released to support future research.
Data strategy
Training data played a central role in the model’s development. For supervised fine-tuning (SFT), the team curated 400,000 samples, including 300,000 verifiable (STEM, logic, and coding tasks) and 100,000 non-verifiable problems like creative writing and role-playing.
For RL training, data was segmented into:
- Verifiable problems: 100,000 rigorously filtered STEM questions and logic puzzles with known answers, sourced from elite competitions and expert review.
- Non-verifiable tasks: Human-preference datasets focused on open-ended prompts, evaluated using pairwise reward models (a minimal sketch of the idea follows this list).
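The paper does not detail the reward models’ architecture. As a general illustration, pairwise reward models are typically trained with a Bradley-Terry-style loss that pushes the score of the human-preferred response above the rejected one:

```python
import numpy as np

def pairwise_loss(score_chosen: np.ndarray, score_rejected: np.ndarray) -> float:
    """-log sigmoid(r_chosen - r_rejected), averaged over a batch of pairs."""
    margin = score_chosen - score_rejected
    return float(np.mean(np.logaddexp(0.0, -margin)))  # stable -log(sigmoid)

# toy batch: the reward model ranked the preferred answer higher in 3 of 4 pairs
chosen = np.array([2.1, 0.3, 1.7, -0.2])
rejected = np.array([1.0, 0.9, 0.2, -1.5])
print(pairwise_loss(chosen, rejected))  # lower means better-separated scores
```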
The STEM data leaned heavily on advanced mathematics, accounting for over 80% of the problem set. Additional logic data included tasks like Sudoku and 24-point puzzles, with adjustable difficulty to match model progress.
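To make “verifiable with adjustable difficulty” concrete, here is a toy solver for the 24-point puzzle mentioned above (combine four numbers with + - * / to reach 24). Because a brute-force checker exists, any model answer can be graded automatically; this sketch is purely illustrative and is not ByteDance’s actual data pipeline:

```python
import itertools
from fractions import Fraction

def solve24(nums):
    """Return an expression that evaluates to 24, or None if unsolvable."""
    def search(items):
        if len(items) == 1:
            return items[0][1] if items[0][0] == 24 else None
        # repeatedly combine any two remaining values with one operation
        for i, j in itertools.permutations(range(len(items)), 2):
            (a, ea), (b, eb) = items[i], items[j]
            rest = [items[k] for k in range(len(items)) if k not in (i, j)]
            candidates = [(a + b, f"({ea}+{eb})"),
                          (a - b, f"({ea}-{eb})"),
                          (a * b, f"({ea}*{eb})")]
            if b != 0:
                candidates.append((a / b, f"({ea}/{eb})"))
            for val, expr in candidates:
                found = search(rest + [(val, expr)])
                if found:
                    return found
        return None

    return search([(Fraction(n), str(n)) for n in nums])

print(solve24([3, 8, 3, 8]))  # the classic hard instance: 8/(3-8/3) = 24
print(solve24([1, 1, 1, 1]))  # None: no 24 is reachable from four 1s
```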
Reinforcement learning approach
Reinforcement learning in Seed-Thinking-v1.5 is powered by custom actor-critic (VAPO) and policy-gradient (DAPO) frameworks, developed to address known instabilities in RL training. These techniques focus on reducing reward signal sparsity and enhancing training stability, especially in long chain-of-thought (CoT) settings.
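VAPO and DAPO are ByteDance’s own frameworks, described in the paper rather than released as code. As background, both build on the clipped policy-gradient surrogate of the PPO family, which looks roughly like this in minimal numpy form:

```python
import numpy as np

def clipped_surrogate(logp_new, logp_old, advantages, eps=0.2):
    """PPO-style objective: mean of min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    ratio = np.exp(logp_new - logp_old)                # importance ratio per token
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return float(np.mean(np.minimum(unclipped, clipped)))  # maximize this

# toy example: three tokens with estimated advantages
logp_old = np.array([-1.2, -0.7, -2.0])
logp_new = np.array([-1.0, -0.9, -1.8])
adv = np.array([0.5, -0.3, 1.1])
print(clipped_surrogate(logp_new, logp_old, adv))
```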
Reward models play a critical role in supervising RL outputs. ByteDance introduced two key tools:
- Seed-Verifier: An LLM that applies rules to check whether generated and reference answers are mathematically equivalent.
- Seed-Thinking-Verifier: A step-by-step reasoning-based judge that improves judgment consistency and resists reward hacking.
This two-tiered reward system enables nuanced evaluation for both straightforward and complex tasks.
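The paper describes the tiers only at a high level. Here is a hedged sketch of how such a dispatch might work, with all function names and the escalation rule assumed purely for illustration:

```python
from fractions import Fraction
from typing import Optional

def rule_check(answer: str, reference: str) -> Optional[bool]:
    """Tier 1: cheap exact/numeric equivalence; None means 'cannot decide'."""
    if answer.strip() == reference.strip():
        return True
    try:
        return Fraction(answer) == Fraction(reference)
    except (ValueError, ZeroDivisionError):
        return None  # not a plain number; defer to tier 2

def thinking_judge(answer: str, reference: str) -> bool:
    """Tier 2 placeholder: in practice, a step-by-step LLM judge."""
    raise NotImplementedError("call the reasoning verifier model here")

def reward(answer: str, reference: str) -> float:
    verdict = rule_check(answer, reference)
    if verdict is None:
        verdict = thinking_judge(answer, reference)
    return 1.0 if verdict else 0.0

print(reward("3/4", "0.75"))  # 1.0, settled by the cheap tier alone
```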
Infrastructure and scaling
To support efficient large-scale training, ByteDance built a system atop its HybridFlow framework, with execution handled by Ray clusters and co-located training and inference processes to reduce GPU idle time.
A notable innovation is the Streaming Rollout System (SRS), which separates model evolution from runtime execution. It accelerates iteration speed by asynchronously managing partially completed generations across model versions. This architecture reportedly delivers up to 3× faster RL cycles.
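SRS itself is proprietary, but the core idea of decoupling generation from training can be sketched generically: rollout workers stream results into a queue that the trainer drains as items arrive, so neither side waits on the other. The code below is a bare-bones analogy, not ByteDance’s system:

```python
import queue, random, threading, time

rollouts: queue.Queue = queue.Queue(maxsize=64)

def generator(worker_id: int) -> None:
    """Simulated rollout worker: emits results at an uneven pace."""
    for step in range(5):
        time.sleep(random.uniform(0.01, 0.05))  # generation time varies per sample
        rollouts.put((worker_id, step))

def trainer(n_items: int) -> None:
    """Consumes rollouts as they arrive, no matter which worker produced them."""
    for _ in range(n_items):
        item = rollouts.get()
        # ...score the rollout and apply a policy update here...
        rollouts.task_done()

workers = [threading.Thread(target=generator, args=(i,)) for i in range(4)]
consumer = threading.Thread(target=trainer, args=(20,))  # 4 workers x 5 items
for w in workers:
    w.start()
consumer.start()
for w in workers:
    w.join()
consumer.join()
```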
Additional infrastructure techniques include:
- Mixed precision (FP8) for memory savings
- Expert parallelism and kernel auto-tuning for MoE efficiency
- ByteCheckpoint for resilient and flexible checkpointing
- AutoTuner for optimizing parallelism and memory configurations
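As a back-of-the-envelope illustration of why FP8 matters at this scale, weight memory alone for 200 billion parameters shrinks sharply with precision (optimizer state and activations add substantially more on top):

```python
params = 200e9  # Seed-Thinking-v1.5's total parameter count
for name, bytes_per_param in [("FP32", 4), ("BF16", 2), ("FP8", 1)]:
    print(f"{name}: {params * bytes_per_param / 1e9:.0f} GB")
# FP32: 800 GB, BF16: 400 GB, FP8: 200 GB for the weights alone
```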
Human evaluation and real-world impact
To evaluate alignment with human-centric preferences, ByteDance conducted human testing across a range of domains including creative writing, humanities knowledge, and general conversation.
Seed-Thinking-v1.5 consistently outperformed DeepSeek R1 across sessions, reinforcing its applicability to real-world user needs.
The development team notes that reasoning models trained primarily on verifiable tasks demonstrated strong generalization to creative domains—an outcome attributed to the structure and rigor embedded in mathematical training workflows.
What it means for technical leaders, data engineers and enterprise decision-makers
For technical leads managing the lifecycle of large language models—from data curation to deployment—Seed-Thinking-v1.5 presents an opportunity to rethink how reasoning capabilities are integrated into enterprise AI stacks.
Its modular training process, which includes verifiable reasoning datasets and multi-phase reinforcement learning, is particularly appealing to teams looking to scale LLM development while retaining fine-grained control.
ByteDance’s moves to introduce Seed-Verifier and Seed-Thinking-Verifier offer mechanisms for more trustworthy reward modeling, which can be critical when deploying models into customer-facing or regulated environments.
For teams that often operate under tight deadlines and limited bandwidth, the model’s stability under reinforcement learning—enabled by innovations like VAPO and dynamic sampling—could reduce iteration cycles and streamline fine-tuning for specific tasks.
From an orchestration and deployment perspective, the model’s hybrid infrastructure approach—including the Streaming Rollout System (SRS) and support for FP8 optimization—suggests significant gains in training throughput and hardware utilization.
These features would be valuable for engineers responsible for scaling LLM operations across cloud and on-prem systems. The fact that Seed-Thinking-v1.5 was trained with mechanisms to adapt reward feedback based on runtime dynamics speaks directly to the challenges of managing heterogeneous data pipelines and maintaining consistency across domains.
For teams tasked with ensuring reliability, reproducibility, and continuous integration of new tools, Seed-Thinking-v1.5’s system-level design could serve as a blueprint for building robust, multi-modal orchestration systems.
For data engineering professionals, the structured approach to training data—including rigorous filtering, augmentation, and expert verification—reinforces the importance of data quality as a multiplier of model performance. This could inspire more deliberate approaches to dataset development and validation pipelines.
Future outlook
Seed-Thinking-v1.5 is the result of collaboration within ByteDance’s Seed LLM Systems team, led by Yonghui Wu, with long-time AI contributor Haibin Lin serving as its public representative.
The project also draws on previous efforts like Doubao 1.5 Pro and incorporates shared techniques in RLHF and data curation.
Looking ahead, the team plans to continue refining reinforcement learning techniques, with a focus on training efficiency and reward modeling for non-verifiable tasks. The public release of internal benchmarks such as BeyondAIME is intended to foster broader advancement in reasoning-focused AI research.