In a major step forward for artificial intelligence and mathematical automation, DeepSeek has open-sourced DeepSeek-Prover-V2-671B, a cutting-edge model designed to automate mathematical theorem proving. Now available on the Hugging Face platform, the model is accessible to researchers and developers worldwide. Built on the 671-billion-parameter Mixture-of-Experts (MoE) architecture derived from DeepSeek-V3, it brings unprecedented power to formal mathematical reasoning.
Specialized in mathematical problem-solving and automated formal theorem proving, DeepSeek-Prover-V2-671B achieves a remarkable 88.9% pass rate on the MiniF2F-test benchmark, surpassing previous state-of-the-art neural theorem provers. It also solves 49 of the 658 problems in the challenging PutnamBench dataset, demonstrating its effectiveness on competition-level mathematics.
The model's training pipeline leverages synthetic, step-by-step mathematical proofs generated through recursive problem decomposition: a complex theorem is broken into smaller subgoals, which are proved individually and then recombined, integrating informal, natural-language reasoning with formal proof steps (illustrated below). This synthetic data, refined with reinforcement learning, lets the model bridge heuristic mathematical intuition and strict formal proof construction. DeepSeek-Prover-V2-671B is positioned to assist in diverse domains such as education, scientific research, engineering design, and financial analysis; software developers can also use it for algorithm design and code verification.
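As a rough illustration of what subgoal decomposition looks like in practice (a hand-written sketch, not an example drawn from DeepSeek's training data), a Lean 4 theorem can be split into intermediate `have` steps, each initially left as `sorry` so that a prover can attack the smaller obligations independently before the pieces are assembled into a complete proof:

```lean
-- Illustrative sketch only: a goal decomposed into named subgoals.
-- Each `sorry` marks a smaller proof obligation a prover can fill in.
theorem sq_sum_nonneg (a b : ℤ) : 0 ≤ a ^ 2 + b ^ 2 := by
  have h1 : 0 ≤ a ^ 2 := by
    sorry
  have h2 : 0 ≤ b ^ 2 := by
    sorry
  exact add_nonneg h1 h2
```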
Two variants of the model, with 7B and 671B parameters, are available for download via Hugging Face and can be used in both academic and commercial projects under a permissive open-source license. Developers can quickly load and run the model with the Hugging Face Transformers library, as sketched below. DeepSeek has also introduced new benchmark datasets such as ProverBench, and committed to ongoing community-driven enhancements and support for the model.
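The following is a minimal sketch of such a deployment, assuming the standard Transformers causal-language-model API; the checkpoint id, prompt format, and generation settings are illustrative, so consult the official model card for the recommended usage:

```python
# Minimal sketch: load the 7B variant and ask it to complete a Lean 4 proof.
# The model id and prompt format below are assumptions; check the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Prover-V2-7B"  # or "deepseek-ai/DeepSeek-Prover-V2-671B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduce memory; the 671B variant requires multi-GPU serving
    device_map="auto",
    trust_remote_code=True,
)

# Prompt the model with a formal theorem statement to be proved in Lean 4.
prompt = (
    "Complete the following Lean 4 code:\n\n"
    "theorem add_comm_example (a b : ℕ) : a + b = b + a := by\n"
)
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```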