Microsoft isn’t like OpenAI, Google, and Meta, particularly not when it comes to large language models. While other tech giants prefer to release a flurry of models, almost overwhelming users with choices, Microsoft releases just a few, but those models always make it big among developers around the world. In their latest release, they’ve introduced two reasoning models: Phi-4-Reasoning and Phi-4-Reasoning-Plus, both trained on the base Phi-4 model. The two Phi-4-Reasoning models compete with mighty models like o1, o3-mini, and DeepSeek R1. In this blog, we will dive into the technical details, architecture, training methods, and performance of the Phi-4-Reasoning models.
Let’s explore the Phi-4-Reasoning models.
What is Phi-4 Reasoning?
Phi-4 is not new in the LLM world. This small but mighty language model broke the internet when it was released last year. Now, to cater to the growing demand for reasoning models, Microsoft has released the Phi-4-Reasoning models. These are 14B-parameter models that excel at complex reasoning tasks involving mathematics, coding, and STEM questions. Unlike the general-purpose Phi-4 series, Phi-4-Reasoning is specifically optimized for long-chain reasoning, that is, the ability to break down complex multi-step problems systematically into logical steps.
Also Read: Phi-4: Redefining Language Models with Synthetic Data
Phi-4 Reasoning Models
The two reasoning models released by Microsoft are:
- Phi-4-Reasoning: A reasoning model trained using supervised fine-tuning (SFT) on high-quality datasets. This model is preferred for tasks that require faster responses under guided performance constraints.
- Phi-4-Reasoning-Plus: An enhanced reasoning model further trained with reinforcement learning (RL) to improve performance, but it generates almost 50% more tokens than its counterpart. It therefore shows higher latency and is recommended for high-accuracy tasks.
The two 14B models currently support only text input, and Microsoft has released them as open-weight so developers can freely test and fine-tune them based on their needs. Here are some key highlights of the models:
| Details | Phi-4-Reasoning Models |
|---|---|
| Developer | Microsoft Research |
| Model Variants | Phi-4-Reasoning, Phi-4-Reasoning-Plus |
| Base Architecture | Phi-4 (14B parameters), dense decoder-only Transformer |
| Training Method | Supervised fine-tuning on chain-of-thought data; the Plus variant adds reinforcement learning (RL) |
| Training Duration | 2.5 days on 32× H100-80G GPUs |
| Training Data | 16B tokens total (~8.3B unique), from synthetic prompts and filtered public-domain data |
| Training Period | January – April 2025 |
| Data Cutoff | March 2025 |
| Input Format | Text input, optimized for chat-style prompts |
| Context Length | 32,000 tokens |
| Output Format | Two sections: a chain-of-thought reasoning block followed by a summarization block |
| Release Date | April 30, 2025 |
Key Features of Phi-4-Reasoning Models
For Phi-4-Reasoning, the team took several innovative steps involving data selection, training methodology, and performance evaluation. Some of the key things they did were:
Data-Centric Training
Data curation for training the Phi-4-Reasoning models relied not just on sheer quantity but placed equal emphasis on data quality. The team specifically chose data that sat at the “edge” of the model’s capabilities, ensuring the training data was solvable, but not easily.
The main steps involved in building the dataset for the Phi-4-Reasoning models were:
- Seed Database: The Microsoft team started with publicly available datasets like AIME and GPQA. These datasets contained algebra and geometry problems involving multi-step reasoning.
- Synthetic Reasoning Chains: To get comprehensive, detailed, step-by-step reasoned responses to these problems, the Microsoft team relied on OpenAI’s o3-mini model.
For example, for the question “What is the derivative of sin(x²)?”, o3-mini gave the following output:
Step 1: Apply the chain rule: d/dx sin(u) = cos(u) · du/dx. Step 2: Let u = x² ⇒ du/dx = 2x.
Final Answer: cos(x²) · 2x.
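This chain-rule result is easy to sanity-check numerically (a quick verification on our part, not something from the training pipeline):

```python
import math

def numerical_derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
approx = numerical_derivative(lambda t: math.sin(t**2), x)
exact = math.cos(x**2) * 2 * x  # the chain-rule answer: cos(x²) · 2x
print(abs(approx - exact) < 1e-6)  # True
```

The two values agree to well within the tolerance, confirming the step-by-step derivation above.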
These synthetically generated chains of well-reasoned responses gave a clear blueprint for how the model should structure its own reasoning responses.
- Selecting “Teachable Moments”: The development team deliberately chose prompts that challenged the base Phi-4 model while remaining solvable. These included problems on which Phi-4 initially showed around 50% accuracy. This approach ensured that training avoided “easy” data that merely reinforced existing patterns and instead focused on “structured reasoning”.
The team essentially wanted the Phi-4-Reasoning models to learn the way we humans usually do.
Supervised Fine-Tuning (SFT)
Supervised fine-tuning (SFT) is the process of improving a pre-trained language model by training it on carefully chosen input–output pairs with high-quality responses. For the Phi-4-Reasoning models, this meant starting with the base Phi-4 model and then refining it on reasoning-focused tasks. Essentially, Phi-4-Reasoning was trained to learn and follow the step-by-step reasoning patterns seen in responses from o3-mini.
Training Details
- Batch Size: Kept at 32. This small batch size let the model focus on individual examples without being overwhelmed by noise.
- Learning Rate: Set to 7e-5, a moderate rate that avoids overshooting the optimal weights during updates.
- Optimizer: A standard AdamW optimizer was used. This deep learning optimizer balances speed and stability.
- Context Length: Extended to 32,768 tokens, double the 16K-token limit of the base Phi-4 model, allowing the model to handle much longer contexts.
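To illustrate the optimizer named above, here is a minimal single-parameter AdamW update (a pedagogical sketch using the listed learning rate; Microsoft’s actual training code is of course far more involved):

```python
import math

def adamw_step(theta, grad, m, v, t, lr=7e-5, b1=0.9, b2=0.999,
               eps=1e-8, weight_decay=0.01):
    """One AdamW update for a scalar parameter theta."""
    m = b1 * m + (1 - b1) * grad         # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * grad**2      # second-moment estimate
    m_hat = m / (1 - b1**t)              # bias corrections
    v_hat = v / (1 - b2**t)
    # Decoupled weight decay is what distinguishes AdamW from plain Adam
    theta -= lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * theta)
    return theta, m, v

# Toy problem: minimize (theta - 3)^2, whose gradient is 2 * (theta - 3)
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adamw_step(theta, 2 * (theta - 3), m, v, t)
print(0.0 < theta < 3.0)  # True: theta steadily moves toward the minimum
```

Adam’s normalization makes each step roughly the size of the learning rate, which is why a moderate value like 7e-5 gives slow but stable progress.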
Using SFT early in training allowed the model to absorb the structured reasoning patterns present in the curated data before any further refinement.
Reinforcement Learning
Reinforcement learning teaches a model to do better through feedback on its generated outputs. The model earns a reward every time it answers correctly and is penalized whenever it responds incorrectly. RL was used to further train the Phi-4-Reasoning-Plus model. This training method refined the model’s math-solving skills, evaluating responses for both accuracy and structured approach.
How does RL work?
- Reward Design: The model received +1 for each correct response and -0.5 for an incorrect one. It was also penalized for repetitive phrases like “Let’s see… Let’s see…”.
- Algorithm: The team used GRPO (Group Relative Policy Optimization), an RL variant that balances exploration and exploitation.
- Results: Phi-4-Reasoning-Plus achieved 82.5% accuracy on AIME 2025, while Phi-4-Reasoning scored just 71.4%. It also showed improved performance on Omni-MATH and TSP (Traveling Salesman Problem).
RL training allowed the model to refine its steps iteratively and helped reduce hallucinations in the generated outputs.
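The reward scheme described above can be sketched as a simple scoring function (purely illustrative; the actual reward used in training is more involved):

```python
def reward(is_correct: bool, response: str) -> float:
    """Toy reward: +1 for a correct answer, -0.5 for an incorrect one,
    with a small penalty for immediately repeated phrases."""
    score = 1.0 if is_correct else -0.5
    words = response.lower().split()
    # Count two-word phrases that repeat back-to-back ("let's see let's see")
    bigrams = [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]
    repeats = sum(1 for a, b in zip(bigrams, bigrams[2:]) if a == b)
    return score - 0.1 * repeats

print(reward(True, "The answer is 42."))                       # 1.0
print(reward(False, "Let's see let's see let's see") < -0.5)   # True: repetition penalized
```

The key idea GRPO optimizes against is exactly this kind of scalar signal: correctness dominates, with smaller shaping terms discouraging degenerate output patterns.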
Architecture of Phi-4-Reasoning Models
The main architecture of the Phi-4-Reasoning models is similar to the base Phi-4 model, but some key modifications were made to support reasoning tasks.
- Two placeholder tokens from Phi-4 were repurposed. These tokens help the model distinguish between raw input and internal reasoning.
- <think>: used to mark the start of a reasoning block. </think>: used to mark the end of a reasoning block.
- The Phi-4-Reasoning models received an extended context window of 32K tokens to handle the longer reasoning chains.
- The models use rotary position embeddings (RoPE) to better track token positions in long sequences, helping them stay coherent.
- The models are trained to run efficiently on consumer hardware, including devices like mobiles, tablets, and desktops.
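Given this output format, the reasoning block can be separated from the final summary with a small helper (a sketch that assumes the <think>…</think> delimiters; adapt it if the model card specifies different tokens):

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Split a model response into (reasoning, summary) parts."""
    match = re.search(r"<think>(.*?)</think>\s*(.*)", output, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", output.strip()  # no reasoning block found

reasoning, summary = split_reasoning(
    "<think>2x is the derivative of x^2.</think> The answer is 2x."
)
print(summary)  # The answer is 2x.
```

In practice you would usually show the user only the summary and log the reasoning block for inspection.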
Phi-4-Reasoning Models: Benchmark Performance
The Phi-4-Reasoning models were evaluated on various benchmarks to compare their performance against other models on diverse tasks.
- AIME 2025: A benchmark that tests advanced math and reasoning on recent exam problems. Phi-4-Reasoning-Plus outperforms most top-performing models like o1 and Claude 3.7 Sonnet, but still trails o3-mini-high.
- Omni-MATH: A benchmark that evaluates diverse math reasoning across topics and difficulty levels. Both Phi-4-Reasoning and Phi-4-Reasoning-Plus outperform almost all models, trailing only DeepSeek R1.
- GPQA: A benchmark that tests model performance on graduate-level expert QA reasoning. The two Phi reasoning models lag behind giants like o1, o3-mini-high, and DeepSeek R1.
- SAT: A benchmark that evaluates U.S. high-school-level academic reasoning (a math + verbal mix). Phi-4-Reasoning-Plus stands among the top 3 contenders, with Phi-4-Reasoning following close behind.
- Maze: This benchmark tests navigation and decision-pathfinding reasoning. Here, the Phi-4-Reasoning models lag behind top-tier models like o1 and Claude 3.7 Sonnet.
On other benchmarks like Spatial Map, TSP, and BA-Calendar, both Phi-4-Reasoning models perform decently.
Also Read: How to Fine-Tune Phi-4 Locally?
How to Access the Phi-4-Reasoning Models?
The two Phi-4-Reasoning models are available on Hugging Face:
Click the links to visit the Hugging Face page where you can access these models. In the top-right corner of the screen, click “Use This Model”, select “Transformers”, and copy the following code:
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="microsoft/Phi-4-reasoning")
pipe(messages)
Since it is a 14B-parameter model, it requires around 40+ GB of VRAM (GPU). You can run these models on either Colab Pro or Runpod. For this blog, we ran the model on Runpod with an A100 GPU.
Install Required Libraries
First, ensure you have the transformers library installed. You can install it using pip:
pip install transformers
Load the Model
Once all the libraries are installed, you can load the Phi-4-Reasoning model in your notebook:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="microsoft/Phi-4-reasoning", max_new_tokens=4096)
Make sure to set max_new_tokens=4096; the model generates its entire reasoning chain, and a smaller token limit can cut its output off midway.
Phi-4-Reasoning: Hands-On Applications
We will now test the Phi-4-Reasoning model on two tasks involving logical thinking and reasoning. Let’s start.
Task 1: Logical Thinking
Input:
messages = [
{"role": "user", "content": """A team is to be selected from among ten persons — A, B, C, D, E, F, G, H, I and J — subject to the following conditions.
Exactly two among E, J, I and C must be selected.
If F is selected, then J cannot be selected.
Exactly one among A and C must be selected.
Unless A is selected, E cannot be selected.
If and only if G is selected, D must not be selected.
If D is not selected, then H must be selected.
The size of a team is defined as the number of members in the team. In how many ways can the team of size 6 be selected, if it includes E? and What is the largest possible size of the team?"""
},
]
Output:
Markdown(pipe(messages)[0]["generated_text"][1]["content"])
The model thinks thoroughly. It does a great job of breaking the entire problem into small steps. The problem consists of two tasks; within the given token window, the model answered the first task but couldn’t generate an answer for the second. What was interesting was the approach the model took toward solving the problem: it started by understanding the question and mapping out all the possibilities, then went ahead solving each task, sometimes repeating the logic it had pre-established.
Task 2: Explain the Working of LLMs to an 8-Year-Old Kid
Input:
messages = [
{"role": "user", "content": """Explain How LLMs works by comparing their working to the photosynthesis process in a plant so that an 8 year old kid can actually understand"""
},
]
Output:
Markdown(pipe(messages)[0]["generated_text"][1]["content"])
The model hallucinates a bit while generating the response for this problem, but finally produces a response with a good analogy between how LLMs work and the photosynthesis process. It keeps the language simple and even adds a disclaimer at the end.
Phi-4-Reasoning vs o3-mini: Comparison
In the last section, we saw how the Phi-4-Reasoning model performs when dealing with complex problems. Now let’s compare its performance against OpenAI’s o3-mini by looking at the output generated by the two models for the same task.
Phi-4-Reasoning
Input:
from IPython.display import Markdown
messages = [
{"role": "user", "content": """Suppose players A and B are playing a game with fair coins. To begin the game A and B
both flip their coins simultaneously. If A and B both get heads, the game ends. If A and B both get tails, they both
flip again simultaneously. If one player gets heads and the other gets tails, the player who got heads flips again until he
gets tails, at which point the players flip again simultaneously. What is the expected number of flips until the game ends?"""
},
]
Output = pipe(messages)
Output:
Markdown(Output[0]["generated_text"][1]["content"])

o3-mini
Input:
response = client.responses.create(
    model="o3-mini",
    input="""Suppose players A and B are playing a game with fair coins. To begin the game A and B
both flip their coins simultaneously. If A and B both get heads, the game ends. If A and B both get tails, they both
flip again simultaneously. If one player gets heads and the other gets tails, the player who got heads flips again until he
gets tails, at which point the players flip again simultaneously. What is the expected number of flips until the game ends?"""
)
Output:
print(response.output_text)

To see the detailed output, you can refer to the following GitHub link.
Result Evaluation
Both models give accurate answers. Phi-4-Reasoning breaks the problem into many detailed steps and thinks through each one before reaching the final answer. o3-mini, on the other hand, blends its thinking and final response more smoothly, making the output clean and ready to use. Its answers are also more concise and direct.
Applications of Phi-4-Reasoning Models
The Phi-4-Reasoning models open up a world of possibilities. Developers can use them to build intelligent systems catering to different industries. Here are a few areas where the Phi-4-Reasoning models can truly excel:
- Their strong performance on coding benchmarks (like LiveCodeBench) suggests applications in code generation, debugging, algorithm design, and automated software development.
- Their ability to generate detailed reasoning chains makes them well suited for answering complex questions that require multi-step inference and logical deduction.
- Their planning abilities could be leveraged in logistics, resource management, game-playing, and autonomous systems requiring sequential decision-making.
- The models can also contribute to designing systems for robotics, autonomous navigation, and tasks involving the interpretation and manipulation of spatial relationships.
Conclusion
The Phi-4-Reasoning models are open-weight and built to compete with top paid reasoning models like DeepSeek R1 and OpenAI’s o3-mini. Since they are not instruction-tuned, their answers may not always follow a clear, structured format like some popular models, but this may improve over time or with custom fine-tuning. Microsoft’s new models are powerful reasoning tools with strong performance, and they’re only going to get better from here.