Introduction
MiroThinker v1.0 is an open-source research agent designed to advance tool-augmented reasoning and information-seeking capabilities.
Unlike previous agents that scale only model size or context length, MiroThinker introduces interactive scaling at the model level, systematically training the model to handle deeper and more frequent agent–environment interactions as a third dimension of performance improvement. Interactive scaling leverages environment feedback and external information acquisition to correct errors and refine trajectories.
Empirical results demonstrate the effectiveness of this interactive scaling. Performance across several benchmarks improves predictably as the model engages in increasingly deep and frequent interactions with its environment.
Key Features
- Supports a 256K context window, enabling long-horizon reasoning and deep multi-step analysis.
- Handles up to 600 tool calls per task, a substantial improvement over previous open-source research agents.
- Released in 8B, 30B, and 72B parameter scales, accompanied by a comprehensive suite of tools and workflows to flexibly support diverse research settings and compute budgets.
MiroThinker v1.0 demonstrates strong general-research performance across a broad range of benchmarks, achieving 37.7%, 47.1%, 55.6%, and 81.9% on HLE-Text, BrowseComp, BrowseComp-ZH, and GAIA-Text-103, respectively. These results surpass previous open-source agents and narrow the gap with commercial counterparts such as GPT-5-high.
More details can be found in our technical report.
Online Demo
You can try our online demo here.
Performance
To prevent potential information leakage (e.g., searching for benchmark answers on HuggingFace), access to HuggingFace is explicitly disabled in the agent's tools.
Interactive Scaling
The RL-tuned MiroThinker-v1.0-30B model exhibits far longer and deeper interaction trajectories than its SFT counterpart across all four major benchmarks. While SFT models often terminate after only a few tool calls, the RL model performs extended multi-turn reasoning, exploring and verifying information before concluding.
This behavioral shift yields accuracy gains of 8–10 points, showing a clear link between interaction depth and performance. We refer to this effect as interactive scaling: increasing the frequency and depth of tool-augmented interactions reliably improves research reasoning capability. This forms a third dimension of scaling, alongside model size and context length, defining MiroThinker's path toward more general agentic intelligence.
Quick Start
Please refer to our GitHub repository for installation instructions, examples, and full documentation:
👉 https://github.com/MiroMindAI/MiroThinker
Local Deployment
We recommend serving the model with SGLang:
python -m sglang.launch_server --model-path miromind-ai/MiroThinker-v1.0-72B --host 0.0.0.0 --port 1234
For optimal performance in agentic tasks, we recommend the following inference parameters:
temperature: 1.0
top_p: 0.95
repetition_penalty: 1.05
max_context_length: 262144
max_tokens: 16384
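These parameters can be passed through any OpenAI-compatible client. Below is a minimal sketch; the endpoint and API key are placeholders matching the launch command above, and forwarding repetition_penalty via extra_body is an assumption based on SGLang's OpenAI-compatible server accepting extra sampling parameters:

from openai import OpenAI

# Points at the SGLang server launched above; the API key is a placeholder
client = OpenAI(api_key="EMPTY", base_url="http://localhost:1234/v1")

response = client.chat.completions.create(
    model="miromind-ai/MiroThinker-v1.0-72B",
    messages=[{"role": "user", "content": "Hello, MiroThinker!"}],
    temperature=1.0,
    top_p=0.95,
    max_tokens=16384,
    # repetition_penalty is not part of the OpenAI API; SGLang reads it from extra_body
    extra_body={"repetition_penalty": 1.05},
)
print(response.choices[0].message.content)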
Recommended System Prompt
We use this unified XML-wrapped JSON format to describe and organize all tools. If you have additional tools, please document them using the same structure and formatting to ensure consistent parsing, compatibility, and optimal performance across the environment.
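For example, a hypothetical get_stock_price tool (not part of MiroThinker's toolset; shown only to illustrate the structure) could be documented like this, mirroring the tool entries in the system prompt example below:

## Server name: my-custom-tools
### Tool name: get_stock_price
Description: Return the latest price for a stock ticker (hypothetical example).
Args:
    ticker: The stock ticker symbol, e.g., 'AAPL'.
Returns:
    A JSON string containing the latest price.

Input JSON schema: {'properties': {'ticker': {'title': 'Ticker', 'type': 'string'}}, 'required': ['ticker'], 'title': 'get_stock_priceArguments', 'type': 'object'}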
Example system prompt:
You are MiroThinker, an advanced AI assistant developed by MiroMind.
In this environment you have access to a set of tools you can use to answer the user's question.
You only have access to the tools provided below. You can only use one tool per message, and will receive the result of that tool in the user's next response. You use tools step-by-step to accomplish a given task, with each tool-use informed by the result of the previous tool-use. Today is: {today_date}
# Tool-Use Formatting Instructions
Tool-use is formatted using XML-style tags. The tool-use is enclosed in <use_mcp_tool></use_mcp_tool> and each parameter is similarly enclosed within its own set of tags.
The Model Context Protocol (MCP) connects to servers that provide additional tools and resources to extend your capabilities. You can use the server's tools via the `use_mcp_tool`.
Description:
Request to use a tool provided by a MCP server. Each MCP server can provide multiple tools with different capabilities. Tools have defined input schemas that specify required and optional parameters.
Parameters:
- server_name: (required) The name of the MCP server providing the tool
- tool_name: (required) The name of the tool to execute
- arguments: (required) A JSON object containing the tool's input parameters, following the tool's input schema, quotes within string must be properly escaped, ensure it's valid JSON
Usage:
<use_mcp_tool>
<server_name>server name here</server_name>
<tool_name>tool name here</tool_name>
<arguments>
{
"param1": "value1",
"param2": "value2 \"escaped string\""
}
</arguments>
</use_mcp_tool>
Important Notes:
- Tool-use must be placed **at the end** of your response, **top-level**, and not nested within other tags.
- Always adhere to this format for the tool use to ensure proper parsing and execution.
String and scalar parameters should be specified as is, while lists and objects should use JSON format. Note that spaces for string values are not stripped. The output is not expected to be valid XML and is parsed with regular expressions.
Here are the functions available in JSONSchema format:
## Server name: tool-python
### Tool name: create_sandbox
Description: Create a linux sandbox.
Args:
timeout: Time in seconds before the sandbox is automatically shutdown. The default is 600 seconds.
Returns:
The id of the newly created sandbox. You should use this sandbox_id to run other tools in the sandbox.
Input JSON schema: {'properties': {'timeout': {'default': 600, 'title': 'Timeout', 'type': 'integer'}}, 'title': 'create_sandboxArguments', 'type': 'object'}
### Tool name: run_python_code
Description: Run python code in an interpreter and return the execution result.
Args:
code_block: The python code to run.
sandbox_id: The id of the sandbox to run the code in. Reuse existing sandboxes whenever possible. To create a new sandbox, use tool `create_sandbox`.
Returns:
A result of the command execution, format like (stderr=..., stdout=..., exit_code=..., error=...)
Input JSON schema: {'properties': {'code_block': {'title': 'code_block', 'type': 'string'}, 'sandbox_id': {'title': 'Sandbox Id', 'type': 'string'}}, 'required': ['code_block', 'sandbox_id'], 'title': 'run_python_codeArguments', 'type': 'object'}
## Server name: search_and_scrape_webpage
### Tool name: google_search
Description:
Tool to perform web searches via Serper API and retrieve rich results.
It is able to retrieve organic search results, people also ask,
related searches, and knowledge graph.
Args:
q: Search query string
gl: Optional region code for search results in ISO 3166-1 alpha-2 format (e.g., 'us')
hl: Optional language code for search results in ISO 639-1 format (e.g., 'en')
location: Optional location for search results (e.g., 'SoHo, New York, United States', 'California, United States')
num: Number of results to return (default: 10)
tbs: Time-based search filter ('qdr:h' for past hour, 'qdr:d' for past day, 'qdr:w' for past week, 'qdr:m' for past month, 'qdr:y' for past year)
page: Page number of results to return (default: 1)
autocorrect: Whether to autocorrect spelling in query
Returns:
Dictionary containing search results and metadata.
Input JSON schema: {'properties': {'q': {'title': 'Q', 'type': 'string'}, 'gl': {'default': 'us', 'title': 'Gl', 'type': 'string'}, 'hl': {'default': 'en', 'title': 'Hl', 'type': 'string'}, 'location': {'default': None, 'title': 'Location', 'type': 'string'}, 'num': {'default': None, 'title': 'Num', 'type': 'integer'}, 'tbs': {'default': None, 'title': 'Tbs', 'type': 'string'}, 'page': {'default': None, 'title': 'Page', 'type': 'integer'}, 'autocorrect': {'default': None, 'title': 'Autocorrect', 'type': 'boolean'}}, 'required': ['q'], 'title': 'google_searchArguments', 'type': 'object'}
## Server name: jina_scrape_llm_summary
### Tool name: scrape_and_extract_info
Description:
Scrape content from a URL and extract specific types of information using LLM.
Args:
url (str): The URL to scrape content from
info_to_extract (str): The specific types of information to extract (usually a question)
custom_headers (Dict[str, str]): Additional headers to include in the scraping request
Returns:
Dict[str, Any]: A dictionary containing:
- success (bool): Whether the operation was successful
- url (str): The original URL
- extracted_info (str): The extracted information
- error (str): Error message if the operation failed
- scrape_stats (Dict): Statistics about the scraped content
- model_used (str): The model used for summarization
- tokens_used (int): Number of tokens used (if available)
Input JSON schema: {'properties': {'url': {'title': 'Url', 'type': 'string'}, 'info_to_extract': {'title': 'Info To Extract', 'type': 'string'}, 'custom_headers': {'additionalProperties': {'type': 'string'}, 'default': None, 'title': 'Custom Headers', 'type': 'object'}}, 'required': ['url', 'info_to_extract'], 'title': 'scrape_and_extract_infoArguments', 'type': 'object'}
# General Objective
You accomplish a given task iteratively, breaking it down into clear steps and working through them methodically.
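For illustration, a model response that calls the google_search tool documented above would end with a block like the following (the query values are invented for this example):

<use_mcp_tool>
<server_name>search_and_scrape_webpage</server_name>
<tool_name>google_search</tool_name>
<arguments>
{
  "q": "interactive scaling research agents",
  "num": 10
}
</arguments>
</use_mcp_tool>

The tool's result is then returned to the model in the next user message, as specified at the top of the prompt.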
Minimal Runnable Example
The following example shows how to run an MCP-style tool-calling workflow, including system prompt generation, model invocation, tool execution, and final response generation.
Before running the script, make sure to set the required environment variables:
export OPENAI_API_KEY="your-api-key-here"
export BASE_URL="https://your-model-endpoint.example.com/v1"
Example Python script:
import json
import os
import inspect
import re

from openai import OpenAI
from json_repair import repair_json


def get_weather(location: str, unit: str = "celsius") -> str:
    """
    Get weather information for a specified location (simulated)

    Args:
        location: Location name
        unit: Temperature unit, either celsius or fahrenheit

    Returns:
        JSON string with weather information
    """
    weather_data = {
        "London": {"temperature": 15, "condition": "sunny", "humidity": 45},
        "New York": {"temperature": 20, "condition": "cloudy", "humidity": 60},
        "Tokyo": {"temperature": 25, "condition": "rainy", "humidity": 75},
    }
    weather = weather_data.get(location, {"temperature": 18, "condition": "unknown", "humidity": 50})
    if unit == "fahrenheit":
        weather["temperature"] = weather["temperature"] * 9 / 5 + 32
        weather["unit"] = "°F"
    else:
        weather["unit"] = "°C"
    return json.dumps(weather, ensure_ascii=False)


def calculate(expression: str) -> str:
    """
    Calculate a mathematical expression

    Args:
        expression: Mathematical expression, e.g., "2 + 3 * 4"

    Returns:
        Calculation result
    """
    try:
        # NOTE: eval() is acceptable in this toy demo only; never evaluate untrusted input in production
        result = eval(expression)
        return json.dumps({"result": result, "expression": expression}, ensure_ascii=False)
    except Exception as e:
        return json.dumps({"error": str(e)}, ensure_ascii=False)


tools = [
    {"type": "function", "function": {"name": "get_weather", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "Location name"}, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"], "description": "Temperature unit, default is celsius"}}, "required": ["location"]}}},
    {"type": "function", "function": {"name": "calculate", "parameters": {"type": "object", "properties": {"expression": {"type": "string", "description": "Mathematical expression to calculate, e.g., '2 + 3 * 4'"}}, "required": ["expression"]}}}
]

available_functions = {"get_weather": get_weather, "calculate": calculate}


def parse_mcp_tool_call(response_text: str):
    """Parse MCP-style tool call from model response. Returns first tool call or None."""
    match = re.search(r'<use_mcp_tool>(.*?)</use_mcp_tool>', response_text, re.DOTALL)
    if not match:
        return None
    content = match.group(1)
    server_match = re.search(r'<server_name>(.*?)</server_name>', content, re.DOTALL)
    tool_match = re.search(r'<tool_name>(.*?)</tool_name>', content, re.DOTALL)
    args_match = re.search(r'<arguments>(.*?)</arguments>', content, re.DOTALL)
    server_name = server_match.group(1).strip() if server_match else None
    tool_name = tool_match.group(1).strip() if tool_match else None
    if args_match:
        try:
            arguments = json.loads(args_match.group(1).strip())
        except json.JSONDecodeError as e:
            print(f"⚠️ Warning: Failed to parse arguments JSON: {e}, attempting to repair...")
            try:
                repaired = repair_json(args_match.group(1).strip())
                arguments = json.loads(repaired)
                print("✅ Successfully repaired JSON")
            except Exception as repair_error:
                print(f"❌ Failed to repair JSON: {repair_error}")
                arguments = {}
    else:
        arguments = {}
    if server_name and tool_name:
        return {"server_name": server_name, "tool_name": tool_name, "arguments": arguments}
    return None


def generate_mcp_system_prompt(openai_tools: list, available_functions: dict = None, server_name: str = "default", date: str = "2025-11-27") -> str:
    """Generate MCP-style system prompt from OpenAI tools format."""
    prefix = f"""You are MiroThinker, an advanced AI assistant developed by MiroMind.
In this environment you have access to a set of tools you can use to answer the user's question.
You only have access to the tools provided below. You can only use one tool per message, and will receive the result of that tool in the user's next response. You use tools step-by-step to accomplish a given task, with each tool-use informed by the result of the previous tool-use. Today is: {date}
# Tool-Use Formatting Instructions
Tool-use is formatted using XML-style tags. The tool-use is enclosed in <use_mcp_tool></use_mcp_tool> and each parameter is similarly enclosed within its own set of tags.
The Model Context Protocol (MCP) connects to servers that provide additional tools and resources to extend your capabilities. You can use the server's tools via the `use_mcp_tool`.
Description:
Request to use a tool provided by a MCP server. Each MCP server can provide multiple tools with different capabilities. Tools have defined input schemas that specify required and optional parameters.
Parameters:
- server_name: (required) The name of the MCP server providing the tool
- tool_name: (required) The name of the tool to execute
- arguments: (required) A JSON object containing the tool's input parameters, following the tool's input schema, quotes within string must be properly escaped, ensure it's valid JSON
Usage:
<use_mcp_tool>
<server_name>server name here</server_name>
<tool_name>tool name here</tool_name>
<arguments>
{{
"param1": "value1",
"param2": "value2 \\"escaped string\\""
}}
</arguments>
</use_mcp_tool>
Important Notes:
- Tool-use must be placed **at the end** of your response, **top-level**, and not nested within other tags.
- Always adhere to this format for the tool use to ensure proper parsing and execution.
String and scalar parameters should be specified as is, while lists and objects should use JSON format. Note that spaces for string values are not stripped. The output is not expected to be valid XML and is parsed with regular expressions.
Here are the functions available in JSONSchema format:
## Server name: {server_name}
"""
    tools_section = []
    for i, tool in enumerate(openai_tools):
        if tool.get("type") == "function":
            func = tool["function"]
            tool_name = func["name"]
            func_obj = available_functions[tool_name]
            # Prefer the full docstring over the short description from the tools list
            full_description = inspect.getdoc(func_obj) or func.get("description", "")
            if i > 0:
                tools_section.append("\n")
            tools_section.append(f"### Tool name: {tool_name}\nDescription: {full_description}\n\nInput JSON schema: {json.dumps(func['parameters'], ensure_ascii=False)}\n")
    suffix = "\n# General Objective\n\nYou accomplish a given task iteratively, breaking it down into clear steps and working through them methodically."
    return prefix + ''.join(tools_section) + suffix


def run_conversation(user_query: str, model: str = "MiroThinker"):
    """Run a complete conversation with tool calling"""
    system_prompt = generate_mcp_system_prompt(openai_tools=tools, available_functions=available_functions, server_name="My-Tools", date="2025-12-01")
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "your-api-key-here"), base_url=os.environ.get("BASE_URL", "your-base-url-here"))
    print(f"\n{'='*60}\nUser Query: {user_query}\n{'='*60}\n")
    messages = [{'role': 'system', 'content': system_prompt}, {"role": "user", "content": user_query}]
    print("📤 Sending request to model...")
    response = client.chat.completions.create(model=model, messages=messages)
    response_message = response.choices[0].message
    response_content = response_message.content
    tool_call = parse_mcp_tool_call(response_content)
    print(f"📝 Model response:\n{response_content}\n")
    messages.append(response_message)
    if tool_call:
        server_name = tool_call["server_name"]
        tool_name = tool_call["tool_name"]
        function_args = tool_call["arguments"]
        print(f"\n🔧 Model decided to call tool:\n  - Server: {server_name}\n  - Tool: {tool_name}\n  - Args: {json.dumps(function_args, ensure_ascii=False)}")
        function_response = available_functions[tool_name](**function_args)
        print(f"  - Result: {function_response}\n")
        # Per the system prompt, the tool result is returned to the model in the next user message
        messages.append({"role": "user", "content": function_response})
        print("📤 Requesting model to generate final response based on tool results...\n")
        second_response = client.chat.completions.create(model=model, messages=messages)
        final_message = second_response.choices[0].message.content
        print(f"💬 Final Response:\n{final_message}\n")
        return final_message
    else:
        print(f"💬 Model Response (no tool calls):\n{response_message.content}\n")
        return response_message.content


def main():
    """Run multiple examples"""
    run_conversation("What's the weather like in London?")
    # run_conversation("Calculate (25 + 15) * 3 - 10")


if __name__ == "__main__":
    main()
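When pointed at a deployed MiroThinker endpoint, the script should print the model's first reply (ending in a <use_mcp_tool> block for get_weather), the simulated tool result, and the model's final natural-language answer; the exact output will vary with the deployment.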
License
MiroThinker v1.0 is released under the MIT License.
Citation
If you find this project useful in your research, please consider citing:
@article{miromind2025mirothinker,
title={MiroThinker: Pushing the Performance Boundaries of Open-Source Research Agents via Model, Context, and Interactive Scaling},
author={MiroMind Team and Bai, Song and Bing, Lidong and Chen, Carson and Chen, Guanzheng and Chen, Yuntao and Chen, Zhe and Chen, Ziyi and Dai, Jifeng and Dong, Xuan and others},
journal={arXiv preprint arXiv:2511.11793},
year={2025}
}
Contact Us
MiroThinker is developed by the MiroMind AI Team. If you would like to get in touch, you can reach us on GitHub, Discord, WeChat, and RedNote, or via email at [email protected].