AI & ML interests

None defined yet.

Recent Activity

andito posted an update 13 days ago

Finally, our new paper is out! "FineVision: Open Data Is All You Need"! 🥳
FineVision: Open Data Is All You Need (2510.17269)

If you've ever trained a VLM, you know this problem: nobody shares their data mixtures. It's a black box, which makes replicating SOTA work impossible.
We wanted to change that.

FineVision unifies 200 sources into 24 million samples. With 17.3 million images and 9.5 billion answer tokens, it's the largest open resource of its kind.

In the paper, we share how we built it:
🔍 finding and cleaning data at scale
🧹 removing excessive duplicates across sources
🤗 decontaminating against 66 public benchmarks (a sketch of the idea is below)

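For a flavor of what decontamination involves, here is a minimal n-gram overlap check in Python. It's an illustrative sketch only, not the paper's pipeline; the n-gram size, normalization, and "any overlap" rule are assumptions.

import re

def ngrams(text, n=8):
    # normalize, then collect word-level n-grams
    words = re.sub(r"\W+", " ", text.lower()).split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(sample, benchmark_texts, n=8):
    # flag a training sample if it shares any n-gram with any benchmark text
    grams = ngrams(sample, n)
    return any(grams & ngrams(b, n) for b in benchmark_texts)

benchmark = ["What is the capital of France? It is Paris, of course."]
sample = "Q: What is the capital of France? It is Paris, of course. A: Paris"
print(is_contaminated(sample, benchmark))  # True
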
My favorite part is Figure 6 (in the video!). It's our visual diversity analysis. It shows that FineVision isn't just bigger; it's more balanced and conceptually richer than other open datasets.
NVIDIA's Eagle 2 paper highlighted just how critical this visual diversity is, and our results confirm it: models trained on FineVision consistently outperform those trained on any other open dataset on 11 benchmarks!

🎉 To celebrate the paper, I'm also releasing a concatenated and shuffled version of the full dataset! 👉 HuggingFaceM4/FineVision_full_shuffled

It's ready to stream, so you can start training your own models right away:

from datasets import load_dataset

# stream the 24M samples instead of downloading the full dataset up front
d = load_dataset("HuggingFaceM4/FineVision_full_shuffled", split="train", streaming=True)
print(next(iter(d)))  # peek at the first sample

A big shoutout to the first authors: Luis Wiedmann and Orr Zohar. They are rockstars!

burtenshaw posted an update about 2 months ago

Smol course has a distinctive approach to teaching post-training, so I'm posting about how it's different from other post-training courses, including the LLM course that's already available.

In short, the smol course is just more direct than any of the other courses, and it's intended for semi-pro post-trainers.

- It's a minimal set of instructions on the core parts.
- It's intended to bootstrap real projects you're working on.
- The material hands over to existing documentation for details.
- Likewise, it hands over to the LLM course for the basics.
- Assessment is based on a leaderboard, so you don't have to read all the material.

To start the smol course, follow here: smol-course

burtenshaw posted an update about 2 months ago

new smol course

If you're building with or learning about post-training AI models right now, we have a new FREE and CERTIFIED course.

🔗 Follow the org to join in: smol-course

The course builds on smol course v1, which was the fastest way to learn to train your custom AI models. It now has:

- A leaderboard for students to submit models to
- Certification based on exams and leaderboards
- Prizes based on leaderboards
- Up-to-date content on TRL and SmolLM3
- Deep integration with the Hub's compute for model training and evaluation

We will release chapters every few weeks, so you can follow the org to stay updated.

burtenshaw posted an update about 2 months ago

The open source AI community is made of people who are passionate about their work and care deeply about it. So we thought it would be cool to celebrate our favourite icons of the community with a fun award.

Winners get free Hugging Face Pro subscriptions, merchandise, or compute credits for the Hub.

🔗 Follow and nominate here: community-spotlight

This is a new initiative to recognise and celebrate the incredible work being done by community members. It's all about inspiring more collaboration and innovation in the world of machine learning and AI.

They're highlighting contributors in four key areas:
- model creators: building and sharing innovative and state-of-the-art models.
- educators: sharing knowledge through posts, articles, demos, and events.
- tool builders: creating the libraries, frameworks, and applications that we all use.
- community champions: supporting and mentoring others in forums.

Know someone who deserves recognition? Nominate them by opening a post in the Hugging Face community forum.

eliebak posted an update 2 months ago

Super excited to announce that our research team at Hugging Face will be doing an AMA on Reddit's r/LocalLLaMA.

Come ask any questions to the team behind SmolLM, FineWeb and more! And who knows, maybe there'll be a shiny new release to talk about?

Thursday 4th September, 8AM-11AM PST 🤗

eliebak posted an update 2 months ago

Motif 2.6B tech report is pretty insane; it's the first time I've seen a model with differential attention and PolyNorm trained at scale!

> It's trained on 2.5T tokens, with a "data mixture schedule" to continuously adjust the mixture over training.
> They use WSD with a "simple moving average", averaging the last 6 checkpoints every 8B tokens (sketch below).
> They trained on FineMath, FineWeb2, DCLM, and TxT360.
> Lots of detail on the finetuning data they used; for instance, they used EvolKit and did some "dataset fusion" to pack more compressed knowledge into the data.
> They mention they also tried Normalized GPT, QK-Norm, and Cross-Layer Attention.

Motif-Technologies/Motif-2.6B
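
Checkpoint averaging is simple to picture in code. Here's a minimal PyTorch sketch of averaging the weights of the last 6 checkpoints; the file names and everything beyond the "average the last 6 checkpoints" idea are assumptions, not Motif's implementation.

import torch

def average_checkpoints(paths):
    # load each checkpoint's state dict and average every tensor elementwise
    states = [torch.load(p, map_location="cpu") for p in paths]
    return {
        key: torch.stack([s[key].float() for s in states]).mean(dim=0)
        for key in states[0]
    }

# e.g. the 6 most recent checkpoints, saved every 8B tokens (hypothetical paths)
avg_state = average_checkpoints([f"ckpt_{i}.pt" for i in range(6)])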

andito posted an update 3 months ago

Many VLMs claim to process hours of video. But can they follow the story? 🤔
Today, we introduce TimeScope: the benchmark that separates true temporal understanding from marketing hype. Let's see how much VLMs really understand! ⏳

We test three skills that matter for real-world use:
🔎 Localized Retrieval: Find a specific action.
🧩 Information Synthesis: Piece together scattered clues.
🏃 Fine-Grained Perception: Analyze detailed motion (e.g., count how many times a person swings an axe).

The results are in, and they're revealing. Only Gemini 2.5 Pro handles 1-hour-long videos.
Performance drops sharply with duration, proving that long video understanding is still challenging. We've found the breaking points; now the community can start fixing them. 📈

Want to learn more? TimeScope is 100% open-source. Benchmark your model and help us build the next generation of video AI.

📖 Blog:
https://huggingface.co/blog/timescope-video-lmm-benchmark
👩‍💻 Leaderboard & Demo: Apollo-LMMs/TimeScope
📊 Dataset: Apollo-LMMs/TimeScope
⚙️ Eval Code: https://github.com/EvolvingLMMs-Lab/lmms-eval

eliebak posted an update 4 months ago

Kimi K2 tech report is full of gems as always. Here are my notes on it:

> MuonClip: Pretty crazy how after 70k steps the training stabilizes and the QK-clip is basically inactive. There is also no loss in perf with QK-clip, which is not trivial at all (at small scale, but with an aggressive threshold). There's also a cool explanation in appendix E of why Muon makes the logits explode (tl;dr: Muon makes the singular values of the update matrix higher).
> Sparsity scaling laws to justify their ratio: they have a very solid training infra that allows the model to be trained at this sparsity level. They could have increased it even more, but as sparsity increases, training becomes less efficient.
> They reduce the number of attention heads to make the model more efficient for long context, since attention heads are a big bottleneck there. They also remove 2 of the 3 "first dense" layers in the DSv3 arch.

With the sparsity and the attention heads divided by 2, they achieve 83% increased FLOPs compared to the DeepSeek-V3 arch at 128k.

> Data: Rephrasing is KEY. They do a lot more synthetic data generation and rephrase their corpus to get different styles; for longer documents they do it chunk by chunk (sketch below). I'm (half) surprised that ONLY 1 epoch (assuming the same number of training tokens, I think?) of data rephrased 10 times gets better accuracy than 10 epochs of the same data rephrased once.
> They do rewriting for Math and Knowledge; for Math they apply the SwallowMath recipe and instruct the model to rephrase in a "learning note" style.
> They talk about diversity and probably have some internal stuff/evals to test that; as always, it's still a bit unclear to me how to properly measure it.
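
The chunk-by-chunk rephrasing is easy to sketch. Here's a rough Python illustration of the idea; the chunking granularity, the prompt, and the rephrase callable are all hypothetical, not Kimi's actual pipeline.

def rephrase_document(doc, rephrase, chunk_words=2000):
    # split a long document into chunks, rephrase each, then reassemble
    words = doc.split()
    chunks = [" ".join(words[i:i + chunk_words]) for i in range(0, len(words), chunk_words)]
    prompt = "Rewrite the following passage in a different style, keeping all facts intact:\n\n"
    return "\n".join(rephrase(prompt + c) for c in chunks)

# demo with a stand-in "model" that just echoes the passage back
print(rephrase_document("some long document " * 5, rephrase=lambda p: p.split("\n\n")[-1]))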

The infra is also very nice; quick summary:
> PP=16 (1F1B schedule, a bit custom), EP=16, ZeRO-1
> No FP8 computation, except for storage of specific layers; selective recomputation for inexpensive blocks; activation offloading to CPU

burtenshaw posted an update 4 months ago

Kimi-K2 is ready for general use! In these notebooks I walk you through use cases like function calling and structured outputs.

🔗 burtenshaw/Kimi-K2-notebooks

You can swap it into any OpenAI-compatible application via Inference Providers and get to work with an open source model. Roughly, the swap looks like the sketch below.
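
A minimal sketch with the OpenAI Python client pointed at the Inference Providers router (the router URL and model id are worth double-checking against the Inference Providers docs):

import os
from openai import OpenAI

# point any OpenAI-compatible client at Hugging Face Inference Providers
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],  # your Hugging Face token
)
response = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct",  # assumed Hub model id
    messages=[{"role": "user", "content": "Write a haiku about open models."}],
)
print(response.choices[0].message.content)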

andito posted an update 4 months ago

🧠👁️ Can AI visualize solutions?

Humans often solve visual problems by sketching ideas in our minds. What if Vision-Language Models (VLMs) could do something similar, not by generating full images, but by using internal "mental sketches"?

That's the idea behind Mirage, a new framework that empowers VLMs to reason using latent visual tokens. Instead of just thinking in words, Mirage mixes in abstract visual representations that help the model solve complex tasks.

These aren't photorealistic images. They're compact, internal representations optimized purely to support reasoning.

🔧 Mirage is trained in two phases:

1) Grounding: It learns to produce latent tokens anchored in real images.
2) Refinement: The model drops the images and learns to generate visual tokens on its own.

📈 And yes, it works!
On challenging benchmarks like Visual Spatial Planning, Jigsaw puzzles, and Spatial Attention Tasks, Mirage clearly outperforms GPT-4o and other strong baselines.
Smart sketches > empty words.

By mimicking the way humans visualize solutions, Mirage gives AI a new kind of imagination, one that's faster, more efficient, and more human-like.
Kudos to the teams at UMass Amherst and MIT behind this exciting work.
Check the paper: Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (2506.17218)

burtenshaw posted an update 4 months ago

Inference for generative AI models looks like a minefield, but there's a simple protocol for picking the best option:

🌍 95% of users >> If you're using open (large) models and need fast online inference, use Inference Providers on auto mode and let it choose the best provider for the model (sketch after this list). https://huggingface.co/docs/inference-providers/index

👷 Fine-tuners / bespoke >> If you've got custom setups, use Inference Endpoints to define a configuration on AWS, Azure, or GCP. https://endpoints.huggingface.co/

🦫 Locals >> If you're trying to stretch everything you can out of a server or local machine, use Llama.cpp, Jan, LMStudio or vLLM. https://huggingface.co/settings/local-apps#local-apps

🪟 Browsers >> If you need open models running right here in the browser, use transformers.js. https://github.com/huggingface/transformers.js

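For the first path, auto mode looks roughly like this with huggingface_hub's InferenceClient (a sketch; the model id is just an example):

import os
from huggingface_hub import InferenceClient

# "auto" lets the client pick the best available provider for the model
client = InferenceClient(provider="auto", api_key=os.environ["HF_TOKEN"])
completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example model id
    messages=[{"role": "user", "content": "Summarize MCP in one sentence."}],
)
print(completion.choices[0].message.content)
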
Let me know what you're using, and if you think it's more complex than this.

burtenshaw posted an update 5 months ago

You don't need remote APIs for a coding copilot, or for the MCP Course! Set up a fully local IDE with MCP integration using Continue. In this tutorial, Continue guides you through setting it up.

This is what you need to do to take control of your copilot:

1. Get the Continue extension from the [VS Code marketplace](https://marketplace.visualstudio.com/items?itemName=Continue.continue) to serve as the AI coding assistant.

2. Serve the model with an OpenAI-compatible server in Llama.cpp, LMStudio, etc.

llama-server -hf unsloth/Devstral-Small-2505-GGUF:Q4_K_M

3. Create a .continue/models/llama-max.yaml file in your project to tell Continue how to use the locally served model.
name: Llama.cpp model
version: 0.0.1
schema: v1
models:
  - provider: llama.cpp
    model: unsloth/Devstral-Small-2505-GGUF
    apiBase: http://localhost:8080
    defaultCompletionOptions:
      contextLength: 8192  # adjust based on the model
    name: Llama.cpp Devstral-Small
    roles:
      - chat
      - edit


4. Create a .continue/mcpServers/playwright-mcp.yaml file to integrate a tool, like the Playwright browser automation tool, with your assistant.

name: Playwright mcpServer
version: 0.0.1
schema: v1
mcpServers:
  - name: Browser search
    command: npx
    args:
      - "@playwright/mcp@latest"


Check out the full tutorial in [the MCP course](https://huggingface.co/learn/mcp-course/unit2/continue-client).

burtenshaw posted an update 5 months ago

Brand new MCP course units are out, and now it's getting REAL! We've collaborated with Anthropic to dive deep into production-ready and autonomous agents using MCP.

🔗 mcp-course

This is what the new material covers:

- Use Claude Code to build an autonomous PR agent
- Integrate your agent with Slack and GitHub so it can work with your team
- Get certified on your use case and share it with the community
- Build an autonomous PR cleanup agent on the Hugging Face Hub and deploy it with Spaces

The material goes deep into these problems and helps you build applications that work. We're super excited to see what you build with it.

burtenshaw posted an update 5 months ago

Super excited to release AutoTrain MCP. This is an MCP server for training AI models, so you can use your AI tools to train your AI models 🤯.

🔗 burtenshaw/autotrain-mcp

Use this MCP server with tools like Claude Desktop, Cursor, VSCode, or Continue to do this:

- Define an ML problem like image classification, LLM fine-tuning, text classification, etc.
- The AI can retrieve models and datasets from the Hub using the Hub MCP.
- Training happens on a Hugging Face Space, so no worries about hardware constraints.
- Models are pushed to the Hub to be used in inference tools like Llama.cpp, vLLM, MLX, etc.
- Built on top of the AutoTrain library, so it has full integration with transformers and other libraries.

Everything is still under active development, but I'm super excited to hear what people build, and I'm open to contributions!

dvilasuero posted an update 5 months ago

Super excited to launch Hugging Face Sheets: Spreadsheets meet AI and unstructured data.

A few months ago, we started imagining new ways to build and transform datasets with the latest open-source models.

Today, I'm thrilled to introduce our first step in this direction.


In a nutshell:

📝 Effortlessly run prompts and models over your data.
🌍 Agentic search for accuracy and real-time information.
🖼️ Familiar, minimalistic interface for interacting with data.
🎯 Human feedback 2.0: Your input directly improves generated data.
💯 Access hundreds of open models and leading inference providers.

Go to this space to try it out!

aisheets/sheets

Leave your questions below, we're just getting started!