AI & ML interests

The AI community building the future.

Recent Activity

- sayakpaul updated a dataset about 1 hour ago: huggingface/diffusers-metadata
- Chunte had new activity about 19 hours ago: huggingface/brand-assets (logo updated)
- imstevenpmwork updated a dataset about 20 hours ago: huggingface/documentation-images

Chunte opened discussion #5, "logo updated", in huggingface/brand-assets about 19 hours ago.
evalstate posted an update 1 day ago
Hugging Face MCP Server v0.2.33
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Allow discovery of the Product Documentation Library via the Search tool.
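
For anyone wiring this up, here is a minimal client sketch, assuming the public MCP endpoint at https://huggingface.co/mcp and the official `mcp` Python SDK. It only lists the tools the server advertises, which is also how you discover the Search tool's exact name and input schema; the HF_TOKEN header is an assumption about authentication, not something stated in the release note.

```python
import asyncio
import os

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

HF_MCP_URL = "https://huggingface.co/mcp"  # assumed public endpoint
# Passing an HF token is an assumption; the server may also allow anonymous access.
HEADERS = (
    {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}
    if os.environ.get("HF_TOKEN")
    else None
)

async def main() -> None:
    # Open a streamable-HTTP transport and an MCP session on top of it.
    async with streamablehttp_client(HF_MCP_URL, headers=HEADERS) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the server's tools (Search, Fetch, ...) and their descriptions.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```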
multimodalart posted an update 7 days ago
Want to iterate on a Hugging Face Space with an LLM?

Now you can easily convert an entire HF repo (Model, Dataset, or Space) to a text file and feed it to a language model!

multimodalart/repo2txt
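
The Space does the work for you, but the underlying idea is easy to sketch with huggingface_hub. This is a rough approximation under my own assumptions (which file types to keep, how to delimit them), not the Space's actual implementation:

```python
from huggingface_hub import hf_hub_download, list_repo_files

REPO_ID = "multimodalart/repo2txt"  # any repo id; this one is just illustrative
REPO_TYPE = "space"                 # "model", "dataset", or "space"
# Which files count as "text" is an assumption for this sketch.
TEXT_SUFFIXES = (".py", ".md", ".txt", ".json", ".yaml", ".toml", ".cfg")

chunks = []
for path in list_repo_files(REPO_ID, repo_type=REPO_TYPE):
    if not path.endswith(TEXT_SUFFIXES):
        continue  # skip weights and other binaries
    local_path = hf_hub_download(REPO_ID, path, repo_type=REPO_TYPE)
    with open(local_path, encoding="utf-8", errors="replace") as f:
        chunks.append(f"===== {path} =====\n{f.read()}")

# One flat text file you can paste into (or upload to) a language model.
with open("repo.txt", "w", encoding="utf-8") as out:
    out.write("\n\n".join(chunks))

print(f"Wrote {len(chunks)} files into repo.txt")
```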
evalstate posted an update 8 days ago
Hugging Face MCP Server v0.2.31
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- OpenAI Apps SDK Support for Gradio Content Generation spaces
giadap posted an update 14 days ago
🌎 AI ethics and sustainability are two sides of the same coin.

In our new blog post with Dr. Sasha Luccioni, we argue that separating them (as is too often the case) means missing the bigger picture of how AI systems impact both people and the planet.

Ethical and sustainable AI development can't be pursued in isolation. The same choices that affect who benefits or is harmed by AI systems also determine how much energy and resources they consume.

We explore how two key concepts, evaluation and transparency, can serve as bridges between these domains:

📊 Evaluation, by moving beyond accuracy or performance metrics to include environmental and social costs, as we've done with tools like the AI Energy Score.

๐Ÿ” Transparency, by enabling reproducibility, accountability, and environmental reporting through open tools like the Environmental Transparency Space.

AI systems mirror our priorities. If we separate ethics from sustainability, we risk building technologies that are efficient but unjust, or fair but unsustainable.

Read our blog post here: https://huggingface.co/blog/sasha/ethics-sustainability

AIEnergyScore/Leaderboard
sasha/environmental-transparency
evalstate posted an update 16 days ago
Hugging Face MCP Server v0.2.29
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- The Document Fetch tool now returns markdown directly from HF Docs rather than a processed version.
- Relative URLs are now accepted for Fetch (see the client sketch after this list).
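
As a rough illustration of calling Fetch with a relative docs URL from an MCP client: the tool name "hf_doc_fetch" and its "doc_url" argument below are hypothetical placeholders for this sketch; list the server's tools first to get the real names and schema.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

HF_MCP_URL = "https://huggingface.co/mcp"  # assumed public endpoint

async def fetch_doc(relative_url: str) -> str:
    async with streamablehttp_client(HF_MCP_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "hf_doc_fetch" / "doc_url" are hypothetical names for this sketch;
            # use session.list_tools() to discover the actual tool and arguments.
            result = await session.call_tool("hf_doc_fetch", {"doc_url": relative_url})
            # Concatenate any text content blocks returned by the tool.
            return "\n".join(
                block.text for block in result.content if getattr(block, "text", None)
            )

# A relative HF Docs URL, per the release note above.
print(asyncio.run(fetch_doc("/docs/hub/index")))
```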
evijit posted an update 17 days ago
AI for Scientific Discovery Won't Work Without Fixing How We Collaborate.

My co-author @cgeorgiaw and I just published a paper challenging a core assumption: that the main barriers to AI in science are technical. They're not. They're social.

Key findings:

🚨 The "AI Scientist" myth delays progress: Waiting for AGI devalues human expertise and obscures science's real purpose: cultivating understanding, not just outputs.
📊 Wrong incentives: Datasets have 100x longer impact than models, yet data curation is undervalued.
⚠️ Broken collaboration: Domain scientists want understanding. ML researchers optimize performance. Without shared language, projects fail.
🔍 Fragmentation costs years: Harmonizing just 9 cancer files took 329 hours.

Why this matters: Addressing upstream bottlenecks, such as the need for efficient PDE solvers, could accelerate discovery across multiple sciences. CASP mobilized a community around protein structure, enabling AlphaFold. We need this for dozens of challenges.

Thus, we're launching Hugging Science! A global community addressing these barriers through collaborative challenges, open toolkits, education, and community-owned infrastructure. Please find all the links below!

Paper: AI for Scientific Discovery is a Social Problem (2509.06580)
Join: hugging-science
Discord: https://discord.com/invite/VYkdEVjJ5J
Molbap posted an update 17 days ago
🚀 New blog: Maintain the unmaintainable – 1M+ Python LOC, 400+ models

How do you stop a million-line library built by thousands of contributors from collapsing under its own weight?
At 🤗 Transformers, we do it with explicit software-engineering tenets, principles that make the codebase hackable at scale.

๐Ÿ” Inside the post:
โ€“ One Model, One File: readability first โ€” you can still open a modeling file and see the full logic, top to bottom.
โ€“ Modular Transformers: visible inheritance that cuts maintenance cost by ~15ร— while keeping models readable.
โ€“ Config-Driven Performance: FlashAttention, tensor parallelism, and attention scheduling are config-level features, not rewrites.
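
To make the last point concrete, here is a minimal sketch of what config-driven performance looks like from the user side: the attention backend and device placement are from_pretrained arguments rather than modeling-code edits. The model id and kwargs are illustrative choices, not taken from the post.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative; any causal LM on the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # config-level kernel choice; use "sdpa" if flash-attn isn't installed
    device_map="auto",                        # placement is config-level too (recent versions also offer tp_plan="auto")
)

inputs = tokenizer("The Transformers tenets are", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```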

Written with @lysandre, @pcuenq and @yonigozlan, this is a deep dive into how Transformers stays fast, open, and maintainable.

Read it here → transformers-community/Transformers-tenets