AI & ML interests

✍️ Rigor, 🤝 Consent, 👁️‍🗨️ Social Consciousness, 🌎 Sustainability, 🧑‍🤝‍🧑 Inclusion, and 🤔 Inquisitiveness

Recent Activity

frimelle
updated a Space about 10 hours ago
meg
published a Space 3 days ago
giadap
posted an update 14 days ago
🌎 AI ethics and sustainability are two sides of the same coin.

In our new blog post with Dr. Sasha Luccioni, we argue that separating them (as is too often the case) means missing the bigger picture of how AI systems impact both people and the planet.

Ethical and sustainable AI development can’t be pursued in isolation. The same choices that affect who benefits from or is harmed by AI systems also determine how much energy and resources they consume.

We explore how two key concepts, evaluation and transparency, can serve as bridges between these domains:

📊 Evaluation, by moving beyond accuracy or performance metrics to include environmental and social costs, as we’ve done with tools like the AI Energy Score.

πŸ” Transparency, by enabling reproducibility, accountability, and environmental reporting through open tools like the Environmental Transparency Space.

AI systems mirror our priorities. If we separate ethics from sustainability, we risk building technologies that are efficient but unjust, or fair but unsustainable.

Read our blog post here: https://huggingface.co/blog/sasha/ethics-sustainability

AIEnergyScore/Leaderboard
sasha/environmental-transparency
christopher
posted an update 16 days ago
Something very cool is cooking at Lichess
evijit
posted an update 17 days ago
AI for Scientific Discovery Won't Work Without Fixing How We Collaborate.

My co-author @cgeorgiaw and I just published a paper challenging a core assumption: that the main barriers to AI in science are technical. They're not. They're social.

Key findings:

🚨 The "AI Scientist" myth delays progress: waiting for AGI devalues human expertise and obscures science's real purpose of cultivating understanding, not just producing outputs.
📊 Wrong incentives: Datasets have 100x longer impact than models, yet data curation is undervalued.
⚠️ Broken collaboration: Domain scientists want understanding. ML researchers optimize performance. Without shared language, projects fail.
πŸ” Fragmentation costs years: Harmonizing just 9 cancer files took 329 hours.

Why this matters: fixing upstream bottlenecks (like efficient PDE solvers) could accelerate discovery across multiple sciences. CASP mobilized a community around protein structure prediction, enabling AlphaFold. We need this for dozens of challenges.

Thus, we're launching Hugging Science! A global community addressing these barriers through collaborative challenges, open toolkits, education, and community-owned infrastructure. Please find all the links below!

Paper: AI for Scientific Discovery is a Social Problem (2509.06580)
Join: hugging-science
Discord: https://discord.com/invite/VYkdEVjJ5J
lunarflu
posted an update 18 days ago
Cool stuff these past weeks on Hugging Face! 🤗 🚀
• 📈 Trackio, a local-first W&B alternative (see the sketch after this list)
https://github.com/gradio-app/trackio/issues
• 🌍 EmbeddingGemma, 300M-param, multilingual embeddings, on-device
https://huggingface.co/blog/embeddinggemma
• 💻 Open LLMs in VS Code (Inference Providers)
https://x.com/reach_vb/status/1966185427582497171
• 🤖 Smol2Operator GUI agents
https://huggingface.co/blog/smol2operator
• 🖼️ Gradio visible watermarking
https://huggingface.co/blog/watermarking-with-gradio
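
For the Trackio item above, here is a minimal sketch of local-first experiment tracking based on Trackio's wandb-compatible API; the project name and metric values are made up for illustration.

```python
# Minimal sketch of Trackio's wandb-style logging API; the project name and
# metric values are illustrative placeholders.
import trackio

trackio.init(project="demo-run")              # logs locally by default
for step in range(3):
    trackio.log({"loss": 1.0 / (step + 1)})   # same call shape as wandb.log
trackio.finish()
```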
giadap
posted an update 24 days ago
One of the hardest challenges in AI safety is finding the right balance: how do we protect people from harm without undermining their agency? This tension is especially visible in conversational systems, where safeguards can sometimes feel more paternalistic than supportive.

In my latest piece for Hugging Face, I argue that open-source and community-driven approaches offer a promising (though not exclusive) way forward.

✨ Transparency can make safety mechanisms into learning opportunities.
✨ Collaboration with diverse communities makes safeguards more relevant across contexts.
✨ Iteration in the open lets protections evolve rather than freeze into rigid, one-size-fits-all rules.

Of course, this isn’t a silver bullet. Top-down safety measures will still be necessary in some cases. But if we rely only on corporate control, we risk building systems that are safe at the expense of trust and autonomy.

Read the blog post here: https://huggingface.co/blog/giadap/preserving-agency
meg
posted an update about 1 month ago
🤖 As AI-generated content is shared in movies, on TV, and across the web, there's one simple piece of low-hanging fruit 🍇 to help people know what's real: visible watermarks. With the Gradio team, I've made sure it's trivially easy to add this disclosure to images, video, and chatbot text. See how: https://huggingface.co/blog/watermarking-with-gradio
Thanks in particular to @abidlabs and Yuvraj Sharma for the code collaboration.
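
The linked blog post covers the Gradio integration itself; as a generic illustration of the underlying idea (not the Gradio API), here is a short Pillow sketch that stamps a visible disclosure label onto an image.

```python
# Generic sketch of a visible watermark with Pillow; this is not the Gradio
# API from the blog post. The label text and image are placeholders.
from PIL import Image, ImageDraw

img = Image.new("RGB", (512, 512), "gray")    # stand-in for generated output
draw = ImageDraw.Draw(img)
draw.text((10, img.height - 24), "AI-generated", fill="white")  # disclosure label
img.save("watermarked.png")
```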
yjernite
posted an update about 1 month ago
Tremendous quality-of-life upgrade on the Hugging Face Hub - we now have auto-complete emojis 🤗 🥳 👏 🙌 🎉

Get ready for lots more very serious analysis on a whole range of topics from yours truly, now that we have unlocked this full range of expression 😄 🤔 🗣 🙊