1. 1 Billion Classifications
2. From Chunks to Blocks: Accelerating Uploads and Downloads on the Hub
3. Build awesome datasets for video generation
4. The Open Arabic LLM Leaderboard 2
5. DABStep: Data Agent Benchmark for Multi-step Reasoning
6. π0 and π0-FAST: Vision-Language-Action Models for General Robot Control
7. Open-source DeepResearch – Freeing our search agents
8. The AI tools for Art Newsletter - Issue 1
9. How to deploy and fine-tune DeepSeek models on AWS
10. Open-R1: a fully open reproduction of DeepSeek-R1
11. Welcome to Inference Providers on the Hub 🔥
12. State of open video generation models in Diffusers
13. We now support VLMs in smolagents!
14. SmolVLM Grows Smaller – Introducing the 250M & 500M Models!
15. Hugging Face and FriendliAI partner to supercharge model deployment on the Hub
16. Introducing multi-backends (TRT-LLM, vLLM) support for Text Generation Inference
17. Timm ❤️ Transformers: Use any timm model with transformers
18. Train 400x faster Static Embedding Models with Sentence Transformers
19. AI Agents Are Here. What Now?
20. Visual Document Retrieval Goes Multilingual
21. CO₂ Emissions and Models Performance: Insights from the Open LLM Leaderboard
22. Introducing smolagents: simple agents that write actions in code.
23. Visualize and understand GPU memory in PyTorch
24. Controlling Language Model Generation with NVIDIA's LogitsProcessorZoo
25. Evaluating Audio Reasoning with Big Bench Audio
26. Finally, a Replacement for BERT: Introducing ModernBERT
27. Bamba: Inference-Efficient Hybrid Mamba2 Model
28. Benchmarking Language Model Performance on 5th Gen Xeon at GCP
29. Welcome the Falcon 3 Family of Open Models!
30. Introducing the Synthetic Data Generator - Build Datasets with Natural Language
31. LeMaterial: an open source initiative to accelerate materials discovery and research
32. Hugging Face models in Amazon Bedrock
33. Open Preference Dataset for Text-to-Image Generation by the 🤗 Community
34. How good are LLMs at fixing their mistakes? A chatbot arena experiment with Keras and TPUs
35. Welcome PaliGemma 2 – New vision language models by Google
36. Rethinking LLM Evaluation with 3C3H: AraGen Benchmark and Leaderboard
37. Investing in Performance: Fine-tune small models with LLM insights - a CFM case study
38. Open Source Developers Guide to the EU AI Act
39. SmolVLM - small yet mighty Vision Language Model
40. Rearchitecting Hugging Face Uploads and Downloads
41. You could have designed state of the art positional encoding
42. Introduction to the Open Leaderboard for Japanese LLMs
43. Faster Text Generation with Self-Speculative Decoding
44. From Files to Chunks: Improving Hugging Face Storage Efficiency
45. Letting Large Models Debate: The First Multilingual LLM Debate Competition
46. Judge Arena: Benchmarking LLMs as Evaluators
47. Share your open ML datasets on Hugging Face Hub!
48. Hugging Face + PyCharm
49. Argilla 2.4: Easily Build Fine-Tuning and Evaluation datasets on the Hub — No Code Required
50. Universal Assisted Generation: Faster Decoding with Any Assistant Model
51. Expert Support case study: Bolstering a RAG app with LLM-as-a-Judge
52. A Deepdive into Aya Expanse: Advancing the Frontier of Multilinguality
53. CinePile 2.0 - making stronger datasets with adversarial refinement
54. Introducing HUGS - Scale your AI with Open Models
55. Introducing SynthID Text
56. Deploying Speech-to-Speech on Hugging Face
57. Releasing Outlines-core 0.1.0: structured generation in Rust and Python
58. 🧨 Diffusers welcomes Stable Diffusion 3.5 Large
59. Transformers.js v3: WebGPU support, new models & tasks, and more…
60. Hugging Face Teams Up with Protect AI: Enhancing Model Security for the Community
61. Llama 3.2 in Keras
62. Fixing Gradient Accumulation
63. Introducing the AMD 5th Gen EPYC™ CPU
64. A Security Review of Gradio 5
65. Welcome, Gradio 5
66. Scaling AI-based Data Processing with Hugging Face + Dask
67. Faster Assisted Generation with Dynamic Speculation
68. Improving Parquet Dedupe on Hugging Face Hub
69. Introducing the Open FinLLM Leaderboard
70. A Short Summary of Chinese AI Global Expansion
71. 🇨🇿 BenCzechMark - Can your LLM Understand Czech?
72. Converting Vertex-Colored Meshes to Textured Meshes
73. Llama can now see and run on your device - welcome Llama 3.2
74. Exploring the Daily Papers Page on Hugging Face
75. FineVideo: behind the scenes
76. Optimize and deploy models with Optimum-Intel and OpenVINO GenAI
77. Fine-tuning LLMs to 1.58bit: extreme quantization made easy
78. Introducing the SQL Console on Datasets
79. Introducing Community Tools on HuggingChat
80. Accelerate 1.0.0
81. Hugging Face partners with TruffleHog to Scan for Secrets
82. Scaling robotics datasets with video encoding
83. The 5 Most Under-Rated Tools on Hugging Face
84. Improving Hugging Face Training Efficiency Through Packing with Flash Attention
85. Deploy Meta Llama 3.1 405B on Google Cloud Vertex AI
86. A failed experiment: Infini-Attention, and why we should keep trying?
87. Introduction to ggml
88. Tool Use, Unified
89. Welcome FalconMamba: The first strong attention-free 7B model
90. XetHub is joining Hugging Face!
91. Introducing TextImage Augmentation for Document Images
92. 2024 Security Feature Highlights
93. Google releases Gemma 2 2B, ShieldGemma and Gemma Scope
94. Memory-efficient Diffusion Transformers with Quanto and Diffusers
95. Serverless Inference with Hugging Face and NVIDIA NIMs
96. LAVE: Zero-shot VQA Evaluation on Docmatix with LLMs - Do We Still Need Fine-Tuning?
97. Llama 3.1 - 405B, 70B & 8B with multilinguality and long context
98. WWDC 24: Running Mistral 7B with Core ML
99. TGI Multi-LoRA: Deploy Once, Serve 30 Models
100. Docmatix - a huge dataset for Document Visual Question Answering