Your GPUs Are Idle 60% of the Time — Hugging Face Surveyed 16 RL Libraries to Fix That
Hugging Face analyzed 16 open-source reinforcement learning libraries and found they all converged on the same fix for wasted GPU time. Here's what they learned.