![ReversingLabs Identifies Novel ML Malware Hosted on Leading Hugging Face AI Model Platform](https://i1.wp.com/ml.globenewswire.com/media/MGI0Mzc1ZmQtNjEzOS00MzczLTk2YjgtY2JhOGYyMWZhMjUyLTExMzQyODM=/tiny/ReversingLabs-US-Inc-.png?w=1200&resize=1200,0&ssl=1)
Dubbed “nullifAI,” the Detection-Evasion Tactic Targeted Pickle Files in ML Models, Demonstrating the Fast-Growing Cybersecurity Risks Presented by AI Coding Tool Platforms
CAMBRIDGE, Mass., Feb. 06, 2025 (GLOBE NEWSWIRE) — ReversingLabs (RL), the trusted name in file and software security, today revealed a novel ML malware attack technique on the AI community Hugging Face. Dubbed “nullifAI,” it impacted two ML models hosted on Hugging Face, employing file corruption to evade the AI platform’s defenses. The discovery is outlined in RL’s latest research post, “Malicious ML models discovered on Hugging Face platform,” and is accompanied by a new white paper, “AI is the Supply Chain,” which highlights the larger cybersecurity challenges AI creates for software development.
In its research post, RL examines how threat actors are seeking hard-to-detect ways to insert and distribute ML malware via unsuspecting hosts, such as the AI platform Hugging Face. The research details how attackers used corrupt Pickle files to evade detection and bypass Hugging Face security protections while simultaneously managing to achieve execution of malicious code. Hugging Face has been notified and the ML models in question were taken down.
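The core risk the research describes stems from how Python’s Pickle format works: deserializing a pickle can invoke arbitrary callables, so loading an untrusted model file can execute code. The sketch below is purely illustrative, not the actual nullifAI payload or the evasion technique RL documented; it uses a harmless `print` call to show why `pickle.load()` on untrusted data is dangerous.

```python
import pickle

# Illustrative sketch only: NOT the nullifAI payload. The Payload class
# name is hypothetical; a benign print() stands in for attacker code.
class Payload:
    # pickle consults __reduce__ when serializing an object; the returned
    # callable (here, print) is invoked with its arguments at *load* time.
    def __reduce__(self):
        return (print, ("arbitrary code runs during pickle.loads()",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # invokes print(...) as a side effect of deserialization
```

Because execution happens during deserialization itself, signature- or structure-based scanning of the file can miss the threat; this is why safer serialization formats for model weights, such as safetensors, are increasingly preferred for sharing ML models.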
“While the files discovered by our researchers appear to be ‘proof of concept’ rather than active threats, the failure to detect their presence points to a larger set of issues that are going to grow significantly and become more problematic as the use of AI coding tools grows,” said Tomislav Peričin, Chief Software Architect and co-founder, ReversingLabs. “Right now, AI is fueling modern software development, populating libraries and emboldening attackers. In fact, it’s safe to say AI is the supply chain, and while the benefits are vast, the security risks that come with it are alarming. To mitigate these new risks, organizations must embrace new modern software supply chain security solutions.”
Securing AI platforms and communities is critical. nullifAI is an example of an evolving category of software supply chain risks where AI is involved; in this case, ML models hosted in an AI community. In its new white paper “AI is the Supply Chain,” RL examines how AI is transforming software development, altering software supply chains and creating significant new cybersecurity challenges for businesses. According to Gartner, 75% of enterprise software engineers will use AI code assistants by 2028. This includes assistants offered by Hugging Face, GitHub (Copilot), Tabnine, and others.
While fueling incredible new innovations, AI-generated code will introduce new cybersecurity challenges to software development organizations. Examples include the growing use of outdated code and, more concerningly, compromised code containing exploitable software vulnerabilities or malicious features that are undetectable by traditional security measures such as static code analysis.
Address AI Risks in Software Development with Spectra Assure
ReversingLabs works with some of the leading AI companies to help secure their LLM and ML models. With the industry’s largest threat repository and RL’s advanced complex binary analysis, Spectra Assure offers the most comprehensive SBOM and risk assessment for applications—identifying malware, tampering, exposed secrets, vulnerabilities, weak mitigations, and more, in minutes and without requiring source code. As AI-generated code continues to explode, Spectra Assure provides the critical build exam for software vendors and AI platforms before shipping or including AI models in their software.
To learn more about the risks of nullifAI, attend RL’s webinar “Hugging Face and ML Malware – How RL Discovered nullifAI” with RL Threat Researcher Karlo Zanki, RL Chief Software Architect Tomislav Peričin, and RL Director Editorial Content Paul Roberts on Thursday, February 20 at 11:00 a.m. EST.
To learn more about how AI is impacting software supply chain security, read the recent AI is the Supply Chain primer.
About ReversingLabs
ReversingLabs is the trusted name in file and software security. We provide the modern cybersecurity platform to verify and deliver safe binaries. Trusted by the Fortune 500 and leading cybersecurity vendors, RL Spectra Core powers the software supply chain and file security insights, tracking over 422 billion searchable files daily with the ability to deconstruct full software binaries in seconds to minutes. Only ReversingLabs provides that final exam to determine whether a single file or full software binary presents a risk to your organization and your customers.
Media Contact
Doug Fraim
Guyer Group
Doug@Guyergroup.com