Hugging Face says it fixed some worrying security issues, moves to boost online protection

Security flaws in Hugging Face's platform could have allowed threat actors to upload malicious generative AI models, run malicious code, and extract sensitive user information.

This is according to a new report from cloud security firm Wiz. In a blog post published late last week, Wiz said it found two critical architecture flaws in the platform, where people collaborate on machine learning (ML) models.

The flaws are described as a shared inference infrastructure takeover risk and a shared continuous integration and continuous deployment (CI/CD) takeover risk. In layman's terms, they could be used to upload malicious AI models and to tamper with container registries.

Fixes and mitigations

With the first flaw, a threat actor could upload a malicious AI model, which could then be used to gain unauthorized access to other customers' data. For Wiz and Hugging Face, this is a major concern, as AI-as-a-Service (AIaaS) platforms are increasingly used to store and process sensitive information.
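The report does not spell out the exact payload, but one well-known way such a malicious model can be built is via Python's pickle serialization format, which many ML model files are based on and which executes code during deserialization. Here is a minimal sketch of that general attack class (the class name and echoed string are our own illustration, not Wiz's payload):

```python
import os
import pickle

# Pickle-based model formats run code at load time, because unpickling
# calls __reduce__ on objects embedded in the file.
class MaliciousModel:
    def __reduce__(self):
        # A real attacker would return a reverse shell or a
        # data-exfiltration command here instead of a harmless echo.
        return (os.system, ("echo 'code ran during model load'",))

blob = pickle.dumps(MaliciousModel())

# Any service that naively deserializes the uploaded "model" runs the
# attacker's code as a side effect of simply loading it.
pickle.loads(blob)
```

On shared inference infrastructure, code that runs this way executes inside the provider's environment, which is what makes cross-tenant data access possible.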

With the second flaw, the researchers found that some AIaaS platforms run insecure container registries. Container registries are typically used to store and manage container images: self-contained software packages that include everything needed to run an application. With an insecure registry, attackers could modify other users' models, potentially introducing malicious code.
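A quick way to see what "insecure" means in practice is to check whether a registry answers requests without credentials. The sketch below uses the standard Docker Registry HTTP API v2; the registry URL is a placeholder, and you should only probe infrastructure you own or are authorized to test:

```python
import requests

REGISTRY = "https://registry.example.com"  # placeholder URL

# The /v2/_catalog endpoint lists the repositories a registry hosts.
resp = requests.get(f"{REGISTRY}/v2/_catalog", timeout=10)

if resp.status_code == 200:
    # An anonymous 200 means anyone can enumerate the repositories; if
    # pushes are also open, attackers can overwrite the images that
    # other tenants pull and run.
    print("Readable without credentials:", resp.json())
elif resp.status_code == 401:
    print("Registry requires authentication, as it should.")
else:
    print("Unexpected response:", resp.status_code)
```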

Wiz shared its findings with Hugging Face, after which the two companies worked together to mitigate the issues. Hugging Face has also shared the details of this collaboration on its blog, and the two firms suggested several steps to improve security on AIaaS platforms, including implementing strong access controls, regularly monitoring for suspicious activity, and using secure container registries.
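On the model-loading side, one way to reduce the blast radius of a malicious upload is to avoid code-executing serialization formats altogether. A brief sketch, with placeholder filenames, of two illustrative hardening steps (our examples, not measures the companies specifically prescribed):

```python
import torch
from safetensors.torch import load_file

# 1. Prefer the safetensors format, which stores raw tensors only and
#    cannot embed executable code.
weights = load_file("model.safetensors")  # placeholder filename

# 2. If a pickle-based checkpoint is unavoidable, tell PyTorch to
#    deserialize only plain tensors and primitive containers rather
#    than arbitrary Python objects.
state_dict = torch.load("model.pt", weights_only=True)  # placeholder filename
```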

“We believe those findings are not unique to Hugging Face and represent challenges of tenant separation that many AI-as-a-Service companies will face, considering the model in which they run customer code and handle large amounts of data while growing faster than any industry before,” Wiz researchers explained.

“We in the security community should partner closely with those companies to ensure safe infrastructure and guardrails are put in place without hindering this rapid (and truly incredible) growth.”
