Swiss startup Lakera has launched its innovative platform designed to protect large language models (LLMs) from malicious prompts and ensure data privacy. LLMs have gained significant popularity due to their ability to generate human-like text. However, these models are susceptible to manipulations by bad actors, who can inject carefully crafted prompts to exploit vulnerabilities and gain unauthorized access to systems. Lakera aims to address these security weaknesses and protect enterprises from prompt injections and data leakage.
Key Takeaway
Swiss startup Lakera has unveiled its platform to protect large language models from malicious prompts and enhance data privacy. By addressing prompt injections and a range of other security risks, Lakera aims to enable the secure adoption of generative AI applications. The launch coincides with the forthcoming EU AI Act, which will provide regulatory guidelines for safeguarding AI models.
Gandalf: A Game-like Approach to Enhance Security
Lakera has developed a comprehensive database of insights by leveraging various sources, including open source datasets, in-house research, and data collected from an interactive game called Gandalf. Through Gandalf, users attempt to “hack” the underlying LLM by employing linguistic tricks to extract a secret password. As users progress through the levels, Gandalf becomes more adept at defending against these attacks. The insights gained from the game are then integrated into Lakera Guard, the company’s flagship product.
Addressing a Range of Security Risks
Lakera is not solely focused on prompt injections but also works to safeguard against other cyber risks. The company aims to protect against accidental context leakage, prevent private or confidential data from being exposed, and moderate content to ensure LLMs do not generate unsuitable content for children. Additionally, the platform helps tackle misinformation and factual inaccuracies generated by LLMs.
Lakera Aligns with EU AI Act
Lakera’s launch is particularly timely, as the company aligns with the EU AI Act, which is set to introduce regulations to safeguard generative AI models. The Act emphasizes the need for LLM providers to identify risks and implement appropriate measures. Lakera’s founders have been involved in advisory roles for the Act, contributing their technical expertise to shape the regulatory landscape.
Enhancing Security for Generative AI Adoption
Lakera recognizes that enterprises may hesitate to adopt generative AI due to security concerns. The company works closely with startups and leading enterprises to ensure the secure integration of generative AI applications. By addressing security obstacles, Lakera aims to facilitate the smooth deployment of these applications while mitigating risks.