Vera Aims To Address AI Models’ Worst Behaviors With New Toolkit

Liz O’Sullivan, a member of the National AI Advisory Committee, is on a mission to make AI safer. She co-founded Vera, a startup that has developed a toolkit for establishing “acceptable use policies” for generative AI models, which can generate text, images, music, and more. With the toolkit, companies can enforce these policies across both open-source and custom models.

Key Takeaway

Vera, a startup co-founded by Liz O’Sullivan, has raised $3.3 million in total funding for its toolkit, which establishes “acceptable use policies” for generative AI models. The toolkit aims to rein in the riskiest behaviors associated with these models, helping companies control how their models behave in production.

Vera recently closed a $2.7 million funding round, bringing its total raised to $3.3 million. The funding will be used to grow Vera’s team, invest in research and development, and scale enterprise deployments. O’Sullivan believes that responsible stewardship of AI technology is crucial as companies race to define their generative AI strategies.

The Problem with Generative AI Models

Generative AI models pose various challenges for companies, especially in terms of compliance. There is concern about confidential data ending up with developers who train the models on user data. Additionally, models that produce offensive or inappropriate output can damage a company’s reputation.

Vera’s platform attempts to mitigate these risks by identifying problematic content in model inputs and blocking, redacting, or transforming requests that may contain sensitive information, security credentials, intellectual property, or prompt injection attacks. The platform also places constraints on how models respond to prompts, giving companies greater control over their model’s behavior.
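To make the pattern concrete, here is a minimal sketch of input-side policy enforcement. It is not Vera’s implementation: the rule names, the regexes, and the `apply_input_policy` helper are hypothetical, and a production system would rely on trained classifiers rather than fixed patterns. The part being illustrated is the per-rule decision to block a request outright or redact the matching content.

```python
import re
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    BLOCK = "block"    # reject the request entirely
    REDACT = "redact"  # strip the sensitive span and let the request through


@dataclass
class Rule:
    name: str
    pattern: re.Pattern
    action: Action


# Hypothetical rules: a real enforcement layer would use learned models,
# but each rule still maps a detection to a block/redact/transform decision.
RULES = [
    Rule("aws_access_key", re.compile(r"AKIA[0-9A-Z]{16}"), Action.BLOCK),
    Rule("email_address", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), Action.REDACT),
    Rule("prompt_injection", re.compile(r"ignore (all )?previous instructions", re.I), Action.BLOCK),
]


def apply_input_policy(prompt: str) -> tuple[bool, str]:
    """Check a prompt against every rule; block it outright or redact matches."""
    for rule in RULES:
        if rule.pattern.search(prompt):
            if rule.action is Action.BLOCK:
                return False, f"Request blocked by policy rule: {rule.name}"
            prompt = rule.pattern.sub("[REDACTED]", prompt)
    return True, prompt


if __name__ == "__main__":
    ok, result = apply_input_policy("Email the customer at jane@example.com with the summary.")
    print(ok, result)  # True, with the address replaced by [REDACTED]
```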

Vera’s Approach

Vera achieves its goals through the use of proprietary language and vision models. These models sit between users and internal or third-party models, detecting and blocking inappropriate prompts or responses in any form, including text, code, image, or video. By actively enforcing policies, Vera aims to prevent the generation of content that may be criminal or harmful to users.
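The “sits between users and models” arrangement amounts to a gateway pattern, sketched below. This is a hypothetical illustration rather than Vera’s API: `gated_completion` and `response_is_acceptable` are made-up names, and the keyword check is only a placeholder for the proprietary language and vision models the company says it uses. The point is that the same policy check runs on the prompt before it reaches the model and on the response before it reaches the user.

```python
from typing import Callable


def response_is_acceptable(text: str) -> bool:
    """Hypothetical stand-in for a learned moderation model.

    A real enforcement layer would call language/vision classifiers here;
    the keyword list only shows the shape of the check.
    """
    banned_phrases = ("build an explosive", "stolen credit card")
    lowered = text.lower()
    return not any(phrase in lowered for phrase in banned_phrases)


def gated_completion(prompt: str, call_model: Callable[[str], str]) -> str:
    """Proxy a model call, screening both the outgoing prompt and the reply."""
    if not response_is_acceptable(prompt):
        return "This request violates the acceptable use policy."
    response = call_model(prompt)
    if not response_is_acceptable(response):
        return "The model's response was withheld by the acceptable use policy."
    return response


if __name__ == "__main__":
    # Any internal or third-party model can sit behind the gateway.
    fake_model = lambda p: f"Echo: {p}"
    print(gated_completion("Write a friendly release note.", fake_model))
```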

While O’Sullivan acknowledges that no model is perfect, she believes that Vera can significantly reduce the worst behaviors of generative AI models. However, it’s important to consider the potential biases and limitations of content moderation models, which have been known to exhibit bias when detecting toxicity in text and to misidentify objects in computer vision tasks.

Competition and Value Proposition

Vera is not the only company working on model-moderating technology. Competitors like Nvidia, Salesforce, and Microsoft offer similar solutions to prevent models from retaining or regurgitating sensitive data or to moderate text and image content. However, Vera differentiates itself by addressing a wide range of generative AI threats and providing a one-stop solution for content moderation and protection against model attacks.

Vera has already gained traction with several customers, and the company is set to expand its customer base with the launch of a waitlist. O’Sullivan believes that Vera’s toolkit provides the ideal balance between AI-enhanced productivity and mitigating the risks associated with generative AI models, without the vendor lock-in of a one-size-fits-all approach.
