Fake porn of Taylor Swift. Photorealistic — but fictionalized — images of Gaza. The list of disconcerting deepfakes goes on, and — as deepfake-creating tools grow easier and cheaper to use — the waves of fakes are coming faster and fiercer.
Key Takeaway
Clarity, a cybersecurity startup, is using AI to combat deepfakes, building its detection tools to respond quickly as new manipulation techniques emerge so the company can stay adaptive and resilient.
According to a recent Pew Research Center poll, about two-thirds of Americans (66%) say they at least sometimes come across altered videos and images intended to mislead, with 15% encountering them often. In a separate survey of AI experts by Axios and Syracuse University, 62% said that misinformation will be the biggest challenge to maintaining the authenticity and credibility of news in an era of AI-generated content.
Clarity’s Mission to Combat Deepfakes
Clarity, founded in 2022 by cybersecurity specialist Michael Matias together with Gil Avriel and Natalie Fridman, develops technology to spot AI-manipulated media, mainly images. The company is one of many vendors, including Reality Defender and Sentinel, racing to build deepfake-spotting tools.
Clarity offers a scanning tool, available via an app and an API, that uses several AI models to compare uploaded media against a database of known deepfakes and AI-generated images. Clarity also provides a form of watermarking that customers can apply to signal that their content is legitimate.
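Clarity's API is not documented in this article, so the snippet below is a purely hypothetical sketch of how a media-scanning service of this kind is typically consumed: a client uploads a file over HTTPS and gets back a verdict with a confidence score. The endpoint URL, field names, and response shape are invented for illustration and are not Clarity's actual interface.

```python
# Hypothetical sketch of calling a deepfake-scanning API.
# The endpoint, credential, and response fields below are invented for
# illustration; they do not describe Clarity's real API.
import requests

API_URL = "https://api.example-detector.com/v1/scan"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                              # hypothetical credential


def scan_media(path: str) -> dict:
    """Upload an image or video file and return the detector's verdict."""
    with open(path, "rb") as media_file:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": media_file},
            timeout=30,
        )
    response.raise_for_status()
    # Example (invented) response: {"likely_ai_generated": true, "confidence": 0.93}
    return response.json()


if __name__ == "__main__":
    print(scan_media("suspect_image.jpg"))
```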
Challenges and Backing
Accuracy in deepfake detection is a moving target, and major players like Google, Microsoft, and AWS are embracing more sophisticated watermarking and provenance metadata as alternative deepfake-fighting measures. Despite these challenges, Clarity recently closed a $16 million seed round co-led by Walden Catalyst Ventures and Bessemer Venture Partners, with participation from Secret Chord Ventures, Ascend Ventures, and Flying Fish Partners.
Initially targeting news publishers and the public sector, Clarity has expanded to identity verification providers and other large enterprises, aiming to stay ahead in the arms race against deepfake creators and spreaders.