May 2, 2024

AI, Deepfakes, and Digital Trust: A Conversation with Nicos Vekiarides

By Randy Ferguson


In an insightful interview with CloudTweaks, Nicos Vekiarides, CEO and Co-founder of Attestiv, sheds light on the transformative impact of artificial intelligence (AI) and the escalating challenges posed by deepfake technology. With a background rich in enterprise technology and deep expertise in digital security, Vekiarides is at the helm of Attestiv, a company pioneering the use of blockchain to authenticate and protect digital media against manipulation. In this discussion, he explores the dual nature of AI: its potential to drive innovation and efficiency, alongside its darker capacity to generate convincing yet fraudulent digital content.

Vekiarides, an MIT-trained engineer, provides a deep dive into how AI and deepfakes could influence sectors such as insurance, media, and public safety, stressing the need for industries to adopt advanced verification tools to maintain integrity and trust.

Can you describe the current landscape of deepfake technology and its most common uses in the context of elections?

It turns out there are both ethical and malicious uses of deepfakes in the political realm. For instance, in India, deepfake technology has been used to amplify candidates’ campaign outreach by reproducing their likeness and voice. You can’t really describe such use as unethical.

Of course, everyone knows the darker side: using deepfakes to create disinformation and harm the reputations of political candidates. It’s unclear whether these ethical and malicious uses can be reconciled to coexist unless voters are somehow armed with tools to tell the difference.

How has the evolution of AI contributed to the advancements in deepfake technology, and what are the key technological breakthroughs that concern you the most?

The biggest concern is how easy it has become to create a deepfake, and with how little training material. Microsoft’s recent VASA-1, which is not being released to the public at this time, can create realistic talking-head deepfakes from a single image and an audio clip. Regardless, many frameworks, both commercial platforms and open-source projects, now enable anyone to rapidly generate highly realistic deepfakes that can spread disinformation.

What are the most significant risks that deepfakes pose to political campaigns and the integrity of elections?

I suppose the most significant risk is that deepfakes can sway an election outcome if left unchecked. In practice, political campaigns must be extremely vigilant about monitoring social media so they can debunk any fakes before they influence voters. News outlets and social media platforms should play a role in this as well.

Could you explain how Attestiv detects deepfakes and the challenges involved in distinguishing between genuine and manipulated content?

Attestiv uses proprietary AI analysis to detect deepfakes, based on lip analysis, facial analysis, and the presence of generative AI content. Training on tens of thousands of videos helps us achieve AUC scores in the high 90s, ensuring accuracy while keeping false positives low. However, we also have to train continuously, as deepfake frameworks continue to evolve and improve.
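To make the ensemble idea concrete, here is a minimal sketch of how scores from several detectors might be fused and then evaluated with AUC. This is not Attestiv’s actual pipeline: the three detector signals, the fusion weights, and the synthetic data are all illustrative assumptions.

```python
# A minimal sketch of ensemble deepfake scoring; detector names,
# weights, and data are illustrative, not Attestiv's implementation.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-video detector outputs in [0, 1];
# higher means "more likely manipulated". A real pipeline would
# compute these from the video itself.
n_real, n_fake = 500, 500
real = {"lip": rng.beta(2, 8, n_real),    # lip-sync inconsistency
        "face": rng.beta(2, 8, n_real),   # facial-artifact score
        "gen": rng.beta(2, 8, n_real)}    # generative-AI fingerprint
fake = {"lip": rng.beta(8, 2, n_fake),
        "face": rng.beta(8, 2, n_fake),
        "gen": rng.beta(8, 2, n_fake)}

WEIGHTS = {"lip": 0.40, "face": 0.35, "gen": 0.25}  # assumed fusion weights

def ensemble(scores):
    """Weighted average of the individual detector scores."""
    return sum(w * scores[k] for k, w in WEIGHTS.items())

y_true = np.concatenate([np.zeros(n_real), np.ones(n_fake)])  # 0 = real, 1 = fake
y_score = np.concatenate([ensemble(real), ensemble(fake)])

# AUC measures how well the ensemble ranks fakes above real videos
# across all thresholds; "high 90s" means near-perfect separation.
print(f"AUC: {roc_auc_score(y_true, y_score):.3f}")
```

The continuous retraining Vekiarides describes would correspond to refitting the underlying detectors (and re-checking AUC) as new generation frameworks appear.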

What strategies do you believe are most effective for campaigns and organizations to protect their candidates from deepfake attacks?

Campaigns have to monitor social channels for unknown or unauthorized videos and photos of their candidates and respond quickly to any threats. At the same time, they need to invest in deepfake analysis tools, like Attestiv, so they can credibly identify and flag fakes.

How do you foresee the regulation of deepfake technology evolving, and what role should government and industry play in managing these risks?

I think everyone plays a role here. Industry players who build generative tools can label or watermark their videos and photos so that it is clear they are AI-generated. That helps, but it doesn’t stop hackers and bad actors from turning to open-source alternatives or finding ways around the labels.

Government should create regulations, where applicable, that will act as a deterrent to people or organizations using deepfakes maliciously. Again, that helps, but laws do not completely deter criminal acts either.

What are the broader societal implications of widespread deepfake technology, especially regarding privacy and personal reputation?

The implications are enormous. Today there are many companies that measure, monitor, and protect creditworthiness. You have to imagine that monitoring and protecting people’s reputations, which can be tarnished by deepfakes, is just as important!

Can deepfakes have any positive uses in society, or are they predominantly a tool for misinformation and harm?

As mentioned earlier, deepfakes can be used positively. For example, I’ve already seen peers who want to promote their businesses but are too busy to shoot promotional videos create deepfakes of themselves delivering their approved messaging. In one sense, it seems very smart and efficient. In another sense, I feel there’s a fine line between what someone said through AI assistance and what someone said authentically, and that could pose ethical issues.

In what ways are emerging technologies like blockchain being leveraged to authenticate media and combat deepfakes?

Blockchain may be helpful in establishing some immutability when media is created. If you store the author, a hash, or other important information about a video on a blockchain that can be validated by the consumer or a streaming platform, you have a solution with the potential to establish provenance. I say “potential” because video creators and all streaming platforms would have to comply with such a framework for it to function at scale. On the other hand, a blockchain solution may work very well for creators and consumers who use private channels for dissemination or communication with a standard set of tools.
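As a rough illustration of that idea, the sketch below anchors a media file’s fingerprint at creation time and lets a consumer re-verify it later. The ledger here is just an in-memory list standing in for a real blockchain, and the function names (record_on_ledger, verify) are hypothetical, not any Attestiv API.

```python
# A minimal sketch of hash-based media provenance, assuming a generic
# append-only ledger. LEDGER and the helper names are illustrative.
import hashlib
import time

LEDGER = []  # in-memory stand-in for an immutable blockchain

def fingerprint(path):
    """SHA-256 of the media file, hashed in chunks so large videos fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_on_ledger(path, author):
    """At creation time, anchor the author, content hash, and timestamp."""
    entry = {"author": author, "sha256": fingerprint(path), "ts": time.time()}
    LEDGER.append(entry)  # a real system would submit a blockchain transaction here
    return entry

def verify(path):
    """A consumer or platform re-hashes the file and looks it up on the ledger.
    A match proves the bytes are unchanged since they were anchored."""
    digest = fingerprint(path)
    return next((e for e in LEDGER if e["sha256"] == digest), None)
```

Because any edit to the file changes its hash, a missing or mismatched entry flags a copy as altered or never anchored; as Vekiarides notes, this only works at scale if creators and platforms all adopt the same framework.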

Looking forward, how do you predict the battle between deepfake creators and detectors will evolve?

It’s going to be a wild next couple of years as deepfake technology continues to rapidly advance. I believe with proper legislation, commercial compliance and strong deepfake detection toolsets, we will reach a point where we can keep malicious deepfakes in check. That said, it’s far too late to think we can eradicate the threat. As we do with other malicious cyberthreats, like viruses and phishing attacks, organizations and individuals will need to be vigilant and invest in the right toolsets to avoid being duped or otherwise harmed by deepfakes.


Randy Ferguson

Randy boasts 30 years in the tech industry, having penned articles for multiple esteemed online tech publications. Alongside a prolific writing career, Randy has also provided valuable consultancy services, leveraging a deep knowledge of technological trends and insights.
