
Navigating the Deepfakes Challenge with Proactive Measures


The article dives into deepfakes and touches upon how hyper-realistic AI-generated videos are no longer science fiction. It also delves into how social media is blurring the line between truth and fiction, and how malicious actors have used deepfakes to impersonate brand executives, steal their identities, and manipulate customers. To sum up, the piece examines the impact of deepfakes on politics and business and how it can be navigated with proactive measures.
By Atul Gupta, Partner and Head of Digital Trust and Cyber Security Services at KPMG in India

Today, attackers have seen huge success using Artificial Intelligence (AI) deepfakes for injection and presentation attacks – which means we’ll only see more of them in the future. But advanced technology can help prevent them, not just detect them.

Diving Deeper – So What Are Deepfakes?
Deepfakes are synthetic media files in which the appearance or voice of a person or an object is manipulated or replaced using AI-powered neural networks.

Deepfakes can be used to create realistic and convincing videos, images, or audio that are hard to distinguish from the original. On one hand, this technology can be very useful to the world; unfortunately, it is finding far more takers who use it in ways that pose serious risks for individuals, businesses, and organizations.

In current times, with easy access to computing power and AI, it is becoming very easy to generate deepfakes and realistic-looking fake content. This has massive implications across industry sectors, including:

  • Reputational impact – spreading false or malicious information, discrediting or defaming leaders or employees, or damaging the image or reputation of a brand or a product
  • Financial impact – impersonating executives, customers, or partners and tricking them into revealing sensitive information, transferring funds, or signing contracts
  • Regulatory issues – fraudulent customer onboarding by providing fake information to satisfy KYC requirements
  • Market capitalisation – publishing incorrect information about products, executives, etc., which can directly impact stock prices in the capital market
 
Multiple models are available to identify deepfakes; however, more sophisticated deepfakes are not easy to detect. There is a scarcity of large and diverse datasets that can be used to train or test deepfake detection models. Moreover, deepfake generators are constantly evolving and improving, making it harder for detection models to keep up.
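
To make the detection side concrete, below is a minimal, illustrative sketch in Python (using the PyTorch library) of the kind of frame-level classifier such detection models are built around. The architecture, names, and the random placeholder batch are assumptions for illustration only, not a production detector or any specific vendor's model.

# Minimal sketch (illustrative only): a binary "real vs. fake" frame classifier.
# The architecture and the random placeholder data below are assumptions; a real
# detector would be far larger and trained on a large labelled dataset.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Tiny CNN that scores a single video frame as real (0) or deepfake (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool each feature map to a single value
        )
        self.head = nn.Linear(32, 1)   # single logit: probability of "fake" after sigmoid

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    model = FrameClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    # Placeholder batch: 8 random 64x64 RGB "frames" with random labels.
    # A real pipeline needs the large, diverse labelled datasets that the
    # article notes are scarce.
    frames = torch.rand(8, 3, 64, 64)
    labels = torch.randint(0, 2, (8, 1)).float()

    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
    print(f"training loss on placeholder batch: {loss.item():.4f}")

Even a well-trained version of such a classifier degrades as generators improve, which is why detection alone is not sufficient and the multi-tiered measures described next matter.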

Consequently, enterprises need to follow a multi-tiered approach that includes the following:

  • Establish a robust cyber security culture as basic hygiene
  • Strengthen processes to confirm identity
  • Inculcate a culture of trust among employees so that they can identify potential deepfake campaigns
  • Develop a risk-intelligence mindset in the organization
  • Run robust monitoring and response programs

Way Forward
Regulatory bodies and governments also have a key role to play in stemming the challenges emerging from deepfakes: raising awareness among citizens about the potential risks and impacts of deepfakes, and fostering a culture of critical thinking and responsible consumption and production of online media. Moving forward, there need to be mechanisms for enforcing appropriate and proportionate laws and regulations, to prevent or punish the creation or dissemination of illegal or harmful deepfake content, and to provide remedies or redressal mechanisms for victims and affected parties.
