Deepfakes and the DPDP Act: Can the DPDP Act Effectively Combat AI-Generated Misinformation?

Introduction

In today’s digital world, the rise of deepfake technology is causing serious concern. According to a recent McAfee survey, more than 75% of Indians have come across some form of deepfake content in the last 12 months, and a concerning 38% have fallen victim to a deepfake scam. These AI-generated videos can look so real that they easily deceive people, spreading false information and potentially causing major disruptions, such as influencing elections or damaging reputations. As deepfakes become more advanced and widespread, it is more important than ever to have strong laws protecting against their misuse.

India’s Digital Personal Data Protection (DPDP) Act is designed to safeguard personal data in this increasingly digital environment. However, with the growing threat of deepfakes, an important question arises: is the DPDP Act capable of effectively dealing with the unique challenges that deepfakes present? This article examines how the DPDP Act addresses deepfakes, highlighting its strengths, pointing out where it falls short, and offering recommendations to make it more effective. Understanding these aspects is crucial for ensuring that we have the right tools to combat the dangers of AI-generated misinformation.

Definition of the Term ‘Deepfake’

The term ‘deepfake’ is a blend of ‘deep learning’ (DL) and ‘fake’, and refers to highly realistic videos or images created with the assistance of deep learning. The practice was named after an unidentified Reddit user who, in late 2017, used deep learning techniques to replace the face of a person in pornographic videos with the face of another individual, producing authentic-looking fake videos (1).

The United States later defined the term in the Malicious Deep Fake Prohibition Act of 2018, § 1041(b)(2), as an audiovisual record created or altered in a manner such that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual (2).

Strengths of the DPDP Act in Combating Deepfakes

Clear Definition of Personal Data

The DPDP Act clearly defines personal data as any information about an identifiable individual. This can include names, photos, email addresses, social media posts, biometric data, and online activity. For example, in 2022 a video appeared on social media showing Ukraine’s President Zelenskyy surrendering to Russia; it turned out to be a deepfake made using advanced technology. Under the DPDP Act, such a manipulated video would attract liability if it was made using non-public personal data without consent, since it concerns the person whose face was used without permission (3). By providing a clear definition of personal data, the Act ensures that deepfake incidents can be effectively addressed within the legal framework it offers.

Obligations for Data Fiduciaries to Implement Security Safeguards

The DPDP Act requires data fiduciaries (entities that determine the purpose and means of processing personal data) to put in place strong security measures to prevent personal data breaches (4). These measures include encryption, access controls, security audits, and training for staff who handle personal data.

Consider a social media site that collects and stores users’ facial biometric information for identity verification. To comply with the DPDP Act, the site must implement safeguards such as encrypting biometric data in storage and in transit, limiting access to authorized staff, and carrying out frequent security checks to find weaknesses. If the data fiduciary fails to put these measures in place and a breach occurs because someone gains unauthorized access to the biometric database, leading to deepfake videos made with users’ facial biometric data, the DPDP Act holds the fiduciary responsible for failing to protect personal data. By obliging data fiduciaries to implement security measures, the Act helps keep personal data out of the hands of malicious actors who want to use it to make fake videos, and encourages a proactive approach to data security that reduces the dangers of deepfake technology.
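
To make the encryption obligation concrete, here is a minimal sketch in Python of what encrypting a biometric template before storage might look like. It uses the open-source `cryptography` package; the function names, key handling, and data format are illustrative assumptions, not requirements drawn from the Act.

```python
# A minimal sketch of encryption-at-rest for biometric templates.
# All names are illustrative, not a prescribed implementation.
from cryptography.fernet import Fernet

# In practice the key would live in a hardware security module or a
# managed key vault, never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_biometric_template(raw_template: bytes) -> bytes:
    """Encrypt a facial-biometric template before writing it to storage."""
    return cipher.encrypt(raw_template)

def load_biometric_template(encrypted_template: bytes) -> bytes:
    """Decrypt a template; access to this function should itself be gated
    by role-based access controls and logged for security audits."""
    return cipher.decrypt(encrypted_template)

# Example: only ciphertext ever touches the database.
token = store_biometric_template(b"<face-embedding-bytes>")
assert load_biometric_template(token) == b"<face-embedding-bytes>"
```

Under this design, a breach of the database alone yields only ciphertext; an attacker would also need the separately stored key before the biometric data could be misused for deepfake creation.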

Penalties for Breaches Leading to Deepfake Creation

The DPDP Act gives the Data Protection Board the authority to impose fines on data fiduciaries for significant violations of its personal data protection provisions (5). These fines are meant to deter careless or malicious conduct that might result in the creation or spread of deepfake content using personal data. The European Union’s General Data Protection Regulation (GDPR) has produced several high-profile penalties for data breaches that could serve as precedents under the DPDP Act (6). For instance, in 2020, British Airways was fined £20 million for failing to protect the personal data of over 400,000 customers, which was compromised in a cyberattack (7). While that case did not involve deepfakes, it shows how regulatory bodies can impose significant penalties on organizations that fail to protect personal data, especially where such breaches could enable harmful uses like deepfake creation. Similarly, the DPDP Act ensures that those responsible for such breaches face hefty fines, motivating them to improve data security and prevent personal data from being used to make deepfakes.

Challenges Faced by the DPDP Act in Combating Deepfakes

No Consent Requirement for Publicly Available Data

In the fight against deepfakes, the DPDP Act has a significant gap concerning consent: its consent requirements do not extend to publicly available data. The problem becomes especially pronounced when deepfake content is created using such data.

Scarlett Johansson, a well-known Hollywood actress, has been a frequent target of deepfake pornography. In 2018, Johansson spoke out about how her images were being used without her consent to create explicit deepfake videos that falsely depicted her in pornographic scenarios. These deepfakes were created using publicly available images and videos of her face, often sourced from movies, interviews, and public appearances. Had such events occurred in India, it would be difficult to penalize them under the DPDP Act, since the Act primarily regulates private data, not publicly available data.

The issue of consent arises for several reasons. Firstly, the DPDP Act mandates explicit permission for collecting and processing personal data that is not publicly available. In cases like Scarlett Johansson’s, however, where the images were already in the public domain, obtaining her consent was not mandatory. This means that even though Johansson did not consent to her images being used to create deepfake content, their publicly available nature allows them to be exploited in this way.

Additionally, the images used were originally intended for public consumption, such as in movies or interviews, with no intention that they would be repurposed to create misleading deepfake videos. The malicious actor therefore did not seek consent specifically for creating deepfakes, as doing so was neither practical nor required under the current legal framework. This gap highlights a significant challenge for the DPDP Act in addressing the misuse of publicly available data to create harmful deepfakes.

The Sophistication of Deepfake Technology

Deepfake algorithms continually improve at generating synthetic media that closely mimics real human behaviour and appearance (8). Advances in generative adversarial networks (GANs) in particular have enabled deepfake videos that seamlessly blend one person’s face onto another’s body, making it increasingly difficult to distinguish authentic from manipulated content.

One notable case is the deepfake video of Facebook CEO Mark Zuckerberg that surfaced in 2019. In it, Zuckerberg appeared to give a speech boasting about his power and his control over the data of billions of people. The video was entirely fake, but it was so convincingly produced that it quickly spread across social media, causing concern and confusion among viewers. This example demonstrates how deepfake technology can create highly realistic videos that can damage a person’s reputation or mislead the public. Another real-world instance is the deepfake video of former U.S. President Barack Obama created by filmmaker Jordan Peele, in which Obama appeared to deliver a speech he never actually gave. Peele made the video as a public service announcement to demonstrate how easily deepfake technology could be used to manipulate political figures and spread misinformation.
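
To see why GAN output keeps improving, consider the toy sketch below (Python with PyTorch, operating on random vectors rather than faces). The two networks are trained against each other: the discriminator learns to tell real samples from synthetic ones, while the generator learns to fool it, so each round of improvement on one side forces the other to improve. This is an illustrative toy, nowhere near a production deepfake system.

```python
# A toy sketch of the adversarial training loop behind GANs.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32
generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    real = torch.randn(64, data_dim) + 2.0          # stand-in for "real" data
    fake = generator(torch.randn(64, latent_dim))   # synthetic samples

    # The discriminator learns to label real samples 1 and fakes 0...
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # ...while the generator learns to make the discriminator output 1 on fakes.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The same arms-race dynamic that drives this loop is what makes detection so hard: any weakness a detector exploits is, in principle, a training signal the generator can learn to remove.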

The International Scope of Deepfakes and the Limited Reach of the DPDP Act

Deepfake technology is becoming a global problem, making it difficult for laws like India’s DPDP Act to address issues when the content comes from other countries. Real-life examples show how challenging it is to deal with deepfakes that affect people across borders.

Example: The Russia-Ukraine War

In March 2022, during the early stages of the Russian invasion of Ukraine, a deepfake video of Ukrainian President Volodymyr Zelenskyy appeared online. In this deepfake, Zelenskyy was falsely depicted as urging Ukrainian soldiers to surrender to Russian forces. The video was quickly identified as a deepfake, but it had already spread on social media, sowing confusion and fear.

The video was likely created by actors outside Ukraine, possibly in Russia or regions aligned with Russian interests. Because the creators were operating beyond Ukraine’s jurisdiction, prosecuting them under Ukrainian law was difficult, and in the absence of a clear international legal framework for such cases, holding the perpetrators accountable proved harder still.

This example illustrates the difficulties the DPDP Act would face if a similar incident occurred in India. If a deepfake targeting an Indian leader were created and distributed by foreign actors, the Act’s jurisdictional limits would make it challenging to prosecute the perpetrators. This underscores the importance of international cooperation and legal frameworks to effectively address the global threat posed by deepfakes.

Balancing Content Verification with Free Speech Concerns

Ensuring the integrity of online information in the face of deepfakes is vital, yet it is just as crucial to defend the right to free speech and avoid excessive censorship. Striking a balance between fact-checking and safeguarding freedom of expression is a difficult conundrum, as cracking down on deepfakes might deter individuals from creating and sharing legitimate content. Imagine a social media platform that uses advanced technology to detect and delete fake videos that could be harmful. Although this is meant to protect users from false information, some harmless content could be mistakenly removed. For instance, deepfake videos of BJP leader Manoj Tiwari went viral ahead of the Delhi Assembly elections, in which he offered constructive criticism of Delhi’s Chief Minister Arvind Kejriwal and encouraged voters to vote for the BJP. Such a video might be taken down by the system even though it is a type of political expression protected by freedom of speech laws. The platform has to find a way to stop harmful fake videos while still allowing users to share their opinions through criticism, humour, and satire.
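
One way a platform might operationalise that balance, sketched hypothetically below in Python, is to reserve automatic removal for high-confidence detections and route ambiguous cases, where satire or political commentary may be involved, to human review. The thresholds and the `detector_score` input are invented for illustration and carry no legal weight.

```python
# Hypothetical moderation triage: auto-remove only near-certain harmful
# fakes; send borderline cases to a human moderator; keep disclosed satire.
def triage(detector_score: float, is_labelled_satire: bool) -> str:
    if is_labelled_satire:
        return "keep"            # disclosed parody/satire is protected expression
    if detector_score >= 0.95:
        return "remove"          # near-certain manipulation, high harm risk
    if detector_score >= 0.60:
        return "human_review"    # ambiguous: let a moderator decide
    return "keep"

print(triage(0.97, False))  # -> remove
print(triage(0.72, False))  # -> human_review
print(triage(0.72, True))   # -> keep
```

The design choice here is simply that false positives against protected speech are treated as more costly than delayed removals, so automation is applied only where confidence is highest.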

Suggestions and Recommendations to Improve the Existing Law to Better Combat Deepfakes

To tackle the challenge posed by the absence of consent requirements for publicly available data, policymakers can implement a range of solutions.

Strengthening data protection measures within the DPDP Act is crucial to that end. Firstly, regulations could be enhanced to explicitly prohibit the use of publicly accessible data for deepfake creation without explicit consent. This would discourage the unauthorized use of publicly available data in deepfake content production.

Secondly, there should be mandatory disclosure requirements obliging deepfake content creators to include clear disclaimers in their content, indicating the sources of the publicly available data used in its creation. The creator would be required to disclose the timestamps at which synthetic footage appears alongside authentic footage, as well as the platforms from which the source data was obtained (a sketch of such a disclosure follows below). Regular audits could then verify the accuracy and completeness of the disclosed information, ensuring transparency in content creation processes.
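
As a hypothetical illustration of what such a disclosure could look like in machine-readable form, the Python sketch below pairs a manifest of synthetic segments and data sources with a trivial audit check. Every field name is invented for illustration; the DPDP Act prescribes no such format.

```python
# Hypothetical disclosure manifest: the creator declares which segments
# are synthetic and where the source material came from, so auditors can
# verify the disclosure.
manifest = {
    "title": "Illustrative deepfake disclosure",
    "synthetic_segments": [
        {"start_s": 12.0, "end_s": 27.5, "technique": "face swap"},
    ],
    "sources": [
        {"platform": "YouTube", "url": "https://example.com/source-clip",
         "data_type": "public interview footage"},
    ],
}

def audit(manifest: dict) -> bool:
    """Minimal completeness check: at least one declared synthetic
    segment and at least one declared data source."""
    return bool(manifest.get("synthetic_segments")) and bool(manifest.get("sources"))

assert audit(manifest)
```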

Thirdly, implementing consent mechanisms for publicly available data is essential. Suppose an opt-in consent mechanism were integrated into the DPDP Act, allowing individuals to explicitly grant permission for the use of their publicly available data in deepfake creation. Content creators would then be required to obtain valid consent before using any publicly accessible data. This would give individuals control over the usage of their data, enhancing privacy protections and mitigating the risks associated with deepfakes.

Moreover, collaborating with technology companies is essential. Partnerships with leading tech firms could facilitate the development of advanced deepfake detection algorithms capable of identifying content created from publicly available data. These detection technologies could then be integrated into social media platforms’ content moderation systems, effectively combating the proliferation of deceptive media and safeguarding individual privacy rights.

Public awareness campaigns also play a vital role in addressing this issue. Widespread educational campaigns could be launched targeting social media users, raising awareness of the risks associated with deepfake technology and the importance of safeguarding personal data. Practical tips and guidelines could also be provided, advising individuals to be cautious about sharing sensitive information on public platforms.

Conclusion

The DPDP Act makes some progress in addressing deepfakes and safeguarding individuals’ personal data online: it defines what personal data means, requires companies to protect that data, and punishes them if they fail. Still, it has hurdles to overcome. There are glaring loopholes, such as the absence of a consent requirement for using public data to create deepfakes, and the growing difficulty of distinguishing deepfakes from real content as the technology advances. Moreover, if those creating deepfakes live in a different country, there is not much that can be done.

To address these issues, legislators could toughen the rules under the DPDP Act by making it clear that permission must be sought before using public information to create deepfake videos. Policymakers should also cooperate with technology firms to develop better detectors of deepfake images and videos, while educating society about the dark side of this technology. Additionally, collaboration with other countries could mitigate the problems caused by the global nature of deepfakes.

References
  1. Md Rana et al., Deepfake Detection: A Systematic Literature Review, IEEE Access, vol. 10, 2022, https://doi.org/10.1109/ACCESS.2022.3154404 (accessed May 4, 2024).
  2. US Malicious Deep Fake Prohibition Act 2018 § 1041(b)(2).
  3. Digital Personal Data Protection Act 2023 § 33(a).
  4. Digital Personal Data Protection Act 2023 § 8(5).
  5. Digital Personal Data Protection Act 2023 § 27.
  6. Council Regulation (EU) 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) [2016] OJ L119/1.
  7. BBC News, ‘British Airways fined £20m over data breach’ (BBC News, 19 October 2020) <https://www.bbc.com/news/technology-54568784> accessed 20 August 2024.
  8. B. Khoo, R.C.W. Phan & C.H. Lim, Deepfake Attribution: On the Source Identification of Artificially Generated Images, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 12, no. 3, 2022, e1438.
Arhant Kumar
Akash Kumar Sahu
