Placing the Metaphorical Lock on Deepfakes

Reading Time: 8 minutes
Introduction

The rapid rise of Artificial Intelligence (AI) tools and breakthroughs in Machine Learning (ML), coupled with increasingly cheap internet access in India, have proven a double-edged sword for the domestic technology sector. Generative AI holds promise for the Indian digital economy, but worries over its potential misapplication have heightened public anxieties.

Deepfakes are synthetic media digitally altered to impersonate another person’s likeness. The term was coined in 2017 and covers digitally manipulated images, audio, and video. Modern deepfakes are typically created with large-scale models built on the Generative Adversarial Network (GAN) architecture, which uses ML to generate progressively more realistic content.
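
To make the adversarial dynamic concrete, the sketch below shows a minimal GAN training loop in PyTorch. The dimensions and hyperparameters are illustrative assumptions on our part; production deepfake systems use far larger face-swap and reenactment models, but the generator-versus-discriminator contest that drives realism is the same.

```python
# Minimal GAN sketch (PyTorch) -- illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images (assumed)

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real samples from generated ones.
    noise = torch.randn(batch, latent_dim)
    fake = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator. Each round of
    #    this contest yields progressively more realistic forgeries.
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```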

The proliferation of GAN architectures has made it possible to create a deepfake in under 30 seconds, and the distribution and commodification of deepfake models have put these sophisticated tools in the hands of the general public. There is therefore a legitimate fear that delaying comprehensive deepfake legislation will open the door to widespread misuse.

The unimpeded growth of deepfakes poses a significant threat to our digital sphere. The technology has outpaced the digital regulatory frameworks currently in place, leaving a lacuna in the law. In this piece, the authors will establish that the proliferation of deepfakes poses significant challenges, undermining efforts to maintain a secure and trustworthy digital environment.

This piece highlights the necessity of deepfake-focused legislation and the balancing act between overregulation and underregulation. Firstly, the article emphasizes the unique potential for harm inherent in deepfake technology. Secondly, it examines the limits of the current regulatory framework in addressing these risks. Thirdly, it undertakes a comparative study of foreign jurisdictions to identify best practices and applies them to the Indian context. Finally, it offers regulatory considerations for an adequate approach to deepfakes.

The Unique Potential for Misuse

The unscrupulous use of deepfakes exploits the trust associated with identifiable features like faces and voices, allowing bad-faith actors to create a false sense of credibility by leveraging people’s natural tendency to trust visual evidence. This ability gives deepfakes a unique potential for misuse, and instances of misuse have consequently spread across the economic, political, social, and legal facets of our lives.

In 2021, fraudsters demonstrated this by orchestrating a well-timed voice call ahead of an acquisition: by morphing the perpetrator’s voice to resemble that of the victim’s director, they deceived the victim into transferring USD 35 million into a fraudulent account. Indians lost over INR 1 billion to cyber fraud in the first four months of 2024 alone, and with deepfakes now part of the fraudsters’ toolkit, this figure is expected to balloon in the coming years.

In addition to fraud, deepfakes are powerful tools for spreading misinformation. Notable examples include a deepfaked video of Ratan Tata on Instagram, in which he appears to give investment advice, and a fake video of Ranveer Singh, in which he appears to endorse a political campaign. The potential impact of such videos on public opinion is evident from a 2021 study, which concluded that although people cannot reliably detect deepfakes, they believe they can. The technology’s novelty, and thus unfamiliarity, means that a large section of society lacks a parallel verification system, compounding the problem.

At an individual level, reputations can be destroyed by the mere circulation of maligning content, real or not. The proliferation of non-consensual deepfakes can have wide-ranging effects on individuals, causing trauma and distress. Non-consensual pornography, where a person’s face is transposed onto an existing video, is a gross violation of privacy and can inflict significant harm on the individuals depicted; its proliferation could profoundly disrupt India’s social fabric. Deepfaked content has likewise been used to spread misinformation, influence voter opinion, and scam individuals.

These examples of deepfake misuse highlight the inadequacy of the current legal regime in addressing the issue. Because deepfakes exploit trust, the Delhi High Court has recognized the uncertainty surrounding media authenticity in the ‘deepfake era’. As a temporary measure, the Court has placed safeguards on photographic evidence, requiring complainants who submit such evidence to prove its authenticity at trial. In the absence of robust regulatory frameworks, the evolution of deepfake technology threatens to rapidly erode public trust in digital media, casting doubt on the veracity of online content and undermining societal confidence in the information ecosystem.

The Limitations of Existing Regulations

In India, there is currently no legislation specifically addressing the issues posed by deepfakes. The existing principle-based legislation used to tackle these issues has generally proven ineffective because deepfakes do not fit neatly into pre-existing legal categories. Moreover, current regulations are overly focused on what Coglianese terms problem-based liabilities: they punish outcomes instead of addressing the source of those problems.

Although the Ministry of Electronics and Information Technology (MeitY) has stated that existing laws, strictly enforced, are adequate to meet the challenges posed by deepfakes, the offender-centric approach of current laws creates a significant administrative challenge in enforcement. Currently, Sections 66D, 66E, 67, and 67A of the IT Act, Sections 153(a), 153(b), 463, and 499 of the IPC, and the IT Rules, along with a combination of other laws, regulate the issues surrounding deepfakes. The primary issue is that, even if these laws could be strictly enforced in today’s technological ecosystem, as MeitY suggests, the ambiguity surrounding deepfakes would merely create a judicial bottleneck for effective prosecution, as this article will show.

Section 66E of the IT Act penalizes the dissemination of non-consensual depictions of an individual’s private parts, violating their privacy. However, deepfakes evade the language of this section owing to their nature: the private area and the face typically belong to different individuals. Consequently, the complainant is unlikely to be the person whose private area is depicted, complicating legal recourse.

There are fewer technical obstacles to applying current laws such as Section 66D of the IT Act and Section 499 of the IPC, which are being used to obtain takedown orders against deepfakes. However, the individualistic and loss-oriented nature of these laws leaves them poorly equipped to manage the scale of modern media, where thousands of iterations and variations of a video can be uploaded. Moreover, the proof required to establish criminal defamation and cheating is fundamentally mismatched with the way misinformation operates.

Moreover, cyber-investigation infrastructure is poor in various parts of India, which makes such intricate deepfake investigations unviable.

IT Rules and Platform Accountability

The Union Government issued an advisory to social media intermediaries (SMIs) addressing deepfakes in December 2023. The advisory instructed SMIs to exercise due diligence and take reasonable measures to identify content that violates the deepfake-related rules, regulations, and user agreements in their Terms of Service (TOS) and the IT Act, and to take swift action against content in violation of the IT Act 2000 and the IT Rules 2021. The advisory reflects a regulatory approach that centres on intermediary platforms. However, an approach that emphasizes the final form of published content and delegates regulatory tasks to third-party private companies will result in excessive censorship and possible violations of constitutional rights, as companies seek to preserve their safe-harbour protections under Section 79 of the IT Act, which absolves them of liability for content posted on their platforms.

Rules 3(1)(b)(v) and (vi) of the IT Amendment Rules, 2023, attempt to prevent the dissemination of misinformation and fake news on social media. However, the broad and ambiguous wording of these provisions with regard to ‘misinformation’ proved problematic even before they could be used to tackle deepfakes. Rule 3(1)(b)(v) has been stayed by the Supreme Court after being challenged on grounds of overbreadth and the lack of clear definitions for “misinformation” and “deceitful” content. The Rule relies on “Fact Checking Unit(s)” notified under the amended Rule to fill this gap, which may exercise a monopoly on the perception of facts, creating a statutory duty to censor speech not contemplated in the TOS of the platform.

Although the rules have been significantly reworked, the framework remains inadequate for addressing the challenges posed by deepfakes. The complaint mechanism created by the rules cannot account for the scale and speed of deepfake creation: it is unrealistic to expect the target of a deepfake campaign to report every variation of a non-consensual deepfake posted across multiple platforms. Its reliance on individual complaints therefore limits its utility. Further, advanced deepfakes tend to evade detection by SMIs, rendering Rule 3(1)(b)(vi) largely ineffective; the sketch below illustrates one way platforms could match re-uploaded variants automatically.
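
One reason the per-complaint model scales so poorly is that a platform could, in principle, fingerprint a flagged clip once and match re-uploads automatically rather than wait for fresh complaints. The sketch below illustrates the idea with a simple average-hash perceptual fingerprint built with Pillow; the filenames and the distance threshold are illustrative assumptions, and real systems use far more robust video fingerprinting.

```python
# Perceptual-fingerprint matching sketch (illustrative only).
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size grayscale; set a bit per above-average pixel."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Re-uploads and minor edits of a flagged deepfake frame usually stay
# within a small Hamming distance of the original fingerprint.
if hamming(average_hash("flagged.png"), average_hash("upload.png")) <= 10:
    print("likely a variant of the flagged content")
```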

A survey conducted by McAfee reported that more than 75% of Indians have been exposed to some audio-visual form of deepfaked content over the last 12 months, which highlights the scale of the deepfake problem that the complaint mechanism must meet.

Rule 4(2) of the IT Rules 2021, which mandates significant social media intermediaries to identify the first originator of a message, has faced significant enforcement challenges because end-to-end encryption and traceability cannot co-exist. The same dichotomy creates ambiguity around Rule 3(1)(k), and raises questions about whether future software without effects similar to SMIs could likewise be required to weaken encryption. For these reasons, the rule has also been challenged by Meta.
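
To see why traceability and end-to-end encryption pull in opposite directions, consider one approach floated in the traceability debate: the platform stores a fingerprint of each forwarded message and matches a reported message against that index. The sketch below is our own simplified illustration of that idea, not any platform’s actual design; the phone numbers and message text are placeholders.

```python
# Simplified hash-based traceability sketch (illustrative only).
import hashlib

# originator index: message fingerprint -> first sender seen with it
first_seen: dict[str, str] = {}

def fingerprint(message: str) -> str:
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

def record(sender: str, message: str) -> None:
    # The platform must see the PLAINTEXT to compute this fingerprint.
    # Under end-to-end encryption it only ever holds ciphertext, so the
    # index cannot be built without weakening the encryption itself.
    first_seen.setdefault(fingerprint(message), sender)

def trace(reported_message: str) -> str | None:
    return first_seen.get(fingerprint(reported_message))

record("+91-00000-00001", "forwarded rumour text")
record("+91-00000-00002", "forwarded rumour text")   # a re-forward
print(trace("forwarded rumour text"))  # -> +91-00000-00001
```

Note also that a single-character edit changes the hash entirely, so even this weakened design fails against trivial variations of a message.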

The deterrent aspect of the current laws could be of some consequence, but enforcing these rules remains impractical. To combat malicious deepfakes and strengthen enforcement, this article suggests shifting the responsibility for identifying perpetrators to the telecom industry, a more feasible option given that video-sharing platforms like Instagram already track IP addresses and login data. Any legislation developed to this effect must, however, satisfy the proportionality standard established in Puttaswamy.

Comparative Approaches and Regulatory Considerations

Jurisdictions around the world have grappled with similar challenges, and various legal frameworks have been developed in response, most notably in the USA and the EU. It is pertinent to undertake a comparative study of these frameworks and contextualize them for India to adequately address the threat of deepfakes and provide worthwhile solutions to this growing problem. This view has been affirmed by the Delhi High Court, which notes that domestic limits on a borderless technology like deepfakes are unlikely to garner results without international solutions.

To formulate a comprehensive regulatory approach, there is a need to conduct a market study of AI’s potential harms and address those harms accordingly. First, it is important to narrow the definition and clarify the distinction between ‘malicious’ deepfakes and those made in good faith. The DEEPFAKES Accountability Act introduced in the US Congress legally recognizes and defines a ‘malicious deepfake’ and categorizes offenses into four classifications according to their severity, with penalties corresponding to the severity of the harm committed.

The EU’s Digital Services Act (DSA) and the EU AI Act take two approaches to curtailing the problem of deepfakes: first, they build a structured risk architecture based on tech companies’ ability to self-regulate; second, they mandate the disclosure of deepfaked content on social media. Such disclosure can be achieved by adding watermarks and disclaimers to deepfaked content uploaded to social media. Additionally, requiring synthetic media to be labelled or embedded with permanent, unique metadata or identifiers through the proposed deepfake legislation would strengthen enforcement and facilitate the identification of perpetrators.
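
As a minimal illustration of what such embedded disclosure could look like in practice, the sketch below tags a generated PNG with provenance metadata using Pillow. The field names and the SHA-256 content identifier are our own assumptions for illustration; real deployments would more likely use standardized, cryptographically signed provenance schemes such as C2PA manifests.

```python
# Illustrative provenance tagging for a synthetic image (Pillow).
import hashlib
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_synthetic(src: str, dst: str, generator: str) -> str:
    img = Image.open(src)
    content_id = hashlib.sha256(img.tobytes()).hexdigest()

    meta = PngInfo()
    # Field names below are hypothetical, chosen for illustration.
    meta.add_text("SyntheticMedia", "true")
    meta.add_text("GeneratorTool", generator)
    meta.add_text("ContentID", content_id)
    img.save(dst, pnginfo=meta)
    return content_id

def read_disclosure(path: str) -> dict:
    return dict(Image.open(path).text)  # PNG text chunks

# Usage: tag_synthetic("generated.png", "disclosed.png", "demo-gan-v1")
```

Plain metadata of this kind is trivially strippable on re-encoding, which is precisely why the ‘permanent’ identifiers contemplated above point toward robust watermarking rather than file tags alone.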

Recognizing the disproportionate effect on public figures, Indian deepfake legislation would benefit from establishing a special category for prominent public figures such as businesspeople, politicians, influencers, and actors, who are more susceptible to deepfakes. The instance-based complaint mechanism of the IT Rules would benefit from monitoring mechanisms that identify and swiftly address deepfakes targeting this more susceptible group, owing to their public presence.

The Digital Personal Data Protection Act (‘DPDPA’) is a good first step toward a proactive approach to regulating deepfake technologies, addressing crucial aspects such as data security, transparency, and content management. Mandating service providers to protect personal data and establishing precise guidelines for handling false information will allow them to mitigate the harms caused by the spread of malicious deepfakes.

The best way to ensure protection against misinformation is to ensure algorithmic transparency, striking a balance between self-regulation and state regulation. The Act further provides guidelines for SMIs to ensure the ethical processing of data. Future legislation on deepfakes must echo these measures and mandate platform accountability, along with risk mitigation through regular algorithm reviews and the security measures provided in Sections 8(4) and 8(5) of the DPDPA. These provisions must be strictly enforced, especially for high-risk deepfake tools.

Conclusion

Artificial Intelligence is a double-edged sword, presenting tremendous potential for both productivity and misuse. The technological nature of the problem means that India must regulate all avenues of misuse through a measured and precise legal framework. The law regulating deepfakes must be principled while having the foresight to address emerging threats, so that it does not overburden a fledgling industry. Such an approach would prevent government overreach while encouraging an environment favourable to innovation and responsible AI development. This article has sought to underline the lacunae within the current legal framework for dealing with deepfakes and to examine how existing regulations can be adapted, drawing on best practices from foreign legislation, to effectively address the threat posed by malicious deepfakes. The evolving technological landscape needs balanced frameworks that curb the menaces of technology without hampering its growth.


Arhan Deb Ray
Kamal Raj Nambiar