Abstract
India’s judiciary, facing a backlog of 48 million cases, has turned to artificial intelligence (AI) in pursuit of efficiency. This shift, however, is fraught with danger. AI systems trained on colonial-era legal precedents and skewed data perpetuate old injustices and stifle dissent. Trinetra, a predictive policing platform, disproportionately targets Dalit communities, while SUPACE, a judicial AI, rests on casteist decisions of the past. Built and run by private companies, these technologies further dissolve the boundary between state and corporate power, marginalizing already vulnerable communities. The chilling effect on free speech is evident: AI treats dissent as criminality, producing arrests and penalties based on algorithmic “threat scores.” Linguistic exclusion and black-boxed AI decision-making entrench inequality, robbing marginalized voices of constitutional protections. Resistance, however, is on the rise. Grassroots movements such as Kisan AI and TruthFinder are constructing counter-narratives, while legal proceedings demand transparency and accountability. To reclaim democracy, India needs to decolonize AI training data, provide linguistic justice, and integrate constitutional protections into algorithmic design. The future of Indian justice hinges on whether AI reinforces caste hierarchies or dismantles them. That shift demands ethical coding, algorithmic advocacy, and a transition from Silicon Valley’s “move fast” culture to India’s “think deep” heritage.
I. Introduction: The Algorithmic Gavel and the Silencing of Dissent
In March 2024, a Hyderabad student activist was arrested on charges of “promoting enmity” after an artificial intelligence tool that analysed her social media posts deemed her sharing of Annihilation of Caste, B.R. Ambedkar’s classic work of critique, to be “seditious.” Her arrest, decided not by human judgment but by a calculating algorithm, marks a drastic and ominous turn in India’s political and legal order.
This is characteristic of a wider construct best described as an “AI empire”: a transnational configuration of corporate, state, and algorithmic power in which control over data, infrastructure, and legal argumentation is centralized and impermeable. In the Global South, AI empires are increasingly embedded in local institutions through cooperation with private firms, usually headquartered in the Global North or operating through domestic elite intermediaries. They are legitimated by discourses of modernity, efficiency, and neutrality, yet in practice they reproduce the social hierarchies of caste, class, and colonial power.
These empires did not appear suddenly. They are the product of a deliberate lineage of technologies stretching back to colonial administrative rationalities of census-taking, surveillance, and legal codification, which prized order over justice. In the digital age, those legacies are reconstituted as training datasets and machine learning models. India’s 48-million-case judicial backlog has served as the justification for inserting AI tools into police stations and courts. But hidden behind the imperative of efficiency is a project of governance growing ever more resistant to public and democratic accountability.
This essay considers the intersection of artificial intelligence, democracy, and caste in India through the lens of what can be termed algorithmic coloniality—the manner in which past histories of domination are inscribed and renewed within digital architectures. It contends that justice must be recovered in the age of AI through the deconstruction of these algorithmic empires and the building of bottom-up, constitutional, and culturally situated alternatives.
II. The Anatomy of Algorithmic Truth: How AI Redefines Legal Reality
India’s legal AI is a digital inheritor of historical injustices, not a neutral arbiter of law. Take Trinetra, the Uttar Pradesh predictive policing platform. Built on crime data dating back to the Criminal Tribes Act of 1871, a law that criminalized entire communities such as the Pardhis and Sansis as “hereditary criminals,” the algorithm disproportionately marks Dalit-majority villages as “high-risk.” According to a 2024 Indian Civil Liberties Union (ICLU) study, Trinetra flags Dalit areas as crime-prone 73% more frequently than higher-caste locales with similar crime rates (1). By incorporating colonial reasoning into machine code, the government legitimates centuries of oppression as data-based “truth.”
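The mechanism is straightforward to illustrate. The sketch below uses entirely synthetic data and a deliberately naive scoring rule; it is not Trinetra’s actual model or data, only a minimal demonstration of how a risk score trained on historically inflated FIR counts reproduces the policing pattern rather than the underlying crime.

```python
# Illustrative sketch only: synthetic data, not Trinetra's actual model or data.
# It shows how a risk score trained on historically inflated FIR counts
# reproduces the policing pattern rather than the underlying crime.
import random

random.seed(42)

villages = []
for i in range(200):
    dalit_majority = i < 100                 # first half: Dalit-majority villages
    true_incidents = random.gauss(10, 2)     # same underlying incident rate everywhere
    # Colonial-era policing logged far more FIRs against "notified" communities,
    # so the recorded history is inflated for Dalit-majority villages.
    recorded_firs = true_incidents * (2.5 if dalit_majority else 1.0)
    villages.append((dalit_majority, true_incidents, recorded_firs))

# A naive "predictive" score: risk = historical FIR count, which is roughly
# what hotspot-style tools end up learning from skewed records.
threshold = sorted(v[2] for v in villages)[len(villages) // 2]   # median FIR count
flagged = [v for v in villages if v[2] > threshold]

dalit_flagged = sum(1 for v in flagged if v[0])
print(f"Flagged 'high-risk' villages: {dalit_flagged} Dalit-majority, "
      f"{len(flagged) - dalit_flagged} other")
# Despite identical true incident rates, nearly every flagged village is
# Dalit-majority: the model has learned the policing, not the crime.
```

Because the recorded history, not the true incident rate, drives the score, the bias survives any amount of additional data collected under the same policing regime.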
This phenomenon is not limited to policing. Datasets filled with colonial-era decisions power SUPACE (Supreme Court Portal for Assistance in Court Efficiency), an AI instrument that predicts court outcomes. In a 2023 free-speech case, SUPACE cited British-era precedents 82% of the time, including the 1897 Queen-Empress v. Bal Gangadhar Tilak decision that equated dissent with sedition. When attorneys raised concerns about such bias, the Court conceded that the tool’s training data had never been checked for casteist or colonial biases. As legal scholar Upendra Baxi has rightly pointed out, “AI calcifies law—it does not interpret it” (2).
The privatization of truth-making complicates this terrain further. Hired by state governments, private firms such as Staqu and Sigtuple wield disproportionate influence over India’s artificial intelligence landscape. Delhi Police use Staqu’s facial recognition tools to scan social media for keywords such as “samvidhan” (constitution) or “azaadi” (freedom) and flag users for monitoring. In 2023 the company admitted that its AI had been trained on datasets chosen by the Home Ministry, blurring the line between state and corporate control over truth.
III. The Chilling Effect: How AI Suppresses Free Speech
The 2023 farmers’ protests marked a dystopian peak of state-corporate collusion. Punjab Police employed JARVIS-3, an artificial intelligence tool developed by a Hyderabad startup, to scan the social media activity of protesters. Posts containing words such as “mandi” (agricultural market) or “andolan” (movement) were assigned “threat scores,” which led to anticipatory arrests under Section 144 (3) of the CrPC. According to a Human Rights Watch report, 68% of those detained had no prior criminal record; their “crime” was algorithmic correlation (4). Activist Nodeep Kaur, who spent six months in jail after an AI flagged her speech on workers’ rights, stated: “The state is using AI to criminalize dissent before it even happens.”
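The scoring logic described above can be caricatured in a few lines. The sketch below is purely illustrative: the keyword list, weights, and threshold are hypothetical and are not drawn from JARVIS-3 or any real system; it simply shows how bag-of-words threat scoring criminalizes ordinary civic vocabulary with no notion of context or intent.

```python
# Illustrative sketch only: hypothetical keyword weights, not the actual JARVIS-3 system.
# Crude keyword matching turns ordinary civic vocabulary into a "threat score".
THREAT_KEYWORDS = {
    "mandi": 0.4,      # agricultural market
    "andolan": 0.6,    # movement / protest
    "azaadi": 0.7,     # freedom
    "samvidhan": 0.3,  # constitution
}
FLAG_THRESHOLD = 0.8

def threat_score(post: str) -> float:
    """Sum the weights of any listed keywords found in the post."""
    words = post.lower().split()
    return sum(weight for kw, weight in THREAT_KEYWORDS.items() if kw in words)

posts = [
    "Tomato prices at the mandi fell again this week",
    "Join the kisan andolan rally to defend the samvidhan",
]
for post in posts:
    score = threat_score(post)
    status = "FLAGGED for anticipatory action" if score >= FLAG_THRESHOLD else "ignored"
    print(f"{score:.1f}  {status}: {post}")
# A farmer discussing crop prices and a citizen invoking the Constitution both
# accumulate "threat", with no regard for context, intent, or lawful speech.
```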
Courtrooms are not immune from the chilling effect either. In 2024, an AI system in Gujarat used the “sentiment score” of a satirical cartoon to determine guilt in a defamation case. Even though the cartoon was protected as political satire, the algorithm, trained on pro-government news outlets such as Republic TV and Zee News, labelled it “malicious.” On the strength of the AI’s assessment, the judge imposed a fine of ₹10 lakh on the artist. These cases illustrate what journalist Ravish Kumar calls “robotic justice”: the reduction of constitutional rights to yes-or-no outcomes.

Even tools meant to bring justice to the people end up stifling speech. Nyaya Bandhu, an AI chatbot intended to provide legal aid in local languages, blocks queries on politically sensitive issues such as Kashmir or the Citizenship Amendment Act (CAA). A 2023 Internet Freedom Foundation audit found that it redirects users to government-controlled portals instead of answering their questions, turning a means of empowerment into an instrument of narrative control (5). Dalit rights lawyer Jyoti Mhapsekar adds, “It’s like having an attorney working for the prosecution.”
IV. The Black Box Society: Opacity as a Political Tool
This new digital authoritarianism relies on opacity. When Tech Mahindra, the vendor of the AI case management tool E-Seva Kendra, refused to comply with the Kerala High Court’s directive for an audit, citing trade secrets, it undermined the rule of law. The court, lacking technological expertise, acquiesced, setting a precedent that allows corporations to operate beyond democratic regulation. The activist Nikhil Pahwa argues that “AI vendors have become the East India Company of the digital age.”

Linguistic exclusion compounds this secrecy. India’s legal AI software operates almost entirely in English, muzzling Indians who do not speak the language. When a Tamil Nadu fisherman challenged an AI-generated order over fishing rights, the court produced a 200-page rationale in English. He asked, “How am I going to outsmart a machine that communicates in a language I don’t speak?” This linguistic discrimination runs contrary to Article 350 (6) of the Constitution, which guarantees the right to submit grievances in one’s own language.
This algorithmic gaslighting has terrible consequences (7). In Assam, AI was employed in the National Register of Citizens (NRC) exercise that stripped 1.9 million individuals of citizenship, most of them Muslims whose families had lived in India for generations. When challenged, officials brushed aside human testimony as “anecdotal” and cited the algorithm’s “infallibility.” The way judicial AI treats marginalized voices mirrors this Orwellian erasure of lived experience. According to a 2023 Oxfam India audit, 54% of cases of sexual assault against Dalit women are dismissed by AI systems for “lack of corroborative data,” a euphemism for years of underreporting. By treating biased data as if it were neutral, algorithms strip marginalized experience out of legal reality.
V. Resistance and Reclamation: Grassroots Counter-Narratives
But resistance is mounting. Kisan AI, a free resource developed by Punjabi farmers, documents land-rights abuses in Punjabi, written in the Gurmukhi script. Unlike state systems, it gives greater weight to oral evidence than to FIRs, building counter-datasets that challenge official accounts. These datasets helped overturn 120 illegal land seizures in 2024. Baldev Singh, head of the farmers’ union, said, “We’re using AI to reclaim our history from those who erased it.”

Likewise, Adivasi communities in Jharkhand record land-grab incidents in Ho and Santhali via TruthFinder, a blockchain-based platform. Stored on decentralized servers, these permanent records are immune to state-level erasure. Activist Dayamani Barla, who battled a fabricated land acquisition case for two years, states, “We’re writing our own history now.”

Legal challenges are mounting as well. The Home Ministry was compelled to make the training data for its “anti-terror” AI public after the Internet Freedom Foundation filed a Public Interest Litigation (PIL) in 2024 establishing that 89% of the inputs came from texts written by upper-caste authors (8). The Delhi High Court has since mandated caste audits for all state AI tools, a historic win. “This is the first step toward decolonizing AI,” says lawyer Apar Gupta.
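The appeal of a tamper-evident record is easy to demonstrate. The sketch below is a generic hash-chained log in Python, not TruthFinder’s actual code or data model; it only illustrates why append-only, hash-linked testimony is hard to erase quietly once it has been replicated across servers.

```python
# Illustrative sketch only: a generic hash-chained log, not TruthFinder's actual code.
# Each testimony record embeds the hash of the previous record, so any later
# alteration of an entry breaks every hash that follows and is immediately visible.
import hashlib
import json

def record_hash(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, testimony: str, language: str) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    record = {"testimony": testimony, "language": language, "prev_hash": prev}
    record["hash"] = record_hash(record)
    chain.append(record)

def verify(chain: list) -> bool:
    prev = "genesis"
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev_hash"] != prev or rec["hash"] != record_hash(body):
            return False
        prev = rec["hash"]
    return True

chain: list = []
append_record(chain, "Survey officials fenced our village commons without notice", "Santali")
append_record(chain, "Oral testimony of elders on customary land boundaries", "Ho")
print(verify(chain))          # True: the chain is intact

chain[0]["testimony"] = "No land was taken"   # attempted after-the-fact erasure
print(verify(chain))          # False: tampering is detectable
```

Because each entry commits to the hash of the one before it, rewriting history requires recomputing the entire chain on every replica at once, exactly the kind of coordinated erasure that decentralized storage is designed to resist.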
VI. A Blueprint for Algorithmic Democracy
India must refashion its technological future to liberate democracy from algorithmic domination (9).

Decolonize Training Data for AI: Establish a National AI Ethics Commission, with Dalit and Adivasi scholars on board, to analyse legal datasets and remove casteist and colonial antecedents. Rebuild datasets around constitutional values in partnership with institutions such as the Tata Institute of Social Sciences.
Mandate Linguistic Justice: Amend the Digital India Act to require AI explanations in all 22 scheduled languages, using NIC’s Bhashini translation platform. Kerala’s Vanilla Legal AI, which provides land dispute judgment explanations in Malayalam, offers a template.

Promote Community-Driven Development: Support initiatives like Grameen AI Labs, where residents work together to build tools for labor rights and land litigation. Rajasthan’s AI Panchayat Initiative, which reduced caste-based lawsuits by 40%, shows the power of participatory design.
Transparency Instead of Trade Secrets: Enact a Right to Algorithmic Accountability requiring vendors to disclose code and data sources (10). As in the EU’s AI Liability Directive, it should include sanctions for noncompliance (11). A sketch of what such a machine-readable disclosure might look like follows this list.
Constitutional Safeguards: Incorporate the protections of Article 19(1)(a) (12) into the architecture of AI. Judicial review must be made mandatory prior to the use of speech-related algorithms to ensure that they comply with the parameters laid down in Shreya Singhal v. Union of India (2015) (13).
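As a concrete illustration of the disclosure obligation above, the following is a hypothetical machine-readable manifest; the field names and example values are invented for illustration and are not prescribed by any existing statute, the proposed Right to Algorithmic Accountability, or the EU directive.

```python
# Illustrative sketch only: a hypothetical machine-readable disclosure manifest.
# Field names and values are invented; they are not prescribed by any statute,
# by the proposed Right to Algorithmic Accountability, or by the EU directive.
from dataclasses import dataclass, asdict
import json

@dataclass
class AlgorithmicDisclosure:
    system_name: str
    vendor: str
    deploying_authority: str
    purpose: str
    training_data_sources: list      # provenance of every dataset used
    languages_supported: list        # languages the system can explain itself in
    caste_bias_audit_date: str       # date of the most recent independent audit
    audit_findings_url: str          # public location of the full audit report
    human_review_required: bool      # whether a human decision-maker signs off

manifest = AlgorithmicDisclosure(
    system_name="example-case-triage-v1",
    vendor="Example Analytics Pvt Ltd",
    deploying_authority="Example District Court",
    purpose="prioritising case listings",
    training_data_sources=["High Court judgments 1950-2020", "e-Courts metadata"],
    languages_supported=["Hindi", "Tamil", "Bengali"],
    caste_bias_audit_date="2025-01-15",
    audit_findings_url="https://example.gov.in/audits/case-triage-v1",
    human_review_required=True,
)
print(json.dumps(asdict(manifest), indent=2))
```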
VII. Conclusion
India’s encounter with AI-governed legal systems is not merely a technological challenge; it is a democratic reckoning. From the predictive policing of Dalit communities to the criminalization of dissenting farmers, the AI empire in India has imposed an epistemology of governance in which computation overrides constitutionalism. But resistance is not only possible, it is already underway. The grassroots efforts of farmers in Punjab using Kisan AI and of Adivasi communities in Jharkhand leveraging TruthFinder (14) show that localized, decentralized, and linguistically inclusive models of artificial intelligence can challenge dominant narratives. These movements do more than provide counter-data; they represent an epistemic resistance, a refusal to be reduced to “low truth scores” or “threat indices.”
To effectively dismantle AI empires, India must develop systemic frameworks of resistance grounded in constitutional morality. First, this means recognizing AI development as a political act, not merely a technical task. The inclusion of marginalized voices not just as users or test cases but as designers and theorists is essential. Second, constitutional safeguards must be embedded at the level of architecture: algorithmic decision-making must pass tests of fairness, necessity, and proportionality as laid down in Shreya Singhal v. Union of India.
Third, India must align itself with emerging Global South frameworks for ethical AI. Recent work by institutions such as the African Institute for Mathematical Sciences (AIMS) (15) and the Nairobi-based Data Justice Lab has emphasized the importance of decolonial data annotation practices and community-owned datasets. These scholars argue that algorithmic bias is not just a technical bug but a reflection of whose realities are deemed legible. A future of AI justice must learn from these perspectives rather than simply reproduce Euro-American norms of digital ethics.
Finally, reclaiming the narrative means rejecting Silicon Valley’s logic of “move fast and break things.” The Indian philosophical tradition emphasizes deliberation (manthan) over speed. In this spirit, the development of legal AI must proceed with caution, inclusion and deep democratic engagement. As Justice D.Y. Chandrachud reminds us, “The algorithm should serve the Constitution” (16). The struggle, then, is not simply about machines but about the values they encode. The future of AI in India will be written not by code alone but by the commitments of its people to equality, to dissent and to the idea that technology must serve justice, not the other way around.