Around the world, research has shown that people in positions of power tend to favour those who share their social identity. To examine whether the same is true in Indian courts, this article analyses over five million criminal cases from the period 2010-2018. In contrast to the patterns documented in other countries, it finds no in-group bias in acquittal decisions based on shared religion, gender, or caste.
It is one thing to be judged by your peers; it is another to be judged by someone who shares your religion, gender, or caste. Around the world, researchers have documented that people with power – judges, teachers, bureaucrats – often favour those who share their identity. In many settings, litigants fare better when they share a race or gender with the judge or jury (Shayo and Zussman 2011, Anwar et al. 2012, Choi et al. 2022, Cai et al. 2025). Is the same true in India?
In new research (Ash et al. 2025), we examine this question using novel data on over 5 million Indian criminal cases heard between 2010 and 2018. We investigate whether judges treat defendants more favourably when they share a religion, gender, or caste (the last proxied by a shared last name).
The promise and peril of judicial representation
India’s courts are under tremendous pressure (Boehm and Oberfield 2020). They are chronically understaffed, overloaded with cases, and widely seen as biased and inaccessible, especially to women, religious minorities, and lower-caste citizens. Women make up just 28% of lower-court judges, and Muslims only 7%, compared with 14% of the population. Scheduled Castes are likely similarly under-represented, though exact figures are difficult to obtain. These disparities raise a core question: does who sits on the bench affect what happens in the courtroom?
To answer this, we built a new dataset using the Indian eCourts platform, which hosts case records from the country’s 7,000+ trial courts. After filtering for criminal cases and linking case records with judge rosters, we trained a neural network to assign gender and religion to both judges and defendants based on their names. (We also used last names to detect shared caste identity, though this method has known limitations.)
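To give a flavour of this classification step, the sketch below uses a simple character n-gram model in place of the neural network we actually trained; the training names, labels, and model choice are purely illustrative.

```python
# Minimal sketch of name-based demographic classification (illustrative only).
# A character n-gram logistic regression stands in for the neural network
# used in the paper; the training names and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_names = ["Priya Sharma", "Rahul Verma", "Ayesha Khan", "Vikram Singh"]
train_labels = ["female", "male", "female", "male"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(max_iter=1000),
)
model.fit(train_names, train_labels)

# Predict labels for unseen names, e.g. names drawn from case records.
print(model.predict(["Anita Devi", "Mohammed Iqbal"]))
```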
The scale of the data allows us to detect differences in acquittal probability as small as 0.5 percentage points (p.p.). The rules that assign cases to judges – based largely on charge type, police station, and courtroom rotation – make judge assignment quasi-random. This setup allows us to cleanly estimate the causal effect of being assigned a judge who shares your identity.
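In regression terms, this amounts to comparing defendants who do and do not share an identity with their judge, within the court-by-time cells in which assignment is as good as random. Below is a minimal sketch of such a regression on simulated data; it is a simplified stand-in rather than the exact specification in the paper, and all variable names are illustrative.

```python
# Simplified sketch of the in-group bias regression on simulated data
# (not the authors' exact specification).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "acquitted": rng.integers(0, 2, n),      # 1 if the defendant is acquitted
    "same_identity": rng.integers(0, 2, n),  # 1 if judge and defendant share an identity
    "court_year": rng.integers(0, 50, n),    # stand-in for court-by-time cells
})

# With quasi-random assignment within cells, the coefficient on same_identity
# measures in-group bias in acquittal probability (zero here by construction).
fit = smf.ols("acquitted ~ same_identity + C(court_year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["court_year"]}
)
print(f"{fit.params['same_identity']:.4f} ({fit.bse['same_identity']:.4f})")
```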
What we find: No systemic in-group bias
In theory, judges may favour defendants who share their identity. And in many countries and contexts, they do. In the US, for example, having one Black juror significantly reduces conviction rates for Black defendants (Anwar et al. 2012). In Israel, Jewish and Arab judges favour their respective in-groups (Shayo and Zussman 2011). Similar patterns have been documented in the Indian banking system (Fisman et al. 2017).
But in India's lower criminal courts, we find no such effect. Women are not more likely to be acquitted by female judges. Muslim defendants do not receive better outcomes from Muslim judges. The average defendant does not benefit from sharing a last name with their judge. In an exception that proves the rule, defendants with uncommon last names do fare better when their judge shares that name. But the aggregate bias this generates is very small, because it is mechanically very unlikely that a defendant with a rare last name is assigned a judge with the same rare name. In short: the average in-group bias is statistically indistinguishable from zero.
This is one of the most precisely estimated null results in the evidence base. Figure 1 compares our results with prior studies that use similar empirical strategies. Many of these find large effects – 5 to 20 p.p. We can rule out effects even one-tenth that size: our 95% confidence interval (see Note 1) caps potential bias at just 0.6 p.p.
Figure 1: Comparison with judicial bias estimates from other contexts
This absence of bias holds across multiple outcomes: whether the defendant is acquitted, whether they are convicted, and whether they receive a ruling within six months. It holds for women and men, for Muslims and non-Muslims, across all kinds of crimes, and in all parts of the country. This does not mean the system is unbiased at every stage – only that in-group favouritism does not appear to drive judges' decisions at the point of conviction or acquittal.
Does identity matter in salient contexts?
Despite the striking average null, could bias emerge in specific situations where identity is particularly salient? We did not find much evidence of this either.
We examined four such contexts:
Religious festivals: Some prior work suggests that religious festivals make religious identity more salient. But we found no difference in outcomes for defendants of any religion during Ramadan (the Muslim holy month) or during Diwali, Holi, Dussehra, and Rama Navami (major Hindu festivals), nor any difference in religious in-group bias.
Gendered crimes: In cases of crimes against women – such as sexual assault and kidnapping – we may expect gender bias to be heightened. But even here, we find no evidence that female judges treat female defendants more leniently (or male judges more harshly).
Identity contrast with victims: We test whether bias emerges when a judge shares an identity with the victim but not the defendant, as suggested by US research finding that juries are more likely to rule against Black defendants when the victim is White. Again, no significant effects appear.
Rare last names: As noted above, we did see some in-group bias emerge when defendants with uncommon names were matched to judges with the same name. The higher salience of the shared identity could well drive this bias; but as we noted above, the total magnitude of the effect here is small once the low incidence is taken into account.
Why are Indian judges different?
Why do Indian judges not show in-group bias on average? Several explanations are possible.
Judicial norms and training may matter: Despite many well-documented problems in India’s courts – delays, opacity, backlogs – it is possible that judges internalise and enforce norms of impartiality. Judges in India are not elected and have secure tenure, potentially shielding them from political or social pressures.
Class distance may mute identity effects: Most judges, regardless of religion or gender, come from relatively elite backgrounds. The social and economic gap between a judge and typical defendant may reduce the salience of shared identity.
Publication bias may cause us to think that in-group bias is more common than it actually is: If it is easier to publish a paper with statistically significant results than with null results, researchers who find null results may abandon projects before even getting to the paper submission stage – this is the file drawer problem.
Figure 2 below shows a ‘funnel plot’, a test of publication bias based on Andrews and Kasy (2019). In the absence of publication bias, we would expect the points from prior studies (the black triangles) to form a symmetric funnel centred around the true average estimate. Regions of the graph that are missing points suggest that studies with such estimates were conducted but never made it to publication. The graph below is indeed highly asymmetric, and many points from prior studies fall just outside the line demarcating statistical significance at p<0.05.
Figure 2: Standard errors versus effect sizes from prior studies
The graph is consistent with a substantial degree of publication bias, implying that in-group bias in judiciaries may be less widespread than the published literature indicates.
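For readers who want to see how such a plot is built, here is a stylised version using made-up numbers: each study contributes a point (effect size, standard error), and the dashed lines mark where an estimate is exactly 1.96 standard errors from zero, i.e. just significant at p<0.05. The estimates below are invented for illustration.

```python
# Stylised funnel plot in the spirit of Andrews and Kasy (2019).
# The effect sizes and standard errors below are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

effects = np.array([0.08, 0.12, 0.05, 0.15, 0.20, 0.006])
std_errors = np.array([0.04, 0.055, 0.024, 0.07, 0.09, 0.003])

se_grid = np.linspace(1e-4, std_errors.max() * 1.1, 100)
plt.scatter(effects, std_errors, marker="^", color="black", label="Prior studies")
plt.plot(1.96 * se_grid, se_grid, "r--", label="Significance at p = 0.05")
plt.plot(-1.96 * se_grid, se_grid, "r--")
plt.gca().invert_yaxis()   # more precise studies plotted towards the top
plt.xlabel("Effect size")
plt.ylabel("Standard error")
plt.legend()
plt.show()
```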
It is important to understand that our research conducts one specific test of bias but does not rule out bias entirely. The kind of bias seen elsewhere may operate earlier in the system – in policing, charging, or bail decisions – rather than during the trial itself. We look only at the final stage, when a case is adjudicated. It is also possible that women or Muslims are treated worse by the system – including by judges – but that they receive the same poor treatment from same- and cross-identity judges alike.
Implications for in-group judicial bias and future research
Our evidence suggests that concerns over in-group bias may be better directed at parts of the justice pipeline other than judges' acquittal decisions. We may want a more representative bench for many reasons, but we should not expect it to guarantee different judicial outcomes. Representation may help build legitimacy and trust – a subject for future research. More research is also needed on the rest of the criminal justice pipeline: who gets arrested, who gets charged, who makes it to trial, and how harshly they are sentenced.
A final note: in working on this project, we built and released one of the world's largest judicial datasets, covering 77 million court cases across India. We released the data early in the process – when we posted the first working paper, years before publication. This was in some sense risky – would we get scooped? Instead, the dataset is already enabling others to do original and exciting work on the Indian judiciary that we would never have thought of, including Craigie et al. (2023) on temperature and judicial decisions, Sarmiento and Nowakowski (2023) on air pollution and judicial decisions, and Bharti and Lehne (2024) on legal aid. Had we followed the standard practice of releasing data only at publication, the data would still not be public. We hope that other researchers will recognise the social value of publishing open data early and often.
This article was first published by VoxDev.
Note:
1. A confidence interval is a way of expressing uncertainty about estimated effects. A 95% confidence interval means that, if you were to repeat the experiment with new samples, 95% of the time the calculated confidence interval would contain the true effect.
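As a quick illustration of this coverage property, the toy simulation below draws repeated samples from a known distribution and checks how often the usual 95% interval contains the true mean; the share should come out close to 0.95. The numbers are purely illustrative.

```python
# Toy simulation of 95% confidence interval coverage (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
true_mean, n_samples, n_reps = 0.0, 100, 10_000
covered = 0
for _ in range(n_reps):
    sample = rng.normal(loc=true_mean, scale=1.0, size=n_samples)
    se = sample.std(ddof=1) / np.sqrt(n_samples)
    lower, upper = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    covered += (lower <= true_mean <= upper)
print(covered / n_reps)  # close to 0.95
```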
Further Reading
- Andrews, Isaiah and Maximilian Kasy (2019), “Identification of and correction for publication bias,” American Economic Review, 109(8): 2766-2794.
- Anwar, Shamena, Patrick Bayer, and Randi Hjalmarsson (2012), “The impact of jury race in criminal trials,” Quarterly Journal of Economics, 127(2): 1-39.
- Ash, Elliott, Sam Asher, Aditi Bhowmick, Sandeep Bhupatiraju, Daniel Chen, Tanaya Devi, Christoph Goessmann, Paul Novosad, and Bilal Siddiqi (2025), “In-group bias in the Indian judiciary: Evidence from 5 million criminal cases,” Review of Economics and Statistics.
- Bharti, N K and J Lehne (2024), “Justice for all? The impact of legal aid in India,” Paris School of Economics.
- Boehm, Johannes and Ezra Oberfield (2020), “Misallocation in the market for inputs: Enforcement and the organization of production,” Quarterly Journal of Economics, 135(4): 2007-2058.
- Cai, X, P Li, Y Lu, and H Song (2025), “Do judges exhibit gender bias? Evidence from the universe of divorce cases in China,” RF Berlin - CReAM Discussion Paper Series No. 23/25.
- Choi, Donghyun Danny, Andrew J Harris, and Fiona Shen-Bayh (2022), “Ethnic bias in judicial decision making: Evidence from criminal appeals in Kenya,” American Political Science Review, 116(3): 1067-1080.
- Craigie, Terry-Ann, Vis Taraz, and Mariyana Zapryanova (2023), “Temperature and convictions: Evidence from India,” Environment and Development Economics, 28(6): 538-558.
- Fisman, Raymond, Daniel Paravisini, and Vikrant Vig (2017), “Cultural proximity and loan outcomes,” American Economic Review, 107(2): 457-492.
- Sarmiento, Luis and Adam Nowakowski (2023), “Court decisions and air pollution: Evidence from ten million penal cases in India,” Environmental and Resource Economics, 86(3): 605-644.
- Shayo, Moses and Asaf Zussman (2011), “Judicial ingroup bias in the shadow of terrorism,” Quarterly Journal of Economics, 126(3): 1447-1484.
29 September 2025