Mitigating Bias in AI Systems: A Comprehensive Review of Sources and Strategies
Abstract
Artificial intelligence (AI) is increasingly deployed in high-stakes domains such as healthcare, recruitment, finance, and public safety. Although AI promises efficiency and accuracy, substantial evidence shows that it can reproduce or amplify societal inequalities through biased datasets, algorithmic design choices, human cognitive bias, and automation bias. This study conducts a systematic literature review of 128 peer-reviewed publications from 2015 to 2025 to examine the sources of bias in AI systems and evaluate the effectiveness of mitigation strategies. Bias is shown to originate at multiple stages of the AI lifecycle (data collection, model development, and real-world deployment) and tends to re-emerge when mitigation methods are applied in isolation. In response, this study introduces the Integrated Bias-Mitigation Framework (IBMF), a lifecycle-oriented model that aligns technical interventions with governance and continuous monitoring. The findings highlight the need for coordinated mitigation strategies rather than one-off technical fixes. The study concludes with policy, organizational, and research recommendations to support sustainable fairness and accountability in AI.
