Technological innovations such as Artificial Intelligence (AI) and Machine Learning (ML) are showing positive impacts and creating opportunities to improve health care. The health IT domain holds some of the largest data sets of any field. Leveraging this data, AI/ML can deliver great benefit, but models must be developed and applied carefully so that biased models do not further perpetuate healthcare inequity.
Our study evaluated the presence of predictive and socioeconomic bias using a data-agnostic and model-agnostic approach. Using synthetic patient records generated by Synthea, we built a classification model to predict the risk of chronic kidney disease (CKD) progressing from Stage 4 to Stage 5. Our tool calculates race-, gender-, and income-specific performance metrics and mitigates the biases it detects through iterative cutoff manipulation and optimization. Both current and future healthcare machine learning models must incorporate robust methods for assessing and mitigating bias. The CKD Progression Bias Detection/Mitigation Team has developed a powerful bias mitigation tool that promotes equitable healthcare by supporting the development of fair models. It is highly generalizable and can be applied to most classification models.
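To illustrate the general idea behind group-specific metrics and cutoff optimization, here is a minimal sketch (not the team's actual code; the function names, the 0.80 target sensitivity, and the simple grid search over cutoffs are illustrative assumptions): it computes per-group true-positive and false-positive rates at a shared cutoff, then searches for a group-specific cutoff that meets a target sensitivity in each group.

```python
import numpy as np

def group_metrics(y_true, y_score, groups, cutoff=0.5):
    """Per-group TPR and FPR at a given probability cutoff (illustrative)."""
    out = {}
    for g in np.unique(groups):
        m = groups == g
        pred = y_score[m] >= cutoff
        tp = np.sum(pred & (y_true[m] == 1))
        fn = np.sum(~pred & (y_true[m] == 1))
        fp = np.sum(pred & (y_true[m] == 0))
        tn = np.sum(~pred & (y_true[m] == 0))
        out[g] = {"tpr": tp / max(tp + fn, 1), "fpr": fp / max(fp + tn, 1)}
    return out

def equalize_tpr_cutoffs(y_true, y_score, groups, target_tpr=0.8):
    """For each group, keep the highest cutoff whose TPR still meets the target.

    TPR is non-increasing in the cutoff, so scanning an ascending grid and
    remembering the last cutoff that satisfies the target finds the largest one.
    """
    cutoffs = {}
    for g in np.unique(groups):
        m = groups == g
        best = 0.0
        for c in np.linspace(0.0, 1.0, 101):
            pred = y_score[m] >= c
            tp = np.sum(pred & (y_true[m] == 1))
            fn = np.sum(~pred & (y_true[m] == 1))
            if tp / max(tp + fn, 1) >= target_tpr:
                best = c  # highest cutoff so far that meets the sensitivity target
        cutoffs[g] = best
    return cutoffs
```

Equalizing sensitivity across groups in this way corresponds to an "equal opportunity" style criterion; the same scaffolding could be iterated over other metrics (e.g., FPR or precision) depending on which disparity the detection step flags.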
Below is a video demonstration of our tools for both bias detection and bias mitigation.
Below is a link to our GitHub repository. In addition to the code, a readme.md and a requirements.txt file are included.
Github Repository Link
Below is the link to our supporting documentation as a PDF file.
Supporting Documentation PDF Link