ONC HITAC Highlights: AI in Health & Human Services
The recent Health Information Technology Advisory Committee (HITAC) meeting brought together leading experts to discuss the transformative potential of artificial intelligence (AI) in health and human services, focusing on how AI can improve health outcomes while ensuring fairness and inclusivity. Among the notable speakers were Dr. Maia Hightower, CEO of EqualityAI, and Dr. Inioluwa Deborah Raji, a researcher at UC Berkeley. Their insights underscore the critical need for responsible AI deployment that addresses bias and promotes equity, a priority for public health professionals, local and state health departments, and non-profit social service agencies addressing health and social needs.
Leading the Charge for Equitable AI
Dr. Hightower's journey in AI is driven by her commitment to health equity. As a former Chief Population Health Officer and Chief Medical Information Officer (CMIO), she experienced firsthand the biases embedded in healthcare algorithms. A pivotal moment came when she encountered a flawed Medicaid assessment algorithm that drastically reduced care hours for patients like Tammy Dobbs, a woman with cerebral palsy. This experience led Hightower to found EqualityAI, a platform dedicated to auditing, validating, and monitoring AI technologies to ensure they are equitable and effective.
"AI in healthcare should be fair and equitable. It should work and provide value for everyone," Hightower emphasized during her presentation. EqualityAI focuses on integrating equity into the "three Ds" of AI: design, development, and deployment. Hightower also highlighted several key strategies for building equity into AI systems:
AI Validation, Safety, and Management: Ensuring that AI systems are validated for safety and effectiveness across diverse populations.
Bias Mitigation Methods: Implementing methods to detect and mitigate biases throughout the AI lifecycle.
Human-Centered MLOps Solutions: Focusing on human-centered approaches to machine learning operations.
Role of Policy Makers: Encouraging policymakers to define and incentivize standards that prioritize health equity, fund research and innovation, and support education and training for diverse workforces.
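To make the bias-detection idea in the strategies above concrete, here is a minimal sketch of one common audit check: comparing a model's positive-prediction rates across demographic groups. This is not EqualityAI's method or code; the function, data, and group labels are invented for illustration.

```python
# Minimal sketch of one bias-detection step: comparing a model's
# positive-prediction rates across demographic groups.
# All data and group labels below are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate
    between any two groups (0.0 means perfect parity)."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = {g: pos / total for g, (total, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit records: 1 = model recommends extra care hours.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
```

In practice, a single summary metric like this is only a starting point; audits throughout the AI lifecycle would combine several fairness definitions with review of the data and deployment context.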
Exposing Bias in AI Systems
Inioluwa Deborah Raji has been at the forefront of highlighting biases in AI systems, particularly in facial recognition technology. Her work with the Algorithmic Justice League revealed significant performance disparities in commercial facial recognition systems, such as those from IBM, Microsoft, and Amazon. These systems were found to perform poorly on darker-skinned female faces, highlighting a critical flaw in their deployment for high-stakes scenarios like immigration and law enforcement.
Raji's groundbreaking studies, including "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification" (Buolamwini & Gebru, 2018) and "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products" (Raji & Buolamwini, 2019), have brought much-needed attention to the issue of AI bias. She advocates for comprehensive evaluation of AI systems across diverse populations and contexts, emphasizing the need for transparent communication about AI's strengths and limitations.
"Transparency is crucial for understanding what these systems are doing and how well they work," Raji said. Her work underscores the importance of involving diverse teams in the development and auditing processes of AI systems. By doing so, we can ensure these technologies do not perpetuate existing biases and are more accurately representative of the populations they serve.
Key Takeaways from the HITAC Meeting
Transparency and Accountability: All speakers emphasized the need for transparent communication about AI systems' strengths and limitations. This transparency is crucial for ensuring that AI technologies are used appropriately and ethically.
Comprehensive Evaluation: AI systems must be rigorously evaluated across diverse populations and contexts to ensure they do not perpetuate existing biases. This requires collaboration between researchers, developers, and policymakers.
Community Engagement: Engaging marginalized communities in the AI development process is essential for creating technologies that serve all users fairly. This includes involving diverse teams in the design and deployment of AI systems.
Policy and Regulation: Strong regulatory frameworks are necessary to ensure AI technologies adhere to high standards of fairness and equity. This includes adherence to standards set by organizations like ISO and NIST and robust data governance practices.
The insights shared by the speakers during the HITAC meeting highlight the transformative potential of AI in enhancing public health and well-being, provided it is developed and deployed with a focus on equity and inclusivity. By addressing biases and ensuring diverse representation, we can harness the full potential of AI to improve health outcomes for all.
For further inquiries or more information, please contact us at [email protected]. Let's work together to create a fairer, more inclusive future for health and wellbeing.