When AI interacts with sensitive data, it gains deeper insight into our personal lives. Because the humans who build AI are inherently vulnerable to bias, those biases can become embedded in the systems we create. The role of a responsible team is to minimize algorithmic bias through ongoing research and through data collection that is representative of a diverse population.
UNIVERSITY OF FLORIDA
AI at UF: Far-Reaching Impact
The University of Florida’s AI initiative will make UF a national leader in AI, with far-reaching impact for the university, its students, and its faculty. IC3 will be part of that journey and will contribute to the compact.
Useful References
- “Bias in Artificial Intelligence: Basic Primer,” Clinical Journal of the American Society of Nephrology, March 2023
  An overview of biases in AI and ways to mitigate them.
- “The medical algorithmic audit,” Lancet Digital Health, May 2022
  A proposed method of auditing medical algorithms for potential errors and mitigating their impact.
- “Validation and algorithmic audit of a deep learning system…,” Lancet Digital Health, May 2022
  The results of auditing a machine-learning model used to detect femoral fractures in patients in emergency departments.
- Presentations from the National AI Research Resource Task Force, Meeting #5, February 2022
  Slides from the NAIRR Task Force meeting, including a presentation entitled “Privacy, Civil Rights, and Civil Liberties.”
- Algorithmic Bias Playbook, Center for Applied AI at Chicago Booth, 2021
  A general overview of algorithmic bias and how to test for and address it.
- “4 Types of Machine Learning Bias,” Alegion, 2019
  A general overview of the types of algorithmic biases.
- IC3 Learning about AI page