AI can provide deep insight into our personal lives when it interacts with sensitive data. Because humans are responsible for building safe and effective AI, unintended algorithmic effects arising from imbalanced data must not be embedded in the systems we create. The role of a responsible team is to minimize these potential imbalances through ongoing research and through data collection that is representative of the population.
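As a rough illustration of one such check, the Python sketch below compares the subgroup composition of a dataset against reference population proportions and flags groups that are under-represented. The column name, reference shares, and tolerance threshold are hypothetical assumptions for illustration only, not part of any IC3 or UF tooling.

```python
# Illustrative sketch only: flag subgroups that are under-represented in a
# dataset relative to reference population proportions.
# All names, numbers, and thresholds below are hypothetical assumptions.
from collections import Counter

def underrepresented_groups(records, group_key, population_shares, tolerance=0.8):
    """Return groups whose share in `records` falls below
    `tolerance` * their share in the reference population."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    flagged = {}
    for group, pop_share in population_shares.items():
        data_share = counts.get(group, 0) / total if total else 0.0
        if data_share < tolerance * pop_share:
            flagged[group] = {"data_share": round(data_share, 3),
                              "population_share": pop_share}
    return flagged

# Hypothetical example: a dataset skewed toward one age group.
records = ([{"age_group": "18-39"}] * 70
           + [{"age_group": "40-64"}] * 25
           + [{"age_group": "65+"}] * 5)
population_shares = {"18-39": 0.35, "40-64": 0.40, "65+": 0.25}

print(underrepresented_groups(records, "age_group", population_shares))
# Flags "40-64" and "65+" as candidates for additional data collection.
```

A check like this does not by itself make a system fair, but it gives a team an early, concrete signal about where its data may diverge from the population it is meant to serve.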
AI at UF: Far-Reaching Impact
The University of Florida’s AI initiative will make UF a national leader in AI and will have far-reaching impact on the university, its students, and its faculty. IC3 will be part of that journey and will contribute to that impact.

Useful References
- “The medical algorithmic audit,” Lancet Digital Health, May 2022
  A proposed method for auditing medical algorithms for potential errors and mitigating their impact.
- “Validation and algorithmic audit of a deep learning system…,” Lancet Digital Health, May 2022
  The results of an audit of a machine-learning model used to detect femoral fractures in patients in emergency departments.
- Presentations from the National AI Research Resource Task Force, Meeting #5, February 2022
  Slides from the NAIRR Task Force meeting, including a presentation entitled “Privacy, Civil Rights, and Civil Liberties.”
- Algorithmic…Playbook, Center for Applied AI at Chicago Booth, 2021
  A general overview of unintended algorithmic effects and how to test for and address them.
- “4 Types of Machine Learning…,” Alegion, 2019
  A general overview of the types of unfair algorithms.
- IC3 Learning about AI page