Algorithmic transparency is the principle that the factors that influence the decisions made by algorithms should be visible, or transparent, to the people who use, regulate, and are affected by systems that employ those algorithms [1].
Others [2] argue that algorithmic transparency can serve multiple purposes:
- Discrimination Discovery, which refers to the ability to identify discrimination against sensitive groups in the population, caused by biases in an algorithmic system.
- Explainability Promotion, which is the ability to explain the decisions made by algorithmic systems to users.
- Fairness Management, which refers to the ability to ensure fairness with regard to sensitive groups in the population.
- Auditing, which refers to the ability to audit the results of the algorithm (e.g., studying correlations between inputs and outputs); a minimal sketch of such a check follows this list.
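The sketch below illustrates two of these purposes on a toy example, not any particular system or library: discrimination discovery via a selection-rate comparison across groups (a disparate impact ratio), and a crude audit of the correlation between a sensitive input and the algorithm's output. The lists `group` and `decision` are hypothetical stand-ins for real audit data.

```python
# A minimal sketch, assuming a toy dataset: `group` is a sensitive attribute
# and `decision` is the algorithm's binary output (both hypothetical).
from collections import defaultdict

group    = ["A", "A", "B", "B", "A", "B", "A", "B"]   # sensitive attribute
decision = [ 1,   1,   0,   1,   1,   0,   1,   0 ]   # algorithm's output

# Discrimination discovery: compare selection rates across groups.
# A disparate impact ratio well below 1.0 suggests possible bias.
totals, positives = defaultdict(int), defaultdict(int)
for g, d in zip(group, decision):
    totals[g] += 1
    positives[g] += d

rates = {g: positives[g] / totals[g] for g in totals}
print("selection rates:", rates)
print("disparate impact ratio:", min(rates.values()) / max(rates.values()))

# Auditing: a simple input/output correlation check, here between
# group membership (A=1, B=0) and the decision.
x = [1 if g == "A" else 0 for g in group]
n = len(x)
mean_x, mean_d = sum(x) / n, sum(decision) / n
cov = sum((xi - mean_x) * (di - mean_d) for xi, di in zip(x, decision)) / n
var_x = sum((xi - mean_x) ** 2 for xi in x) / n
var_d = sum((di - mean_d) ** 2 for di in decision) / n
print("input/output correlation:", cov / (var_x * var_d) ** 0.5)
```

Such checks only surface statistical signals; interpreting them as evidence of discrimination or unfairness still requires knowledge of the system's context and the population it serves.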
- [1] Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital Journalism, 5(7), 809-828.
- [2] Tal, A. S., Batsuren, K., Bogina, V., Giunchiglia, F., Hartman, A., Loizou, S. K., … & Otterbacher, J. (2019, June). “End to End” Towards a Framework for Reducing Biases and Promoting Transparency of Algorithmic Systems. In 2019 14th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP) (pp. 1-6). IEEE.