"Visual analytic techniques for interpretable algorithmic ranking syste" by Jun Yuan

Author ORCID Identifier

0009-0007-7160-031X

Document Type

Dissertation

Date of Award

12-31-2024

Degree Name

Doctor of Philosophy in Data Science - (Ph.D.)

Department

Data Science

First Advisor

Aritra Dasgupta

Second Advisor

James Geller

Third Advisor

Chase Qishi Wu

Fourth Advisor

Amy K. Hoover

Fifth Advisor

Julia Stoyanovich

Abstract

Rankings have a profound impact on an increasingly data-driven society. From leisurely choices, such as which movies to watch or which restaurants to patronize, to highly consequential decisions, such as educational and occupational paths or hiring outcomes, many everyday results are driven by sophisticated yet mostly opaque algorithmic rankers. A small change in how these rankers order data items can have profound consequences, such as the deterioration of a university's prestige or a job applicant missing the shortlist of top candidates for an organization. These scenarios necessitate data-driven and human-centered innovation to make rankers accessible, interpretable, and accountable to stakeholders across the socio-technical divide, such as job candidates, hiring managers, and administrators.

To address this, data scientists' workflows were studied using qualitative methods, leading to interactive visualization tools for calibrating ranker properties like stability and diversity. Explainable AI (XAI) was integrated with visualizations to explore interpretability in socio-technical contexts. The work extended to dynamic environments, modeling the competition among ranked data subjects and enabling recourse strategies in multi-agent scenarios. Vulnerabilities in interpretability methods like LIME and SHAP were exposed, and mitigation strategies aligned with decision-making demands were proposed. Additionally, large language models (LLMs) were leveraged as conversational agents for ranking systems, with novel interfaces and metrics developed to align outputs with user expectations.

This work is a step toward algorithmic accountability: it has become imperative to develop transparent methods that demystify how rankings are produced, who has the agency to change them, and which metrics of socio-technical impact should inform the context of use.

Included in

Data Science Commons
