Personal Understanding Research: XAI in Finance
Part I of this project argues that making AI models in finance explainable can do more than just open the “black box.” It lays out how clear, user-friendly explanations can teach everyday consumers the basics of credit, saving, and investing, which in turn boosts overall financial literacy and trust. Drawing on fairness-accountability-transparency (FAT) principles, cognitive-modeling insights, and bias-detection tools like MMD-critic, I show how XAI can widen access to financial services, support fairer outcomes, and help workers reskill for an AI-driven economy. The paper also offers practical guidance: start with simple, transparent systems (in keeping with Gall’s Law), add complexity only when users can follow it, and require regulators to test explainability before new products launch. All of this aims to ensure the economic gains of AI reach a broad population.
Link to paper: https://www.researchgate.net/publication/391988555_XAI_in_Finance_Part_I_Improving_Financial_Literacy_and_Maximizing_Socioeconomic_Benefits_with_XAI
Part II of this project extends those ideas with a more rigorous treatment of Fairness, Accountability, and Transparency (FAT) frameworks, cognitive modeling for human-aligned explanations, bias detection via Maximum Mean Discrepancy (MMD), and novel algorithmic techniques for interpretable and fair AI. The goal is to guide the development of financial AI systems that not only make accurate predictions but also explain themselves in mathematically sound ways that foster trust and learning.
Link to paper: https://www.researchgate.net/publication/392331052_XAI_in_Finance_Part_II_A_FAT_Framework_with_Cognitive_Modeling_and_MMD-Critic
GitHub Link showcasing how to generate charts in Part II: https://github.com/glombardo/Research/blob/main/Explainable_AI_in_Finance_A_FAT_Framework_with_Cognitive_Modeling_and_MMD_Critic_Guido_Lombardo.ipynb
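To give a concrete sense of the MMD statistic referenced in Part II, here is a minimal sketch (not the paper's implementation) of a biased squared-MMD estimate with an RBF kernel, comparing feature distributions of two hypothetical applicant groups; the group names, data, and bandwidth are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel values between rows of X and rows of Y
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy between
    # the samples X and Y under the RBF kernel
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())

# Hypothetical 2-D feature vectors for two groups of loan applicants
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
group_b = rng.normal(loc=0.8, scale=1.0, size=(200, 2))  # shifted distribution

print(mmd2(group_a, group_b))  # larger values suggest distributional disparity
```

A large MMD between groups flags a distributional difference worth auditing; MMD-critic builds on this by selecting prototypes and criticisms that explain where the distributions diverge.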