2024 CAIRFI fellowships & research awardees announced

Capital One announces the 2024 award recipients of the Center for AI and Responsible Financial Innovation (CAIRFI) request for proposals at Columbia University.

In January 2024, Capital One and Columbia University’s School of Engineering and Applied Science established the Center for AI and Responsible Financial Innovation (CAIRFI) to accelerate research, education and the responsible advancement of AI in financial services. The Center’s key initiatives include two PhD fellowships and two faculty-led research projects, selected and funded annually. These opportunities create new pathways for engaging talent and support emerging research in financial services.

We are thrilled to announce that after thoughtful consideration, the 2024 CAIRFI academic year award recipients have been selected! 

PhD fellowship award recipients


Leonardo Toso is researching Bayesian Priors for Efficient Multi-task Representation Learning

Leonardo is a second-year PhD student in the Department of Electrical Engineering at Columbia University, where he is advised by Professor James Anderson. His research interests lie at the intersection of control theory, machine learning and optimization.

His research will investigate the problem of learning latent representations from multi-task, non-i.i.d. (non-independent and identically distributed) and non-isotropic datasets, while leveraging prior information on the local and global latent variables to enhance the recovery process.

A fundamental idea underpinning recent advances in machine learning is the ability to extract shared features from diverse task data. Intuitively, utilizing all available data to unveil a latent representation shared across multiple tasks reduces computational complexity and enhances statistical generalization by minimizing the number of parameters that require fine-tuning for any specific task. One example, among many, is the setting where the objective is to make accurate financial portfolio recommendations based on a client's personal investment preferences.

Since multiple clients in a database may share common interests, unveiling such features (i.e., learning a representation) is paramount to making accurate and efficient predictions of clients' preferences so as to meet their long-term financial objectives. Moreover, prior information on the representation (e.g., sparsity, low-rankness, structural information and domain knowledge, among others) is often available. Accurately handling such prior beliefs may be critical for a more efficient multi-task representation learning framework.
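To give a flavor of the setup (a minimal sketch of our own, not code from the project): in the simplest linear case, all tasks share a low-dimensional representation while each task keeps its own small head, and both can be recovered by alternating least squares. A prior on the representation, such as sparsity or low-rankness, would enter as an additional projection or proximal step. All dimensions and names below are hypothetical.

```python
# Illustrative sketch: linear multi-task representation learning by
# alternating minimization. T tasks share a d x k representation B;
# each task t has its own k-dimensional head w_t.
import numpy as np

rng = np.random.default_rng(0)
T, n, d, k = 20, 50, 30, 3          # tasks, samples per task, ambient dim, latent dim

B_true = np.linalg.qr(rng.normal(size=(d, k)))[0]   # shared representation
Xs = [rng.normal(size=(n, d)) for _ in range(T)]
ys = [X @ B_true @ rng.normal(size=k) + 0.1 * rng.normal(size=n) for X in Xs]

B = np.linalg.qr(rng.normal(size=(d, k)))[0]        # random initialization
for _ in range(50):
    # Step 1: per-task heads, given the current shared representation.
    ws = [np.linalg.lstsq(X @ B, y, rcond=None)[0] for X, y in zip(Xs, ys)]
    # Step 2: shared representation, given the heads. Each residual is
    # linear in vec(B), since X_t B w_t = (w_t^T kron X_t) vec(B).
    A = np.vstack([np.kron(w[None, :], X) for X, w in zip(Xs, ws)])
    vecB = np.linalg.lstsq(A, np.concatenate(ys), rcond=None)[0]
    B = vecB.reshape(d, k, order="F")   # column-major, matching the vec convention
    # A prior on B (e.g., sparsity) could be applied right here as a
    # projection or proximal step, e.g., soft-thresholding B's entries.
    B = np.linalg.qr(B)[0]              # keep the columns orthonormal

# Subspace recovery error: 0 means the shared subspace was found exactly.
err = np.linalg.norm(B_true @ B_true.T - B @ B.T, 2)
print(f"subspace error: {err:.3f}")
```

In the non-i.i.d., non-isotropic regimes the project targets, this vanilla scheme degrades; that is precisely where informative priors on the representation can help.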


Sachit Menon is researching Towards Trustworthy Decision Making in Artificial Intelligence

Sachit is a PhD student in Computer Science at Columbia Engineering advised by Professor Carl Vondrick. His research centers around models trained at scale and ways to use them for novel tasks, such as using large language models to perform visual reasoning.

While recent advances in artificial intelligence, such as large language models (LLMs), have enabled unprecedented capabilities, these models can fail unpredictably and unsafely. This lack of trust makes them unsuitable for many real-world applications, such as finance, where safety is critical. This research aims to bridge that gap by developing systems that justify their decisions, give reasons that help diagnose failures and, critically, provide recourse, exposing simple ways for users to prevent failures going forward.

Applied to computer vision, this research can be used to understand economic trends from satellite imagery, social media images and other ground-level images, improving financial forecasting. Sachit’s previous work has shown that these methods open entirely new avenues to combat hallucination and failure in large models, with a notable application being the correction of cross-cultural bias.

Faculty-led research award recipients


Richard Zemel is researching A Framework for Responsible LLM Deployment in a Changing World

Richard Zemel, the Trianthe Dakolias Professor of Engineering and Applied Science and professor of computer science at Columbia Engineering, is also the Director of the NSF AI Institute for Artificial and Natural Intelligence (ARNI). He was the co-founder and inaugural Research Director of the Vector Institute for Artificial Intelligence. He is a Canadian Institute for Advanced Research AI Chair, an Amazon Scholar, and a member of the Advisory Board of the Neural Information Processing Systems (NeurIPS) Foundation. His research contributions include foundational work on systems that learn useful representations of data with little or no supervision; graph-based machine learning; and algorithms for fair and robust machine learning.

Our world is open-ended, non-stationary and constantly evolving; what we talk about and how we talk about it change over time. The inherently dynamic nature of language, which constantly adapts to integrate new information and conditions, contrasts with the current static language modeling paradigm, which trains and evaluates models on utterances from overlapping time periods. Despite impressive progress, recent results have shown that LLMs perform worse in the realistic setup of predicting utterances from beyond their training period, and that performance degrades further as time goes on. A fundamental aim in deploying these LLMs is to obtain some performance guarantee. Most techniques for evaluating these models focus on average performance on a validation set, which can lead to deployments that generate unexpectedly poor responses, a risk that is especially dangerous in the financial services domain.

This research team has developed a framework for deriving rigorous bounds on the worst-case performance of any AI model. We offer methods for producing bounds on a diverse set of metrics, including quantities that measure worst-case responses and disparities in model responses across the population of users. The focus of this proposal is to extend the underlying statistical techniques used to produce these bounds in order to accommodate distribution shifts in deployment and demonstrate our framework's application to the important setting of temporal adaptation. 
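As a flavor of what a rigorous worst-case guarantee can look like (a minimal sketch under our own assumptions, not the team's actual framework): given n i.i.d. held-out per-response losses, classical order statistics give a distribution-free, high-confidence upper bound on an extreme quantile of the loss, with no assumptions on the loss distribution itself.

```python
# Illustrative sketch: a distribution-free, high-confidence upper bound
# on the (1 - alpha)-quantile of a model's per-response loss. If L_(k)
# is the k-th order statistic of n i.i.d. losses, then
#   P(L_(k) >= q_{1-alpha}) = P(Binom(n, 1-alpha) <= k - 1),
# so we pick the smallest k that makes this probability >= 1 - delta.
import numpy as np
from scipy.stats import binom

def worst_case_quantile_bound(losses, alpha=0.05, delta=0.01):
    """Return a value exceeding the (1 - alpha) loss quantile w.p. >= 1 - delta."""
    n = len(losses)
    sorted_losses = np.sort(losses)
    for k in range(1, n + 1):
        if binom.cdf(k - 1, n, 1 - alpha) >= 1 - delta:
            return sorted_losses[k - 1]
    raise ValueError("n too small for this (alpha, delta); collect more evaluations")

# Hypothetical stand-in for held-out evaluation losses.
losses = np.random.default_rng(0).exponential(size=2000)
print(worst_case_quantile_bound(losses, alpha=0.05, delta=0.01))
```

Bounds like this assume the validation losses are drawn from the deployment distribution; the proposal's extension is exactly about keeping such guarantees meaningful when that distribution shifts over time.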


Anish Agarwal is researching User Session-Level Counterfactual Simulator

Anish Agarwal is an Assistant Professor at Columbia University. His research interests are in designing and analyzing methods for causal machine learning and applying them to critical problems in social and engineering systems. He received his PhD in EECS from MIT, where he was advised by Alberto Abadie, Munther Dahleh and Devavrat Shah. For his dissertation, he received the INFORMS George B. Dantzig best thesis award (2nd place) and the ACM SIGMETRICS outstanding thesis award (2nd place). Prior to coming to Columbia University, he was a postdoctoral scientist at Amazon Core AI and a fellow at the Simons Institute at UC Berkeley. He has served as a technical consultant to TauRx Therapeutics and Uber Technologies on questions related to experiment design and causal inference. Prior to his PhD, he was a management consultant at the Boston Consulting Group.

With AI being deployed to make decisions in critical areas, counterfactual reasoning through causal inference is crucial to personalized decision-making. On digital platforms such as Capital One’s, the most fine-grained data collected about users is “session-level trace data.” Such data record highly specific user behavior, such as which buttons users click on and what items they view and purchase. Session-level data also record the state of the digital platform as users navigate it, such as what ads, text and images they are shown on various parts of the platform. Toward the goal of truly personalized decision-making, the essential question is: for a given set of interactions between a user and a platform thus far in a session, what is the user’s predicted trajectory if the platform intervened and changed the state of the system in some way? That is, in real time during a session, could a platform create a “counterfactual” simulator of that user’s predicted trajectory (e.g., what features they click on, what their engagement will be) if a specific text or image is shown? The goal of this research is to build such a counterfactual simulator by effectively leveraging historical session-level trace data.
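As a toy illustration of the idea (our own sketch; the actual research targets far richer models and weaker assumptions): if one assumes, strongly, that a user's next action depends only on their current action and the content being shown, then historical traces yield transition frequencies, and a counterfactual rollout is simply a simulation under intervened content. All trace fields and content names below are hypothetical.

```python
# Illustrative sketch: a toy session-level counterfactual simulator.
# Each logged event pairs the platform state the user saw (e.g., which
# banner was shown) with the action the user took next.
import random
from collections import Counter, defaultdict

# Hypothetical historical traces: lists of (shown_content, user_action) pairs.
traces = [
    [("banner_A", "view_offer"), ("banner_A", "click_apply"), ("banner_A", "end")],
    [("banner_B", "view_offer"), ("banner_B", "end")],
    [("banner_A", "view_offer"), ("banner_B", "end")],
] * 100

# transitions[(prev_action, shown_content)] -> counts of observed next actions
transitions = defaultdict(Counter)
for trace in traces:
    prev_action = "start"
    for shown, action in trace:
        transitions[(prev_action, shown)][action] += 1
        prev_action = action

def simulate(shown_content, max_steps=10):
    """Roll out a session trajectory under a fixed content intervention."""
    action, trajectory = "start", []
    for _ in range(max_steps):
        counts = transitions.get((action, shown_content))
        if not counts:          # state never observed under this content
            break
        actions, weights = zip(*counts.items())
        action = random.choices(actions, weights=weights)[0]
        trajectory.append(action)
        if action == "end":
            break
    return trajectory

random.seed(0)
print("if banner_A:", simulate("banner_A"))
print("if banner_B:", simulate("banner_B"))  # the counterfactual rollout
```

The hard part, and the subject of the research, is making such rollouts valid from observational traces, where the content shown to each user was itself chosen adaptively by the platform.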

Capital One is advancing the frontier of AI

We know industry and academia are uniquely positioned to accelerate new capabilities when they partner. We are excited about these AI research projects because they will allow us to advance the cutting edge of AI applications to better serve our customers. Partnerships with academic institutions like Columbia allow us to broaden our AI research efforts. 

Learn more about our AI research efforts.

