About Me

Hi!
I'm Ashwin Kumar, a PhD student at Washington University in St. Louis, advised by Prof. William Yeoh. I'm interested in the intersection of AI and fairness, and in how AI can be used to improve fairness and transparency in people-facing decision-making problems.
When I'm not working, I can be found reading a book or biking in Forest Park. I also enjoy playing the piano when I can get my hands on one. One of my favorite things to do is to travel, and I've been fortunate enough to visit many places around the world. I'm always looking for new things to do, new books to read, and interesting music to listen to, so if you have any recommendations, please let me know!
Side note: I designed this website using an agentic LLM. Given the recent success of generative AI, I'm confident the next big leaps will come from agentic systems and reinforcement learning. I'm excited to see what the future holds!
Research Interests
My research focuses on ensuring that AI systems operate fairly and transparently, particularly in complex decision-making scenarios involving multiple agents and temporal dynamics. I work extensively on temporal resource allocation problems, where resources and agents interact over time. Ensuring fairness in such problems is challenging, especially when balancing the trade-off with efficiency. My early work focused on improving fairness in ridesharing systems, using simple incentives to improve driver and passenger experiences. More recently, I've explored learning fair allocation policies in multi-agent settings using reinforcement learning (the DECAF framework), developing methods to bridge myopic and long-term fairness considerations, and investigating emerging fairness challenges such as detecting prefix bias in LLM-based reward models. My goal is to close the gap between algorithmic fairness techniques and efficient resource allocation in dynamic systems.
Beyond fairness, I work on explainability and human-AI interaction. Alongside Stylianos Vasileiou, I focus on Human-Aware AI problems, seeking to improve how humans understand and interact with AI systems. We've explored visualization techniques for explaining agent behavior. Our recent work, published at KR 2024, introduces a framework for dialectical reconciliation using structured argumentative dialogues. This allows users to engage in a conversation with AI agents to resolve knowledge discrepancies and better understand the agent's reasoning.
You can find more information about my research on the publications page.
Selected Publications
News
Contact
You can reach me via email or find me on the following platforms:
- Email: ashwinkumar@wustl.edu
- GitHub: kumar-ashwin
- Google Scholar: Profile
- LinkedIn: Profile
- ORCID: 0009-0003-8782-1388