DECAF: Learning to be Fair in Multi-Agent Resource Allocation

Published in the RL Safety Workshop at the Reinforcement Learning Conference (RLC), 2024

Recommended citation:

Kumar, A.; and Yeoh, W. 2024. "DECAF: Learning to be Fair in Multi-Agent Resource Allocation." In RL Safety Workshop (co-located with the Reinforcement Learning Conference).

Download Link

In this paper, we explore how to improve fairness in multi-agent resource allocation by learning to predict not only the utilities of actions but also their effect on long-term fairness. We show that our approach trades off utility and fairness reliably and in a Pareto-optimal fashion.
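The paper's learned utility and fairness estimators are not reproduced here; as a rough illustration of the core idea only, the sketch below uses hand-crafted stand-ins for both. An allocator scores each candidate assignment by its predicted utility plus a coefficient `beta` times its predicted change in a fairness measure (here, negative variance of cumulative utilities — a hypothetical proxy, not the paper's metric), so `beta` controls the utility–fairness trade-off.

```python
import statistics

def fairness_score(cumulative):
    """Fairness proxy: negative variance of agents' cumulative utilities
    (higher is fairer). A hand-crafted stand-in for a learned long-term
    fairness estimate."""
    return -statistics.pvariance(cumulative)

def allocate(utilities, cumulative, beta):
    """Assign one resource: pick the agent maximizing
    predicted utility + beta * predicted change in fairness.

    utilities[i]  -- stand-in predicted utility of giving agent i the resource
    cumulative[i] -- agent i's accumulated utility so far
    beta          -- weight on the fairness term (0 = pure utility)
    """
    best_agent, best_value = None, float("-inf")
    base = fairness_score(cumulative)
    for i, u in enumerate(utilities):
        projected = list(cumulative)
        projected[i] += u  # simulate granting the resource to agent i
        value = u + beta * (fairness_score(projected) - base)
        if value > best_value:
            best_agent, best_value = i, value
    return best_agent

# With beta = 0 the allocator follows raw utility; as beta grows,
# it increasingly favors the agent furthest behind.
print(allocate([3.0, 2.0, 2.5], [10.0, 5.0, 1.0], 0.0))  # → 0
print(allocate([3.0, 2.0, 2.5], [10.0, 5.0, 1.0], 5.0))  # → 2
```

Sweeping `beta` traces out a utility–fairness frontier, which is the kind of Pareto trade-off the paper evaluates.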