Defining Problem from Solutions: Inverse Reinforcement Learning (IRL) and Its Applications for Next-Generation Networking

Yinqiu Liu, Ruichen Zhang, Hongyang Du, Dusit Niyato, Jiawen Kang, Zehui Xiong, and Dong In Kim
Nanyang Technological University

Abstract

Performance optimization is a critical concern in networking, in which Deep Reinforcement Learning (DRL) has achieved great success. Nonetheless, DRL training relies on precisely defined reward functions, which formulate the optimization objective and indicate positive/negative progress toward the optimum. With the ever-increasing environmental complexity and human participation in Next-Generation Networking (NGN), defining appropriate reward functions becomes challenging. In this article, we explore the applications of Inverse Reinforcement Learning (IRL) in NGN. In particular, whereas DRL aims to find optimal solutions to a given problem, IRL defines the problem from optimal solutions: the solutions are collected from experts, and the problem is defined by inferring the underlying reward function. Specifically, we first formally introduce the IRL technique, including its fundamentals, workflow, and differences from DRL. Afterward, we present the motivations for applying IRL in NGN and survey existing studies. Furthermore, to demonstrate the process of applying IRL in NGN, we perform a case study on human-centric prompt engineering in Generative AI-enabled networks. We demonstrate the effectiveness of both the DRL and IRL techniques and show the superiority of IRL.
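
To illustrate what "defining the problem by reward inference" means in practice, below is a minimal Maximum-Entropy IRL sketch (in the spirit of Ziebart et al., 2008) on a toy 5-state chain MDP. The environment, the "expert" demonstrations, and all parameter values are illustrative assumptions for exposition only; they are not the article's method or case-study setup.

import numpy as np

# Toy chain MDP (an illustrative assumption, not from the article):
# states 0..4; actions: 0 = step left, 1 = step right.
n_states, n_actions = 5, 2
gamma, horizon = 0.9, 20

# Deterministic transitions: P[s, a] gives the next state.
P = np.array([[max(s - 1, 0), min(s + 1, n_states - 1)]
              for s in range(n_states)])

# One-hot state features; the unknown reward is assumed linear in them.
features = np.eye(n_states)

# Synthetic "expert" demonstrations: always move right, then stay at state 4.
demos = [[(min(t, n_states - 1), 1) for t in range(horizon)]
         for _ in range(10)]

def expert_feature_expectations(demos):
    # Average feature counts over the expert trajectories.
    mu = np.zeros(n_states)
    for traj in demos:
        for s, _ in traj:
            mu += features[s]
    return mu / len(demos)

def soft_policy(reward):
    # Soft value iteration: pi(a|s) = exp(Q(s,a) - V(s)), V = logsumexp_a Q.
    V = np.zeros(n_states)
    for _ in range(100):
        Q = reward[:, None] + gamma * V[P]
        Qmax = Q.max(axis=1, keepdims=True)
        V = (Qmax + np.log(np.exp(Q - Qmax).sum(axis=1, keepdims=True))).ravel()
    return np.exp(Q - V[:, None])

def state_visitation(pi):
    # Expected state visitation counts over the horizon, starting at state 0.
    d = np.zeros(n_states); d[0] = 1.0
    total = np.zeros(n_states)
    for _ in range(horizon):
        total += d
        d_next = np.zeros(n_states)
        for s in range(n_states):
            for a in range(n_actions):
                d_next[P[s, a]] += d[s] * pi[s, a]
        d = d_next
    return total

# Reward inference: gradient ascent on reward weights so that the learner's
# feature expectations match the expert's (the MaxEnt IRL gradient).
theta = np.zeros(n_states)
mu_expert = expert_feature_expectations(demos)
for _ in range(300):
    reward = features @ theta
    pi = soft_policy(reward)
    mu_learner = state_visitation(pi) @ features
    theta += 0.01 * (mu_expert - mu_learner)

print("Inferred reward weights:", np.round(theta, 2))

Running the sketch recovers a reward concentrated on state 4, the goal implicit in the expert behavior; this is the sense in which IRL "finds the problem" (the reward function) from observed solutions, after which standard DRL can be applied to the inferred reward.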

BibTeX

@article{liu2024defining,
  title={Defining Problem from Solutions: Inverse Reinforcement Learning (IRL) and Its Applications for Next-Generation Networking},
  author={Liu, Yinqiu and Zhang, Ruichen and Du, Hongyang and Niyato, Dusit and Kang, Jiawen and Xiong, Zehui and Kim, Dong In},
  journal={arXiv preprint arXiv:2404.01583},
  year={2024}
}