Please use this identifier to cite or link to this item:
http://repositorio.ufc.br/handle/riufc/60409
Type: | Conference paper |
Title: | A Q-learning Based Approach to Spectral Efficiency Maximization in Multiservice Wireless Systems |
Title in English: | A Q-learning Based Approach to Spectral Efficiency Maximization in Multiservice Wireless Systems |
Authors: | Saraiva, Juno Vitorino; Monteiro, Victor Farias; Lima, Francisco Rafael Marques; Maciel, Tarcísio Ferreira; Cavalcanti, Francisco Rodrigo Porto |
Keywords: | Radio resource allocation; Satisfaction guarantees; Machine learning; Reinforcement learning; Q-Learning |
Issue Date: | 2019 |
Publisher: | Sociedade Brasileira de Telecomunicações (SBrT), https://www.sbrt.org.br/sbrt2019 |
Citation: | SARAIVA, Juno Vitorino; MONTEIRO, Victor Farias; LIMA, Francisco Rafael Marques; MACIEL, Tarcísio Ferreira; CAVALCANTI, Francisco Rodrigo Porto. A Q-learning based approach to spectral efficiency maximization in multiservice wireless systems. In: SIMPÓSIO BRASILEIRO DE TELECOMUNICAÇÕES - SBrT, XXXIII., 29 set.-02 out. 2019, Petrópolis, RJ. Anais [...], Petrópolis, RJ, 2019. |
Abstract: | In this article, we study Radio Resource Allocation (RRA) as a non-convex optimization problem aimed at maximizing spectral efficiency subject to satisfaction guarantees in multiservice wireless systems. This problem has been investigated before, and efficient heuristics have been proposed. However, in order to assess the performance of Machine Learning (ML) algorithms when solving optimization problems in the context of RRA, we revisit that problem and propose a solution based on a Reinforcement Learning (RL) framework. Specifically, our proposal is based on the Q-learning technique, in which an agent gradually learns a policy by interacting with its local environment until convergence. Thus, in this article, the task of searching for an optimal solution to a combinatorial optimization problem is transformed into that of finding an optimal policy through Q-learning. Lastly, through computational simulations we compare state-of-the-art proposals from the literature with our approach and show that the latter achieves near-optimal performance for a well-trained agent. |
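For readers unfamiliar with the technique named in the abstract, the sketch below illustrates generic tabular Q-learning with an epsilon-greedy exploration policy and the standard update Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]. It is only a minimal, hypothetical illustration: the toy environment, state/action sizes, and hyperparameters (step, NUM_STATES, NUM_ACTIONS, ALPHA, GAMMA, EPSILON) are assumptions for demonstration and do not correspond to the RRA formulation, reward design, or simulation setup used in the paper.

```python
# Minimal generic tabular Q-learning sketch (NOT the authors' implementation).
# The environment below is a hypothetical stand-in for illustration only.
import numpy as np

rng = np.random.default_rng(0)

NUM_STATES, NUM_ACTIONS = 16, 4      # assumed sizes, illustration only
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
EPISODES, STEPS = 2000, 50

def step(state, action):
    """Hypothetical environment dynamics: returns (next_state, reward)."""
    next_state = (state + action + 1) % NUM_STATES
    reward = 1.0 if next_state == NUM_STATES - 1 else 0.0
    return next_state, reward

Q = np.zeros((NUM_STATES, NUM_ACTIONS))  # action-value table

for _ in range(EPISODES):
    state = int(rng.integers(NUM_STATES))
    for _ in range(STEPS):
        # Epsilon-greedy action selection: explore with probability EPSILON.
        if rng.random() < EPSILON:
            action = int(rng.integers(NUM_ACTIONS))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        # Standard Q-learning update toward the Bellman target.
        Q[state, action] += ALPHA * (
            reward + GAMMA * Q[next_state].max() - Q[state, action]
        )
        state = next_state

# Greedy policy extracted after training.
policy = Q.argmax(axis=1)
print(policy)
```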
URI: | http://www.repositorio.ufc.br/handle/riufc/60409 |
Appears in Collections: | DETE - Trabalhos apresentados em eventos |
Files in This Item:
File | Description | Size | Format
---|---|---|---
2019_eve_jvsaraiva.pdf | | 453.76 kB | Adobe PDF