DOI: 10.1145/3128128.3128143

Using Markov decision process in cognitive radio networks towards the optimal reward

Published: 21 July 2017

Abstract

Learning is an indispensable phase in the cognition cycle of a cognitive radio network: it maps the executed actions to the estimated rewards. Through this phase, the agent learns from past experience to improve its actions in subsequent interventions. The literature offers several methods for artificial learning; among them is reinforcement learning, which searches for the optimal policy, i.e. the one that guarantees the maximum reward.
The present work exposes an approach based on a reinforcement-learning model, namely the Markov decision process, to maximize the sum of the transfer rates of all secondary users. This formulation defines every notion relative to an environment with a finite set of states: the agent, the set of states, the actions allowed in a given state, the reward obtained after executing an action, and the optimal policy. After implementation, we observe a correlation between the initial policy and the optimal policy, and we improve performance with respect to a previous work.
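
As a concrete illustration of the formulation described above, the sketch below builds a toy finite MDP in Python and solves it by value iteration. Everything in it is a hypothetical placeholder rather than the paper's actual model: the two channel states, the transition probabilities, the rewards (which stand in for the secondary user's transfer rates), and the discount factor are all assumed for illustration.

```python
# Minimal sketch of a finite MDP for a cognitive radio setting (toy model,
# not the paper's): a secondary user (the agent) decides whether to transmit
# on a channel whose state is "idle" or "busy".

states = ["idle", "busy"]
actions = ["transmit", "wait"]

# P[(s, a)] -> list of (next_state, probability); R[(s, a)] -> expected reward
P = {
    ("idle", "transmit"): [("idle", 0.8), ("busy", 0.2)],
    ("idle", "wait"):     [("idle", 0.9), ("busy", 0.1)],
    ("busy", "transmit"): [("busy", 0.7), ("idle", 0.3)],
    ("busy", "wait"):     [("idle", 0.5), ("busy", 0.5)],
}
R = {
    ("idle", "transmit"): 1.0,   # successful transmission: rate gained
    ("idle", "wait"):     0.0,
    ("busy", "transmit"): -0.5,  # collision with the primary user
    ("busy", "wait"):     0.0,
}

gamma = 0.9                      # discount factor (assumed)
V = {s: 0.0 for s in states}     # value function, initialised to zero

# Value iteration: apply the Bellman optimality update until convergence
for _ in range(1000):
    V_new = {
        s: max(
            R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)])
            for a in actions
        )
        for s in states
    }
    if max(abs(V_new[s] - V[s]) for s in states) < 1e-8:
        V = V_new
        break
    V = V_new

# The optimal policy is greedy with respect to the converged values
policy = {
    s: max(
        actions,
        key=lambda a: R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)]),
    )
    for s in states
}
print("V* =", V)
print("optimal policy =", policy)
```

With these placeholder numbers the computed policy transmits when the channel is idle and waits when it is busy; a real deployment would substitute the spectrum model and the rate-based rewards defined in the paper.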


Published In

ICSDE '17: Proceedings of the 2017 International Conference on Smart Digital Environment
July 2017
245 pages
ISBN:9781450352819
DOI:10.1145/3128128
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Markov decision process
  2. agent
  3. cognitive radio networks
  4. optimal policy
  5. reinforcement learning

Qualifiers

  • Research-article

Conference

ICSDE '17

Acceptance Rates

ICSDE '17 paper acceptance rate: 36 of 139 submissions (26%)
Overall acceptance rate: 68 of 219 submissions (31%)
