Design of a Hierarchical Resource Allocation Approach Using Dual Q-Learning in Deep Reinforcement Learning
With the advent of the Internet of Things (IoT), the number of devices operating in these environments has grown sharply, driving major changes in how data is processed and stored. Cloud computing is an Internet-based computing platform that provides users with the necessary processing resources. However, given the ever-increasing volume and velocity of data and the demands of modern communication technologies, the current cloud processing model can hardly deliver satisfactory performance, so proper management of incoming requests is critical for continuous operation. In this work, to speed up resource allocation, virtual machines are ranked using game theory, which significantly reduces the time and computational complexity of the whole process. Combining three components in sequence, namely a hierarchical allocation structure, dual Q-learning within a deep reinforcement learning framework, and game-theoretic ranking of virtual machines, increases allocation speed and reduces the associated costs. Two metrics are evaluated: task completion time and quality of service. According to the results obtained, applying the proposed structure to the Borg dataset provided by Google reduces CPU usage cost by more than 35% compared to other existing methods.
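The abstract does not give implementation details for the dual Q-learning component. A minimal sketch, assuming the standard double-estimator reading of "dual Q-learning" (two Q-functions that bootstrap from one another to curb overestimation bias), might look as follows in Python. All names, state/action sizes, hyperparameters, and the toy environment here are illustrative assumptions, not the paper's actual configuration; actions stand in for choices among game-theoretically ranked virtual machines.

```python
# Hypothetical sketch of a dual (double-estimator) Q-learning update.
# Two tables, Q_A and Q_B, cross-evaluate each other: one selects the
# greedy next action, the other supplies its value estimate.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 16, 4   # e.g. task-queue states x ranked VM choices (assumed sizes)
alpha, gamma, eps = 0.1, 0.95, 0.1

Q_A = np.zeros((n_states, n_actions))
Q_B = np.zeros((n_states, n_actions))

def select_action(state):
    """Epsilon-greedy over the sum of both estimators."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q_A[state] + Q_B[state]))

def dual_q_update(s, a, r, s_next):
    """Randomly pick one table to update, bootstrapping from the other."""
    if rng.random() < 0.5:
        a_star = int(np.argmax(Q_A[s_next]))   # greedy action under Q_A
        Q_A[s, a] += alpha * (r + gamma * Q_B[s_next, a_star] - Q_A[s, a])
    else:
        b_star = int(np.argmax(Q_B[s_next]))   # greedy action under Q_B
        Q_B[s, a] += alpha * (r + gamma * Q_A[s_next, b_star] - Q_B[s, a])

# Toy interaction loop with random transitions (illustrative only).
s = 0
for _ in range(1000):
    a = select_action(s)
    s_next = int(rng.integers(n_states))       # stand-in for queue/VM dynamics
    r = -float(rng.random())                   # stand-in cost signal (e.g. CPU cost)
    dual_q_update(s, a, r, s_next)
    s = s_next
```

Maintaining two estimators and evaluating each one's greedy action with the other is what distinguishes this scheme from standard single-table Q-learning; in a deep variant, the tables would be replaced by neural networks over the same update rule.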