SOFTWARE: ASP.NET | VB.NET | C#.NET | ASP.NET MVC 4 (Razor) | RESTful Web Services
Heterogeneous computing powered by remote clouds and local fogs is a promising technology for improving the performance of user terminals in the Internet of Things (IoT). In this paper, two semi-Markov decision process (SMDP)-based coordinated virtual machine (VM) allocation methods are proposed to balance the tradeoff between the high cost of providing services from the remote cloud and the limited computing capacity of the local fog. We first present a model-based planning method in which it is necessary to train the state transition probabilities and the expected time intervals between adjacent decision epochs. To facilitate training them, the SMDP is degraded into a continuous-time Markov decision process (CTMDP) in which the service requests and ongoing service completions follow a continuous-time Markov chain (CTMC). The relative value iteration algorithm for the CTMDP is used to find an asymptotically optimal VM allocation policy. In addition, we propose a model-free reinforcement learning method in which an optimal coordinated VM allocation policy is approximated by learning from the observed states and rewards. The simulation results show that the performance of the model-free reinforcement learning method converges to a level similar to that of the model-based planning method and outperforms the greedy VM allocation method.
In this module, users register their details and log in. A logged-in user can then choose the resource on which to process their request.
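As a rough illustration of this flow on the listed stack, the sketch below shows a minimal ASP.NET MVC 4 controller handling registration, login, and the resource choice. The UserAccount model, the static in-memory store, and the view/action names are hypothetical placeholders rather than the project's actual code.

    // Minimal sketch of the register / login / choose-resource flow.
    using System.Collections.Generic;
    using System.Web.Mvc;

    public class UserAccount
    {
        public string UserName { get; set; }
        public string Password { get; set; }
    }

    public class AccountController : Controller
    {
        // Hypothetical in-memory store; a real deployment would use a database.
        private static readonly Dictionary<string, UserAccount> Users =
            new Dictionary<string, UserAccount>();

        [HttpPost]
        public ActionResult Register(UserAccount account)
        {
            if (!Users.ContainsKey(account.UserName))
                Users[account.UserName] = account;       // store the new user
            return RedirectToAction("Login");
        }

        [HttpPost]
        public ActionResult Login(UserAccount account)
        {
            UserAccount stored;
            bool ok = Users.TryGetValue(account.UserName, out stored)
                      && stored.Password == account.Password;
            // On success the user proceeds to choose a processing resource.
            return ok ? RedirectToAction("ChooseResource") : View();
        }

        [HttpGet]
        public ActionResult ChooseResource()
        {
            // The corresponding view lists the available resources (cloud or fog VMs).
            return View();
        }
    }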
The farther apart the two host servers are located from one another, the greater the delay becomes. In addition, the data transfer traffic would consume more network resources. To avoid this situation, it is desirable to place all VMs in a subnet that contains (m + k) available host servers, so that the m primary VMs and k backup VMs can be placed together.
A heuristic algorithm is used to solve this problem, and two heuristic conditions are adopted to narrow the search space. If there is an even number of available servers in a subnet, the number of backup VMs in the subnet should be less than or equal to the number of primary VMs in the subnet.
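The sketch below illustrates one way this placement check could look, assuming a simple Subnet type that only tracks its number of available host servers and assuming the tightest-fitting subnet is preferred; both are illustrative choices rather than the paper's actual heuristic.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Subnet
    {
        public int Id { get; set; }
        public int AvailableHosts { get; set; }
    }

    public static class PlacementHeuristic
    {
        // Returns a subnet that can hold all m primary and k backup VMs,
        // or null if no single subnet has (m + k) available host servers.
        public static Subnet SelectSubnet(IEnumerable<Subnet> subnets, int m, int k)
        {
            if (k > m)
                throw new ArgumentException("Backup VMs must not exceed primary VMs (k <= m).");

            return subnets
                .Where(s => s.AvailableHosts >= m + k)   // all VMs fit in one subnet
                .OrderBy(s => s.AvailableHosts)          // prefer the tightest fit (assumption)
                .FirstOrDefault();
        }
    }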
When one or more VMs fail, a recovery strategy has to be decided upon, and each failed VM has to be mapped to a backup VM. All tasks in the waiting queue of the failed VM are rescheduled to its mapped backup VM, and the data to be processed have to be retrieved again by the backup VM.
If the VM fails because of a software fault, the particular data block may be obtained from the host server on which the failed VM resides.
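A minimal sketch of this recovery step is shown below, assuming simplified Vm and fault-type representations; the choice of retrieval source merely mirrors the software-fault case described above and is not the paper's exact procedure.

    using System;
    using System.Collections.Generic;

    public enum FaultType { Software, Hardware }

    public class Vm
    {
        public int Id { get; set; }
        public int HostServerId { get; set; }
        public Queue<string> WaitingTasks = new Queue<string>();
    }

    public static class RecoveryStrategy
    {
        public static void FailOver(Vm failed, Vm backup, FaultType fault)
        {
            // Reschedule every task in the failed VM's waiting queue to the mapped backup VM.
            while (failed.WaitingTasks.Count > 0)
                backup.WaitingTasks.Enqueue(failed.WaitingTasks.Dequeue());

            // Re-fetch the pending data: after a software fault the host server is still
            // intact, so the data block can be read back from it; otherwise it must be
            // retrieved from another replica.
            string source = fault == FaultType.Software
                ? "host-" + failed.HostServerId
                : "remote-replica";
            Console.WriteLine("VM " + backup.Id + " retrieves pending data from " + source + ".");
        }
    }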
In recent years, the Internet of Things (IoT) has developed rapidly and received much attention in academia and industry. In general, IoT terminal devices have limited computing capabilities because of constraints on deployment cost and energy consumption. Therefore, many applications running on IoT terminal devices must be offloaded to remote clouds for processing.
However, the conventional cloud computing paradigm is insufficient because offloading applications to remote clouds requires multi-hop information transfer across wide area networks (WANs), which causes problems for latency-sensitive applications such as real-time IoT analytics.
We present a model-based planning method for finding an asymptotically optimal VM allocation policy. Different from other SMDP-based VM allocation methods, we present the modeling and solution methods for a generic semi-Markov decision problem. To handle the challenge in training the state transition probabilities and the expected time intervals between adjacent decision epochs, we degrade the generic SMDP into a continuous-time Markov decision process (CTMDP) to model the coordinated VM allocation problem under the assumption that the service requests and ongoing service completions follow a continuous-time Markov chain (CTMC). The relative value iteration (RVI) algorithm is used to obtain an asymptotically optimal VM allocation policy in the CTMDP model framework.
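As a concrete illustration of the RVI step, the sketch below runs relative value iteration on a generic average-reward MDP, assuming the CTMDP has already been uniformized into transition probabilities P[s, a, s'] and expected rewards R[s, a]; the reference state, tolerance, and iteration cap are illustrative assumptions rather than the paper's settings.

    using System;

    public static class RelativeValueIteration
    {
        // Returns the relative values h(s); on convergence, gain approximates the
        // long-run average reward of the optimal policy.
        public static double[] Solve(
            double[,,] P, double[,] R, out double gain,
            int refState = 0, double eps = 1e-6, int maxIter = 10000)
        {
            int nStates = R.GetLength(0), nActions = R.GetLength(1);
            var h = new double[nStates];
            gain = 0.0;

            for (int iter = 0; iter < maxIter; iter++)
            {
                var hNew = new double[nStates];
                for (int s = 0; s < nStates; s++)
                {
                    double best = double.NegativeInfinity;
                    for (int a = 0; a < nActions; a++)
                    {
                        double qsa = R[s, a];
                        for (int t = 0; t < nStates; t++)
                            qsa += P[s, a, t] * h[t];   // expected relative value of the next state
                        best = Math.Max(best, qsa);
                    }
                    hNew[s] = best;
                }

                // Normalize by the reference state so the relative values stay bounded.
                double offset = hNew[refState];
                for (int s = 0; s < nStates; s++) hNew[s] -= offset;
                gain = offset;                          // converges to the average reward (gain)

                // Stop when the span of successive changes falls below the tolerance.
                double maxDiff = double.NegativeInfinity, minDiff = double.PositiveInfinity;
                for (int s = 0; s < nStates; s++)
                {
                    double d = hNew[s] - h[s];
                    if (d > maxDiff) maxDiff = d;
                    if (d < minDiff) minDiff = d;
                }
                h = hNew;
                if (maxDiff - minDiff < eps) break;
            }
            return h;
        }
    }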
We propose a model-free RL method to solve the coordinated VM allocation problem. The average-reward RL algorithm based on the SMDP is used to explore and exploit an optimal VM allocation policy from the observed states and rewards, without training the state transition probabilities and the expected time intervals between adjacent decision epochs before execution.
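The following sketch shows one common form of such an average-reward SMDP learner (in the spirit of R-learning / SMART): Q(s, a) is updated from the observed reward, the sojourn time, and the next state, and the reward rate rho is estimated from greedy (non-exploratory) steps only. The tabular state/action encoding, learning rates, and epsilon-greedy exploration are assumptions, not the paper's exact algorithm.

    using System;

    public class SmdpQLearner
    {
        private readonly double[,] q;            // Q(s, a) estimates
        private readonly Random rng = new Random();
        private double totalReward, totalTime;   // for the average-reward (rho) estimate
        public double Rho { get; private set; }

        public SmdpQLearner(int nStates, int nActions)
        {
            q = new double[nStates, nActions];
        }

        public int ChooseAction(int state, double epsilon)
        {
            if (rng.NextDouble() < epsilon)
                return rng.Next(q.GetLength(1)); // explore
            return GreedyAction(state);          // exploit
        }

        public void Update(int s, int a, double reward, double tau, int sNext,
                           double alpha, bool greedyActionTaken)
        {
            // SMDP update: the continuing-time reward rate Rho is charged over the
            // sojourn time tau between the two decision epochs.
            double target = reward - Rho * tau + MaxQ(sNext);
            q[s, a] += alpha * (target - q[s, a]);

            // Update the average reward rate only on greedy steps (tau assumed > 0).
            if (greedyActionTaken)
            {
                totalReward += reward;
                totalTime += tau;
                Rho = totalReward / totalTime;
            }
        }

        private double MaxQ(int s)
        {
            double best = double.NegativeInfinity;
            for (int a = 0; a < q.GetLength(1); a++) best = Math.Max(best, q[s, a]);
            return best;
        }

        public int GreedyAction(int s)
        {
            int best = 0;
            for (int a = 1; a < q.GetLength(1); a++) if (q[s, a] > q[s, best]) best = a;
            return best;
        }
    }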
In this paper, we presented two SMDP-based coordinated VM allocation methods for a cloud-fog computing system. We analyzed the difficulty of training the state transition probabilities and the expected time intervals between adjacent decision epochs for a generic SMDP, and used the CTMDP model to simplify the generic SMDP. The relative value iteration algorithm was used to find an asymptotically optimal VM allocation policy.
To avoid the negative impact of the discrepancy between the assumption and the real model, the average reward reinforcement learning algorithm was leveraged to obtain an approximately optimal VM allocation policy. The simulation results show that the performance of the model-free reinforcement learning method can converge to a level similar to that of the model-based planning method and outperform the greedy VM allocation method.