This is the code for an unpublished paper. (Note: the paper title will be added once it is published.)
- Python 3.8
- SimPy — you can install it easily via pip (http://pypi.python.org/pypi/pip):
$ pip install -U simpy
- TensorFlow 1.x
To start, simply run the main function directly. The number of edge servers and users in the environment is adjusted through the global variable parameters, such as the simulation time simTime, the processing speed of the edge servers rho, and the edge server cache pool buffer (the cache pool is the queue from which a server reads the offloaded tasks). The offloading algorithm is selected by changing the strategy name in Offloading_Strategy.py.
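As a rough illustration of the setup above, the sketch below shows how the global parameters and name-based strategy selection might look. Only simTime, rho, and buffer are named in this README; every other name, value, and strategy here is a hypothetical placeholder, not the actual code of main.py or Offloading_Strategy.py.

```python
import random

# Hypothetical global parameters mirroring the README's description;
# real names/values in main.py may differ (simTime, rho, buffer are
# the ones the README itself mentions).
simTime = 1000      # total simulation time
rho = 2.0           # processing speed of an edge server
buffer = 50         # size of the edge server cache (queue) pool
NUM_SERVERS = 10    # number of participating edge servers
NUM_USERS = 100     # number of users

# Placeholder strategies, selected by name as Offloading_Strategy.py
# is described to do.
def random_strategy(task, servers):
    # Offload to a uniformly random edge server.
    return random.choice(servers)

def nearest_strategy(task, servers):
    # Offload to the closest edge server.
    return min(servers, key=lambda s: s["distance"])

STRATEGIES = {"random": random_strategy, "nearest": nearest_strategy}

# Changing this name swaps the offloading algorithm for the whole run.
offload = STRATEGIES["nearest"]
```

Keeping the strategies behind a single name-to-function table means the rest of the simulation never needs to know which algorithm is active.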
The simulation has two parts: Simulation_NONRL, without reinforcement learning, and Simulation_RL, with reinforcement learning. The fault module uses SimPy discrete-event simulation to represent the interruption of an edge server: when the server's resources fail instantaneously, a recovery interval elapses, and after recovery, priority is given to the tasks that were being executed before the interruption.
- ./dataset: contains the edgeResources-melbCBD.csv and users-melbdbd-generated.csv files from the EUA datasets.
- ./results: results of the Dueling DQN algorithm.
- ./userName: records the results of the different reinforcement learning algorithms with different numbers of edge servers and users.
- ./usermove: records the latitude and longitude of the users' random movements.
- Offloading_Strategy.py: the offloading strategy of the user.
- RL_DDQN.py: the Double Deep Q-Network algorithm.
- RL_DQN.py: the Deep Q-Network algorithm.
- RL_Dueling DQN.py: the Dueling Deep Q-Network algorithm.
- RL_PRDQN.py: the Deep Q-Network with prioritized experience replay.
- main.py: the task offloading main function with fault tolerance.
- sysMpdel.py: task generation, user movement, edge server resources, and edge server fault recovery.
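For orientation among the RL_* variants listed above, the core difference between the DQN and Double DQN update targets can be sketched as follows. The Q-values, reward, and discount factor here are made-up numbers for illustration, not values from this repository.

```python
import numpy as np

# Hypothetical Q-value rows for the next state from the two networks.
q_online = np.array([1.0, 3.0, 2.0])   # online (behavior) network
q_target = np.array([0.5, 1.5, 4.0])   # target network
reward, gamma = 1.0, 0.9

# Vanilla DQN target: the target network both picks and scores the action.
dqn_target = reward + gamma * q_target.max()

# Double DQN: the online network picks the action, the target network
# scores it, which reduces the overestimation bias of plain DQN.
a = int(q_online.argmax())
ddqn_target = reward + gamma * q_target[a]
```

Here the two targets disagree (4.6 vs 2.35) precisely because the target network's largest estimate is not for the action the online network would actually choose.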