Assign 5
Explanation: The Actor main network uses the current state plus added
random noise to select an action for exploration.
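The exploration mechanism described above can be sketched as a minimal toy in Python. The linear `actor` and the `noise_scale` parameter here are illustrative assumptions, not part of any specific DDPG implementation:

```python
import random

def select_action(actor, state, noise_scale=0.1):
    """Return the actor's action for the current state plus Gaussian
    exploration noise (DDPG-style exploration). Toy sketch only."""
    action = actor(state)                   # deterministic policy output
    noise = random.gauss(0.0, noise_scale)  # exploration noise
    return action + noise

# Hypothetical linear actor for illustration.
actor = lambda s: 0.5 * s
a = select_action(actor, 2.0, noise_scale=0.0)  # noise off -> deterministic 1.0
```

With `noise_scale=0.0` the policy is purely deterministic; a positive scale perturbs the action so nearby actions are also tried.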
Answer: c
Explanation: In a public cloud setup, the total cost is calculated by summing the
costs of various cloud instance types (on-demand, reserved, spot) together with
the edge node’s cost.
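The cost model in this explanation is a simple sum, which can be written directly. The function name and arguments below are illustrative assumptions:

```python
def total_cost(on_demand, reserved, spot, edge):
    """Total system cost in the public cloud setup: the costs of the
    on-demand, reserved, and spot instances plus the edge node's cost."""
    return on_demand + reserved + spot + edge

# Example: 4 + 2 + 1 (cloud instance types) + 3 (edge node) = 10
cost = total_cost(on_demand=4.0, reserved=2.0, spot=1.0, edge=3.0)
```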
4. What is the role of the Critic network in the Deep Deterministic Policy Gradient (DDPG)
algorithm?
a. To directly perform actions based on the policy.
b. To generate resource allocation policies independently.
c. To store experience in the replay pool.
d. To evaluate the Actor’s performance using a value function.
Answer: d
Explanation: The Critic network evaluates the Actor's performance by estimating
the value function, which guides the Actor’s policy updates.
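A minimal sketch of the Critic's role, assuming a toy linear value function (the weights and the averaging over sampled states are illustrative, not the actual DDPG update):

```python
def critic(state, action, w=(0.3, 0.7)):
    """Toy linear value function Q(s, a); weights are illustrative."""
    return w[0] * state + w[1] * action

def evaluate_actor(actor, states):
    """Score an actor by the average Q-value the critic assigns to the
    actions it picks over a sample of states."""
    qs = [critic(s, actor(s)) for s in states]
    return sum(qs) / len(qs)

score = evaluate_actor(lambda s: 0.5 * s, [1.0, 2.0])
```

In DDPG the Actor's parameters are then updated in the direction that raises this Q-estimate, which is how the Critic "guides the Actor's policy updates."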
5. What is the main goal of the resource allocation algorithms in cloud-edge computing?
a. To maximize the number of VMs allocated
b. To minimize the long-term cost of the system
c. To increase the computing time duration
d. To maximize the reward function
Answer: b
Explanation: The goal is to minimize the system's long-term cost, i.e., the
sum of the per-slot costs over all T time slots.
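The objective in this explanation can be sketched as follows; the candidate-policy dictionaries are a hypothetical representation used only for illustration:

```python
def long_term_cost(costs_per_slot):
    """Objective value: the sum of per-slot costs over the T time slots."""
    return sum(costs_per_slot)

def best_policy(candidate_policies):
    """Pick the candidate whose cost trajectory is cheapest overall."""
    return min(candidate_policies, key=lambda p: long_term_cost(p["costs"]))

candidates = [
    {"name": "steady", "costs": [3.0, 3.0, 3.0]},  # long-term cost 9
    {"name": "bursty", "costs": [1.0, 4.0, 3.0]},  # long-term cost 8
]
winner = best_policy(candidates)  # "bursty" wins despite a costlier slot
```

Note the objective is the sum over all slots, so a policy may be preferred even if it is more expensive in some individual slot.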
8. What does the Deep Deterministic Policy Gradient (DDPG) algorithm involve?
a. Only Actor networks.
b. Only Critic networks.
c. Both Actor and Critic networks.
d. Neither Actor nor Critic networks.
Answer: c
Explanation: DDPG involves both Actor and Critic networks to guide the
decision-making process in resource allocation.
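How the two networks interact can be sketched with a toy one-parameter actor and a finite-difference gradient; real DDPG uses neural networks and backpropagation, so everything below is an illustrative simplification:

```python
def ddpg_actor_step(actor_param, critic, state, lr=0.01, eps=1e-4):
    """One toy Actor update: nudge the actor parameter in the direction
    that increases the Critic's Q-value (finite-difference gradient
    ascent stands in for backpropagation)."""
    def q_of(theta):
        action = theta * state          # toy linear actor
        return critic(state, action)
    grad = (q_of(actor_param + eps) - q_of(actor_param - eps)) / (2 * eps)
    return actor_param + lr * grad      # gradient ascent on Q

# Hypothetical critic that prefers action 1.0.
critic = lambda s, a: -(a - 1.0) ** 2
updated = ddpg_actor_step(actor_param=0.0, critic=critic, state=1.0)
```

The Critic supplies the training signal (Q), and the Actor is the only component that actually outputs actions, matching options (d) and (a) of question 4.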
10. What is the Markov Decision Process (MDP) used for in resource allocation?
a. To model sequential decision-making problems.
b. To predict user demand.
c. To manage cloud costs.
d. To optimize edge node performance.
Answer: a
Explanation: The MDP is used to model the resource allocation problem as a
sequential decision-making process.
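The sequential structure of an MDP (state, action, reward, next state, repeated over a horizon) can be sketched as a generic rollout; the toy dynamics below are assumptions for illustration:

```python
def rollout(policy, transition, reward, s0, horizon):
    """Simulate an MDP: at each step the policy picks an action, a reward
    is collected, and the transition function yields the next state."""
    s, total = s0, 0.0
    for _ in range(horizon):
        a = policy(s)
        total += reward(s, a)
        s = transition(s, a)
    return total

# Toy example: state counts up by the action; reward equals current state.
total = rollout(policy=lambda s: 1,
                transition=lambda s, a: s + a,
                reward=lambda s, a: float(s),
                s0=0, horizon=3)  # rewards 0 + 1 + 2 = 3.0
```

In the resource-allocation setting, the state would capture current demand and allocations, the action a new allocation decision, and the reward the negative per-slot cost.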