Article

Multi-Agent Reinforcement Learning for Joint Cooperative Spectrum Sensing and Channel Access in Cognitive UAV Networks

1 Communication Measurement and Control Center, Chongqing University, Chongqing 400044, China
2 Faculty of Engineering, Bar Ilan University, Ramat Gan 5290002, Israel
3 School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2022, 22(4), 1651; https://doi.org/10.3390/s22041651
Submission received: 28 December 2021 / Revised: 5 February 2022 / Accepted: 7 February 2022 / Published: 20 February 2022
(This article belongs to the Section Sensor Networks)

Abstract

This paper studies the problem of distributed spectrum/channel access for cognitive radio-enabled unmanned aerial vehicles (CUAVs) that overlay upon primary channels. Under the framework of cooperative spectrum sensing and opportunistic transmission, a one-shot optimization problem for channel allocation, aiming to maximize the expected cumulative weighted reward of multiple CUAVs, is formulated. To handle the uncertainty due to the lack of prior knowledge about primary user activities as well as the lack of a channel-access coordinator, the original problem is cast into a competition and cooperation hybrid multi-agent reinforcement learning (CCH-MARL) problem in the framework of a Markov game (MG). Then, a value-iteration-based RL algorithm, which features upper confidence bound-Hoeffding (UCB-H) strategy searching, is proposed by treating each CUAV as an independent learner (IL). To address the curse of dimensionality, the UCB-H strategy is further extended with a double deep Q-network (DDQN). Numerical simulations show that the proposed algorithms are able to converge efficiently to stable strategies and significantly improve the network performance when compared with benchmark algorithms such as the vanilla Q-learning and DDQN algorithms.

1. Introduction

Recent years have witnessed the remarkable success of unmanned aerial vehicle (UAV) clusters in a variety of scenarios ranging from disaster relief to commercial applications of unmanned swarm operations [1,2]. As a backbone technology for UAV systems, communication protocol design for UAVs naturally receives intensive attention from both academia and industry [3,4]. However, due to the ad hoc nature of UAV networks, directly applying off-the-shelf wireless access protocols for vehicle-to-vehicle (V2V) communication becomes a difficult task, especially when the UAVs have to overlay upon the spectrum occupied by an existing infrastructure and ensure zero interference. In this regard, the adoption of cognitive radio (CR) technologies [5,6] into UAV systems becomes an appealing solution, since it not only avoids a series of problems caused by the rigid fixed-spectrum authorization model [5,6], but also has the potential to adapt to a complex and time-varying radio environment. Nevertheless, UAVs are typically constrained by their on-device computation capabilities, yet are required to quickly respond to radio environment changes with limited coordination. Therefore, designing an intelligent mechanism to efficiently perform spectrum sensing and distributed channel access becomes a challenge of vital importance.
So far, pioneering studies have established a number of different frameworks for spectrum sensing in CR networks [7,8,9]. For instance, in [7], an iterative signal compression filtering scheme is proposed to improve the spectrum sensing performance of CR-enabled UAV (CUAV) networks. Its core idea is to adaptively eliminate the primary user (PU) component in the identified sub-channel, and directly update the measured value to detect other active users. In [8], the space–time spectrum sensing problem for CUAV networks in a three-dimensional heterogeneous spectrum space is discussed. Spectrum detection is improved based on the fusion of sensing results over both the time domain and the space domain. In [9], aiming to reflect the dynamic topology change of the CUAV network, a clustering method based on the maximum and minimum distances between nodes is proposed to improve the performance of cooperative spectrum sensing. With these spectrum sensing methods, channel access schemes, such as channel rendezvous for opportunistic channel reservation [10] and one-shot optimization-based channel allocation [11], can be deployed for throughput-optimal allocation for CUAVs.
The above studies tackle the channel sensing and allocation problem in CUAV networks by assuming that the radio environment is static and that a centralized information aggregator (e.g., a leader UAV) exists. In practice, however, a CUAV network not only faces a time-varying channel environment, e.g., UAV–ground communication with multiple antennas incurs a 3D nonstationary geometry-based stochastic channel [12], and ultra-wideband communication follows the Saleh–Valenzuela time-varying statistical channel model [13], but is also deployed in an ad hoc manner. Therefore, it is necessary to develop a distributed sensing–allocation mechanism that incurs an affordable level of overhead due to V2V information exchange. For this reason, a series of distributed allocation mechanisms, in particular those based on reinforcement learning (RL), have been proposed to replace the traditional self-organizing schemes for spectrum sensing or channel access [6,14,15,16,17,18]. In [6], a novel Q-learning-based method is proposed for secondary users (SUs) to select cooperative sensing nodes using the discounted upper confidence bound (D-UCB) for strategy exploration, thereby reducing the number of sensing samples. In [14], a neighbor-based cooperative sensing mechanism using Q-learning is proposed for collaborative channel sensing by SUs. In [15], a robust joint sensing–allocation scheme is proposed based on RL to counter the impact of adversarial SUs (e.g., spectrum sensing data falsification attackers). Compared with these tabular-search-based RL methods, deep neural networks (e.g., the deep Q-network) are adopted for state-value approximation [16]. For the cooperative spectrum sensing problem [17], a multi-agent deep reinforcement learning method was adopted in which each secondary user learns an efficient sensing strategy from the sensing results to avoid interference to the primary users, and the upper confidence bound with a Hoeffding-style bonus is used to improve the efficiency of exploration. Furthermore, cooperative multi-agent RL (MARL) methods are proposed for dynamic spectrum sensing and aggregation [18], typically with the aim of maximizing the number of successful transmissions without interrupting PUs.
In summary, most of the existing studies treat the problems of high-precision spectrum sensing and dynamic access channel allocation separately. However, how to jointly optimize the cooperative channel sensing and spectrum access processes remains an open issue, especially in a time-varying radio environment. In addition, although a plethora of distributed algorithms (some based on RL [19,20,21]) have been proposed in the literature, most of them are subject to rigid assumptions and cannot be directly adopted by CUAV applications, which usually emphasize network/spectrum scalability or face real-world constraints such as limited sensing/signaling capabilities and limited energy/computation resources. These concerns naturally lead to the consideration of formulating the joint sensing-and-access problem from the perspective of the UAVs. As a result, the decision process of CUAVs may face more complex coupling of the sensing-and-access strategies when compared with purely cooperative methods. Based on these considerations, this paper investigates the semicompetitive channel-sensing-and-access problem in CUAV networks, where the spectrum sensing phase is organized cooperatively based on the exchange of binary sensing results. An MARL-based framework of strategy searching is proposed in the form of two distributed execution algorithms that address state-value representation differently. The main contributions of this paper are summarized as follows:
To coordinate the behaviors of various CUAVs for efficient utilization of idle spectrum resources of PUs, a CUAV channel exploration and utilization protocol framework based on sensing–fusion–transmission is proposed.
A problem maximizing the expected cumulative weighted rewards of CUAVs is formulated. Considering the practical constraints, i.e., the lack of prior knowledge about the dynamics of PU activities and the lack of a centralized access coordinator, the original one-shot optimization problem is reformulated into a Markov game (MG). A weighted composite reward function combining both the cost and utility for spectrum sensing and channel access is designed to transform the considered problem into a competition and cooperation hybrid multi-agent reinforcement learning (CCH-MARL) problem.
To tackle the CCH-MARL problem through a decentralized approach, UCB-Hoeffding (UCB-H) strategy searching and the independent learner (IL)-based Q-learning scheme are introduced. More specifically, UCB-H is introduced to achieve a trade-off between exploration and exploitation during the process of Q-value updating. Two decentralized algorithms with limited information exchange among the CUAVs, namely, the IL-based Q-learning with UCB-H (IL-Q-UCB-H) and the IL-based double deep Q-network with UCB-H (IL-DDQN-UCB-H), are proposed. The numerical simulation results indicate that the proposed algorithms are able to improve the network performance in terms of both sensing accuracy and channel utilization.
The rest of this paper is organized as follows. Section 2 presents the network model and formulates the problem from a centralized perspective. Section 3 casts the problem into the context of MARL, and Section 4 proposes the RL-based solutions for joint spectrum sensing and channel access. Simulation results and analyses are presented in Section 5. Section 6 concludes the paper.

2. System Model

2.1. Network Model

Consider a coexistence network scenario, as shown in Figure 1, where a cluster of N CUAVs try to access M orthogonal primary spectrum resources in an overlaying mode over the airspace of interest. Herein, the cluster CUAVs perform cooperative area sensing and data backhaul tasks (e.g., geological survey, target monitoring, etc.) [22,23]. The CUAVs perform cooperative spectrum sensing to opportunistically exploit the idle spectrum resources of the primary users (PUs). For our considered CUAV network, since the communication demands are mainly from the task cooperation among the CUAVs, the communication channels used by CUAVs are dominated by the line-of-sight (LoS) air-to-air (A2A) channels [24]. Meanwhile, due to the platooning characteristics of the CUAV cluster, the communication channels between any two CUAVs can be treated as quasi-static over the task period [25].
Due to limited hardware capabilities, we consider that a CUAV performs narrow-band spectrum sensing and can sense and access at most one single PU channel in a given time slot [14,17]. Meanwhile, it is possible that not all of the active PUs are within the sensing range of every CUAV. This results in poor reliability of the sensing result of a single CUAV, and thus cooperative sensing is desired for the CUAVs to improve the sensing performance collectively. Furthermore, we assume that the PU networks over different target frequency bands provide heterogeneous services to their users, such as data communications, radar, or other dynamic spectrum occupancy services. The heterogeneous channel bandwidth of PU channel m is denoted by $B_m$. In addition, we assume that PU services are bursty and can be described by a slotted (discrete-time) Markov process with two states (i.e., busy and idle) [18], as shown in Figure 2, with a pair of state transition probabilities $(\alpha_m, \beta_m)$.
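To make the occupancy model concrete, the following minimal Python sketch simulates one slot of the two-state Markov chain in Figure 2 for each PU channel. The mapping of $\alpha_m$ and $\beta_m$ to the idle-to-busy and busy-to-idle transitions, as well as the 0/1 state encoding, are assumptions for illustration, since the paper specifies the model only through Figure 2.

```python
import random

def step_pu_channel(state, alpha, beta):
    """Advance one PU channel by one slot of the two-state Markov chain.

    state: 0 = idle, 1 = busy (hypothetical encoding for this sketch).
    alpha: assumed P(idle -> busy); beta: assumed P(busy -> idle).
    """
    if state == 0:
        return 1 if random.random() < alpha else 0
    return 0 if random.random() < beta else 1

# Example: simulate M = 5 channels for a few slots (placeholder probabilities).
M = 5
alphas = [0.3] * M
betas = [0.3] * M
states = [random.randint(0, 1) for _ in range(M)]
for _ in range(3):
    states = [step_pu_channel(s, a, b) for s, a, b in zip(states, alphas, betas)]
    print(states)
```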

2.2. Framework of Channel Sensing and Access

To enable the coexistence of multiple CUAVs over a limited number of PU frequency bands, we need a protocol framework to coordinate the channel sensing-and-access behaviors of these CUAVs. We assume that the CUAVs are able to access and synchronize over a dedicated common control channel (CCC), i.e., $CH_0$ in Figure 3, over which the spectrum sensing results and channel selection decisions can be shared among the CUAVs. We also assume that the CUAVs operating on the same PU channel transmit with nonorthogonal spectrum sharing techniques. The processes of spectrum sensing and channel access are organized in time slots (see Figure 3). More specifically, each time slot of PU channel sensing and utilization by the CUAVs is divided into three consecutive sub-frames of sensing ($\tau_s$), cooperation ($\tau_c$), and access/transmission ($\tau_t$). At the beginning of the sensing sub-frame, the CUAVs decide which channels to sense and access by switching their transceiver operations to the corresponding channels. Note that in this sub-frame, some of the CUAVs may stay idle and select no channel. In the subsequent cooperation sub-frame, each CUAV broadcasts its own sensing result over the CCC in an orderly manner. Based on the received sensing results, each CUAV is able to perform local sensing-result fusion and obtain the same vector of state observations as the other CUAVs. The local fusion results are then used to decide whether or not to access in the last sub-frame.
We assume that the messages exchanged over the CCC are reliable (cf. [17]), and for cooperation, we assume that the same fusion rule, such as the "K-out-of-N" or "AND" rule [26,27], is adopted by all the CUAVs. This ensures that all the CUAVs obtain a consistent observation of the status (i.e., busy or idle) of the PU channels. Obviously, the more CUAVs participate in sensing the same channel, the higher the accuracy of the sensing result [28]. However, since the CUAVs access the same channel that they sense, this also leads to a higher congestion level over that PU channel. Therefore, the CUAVs need to develop a proper channel selection strategy to balance the spectrum sensing accuracy (i.e., to reduce the transmission failure probability) against the quality of transmissions (i.e., to avoid severe congestion over the selected channel).

2.3. Problem Formulation

Given the presented network model and the proposed access protocol, we know that the network performance is determined by the channel selection strategies of the CUAVs for joint channel sensing-and-access. Our goal is to find an appropriate approach to jointly reflect the system cost of cooperative spectrum sensing and the utility of successful transmissions. Furthermore, we aim to derive an optimal joint channel selection strategy of the CUAVs in the time-varying radio environment, such that the utility of the PU channels is maximized. Therefore, from a genie's (centralized) perspective, we can formulate the following centralized optimization problem for the considered CUAV network:
$$\max_{\{c_{n,m}^t\}} \; \mathbb{E}\!\left[\sum_{t=0}^{T}\sum_{n=1}^{N}\sum_{m=0}^{M} \gamma^t\, c_{n,m}^t\, r_{n,m}^t\right] \quad \text{s.t.} \quad \sum_{m=0}^{M} c_{n,m}^t \le 1,\;\; n = 1, \ldots, N, \qquad c_{n,m}^t \in \{0, 1\}, \tag{1}$$
where T is the total number of time slots of CUAV network operation. In practice, T is typically not known in advance. $c_{n,m}^t$ is the binary decision variable of CUAV n on PU channel m in time slot t, with $c_{n,m}^t = 1$ if CUAV n selects PU channel m to sense and access in time slot t. $r_{n,m}^t$ is the reward of CUAV n on PU channel m in time slot t, and is determined by the weighted sum of the sensing-access cost and the utility. For ease of discussion, we defer the detailed definition of $r_{n,m}^t$ to Section 3. Obviously, $r_{n,m}^t = 0$ if $c_{n,m}^t = 0$. $\gamma \in (0, 1)$ is the reward discount factor that translates future rewards into the reward at $t = 0$ [29].
In addition, the expectation $\mathbb{E}[\cdot]$ is taken over the PU channel evolution model (see also Figure 2). Without the expectation $\mathbb{E}[\cdot]$, (1) degrades to a one-shot, NP-hard binary programming problem. However, in the real world, the PU channel evolution model is not known in advance, and it is impractical to assign a centralized coordinator in the CUAV cluster due to the constraints on on-device computation/signaling capabilities. Therefore, in the following, we reformulate the static problem in (1) as a CCH-MARL problem based on an MG, and then resort to MARL-based algorithms to derive the channel selection strategies of the CUAVs.
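To make the structure of (1) concrete, the short sketch below evaluates the discounted objective for one candidate binary allocation $\{c_{n,m}^t\}$ given a hypothetical realized reward tensor; it only illustrates the per-CUAV constraint and the $\gamma^t$ weighting, not a solution method, and all numbers are placeholders.

```python
import numpy as np

def discounted_objective(c, r, gamma):
    """Evaluate the objective of (1) for a binary allocation c and rewards r.

    c, r: arrays of shape (T, N, M+1); c[t, n, m] in {0, 1}.
    Returns None if the per-CUAV constraint sum_m c[t, n, m] <= 1 is violated.
    """
    T = c.shape[0]
    if (c.sum(axis=2) > 1).any():          # each CUAV selects at most one channel
        return None
    discounts = gamma ** np.arange(T)       # gamma^t for t = 0..T-1
    return float((discounts[:, None, None] * c * r).sum())

# Toy example with random rewards (illustration only).
T, N, M = 10, 4, 5
rng = np.random.default_rng(0)
c = np.zeros((T, N, M + 1), dtype=int)
c[:, :, 1] = 1                              # every CUAV always picks channel 1
r = rng.normal(size=(T, N, M + 1))
print(discounted_objective(c, r, gamma=0.9))
```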

3. Problem Modeling Based on MARL

3.1. Markov Game-Based Problem Formulation

Before proceeding to the reformulation of the considered problem, we provide the definition of MG as follows.
Definition 1
(Markov game [30]). An MG is defined by a six-tuple $\langle \mathcal{N}, \mathcal{S}, \{\mathcal{A}_n\}_{n \in \mathcal{N}}, P, \{r_n\}_{n \in \mathcal{N}}, \gamma \rangle$, where
  • $\mathcal{N} = \{1, \ldots, N\}$ is the set of agents.
  • $\mathcal{S}$ is the state space observed consistently by all agents.
  • $\mathcal{A}_n$ is the action space of agent n, and the joint action space of all the agents is $\mathcal{A} := \mathcal{A}_1 \times \cdots \times \mathcal{A}_N$.
  • $P: \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ is the transition probability from any state $s \in \mathcal{S}$ to any state $s' \in \mathcal{S}$ for any given joint action $a = (a_1, a_2, \ldots, a_N) \in \mathcal{A}$.
  • The reward function $r_n: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ determines the instant reward received by agent n in the controlled Markov process from $(s, a)$ to $s'$.
  • $\gamma \in [0, 1]$ is the reward discount factor.
Based on Definition 1, we are able to map the considered optimization problem from (1) into the following MG:
  • Agent set: $\mathcal{N}$ consists of the N CUAVs (agents), i.e., $\mathcal{N} = \{1, \ldots, N\}$.
  • State space: the state space $\mathcal{S}$ of the MG is defined as
    $$\mathcal{S} = \big\{ s^t = (s_0^t, \ldots, s_M^t, o_1^t, \ldots, o_M^t) \big\}, \tag{2}$$
    where $s_m^t \in \{0, 1, \ldots, N\}$ is the number of CUAVs that selected PU channel m to sense and access in the previous time slot. In particular, $s_0^t$ is the number of CUAVs that did not select any PU channel. Since each CUAV can select at most one single PU channel for sensing-and-access, $\sum_{m=0}^{M} s_m^t = N$. $o_m^t \in \{0, 1\}$ is the observed occupancy state of PU channel m in the previous time slot. Following (2), the size of the state space is $|\mathcal{S}| = 2^M \cdot (M+1)^N$. (A sketch of how this state vector can be assembled is given after this list.)
  • Action space: the action space of CUAV n is $\mathcal{A}_n = \{0, 1, \ldots, M\}$. Let $a_n^t \in \mathcal{A}_n$ denote the PU channel selected by agent n in time slot t, where $a_n^t = 0$ indicates that no channel is selected. The joint action space $\mathcal{A} = \prod_{n=1}^{N} \mathcal{A}_n$ is the Cartesian product over all the CUAVs, and the joint action in time slot t is $a^t = (a_1^t, \ldots, a_N^t) \in \mathcal{A}$.
  • State transition probability: P consists of the transition maps $P(s' | s, a)$ for all $s, s' \in \mathcal{S}$ and $a \in \mathcal{A}$. Note that for the elements $o_m \to o_m'$ of a transition, the transition probability is determined by the two-state Markov process shown in Figure 2.
  • Reward function: the reward $r_n^{t+1}$ of CUAV n is observed in time slot $t+1$ after the CUAVs take a joint action $a^t$. The details of $r_n^{t+1}$ are presented in the next subsection.
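As announced in the state-space bullet above, the following sketch assembles the state vector of (2) from the previous slot's channel selections and the observed PU occupancy; the encoding of "no channel selected" as action 0 follows the action-space definition, while the function name is purely illustrative.

```python
from collections import Counter

def build_state(actions, occupancy, M):
    """Build the state vector of (2) from the previous slot.

    actions: list of a_n in {0, ..., M} (0 = no channel selected).
    occupancy: list of o_m in {0, 1} for channels 1..M.
    Returns (s_0, ..., s_M, o_1, ..., o_M) as a tuple.
    """
    counts = Counter(actions)
    s = tuple(counts.get(m, 0) for m in range(M + 1))
    return s + tuple(occupancy)

# Example: N = 4 CUAVs, M = 5 channels.
print(build_state([0, 2, 2, 5], [1, 0, 1, 1, 0], M=5))
# -> (1, 0, 2, 0, 0, 1, 1, 0, 1, 1, 0)
```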

3.2. Definition of CUAVs’ Reward Function

Let m and $\mathcal{N}_m^{t+1}$ denote the PU channel selected by CUAV n (i.e., $a_n^t = m$) and the set of CUAVs selecting the same channel in time slot $t+1$, respectively. For the considered CUAV network, the reward of each CUAV is defined by the weighted sum of the cost of its spectrum exploration (spectrum sensing) and the utility obtained from channel utilization (channel access). The reward $r_n^{t+1}$ for CUAV n is defined as
$$r_n^{t+1}(s^{t+1}, s^t, a^t) = \begin{cases} -E_{ss,n}^{t+1}, & \text{if } a_n^t = m,\; o_m^{t+1} = d_m^{t+1} = 1, \\ -E_{ss,n}^{t+1} - E_{dt,n}^{t+1}, & \text{if } a_n^t = m,\; o_m^{t+1} = 1,\; d_m^{t+1} = 0, \\ -\eta E_{ss,n}^{t+1} - \mu E_{dt,n}^{t+1} + (1 - \eta - \mu) R_n^{t+1}, & \text{if } a_n^t = m,\; o_m^{t+1} = d_m^{t+1} = 0, \\ -\eta E_{ss,n}^{t+1} - (1 - \eta) R_n^{t+1}, & \text{if } a_n^t = m,\; o_m^{t+1} = 0,\; d_m^{t+1} = 1, \\ 0, & \text{if } a_n^t = 0, \end{cases} \tag{3}$$
where $d_m^{t+1} \in \{0, 1\}$ is the sensing fusion result of the cooperating CUAVs over PU channel m in time slot $t+1$. $d_m^{t+1}$ is a function of $a^t$, i.e., $d_m^{t+1} = f(a^t)$, and the form of $f(\cdot)$ is determined by the adopted sensing fusion rule. We note that due to inevitable missed detections and false alarms [14], the real PU channel state $o_m^{t+1}$ may not be consistent with the sensing fusion result $d_m^{t+1}$, which gives rise to the first four cases in (3). In (3), $E_{ss,n}^{t+1}$ and $E_{dt,n}^{t+1}$ are the spectrum sensing and channel access costs of CUAV n, respectively. More specifically, the cost of sensing/access is mainly incurred by the energy consumption of the transceiver for spectrum sensing and data transmission. $R_n^{t+1}$ is the reward corresponding to the amount of successfully transmitted data during time slot $t+1$ for CUAV n. $\eta \in (0, 1)$ and $\mu \in (0, 1)$ are the weighting factors for the spectrum sensing and channel access costs, respectively. The five cases in (3) are further explained as follows:
(i)
If PU channel m is busy and the sensing fusion result is the same, i.e., $o_m^{t+1} = d_m^{t+1} = 1$, the reward of CUAV n is solely determined by the spectrum sensing cost $E_{ss,n}^{t+1}$.
(ii)
If PU channel m is busy but the sensing fusion result leads to a missed detection, i.e., $o_m^{t+1} = 1$, $d_m^{t+1} = 0$, CUAV n's reward is determined by the sum of the spectrum sensing cost $E_{ss,n}^{t+1}$ and the cost of the failed data transmission, $E_{dt,n}^{t+1}$.
(iii)
If PU channel m is idle and the sensing fusion result is the same, i.e., $o_m^{t+1} = d_m^{t+1} = 0$, CUAV n's reward is determined by the weighted sum of the sensing cost, $E_{ss,n}^{t+1}$, the cost of data transmission, $E_{dt,n}^{t+1}$, and the utility of successful transmission, $R_n^{t+1}$.
(iv)
If PU channel m is idle but the fusion result leads to a false alarm, i.e., $o_m^{t+1} = 0$, $d_m^{t+1} = 1$, the reward of CUAV n is determined by the weighted sum of the spectrum sensing cost $E_{ss,n}^{t+1}$ and the lost transmission utility $R_n^{t+1}$.
(v)
If CUAV n does not select any PU channel, i.e., $a_n^t = 0$, the reward is 0.
Furthermore, we adopt the following forms of $E_{ss,n}^{t+1}$, $E_{dt,n}^{t+1}$, and $R_n^{t+1}$ in (3):
  • Spectrum sensing cost: $E_{ss,n}^{t+1}$ for CUAV n in time slot $t+1$ is defined as the energy consumed for spectrum sensing, namely, a function proportional to the working voltage $V_{DD}$ of the receiver, the bandwidth $B_m$ of the sensed channel, and the sensing duration $\tau_{s,n}$ [31]:
    $$E_{ss,n}^{t+1} = \tau_{s,n} V_{DD}^2 B_m. \tag{4}$$
  • Data transmission cost: $E_{dt,n}^{t+1}$ for CUAV n in time slot $t+1$ is defined as the energy consumed for data transmission during the time slot,
    $$E_{dt,n}^{t+1} = \tau_{t,n}\, p_{t,n}, \tag{5}$$
    where $\tau_{t,n}$ and $p_{t,n}$ are the data transmission duration and the transmit power, respectively. $\tau_{s,n}$, $\tau_{t,n}$, and $p_{t,n}$ are assumed to be the same for all the CUAVs, i.e., $\tau_{s,n} = \tau_s$, $\tau_{t,n} = \tau_t$, $p_{t,n} = p_t$, $\forall n \in \mathcal{N}$.
  • Transmission utility: $R_n^{t+1}$ for CUAV n in time slot $t+1$ (cf. Cases iii and iv) is measured as the amount of data transmitted over the time slot. We consider that the quality of transmission is evaluated by the throughput over the selected channel under co-channel interference:
    $$R_n^{t+1} = \tau_t B_m \log_2\!\big(1 + SINR_{n,m}^{t+1}\big), \tag{6}$$
    where $SINR_{n,m}^{t+1}$ is the received signal-to-interference-plus-noise ratio (SINR) of CUAV n over its selected PU channel m, expressed as
    $$SINR_{n,m}^{t+1} = \frac{g_{n,m}\, p_t}{\sum_{j \in \mathcal{N}_m^{t+1}, j \ne n} g_{j,m}^{n}\, p_t + \sigma^2}, \tag{7}$$
    where $\sigma^2$ is the noise power, $g_{n,m}$ is the channel gain of CUAV n on PU channel m, and $g_{j,m}^{n}$ is the channel gain between CUAV j and CUAV n on PU channel m. As mentioned earlier, with the platooning of the CUAV cluster, the channel gains among the CUAVs can be considered quasi-static over the period of interest. $\sum_{j \in \mathcal{N}_m^{t+1}, j \ne n} g_{j,m}^{n} p_t$ is the co-channel interference from the other CUAVs sharing the same PU channel m. Since the spatial positions and the transmitting–receiving relationships of the CUAVs over the same channel are not necessarily the same, the channel gains between different CUAVs differ, and thus the SINRs of the received signals of the CUAVs are also different.
Finally, we examine the impact of the fusion rule on the sensing fusion result $d_m^{t+1} = f(a^t)$ in (3). In this paper, the "K-out-of-N" spectrum sensing fusion rule [26] is adopted to obtain the final spectrum sensing fusion result, namely,
$$d_m^{t+1} = \begin{cases} 1, & \text{if } \sum_{i \in \mathcal{N}_m^{t+1}} \mathbf{1}\{d_{i,m}^{t+1} = 1\} \ge K, \\ 0, & \text{otherwise}, \end{cases} \tag{8}$$
where $\mathbf{1}\{A = B\}$ is the indicator function taking the value 1 if the condition $A = B$ is true and 0 otherwise. In particular, for (8), if $K = 1$, the "K-out-of-N" rule degrades to the "OR" rule, while if $K = N$, it becomes the "AND" rule [26]. We assume that the observation of each CUAV follows an independent, stationary observation process on the binary Markov process in Figure 2.
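The sketch below illustrates how the "K-out-of-N" fusion of (8) and the five-case reward of (3) could be computed in code, together with the cost and utility terms (4)-(7). The placeholder constants (working voltage, channel gains, noise power) are not the paper's exact values, and the negative signs on the cost terms follow the textual description of cases (i)-(iv) above.

```python
import math

def fuse_k_out_of_n(local_decisions, K):
    """'K-out-of-N' fusion rule of (8): channel declared busy if at least
    K of the cooperating CUAVs report busy."""
    return 1 if sum(local_decisions) >= K else 0

def reward(a_n, o_m, d_m, E_ss, E_dt, R_n, eta, mu):
    """Per-CUAV reward following the five cases of (3).
    Costs enter negatively, as in the textual description."""
    if a_n == 0:
        return 0.0
    if o_m == 1 and d_m == 1:       # busy, correctly detected: sensing cost only
        return -E_ss
    if o_m == 1 and d_m == 0:       # missed detection: sensing + wasted transmission
        return -E_ss - E_dt
    if o_m == 0 and d_m == 0:       # idle, correctly detected: weighted cost + utility
        return -eta * E_ss - mu * E_dt + (1 - eta - mu) * R_n
    return -eta * E_ss - (1 - eta) * R_n   # false alarm: lost transmission opportunity

# Illustrative numbers (placeholders only).
B_m, V_dd, tau_s, tau_t, p_t, sigma2 = 50e6, 1.0, 1e-4, 5e-4, 0.2, 1e-9
E_ss = tau_s * V_dd**2 * B_m                      # (4)
E_dt = tau_t * p_t                                # (5)
sinr = (1e-7 * p_t) / (1e-8 * p_t + sigma2)       # (7) with made-up gains
R_n = tau_t * B_m * math.log2(1 + sinr)           # (6)
d_m = fuse_k_out_of_n([1, 0, 1], K=2)
print(d_m, reward(a_n=1, o_m=1, d_m=d_m, E_ss=E_ss, E_dt=E_dt, R_n=R_n, eta=0.01, mu=0.05))
```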

3.3. MARL Algorithm Framework

When the model of the state transitions in the established MG is unknown to the CUAVs, we aim to learn to optimize the long-term statistical performance of the CUAV network. From the perspective of a single CUAV n, the problem of social optimization in (1) is transformed into the following local optimization problem for each $n \in \mathcal{N}$:
$$\max_{\pi_n} \; v_n(s^0, \pi_n, \pi_{-n}) = \sum_{t=0}^{+\infty} \gamma^t\, \mathbb{E}\big(r_n^{t+1} \,\big|\, \pi_n, \pi_{-n}, s^0\big), \tag{9}$$
where the value of the discount factor $\gamma$ reflects how strongly future rewards affect the current decision. $\pi_{-n}$ denotes the joint policy of all the CUAVs except CUAV n. $v_n(s^0, \pi_n, \pi_{-n})$ is the value function for the given initial state $s^0$ and joint policy $(\pi_n, \pi_{-n})$. Herein, the policy of CUAV n is defined as $\pi_n: \mathcal{S} \to \Delta(\mathcal{A}_n)$, where $\Delta(\mathcal{A}_n)$ is the set of probability distributions over CUAV n's action space $\mathcal{A}_n$. $\pi_n(a_n^t | s^t)$ in $\pi_n(s^t) = \{\pi_n(a_n^t | s^t) \,|\, a_n^t \in \mathcal{A}_n\}$ is the probability that CUAV n chooses action $a_n^t$ in state $s^t$ during time slot t, with $\pi_n(a_n^t | s^t) \in [0, 1]$. In this MARL process, each CUAV aims to find a strategy $\pi_n$ that maximizes its average cumulative discounted reward, given the (implicit) impact of the adversary strategies of the other CUAVs.
It is known that, without considering the influence of the other CUAVs' actions, the solution of (9) is a fixed point of the following Bellman equation, and an iterative search method can be used to find it:
$$v_n(s^0, \pi_n^*) = \max_{a_n^t \in \mathcal{A}_n} \Big\{ r_n^{t+1}(s^t, a_n^t) + \gamma \sum_{s^{t+1}} P(s^{t+1} | s^t, a_n^t)\, v_n(s^{t+1}, \pi_n^*) \Big\}, \tag{10}$$
where $r_n^{t+1}(s^t, a_n^t)$ is the instant reward of CUAV n if it takes action $a_n^t$ in system state $s^t$ at time slot t, and $P(s^{t+1} | s^t, a_n^t)$ is the state transition probability described in Section 3.1.
Based on (10), the classical Q-learning method [29] can be adopted by each CUAV to approximate the solution of (10) by treating the adversary CUAVs as part of a stationary environment. The Q-function is then updated as
$$q_n^{t+1}(s^t, a_n^t) \leftarrow (1 - \alpha^t)\, q_n^t(s^t, a_n^t) + \alpha^t \Big[ r_n^{t+1}(s^t, a_n^t) + \gamma \max_{a} q_n^t(s^{t+1}, a) \Big], \tag{11}$$
where $q_n^{t+1}(s^t, a_n^t)$ is the estimated state–action value at $t+1$ if CUAV n takes action $a_n^t$ in state $s^t$, and $\alpha^t \in (0, 1)$ is the time-varying learning rate. It is proved in [32] that if $\sum_{t=1}^{\infty} \alpha^t = \infty$ and $\sum_{t=1}^{\infty} (\alpha^t)^2 < \infty$, and the assumption of a stationary environment holds, the iterative sequence based on Equation (11) converges to the optimal Q-value, provided that each state–action pair is visited sufficiently often.
Based on (10), we now consider the impact of the adversary policies on the performance of CUAV n explicitly. Let $\pi = (\pi_n, \pi_{-n})$, and let $a_{-n}^t$ denote the actions of all the CUAVs except CUAV n in time slot t. Then, (9) can be rewritten as follows:
$$\max_{\pi_n} \; v_n(s^0, (\pi_n, \pi_{-n})) = \max_{\pi_n} \sum_{t=0}^{+\infty} \gamma^t\, \mathbb{E}\big( r_n^{t+1}(s^t, (\pi_n, \pi_{-n})) \,\big|\, s^0, (\pi_n, \pi_{-n}) \big). \tag{12}$$
With (12), for $s^0 \in \mathcal{S}$, each CUAV searches for the optimal $\pi_n$ to maximize its value function $v_n(s^0, (\pi_n, \pi_{-n}))$, given the stationary adversary policy $\pi_{-n}$. The joint solution of (12) for all $n \in \mathcal{N}$ leads to a Nash equilibrium (NE), which is mathematically defined as follows.
Definition 2
(Nash equilibrium [30]). An NE of the MG $\langle \mathcal{N}, \mathcal{S}, \{\mathcal{A}_n\}_{n \in \mathcal{N}}, P, \{r_n\}_{n \in \mathcal{N}}, \gamma \rangle$ (as given in Definition 1) is a joint policy $\pi^* = (\pi_n^*, \pi_{-n}^*)$ such that, for any $s^0 \in \mathcal{S}$ and $n \in \mathcal{N}$,
$$v_n(s^0, (\pi_n^*, \pi_{-n}^*)) \ge v_n(s^0, (\pi_n, \pi_{-n}^*)), \quad \forall \pi_n. \tag{13}$$
Although there always exists an NE for discounted MGs [33], guaranteeing the convergence to an NE through decentralized learning without exchanging the reward/policy information still remains an open problem. To tackle our considered problem in a decentralized manner, we leverage the idea of IL [22], and propose a Q-learning-based algorithm and a DDQN-based algorithm in Section 4. Fortunately, we are able to show the convergence of the proposed algorithms through numerical simulations in Section 5.

4. Algorithm Design Based on Independent Learner

In this section, we introduce an exploration strategy based on UCB-H, with which we develop two MARL algorithms in the IL framework. The information exchange overhead and execution complexity of the proposed algorithms are also discussed.

4.1. UCB-H Strategy

The main aim of introducing a UCB-based action exploration strategy is to avoid the drawbacks of the traditional $\epsilon$-greedy strategy, which imposes no preference for actions that are nearly greedy or particularly uncertain [29]. The original UCB strategy was proposed for the multi-armed bandit scenario without discerning the underlying state evolution [29]:
$$a_n^t = \arg\max_{a} \left[ Q_n^t(a) + c \sqrt{\frac{\ln t}{N_n^t(a)}} \right], \tag{14}$$
where $N_n^t(a)$ is the number of times that action a has been selected prior to time slot t, and $c > 0$ controls the degree of exploration. With (14), actions with lower estimated values, or actions that have already been selected frequently, are selected with decreasing frequency over time [29]. For our channel selection problem, a modification is needed to replace $N_n^t(a)$ with the number of times the state–action pair $(s^t, a^t)$ has been selected.
For our studied problem, we introduce the UCB-H strategy to achieve a trade-off between action exploration and exploitation (cf. [17,34]). Specifically, it also helps to balance a CUAV's strategy between preferring cooperation during sensing and incurring more competition, with more interference, in channel access. Based on (14), the Q-value update in (11) now becomes
$$Q_n^{t+1}(s^t, a^t) \leftarrow (1 - \alpha^t)\, Q_n^t(s^t, a^t) + \alpha^t \Big[ r_n^{t+1} + \gamma \max_{a^{t+1}} Q_n^t(s^{t+1}, a^{t+1}) + b^t \Big], \tag{15}$$
where
$$b^t = c \sqrt{\frac{H^3 \ln(|\mathcal{S}||\mathcal{A}| T / p)}{N_n^t(s^t, a^t)}}. \tag{16}$$
In (15) and (16), $\alpha^t$ is the time-varying learning rate. $b^t$ is the confidence bonus indicating how certain the algorithm is about the current state–action pair. $N_n^t(s^t, a^t)$ is the number of times that the state–action pair $(s^t, a^t)$ has been visited prior to time slot t. T is the total number of time slots of CUAV network operation. p is an arbitrarily small value ensuring that the total regret of the learning process is upper-bounded by $\mathcal{O}\big(\sqrt{H^4 |\mathcal{S}||\mathcal{A}| T \ln(|\mathcal{S}||\mathcal{A}| T / p)}\big)$ with probability $1 - p$. H is the number of steps in each episode of an episodic Markov decision process (MDP); $H = 1$ in the general MDP considered here [17,34].
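A minimal sketch of the UCB-H exploration step defined by (14)-(16) is given below. Treating a zero visit count as one on the first visit and the specific toy numbers are assumptions for illustration only.

```python
import math
import numpy as np

def ucb_h_bonus(visit_count, c, H, S_size, A_size, T, p):
    """Confidence bonus b_t of (16)."""
    n = max(visit_count, 1)                       # avoid division by zero on first visit
    return c * math.sqrt(H**3 * math.log(S_size * A_size * T / p) / n)

def select_action_ucb_h(Q_row, visits_row, c, H, S_size, A_size, T, p):
    """Pick the action maximizing Q plus bonus for the current state (cf. (14)-(16))."""
    scores = [q + ucb_h_bonus(n, c, H, S_size, A_size, T, p)
              for q, n in zip(Q_row, visits_row)]
    return int(np.argmax(scores))

# Toy example: 6 actions (M = 5 channels plus 'no channel'), N = 4 CUAVs.
Q_row = np.zeros(6)
visits = np.array([3, 1, 0, 5, 2, 1])
print(select_action_ucb_h(Q_row, visits, c=2, H=1,
                          S_size=2**5 * 6**4, A_size=6, T=10000, p=0.01))
```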

4.2. IL-Q-UCB-H Algorithm

By treating the other CUAVs as part of the environment, the IL-Q-UCB-H algorithm can be developed based on standard Q-learning with UCB-H. This essentially approximates the original MARL problem in the MG by a group of single-agent RL problems, as shown in Figure 4.
For ease of generalization, we provide in (17) the traditional IL-Q algorithm that adopts ϵ -greedy strategies for action selection. UCB-H can be conveniently incorporated into (17) by modifying the temporal difference term therein, as in (15).
$$Q_n^{t+1}(s^t, a^t) \leftarrow (1 - \alpha^t)\, Q_n^t(s^t, a^t) + \alpha^t \Big[ r_n^{t+1} + \gamma \max_{a^{t+1}} Q_n^t(s^{t+1}, a^{t+1}) \Big]. \tag{17}$$
For each $n \in \mathcal{N}$, we set the learning rate of IL-Q uniformly as [24]
$$\alpha^t = \frac{1}{(t + c_\alpha)^{\varphi_\alpha}}, \tag{18}$$
where $c_\alpha > 0$ and $\varphi_\alpha \in (0.5, 1]$. For either (15) or (17), the action is obtained through a tabular search:
$$a_n^{t+1} = \arg\max_{a \in \mathcal{A}_n} Q_n^{t+1}(s^{t+1}, a). \tag{19}$$
In summary, the IL-Q-UCB-H algorithm based on standard IL-Q learning is described in Algorithm 1.
Algorithm 1: IL-Q-UCB-H algorithm.
1: Initialize: Set t = 0; choose $p \in (0, 1)$, $c > 0$, $c_\alpha > 0$, $\varphi_\alpha \in (0.5, 1]$; set the maximum number of time slots T;
2: for all agents $n \in \mathcal{N}$ do
3:    Initialize $Q_n^t(s^t, a^t) = 0$ and $s^0$;
4: end for
5: while t < T do
6:    for all agents $n \in \mathcal{N}$ do
7:       Update the learning rate $\alpha^t$ according to (18);
8:       Select an action $a_n^t$ at $s^t$ according to (19);
9:       Take action $a_n^t$ to select a channel for spectrum sensing and produce the local sensing decision $d_{n,m}^{t+1}$;
10:      Broadcast the sensing information $D_n^t = \{n, a_n^t, d_{n,m}^{t+1}\}$ on the CCC;
11:      Receive the sensing fusion decision $d_m^{t+1}$ according to (8);
12:      Access the channel based on the sensing fusion decision, receive the reward $r_n^{t+1}$ according to (3), and observe $s^{t+1}$;
13:      Update $Q_n^{t+1}(s^t, a^t)$ according to (15);
14:   end for
15:   t = t + 1 and $s^t \leftarrow s^{t+1}$;
16: end while
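For reference, the following condensed Python sketch collects the per-CUAV steps of Algorithm 1 (learning-rate schedule (18), UCB-H bonus (16), action selection (19), and Q-update (15)) into a single independent-learner class. The interaction with the environment (channel sensing, fusion over the CCC, and the reward of (3)) is assumed to be handled by a separate simulator, and clamping the learning rate to at most 1 is an added safeguard not stated in the paper.

```python
from collections import defaultdict
import numpy as np

class ILQUCBHAgent:
    """Tabular IL-Q-UCB-H learner for one CUAV (condensed view of Algorithm 1)."""

    def __init__(self, n_actions, c=2.0, p=0.01, c_alpha=0.5, phi_alpha=0.8,
                 gamma=0.9, T=10000, S_size=1, H=1):
        self.Q = defaultdict(lambda: np.zeros(n_actions))       # Q-table, keyed by state
        self.N = defaultdict(lambda: np.zeros(n_actions))       # visit counts
        self.c, self.p, self.gamma, self.T, self.H = c, p, gamma, T, H
        self.c_alpha, self.phi_alpha = c_alpha, phi_alpha
        self.S_size, self.A_size = S_size, n_actions

    def bonus(self, s, a):                                      # confidence bonus (16)
        n = max(self.N[s][a], 1)
        return self.c * np.sqrt(
            self.H**3 * np.log(self.S_size * self.A_size * self.T / self.p) / n)

    def act(self, s):                                           # greedy over Q + bonus (19)
        scores = self.Q[s] + np.array([self.bonus(s, a) for a in range(self.A_size)])
        return int(np.argmax(scores))

    def update(self, s, a, r, s_next, t):                       # (15) with rate (18)
        alpha = min(1.0, 1.0 / (t + self.c_alpha) ** self.phi_alpha)  # clamp (safeguard)
        target = r + self.gamma * self.Q[s_next].max() + self.bonus(s, a)
        self.Q[s][a] = (1 - alpha) * self.Q[s][a] + alpha * target
        self.N[s][a] += 1
```

In a full simulation, each of the N CUAVs would own one such learner, and the loop of Algorithm 1 would call act() and update() once per time slot.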

4.3. IL-DDQN-UCB-H Algorithm

The proposed IL-Q-UCB-H algorithm requires each CUAV to construct a Q-table of size $|\mathcal{S}| \times |\mathcal{A}_n|$. With an increasing number of PU channels, the IL-Q-UCB-H algorithm thus faces the curse of dimensionality. To handle this problem, we adopt the framework of DDQN [35] to approximate the value space with deep neural networks, which turns the IL-Q-UCB-H algorithm into the IL-DDQN-UCB-H algorithm. Compared with the vanilla DQN algorithm, the core of the IL-DDQN-UCB-H algorithm decomposes the maximization operation into a current neural network for action selection and a target neural network for action evaluation [35]. The main functional components [18] are illustrated in Figure 5, and each component is described in detail as follows.
Input Layer: The input of the DDQN is a vector of size $(2M + 1)$, corresponding to the system state $s^t = (s_0^t, \ldots, s_M^t, o_1^t, \ldots, o_M^t)$ in time slot t, where the first $M + 1$ values correspond to the number of CUAVs that select each PU channel to sense (or no PU channel), and the last M values indicate the occupancy state of each PU channel.
Output Layer: The output of the DDQN is a vector of size $(M + 1)$, corresponding to the Q-value estimates of all optional actions given the current system state, i.e., $Q_n^t = [Q_{n,0}^t, Q_{n,1}^t, \ldots, Q_{n,M}^t]$.
Experience Replay: In the DDQN, the experience replay component stores the accumulated historical samples in the form of experience tuples $(s^t, a_n^t, r_n^{t+1}, s^{t+1})$, composed of the current state $s^t$, the action $a_n^t$, the reward $r_n^{t+1}$, and the next state $s^{t+1}$. During the learning process, the agent randomly samples a batch of B experience tuples from the experience replay to fit the deep network to the Q-values, aiming to eliminate the temporal correlation of the historical samples.
Current Q-Network: The current Q-network (i.e., the deep neural network fitting the Q-table) realizes the mapping of the input state $s^t$ to the corresponding Q-value $Q_n^{t+1}(s^t, a^{t+1}; \theta_n^t)$ of each action, where $\theta_n^t$ denotes the parameters of the current Q-network. The experience tuples are mainly used to train the current Q-network and update its parameters $\theta_n^t$ until convergence. After training, an action is selected based on the output Q-values.
Target Q-Network: The target Q-network has the same structure as the current Q-network, and the same initial parameters. Its output target Q-value $Q_n^{t+1}(s^t, a^{t+1}; \hat{\theta}_n^t)$ is mainly used to supervise the iterative training of the current Q-network, where $\hat{\theta}_n^t$ denotes the parameters of the target Q-network. In the DDQN, $\hat{\theta}_n^t$ is updated after a fixed number F of training rounds by directly assigning the value of $\theta_n^t$ to $\hat{\theta}_n^t$, which is known as the fixed Q-target technique.
Action Selection Strategy: To prevent the action selection from falling into a local optimum during the unconverged training stage of the deep neural network, the greedy strategy over the estimated Q-values is adopted for action selection (cf. (19)):
$$a_n^{t+1} = \arg\max_{a \in \mathcal{A}_n} Q_n^{t+1}(s^{t+1}, a; \theta_n^t). \tag{20}$$
Loss Function: The loss function used to train the current Q-network is defined as
$$L_n^t(\theta_n^t) = \frac{1}{B} \sum_{i=1}^{B} \big( y_{n,i} - Q_{n,i}^t(s^t, a^t; \theta_n^t) \big)^2, \tag{21}$$
where B is the batch size and $y_{n,i}$ is the target Q-value. With UCB-H, the target Q-value is computed as
$$a_n^{t, \max} = \arg\max_{a} Q_{n,i}^t(s^{t+1}, a; \theta_n^t), \tag{22}$$
with
$$y_{n,i} = r_{n,i}^{t+1} + \gamma\, Q_{n,i}^t(s^{t+1}, a_n^{t, \max}; \hat{\theta}_n^t) + b^t. \tag{23}$$
We note that the loss function is the mean square error between the target Q-value produced with the target Q-network and the output of the current Q-network. Given the value of the loss function, the gradient descent method is used to update $\theta_n^t$ iteratively, i.e.,
$$\theta_n^{t+1} \leftarrow \theta_n^t - \zeta \nabla_{\theta_n^t} L_n^t(\theta_n^t), \tag{24}$$
with learning rate $\zeta$. The gradient $\nabla_{\theta_n^t} L_n^t(\theta_n^t)$ is calculated following (25):
$$\nabla_{\theta_n^t} L_n^t(\theta_n^t) = \nabla_{\theta_n^t} \frac{1}{B} \sum_{i=1}^{B} \big( y_{n,i} - Q_{n,i}^t(s^t, a^t; \theta_n^t) \big)^2. \tag{25}$$
For the considered CUAV network, the framework of the IL-DDQN-UCB-H algorithm is given in Algorithm 2 based on the aforementioned functional components.
Algorithm 2: IL-DDQN-UCB-H algorithm.
1: Initialize: Set t = 0; choose $\gamma \in (0, 1)$, $p \in (0, 1)$, $c > 0$; set the maximum number of time slots T, the experience replay size C, the batch size B, the target Q-network update period F, and the DDQN learning rate $\zeta$;
2: for all agents $n \in \mathcal{N}$ do
3:    Randomly initialize the current Q-network parameters $\theta_n^t$, the target Q-network parameters $\hat{\theta}_n^t$, and $s^0$;
4: end for
5: while t < T do
6:    for all agents $n \in \mathcal{N}$ do
7:       Select an action $a_n^t$ at $s^t$ according to (20);
8:       Take action $a_n^t$ to select a channel for spectrum sensing and produce the local sensing decision $d_{n,m}^{t+1}$;
9:       Broadcast the sensing information $D_n^t = \{n, a_n^t, d_{n,m}^{t+1}\}$ on the CCC;
10:      Receive the sensing fusion decision $d_m^{t+1}$ according to (8);
11:      Access the channel based on the sensing fusion decision, receive the reward $r_n^{t+1}$ according to (3), and observe $s^{t+1}$;
12:      Store $(s^t, a_n^t, r_n^{t+1}, s^{t+1})$ in the experience replay;
13:      if t > C then
14:         Remove the oldest experience tuples from the experience replay;
15:      end if
16:      Randomly select a batch of B experience tuples $(s^t, a_n^t, r_n^{t+1}, s^{t+1})$ from the experience replay;
17:      Calculate the loss function $L_n^t(\theta_n^t)$ according to (21) and (25);
18:      Update the parameters $\theta_n^t$ according to (24);
19:      if t mod F = 0 then
20:         $\hat{\theta}_n^t \leftarrow \theta_n^t$;
21:      end if
22:   end for
23:   t = t + 1 and $s^t \leftarrow s^{t+1}$;
24: end while
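The key difference from Algorithm 1 is how the regression target of the current Q-network is formed. The sketch below isolates that step, i.e., the double-Q target of (22)-(23) with the UCB-H bonus and the mean square error loss of (21). The two "networks" are stand-in callables and the toy linear parameterization is purely illustrative, not the paper's architecture.

```python
import numpy as np

def ddqn_ucb_h_targets(batch, q_current, q_target, gamma, bonus):
    """Compute the DDQN-UCB-H regression targets of (22)-(23) for a minibatch.

    batch: list of (s, a, r, s_next) experience tuples.
    q_current(s), q_target(s): callables returning a vector of Q-values over
    actions for state s (stand-ins for the two neural networks).
    bonus(s, a): UCB-H confidence bonus b_t of (16).
    """
    targets = []
    for s, a, r, s_next in batch:
        a_max = int(np.argmax(q_current(s_next)))               # selected by current net (22)
        y = r + gamma * q_target(s_next)[a_max] + bonus(s, a)   # evaluated by target net (23)
        targets.append((s, a, y))
    return targets

def mse_loss(targets, q_current):
    """Mean square error of (21) between targets and current-network predictions."""
    errs = [(y - q_current(s)[a]) ** 2 for s, a, y in targets]
    return sum(errs) / len(errs)

# Toy example with linear 'networks' over a 3-dimensional state.
w_cur, w_tgt = np.ones((6, 3)), 0.9 * np.ones((6, 3))
q_cur = lambda s: w_cur @ np.asarray(s, dtype=float)
q_tgt = lambda s: w_tgt @ np.asarray(s, dtype=float)
batch = [((1, 0, 1), 2, 0.5, (0, 1, 1))]
ts = ddqn_ucb_h_targets(batch, q_cur, q_tgt, gamma=0.9, bonus=lambda s, a: 0.1)
print(mse_loss(ts, q_cur))
```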

4.4. Algorithm Complexity Analysis

  • IL-Q-UCB-H algorithm: Since each CUAV executes the IL-Q-UCB-H algorithm independently, its information interaction overhead is mainly caused by broadcasting its own sensing decision, and the amount of exchanged information increases linearly with the number of CUAVs. For algorithm execution, each CUAV needs to store a Q-table of size $|\mathcal{S}| \times |\mathcal{A}_n| = 2^M (M+1)^N \times (M+1)$, which grows exponentially with the numbers of CUAVs and PU channels. The computational cost for each CUAV is dominated by the linear update of the Q-table and the search for the optimal action, both of which have constant time complexity per time slot.
  • IL-DDQN-UCB-H algorithm: The cost of information exchange is the same as in the IL-Q-UCB-H algorithm. For algorithm execution, since a deep neural network is used to fit the Q-values, the storage cost mainly depends on the structure of the deep neural network. Since the IL-DDQN-UCB-H algorithm involves updating two Q-networks, the computational complexity at the training stage depends on the neural network structure (i.e., the number of network parameters).

5. Simulation and Analysis

In this section, the performance of the proposed algorithms is evaluated in the same CUAV network through numerical simulations. Specifically, the experiments are carried out with respect to several indicators, including the average reward, the sensing accuracy, and the channel utilization. The average reward is evaluated as the average instant reward of all the CUAVs, $\bar{r}^{t+1} = \frac{1}{N} \sum_{n=1}^{N} r_n^{t+1}$. The sensing accuracy is evaluated as $acc = (N_{acc}^t / M) \times 100\%$, where $N_{acc}^t$ is the number of PU channels over which the sensing fusion produces a correct observation of the channel state. The channel utilization is evaluated as $uti = (N_{uti}^t / M) \times 100\%$, where $N_{uti}^t$ is the number of PU channels selected by the CUAVs in time slot t. The main parameters used throughout the simulations are given in Table 1. The binary Markov model for PU activities is randomly initialized as $(\alpha_m, \beta_m)$, $m = 1, \ldots, M$. The hyperparameters of all the RL algorithms are given in Table 2. The learning rate $\alpha^t$ is initialized as 0.9.
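For clarity, the three performance indicators could be computed per time slot as in the following minimal sketch; the toy inputs are placeholders and the function names are illustrative.

```python
def average_reward(rewards):
    """Average instant reward over all CUAVs in one slot."""
    return sum(rewards) / len(rewards)

def sensing_accuracy(fused, true_states):
    """Percentage of PU channels whose fused decision matches the true state."""
    correct = sum(int(d == o) for d, o in zip(fused, true_states))
    return 100.0 * correct / len(true_states)

def channel_utilization(actions, M):
    """Percentage of the M PU channels selected by at least one CUAV."""
    selected = {a for a in actions if a != 0}
    return 100.0 * len(selected) / M

print(average_reward([0.2, -0.1, 0.4, 0.3]),
      sensing_accuracy([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]),
      channel_utilization([0, 2, 2, 5], M=5))
```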
To demonstrate that the proposed algorithms are able to handle the network congestion, the simulations in Figure 6 and Figure 7 evaluate the average reward and sensing accuracy for two cases of N = 4 , M = 5 and N = 6 , M = 5 .
We observe from Figure 6a that all four algorithms are able to converge with sufficient training epochs. We note that the two IL-DDQN algorithms obtain a higher average reward than the two IL-Q algorithms. The reason is that the DDQN not only reduces the correlation of the sampled data, but also prevents overfitting, handling the excessive state–action space more efficiently. At the same time, the UCB-H-enabled algorithms achieve higher average rewards than their $\epsilon$-greedy counterparts. This indicates that the UCB-H strategy is able to avoid both the performance degradation caused by the randomness of $\epsilon$-greedy exploration and the local optimality caused by insufficient exploration when a myopic strategy is used to select actions.
Figure 6b evaluates the sensing accuracy of the four algorithms with N = 4, M = 5. It can be seen that, similar to Figure 6a, the performance of IL-Q-UCB-H and IL-DDQN-UCB-H is also better than that of the $\epsilon$-greedy IL-Q and IL-DDQN. In addition, the $\epsilon$-greedy-enabled algorithms fluctuate more severely in the early stage of training. The reason is that the Q-values under the $\epsilon$-greedy strategy differ little at the early stage, which makes the agents select actions almost randomly. The UCB-H-enabled algorithms are relatively smooth in the early stage of training, thanks to the confidence bonus, which makes the Q-values discernible. In summary, Figure 6 shows that the proposed IL-DDQN-UCB-H algorithm achieves the best performance, in terms of both the average reward and the sensing accuracy, when the number of CUAVs is smaller than the number of PU channels and congestion does not occur.
Figure 7 shows the performance in terms of the average reward and sensing accuracy of the four algorithms with N = 6, M = 5. As can be seen from the figure, the UCB-H-enabled algorithms achieve better performance under congestion. In addition, comparing Figure 6b and Figure 7b, we note that when there are more CUAVs, the sensing accuracy increases by 10% to 15%. This demonstrates the efficiency of the cooperative sensing mechanism.
A further illustration of the trade-off between the sensing accuracy and network congestion is provided by Figure 8 with N = 10. It can be seen that the performance of CUAV cooperation is significantly better than that of non-cooperation. In particular, the sensing accuracy of the IL-DDQN-UCB-H algorithm in the cooperative scenario reaches 97%. At the same time, the achieved average reward under cooperation is lower than in the cases of N = 4 or N = 6, which indicates that the improved accuracy may not fully compensate for the degradation of transmission due to congestion.
Considering the cases where some CUAVs do not select a channel for sensing and access in every time slot, another simulation is performed with the channel utilization as the performance indicator. Figure 9 shows the channel utilization of the four algorithms. It can be seen that all four algorithms achieve a channel utilization of more than 42%, and in particular the IL-DDQN-UCB-H algorithm reaches a channel utilization of 49%. This shows that the proposed cooperative sensing and access algorithms can find idle PU channels in time and significantly improve the channel utilization.
We note from Section 3.2 that there are four situations in which a CUAV senses and accesses a PU channel, and the obtained reward depends on the channel bandwidth in all four of them. This is mainly reflected in the spectrum sensing cost and the achievable data transmission volume (utility). By the definition of the reward function, the spectrum sensing cost $E_{ss,n}^{t+1}$ enters the reward negatively and grows with the channel bandwidth, while $R_n^{t+1} > 0$ also grows with the channel bandwidth; as the bandwidth increases, the absolute values of both the cost and the utility increase, so the net effect on the reward depends on which term dominates. The simulation analyzes the relationship between the average reward and the PU channel bandwidth, with $B_m \in \{50, 60, 70, 80, 90, 100\}$ MHz; the result is shown in Figure 10. It can be found that as the channel bandwidth increases, the system average reward also increases. This indicates that the cost of sensing a larger bandwidth can be compensated by the utility gained from channel utilization. Namely, choosing PU channels with large bandwidths to construct the set of candidate sensing channels generally leads to better performance of the CUAV network.
The average reward of the four algorithms under different PU channel state transition probabilities is analyzed with $(\alpha_m, \beta_m)$ varying as $\alpha_m = \beta_m \in \{0.1, 0.3, 0.5, 0.7, 0.9\}$. Figure 11 shows that when the state transition probabilities $(\alpha_m, \beta_m)$ increase from 0.1 to 0.5, the average reward decreases, whereas when they increase further from 0.5 to 0.9, the average reward increases. As shown in Figure 2, the randomness of the PU channel state is small when $(\alpha_m, \beta_m)$ are either very large or very small. In this situation, the CUAVs estimate the PU channel states more accurately based on historical experience, and greater rewards can be obtained from the resulting decisions. However, the PU channel state transition is highly random when $(\alpha_m, \beta_m)$ are around 0.5. In this situation, decisions based on the historical experience of the CUAVs become less reliable, so both the reward and the sensing accuracy decrease.

6. Conclusions

In this paper, the problem of joint spectrum sensing and channel access for a CUAV communication network in a time-varying radio environment was studied. In a situation where the information about the primary network dynamics is not known in advance, a competition–cooperation protocol framework was proposed for CUAVs to implicitly cooperate over the channels to sense and access. An MG-based model was introduced to translate the centralized one-shot network optimization problem into a group of MARL problems that locally optimize the cumulative sensing–transmission reward of each CUAV. To avoid excessive information exchange overhead for channel cooperation, an independent Q-learning algorithm and an independent DDQN algorithm were proposed to approximate the equilibrium strategies of the MG. The proposed learning algorithms were improved with the UCB-H-based action–exploration strategy. Numerical simulation results showed that the proposed algorithms can increase the system average reward, sensing accuracy, and channel utilization efficiently.

Author Contributions

Conceptualization, W.J., W.Y. and T.H.; methodology, W.Y.; software, W.Y.; validation, W.Y. and W.J.; formal analysis, W.J., W.Y. and W.W.; investigation, W.Y.; resources, W.Y.; data curation, W.Y.; writing—original draft preparation, W.J. and W.Y.; writing—review and editing, W.J., W.W. and T.H. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by National Natural Science Foundation of China (Grant No. 62001067) and Pre-research Fund Project (Grant No. 61405180409).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the editor and reviewers for providing helpful suggestions for improving the quality of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ye, L.; Zhang, Y.; Li, Y.; Han, S. A Dynamic Cluster Head Selecting Algorithm for UAV Ad Hoc Networks. In Proceedings of the 2020 International Wireless Communications and Mobile Computing (IWCMC), Limassol, Cyprus, 15–19 June 2020; pp. 225–228.
  2. Wu, F.; Zhang, H.; Wu, J.; Song, L. Cellular UAV-to-device communications: Trajectory design and mode selection by multi-agent deep reinforcement learning. IEEE Trans. Commun. 2020, 68, 4175–4189.
  3. Ma, Z.; Ai, B.; He, R.; Wang, G.; Niu, Y.; Yang, M.; Wang, J.; Li, Y.; Zhong, Z. Impact of UAV rotation on MIMO channel characterization for air-to-ground communication systems. IEEE Trans. Veh. Technol. 2020, 69, 12418–12431.
  4. Jingnan, L.; Pengfei, L.; Kai, L. Research on UAV communication network topology based on small world network model. In Proceedings of the 2017 IEEE International Conference on Unmanned Systems (ICUS), Beijing, China, 27–29 October 2017; pp. 444–447.
  5. Liu, X.; Sun, C.; Zhou, M.; Wu, C.; Peng, B.; Li, P. Reinforcement learning-based multislot double-threshold spectrum sensing with Bayesian fusion for industrial big spectrum data. IEEE Trans. Ind. Inform. 2020, 17, 3391–3400.
  6. Ning, W.; Huang, X.; Yang, K.; Wu, F.; Leng, S. Reinforcement learning enabled cooperative spectrum sensing in cognitive radio networks. J. Commun. Netw. 2020, 22, 12–22.
  7. Xu, W.; Wang, S.; Yan, S.; He, J. An efficient wideband spectrum sensing algorithm for unmanned aerial vehicle communication networks. IEEE Internet Things J. 2018, 6, 1768–1780.
  8. Shen, F.; Ding, G.; Wang, Z.; Wu, Q. UAV-based 3D spectrum sensing in spectrum-heterogeneous networks. IEEE Trans. Veh. Technol. 2019, 68, 5711–5722.
  9. Nie, R.; Xu, W.; Zhang, Z.; Zhang, P.; Pan, M.; Lin, J. Max-min distance clustering based distributed cooperative spectrum sensing in cognitive UAV networks. In Proceedings of the ICC 2019 - 2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019; pp. 1–6.
  10. Feng, P.; Bai, Y.; Huang, J.; Wang, W.; Gu, Y.; Liu, S. CogMOR-MAC: A cognitive multi-channel opportunistic reservation MAC for multi-UAVs ad hoc networks. Comput. Commun. 2019, 136, 30–42.
  11. Liang, X.; Xu, W.; Gao, H.; Pan, M.; Lin, J.; Deng, Q.; Zhang, P. Throughput optimization for cognitive UAV networks: A three-dimensional-location-aware approach. IEEE Wirel. Commun. Lett. 2020, 9, 948–952.
  12. Zhu, Q.; Wang, Y.; Jiang, K.; Chen, X.; Zhong, W.; Ahmed, N. 3D non-stationary geometry-based multi-input multi-output channel model for UAV-ground communication systems. IET Microw. Antennas Propag. 2019, 13, 1104–1112.
  13. Khawaja, W.; Ozdemir, O.; Erden, F.; Guvenc, I.; Matolak, D.W. Ultra-Wideband Air-to-Ground Propagation Channel Characterization in an Open Area. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 4533–4555.
  14. Lunden, J.; Kulkarni, S.R.; Koivunen, V.; Poor, H.V. Multiagent reinforcement learning based spectrum sensing policies for cognitive radio networks. IEEE J. Sel. Top. Signal Process. 2013, 7, 858–868.
  15. Chen, H.; Zhou, M.; Xie, L.; Wang, K.; Li, J. Joint spectrum sensing and resource allocation scheme in cognitive radio networks with spectrum sensing data falsification attack. IEEE Trans. Veh. Technol. 2016, 65, 9181–9191.
  16. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533.
  17. Zhang, Y.; Cai, P.; Pan, C.; Zhang, S. Multi-agent deep reinforcement learning-based cooperative spectrum sensing with upper confidence bound exploration. IEEE Access 2019, 7, 118898–118906.
  18. Li, Y.; Zhang, W.; Wang, C.X.; Sun, J.; Liu, Y. Deep reinforcement learning for dynamic spectrum sensing and aggregation in multi-channel wireless networks. IEEE Trans. Cogn. Commun. Netw. 2020, 6, 464–475.
  19. Cai, P.; Zhang, Y.; Pan, C. Coordination Graph-Based Deep Reinforcement Learning for Cooperative Spectrum Sensing under Correlated Fading. IEEE Wirel. Commun. Lett. 2020, 9, 1778–1781.
  20. Lo, B.F.; Akyildiz, I.F. Reinforcement learning for cooperative sensing gain in cognitive radio ad hoc networks. Wirel. Netw. 2013, 19, 1237–1250.
  21. Zhang, M.; Wang, L.; Feng, Y. Distributed cooperative spectrum sensing based on reinforcement learning in cognitive radio networks. AEU-Int. J. Electron. Commun. 2018, 94, 359–366.
  22. Kaur, A.; Kumar, K. Energy-efficient resource allocation in cognitive radio networks under cooperative multi-agent model-free reinforcement learning schemes. IEEE Trans. Netw. Serv. Manag. 2020, 17, 1337–1348.
  23. Nobar, S.K.; Ahmed, M.H.; Morgan, Y.; Mahmoud, S. Resource Allocation in Cognitive Radio-Enabled UAV Communication. IEEE Trans. Cogn. Commun. Netw. 2021; in press.
  24. Cui, J.; Liu, Y.; Nallanathan, A. Multi-agent reinforcement learning-based resource allocation for UAV networks. IEEE Trans. Wirel. Commun. 2019, 19, 729–743.
  25. Chandrasekharan, S.; Gomez, K.; Al-Hourani, A.; Kandeepan, S.; Rasheed, T.; Goratti, L.; Reynaud, L.; Grace, D.; Bucaille, I.; Wirth, T.; et al. Designing and implementing future aerial communication networks. IEEE Commun. Mag. 2016, 54, 26–34.
  26. Chen, Z.; Qiu, R.C. Cooperative spectrum sensing using Q-learning with experimental validation. In Proceedings of the 2011 IEEE Southeastcon, Nashville, TN, USA, 17–20 March 2011; pp. 405–408.
  27. Han, W.; Li, J.; Tian, Z.; Zhang, Y. Efficient cooperative spectrum sensing with minimum overhead in cognitive radio. IEEE Trans. Wirel. Commun. 2010, 9, 3006–3011.
  28. Abdi, N.; Yazdian, E.; Hoseini, A.M.D. Optimum number of secondary users in cooperative spectrum sensing methods based on random matrix theory. In Proceedings of the 2015 5th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 29 October 2015; pp. 290–294.
  29. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018.
  30. Zhang, K.; Yang, Z.; Başar, T. Multi-agent reinforcement learning: A selective overview of theories and algorithms. In Handbook of Reinforcement Learning and Control; Springer: Berlin/Heidelberg, Germany, 2021; pp. 321–384.
  31. Zhang, X.; Shin, K.G. E-MiLi: Energy-minimizing idle listening in wireless networks. IEEE Trans. Mob. Comput. 2012, 11, 1441–1454.
  32. Watkins, C.J.; Dayan, P. Q-learning. Mach. Learn. 1992, 8, 279–292.
  33. Filar, J.; Vrieze, K. Competitive Markov Decision Processes; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012.
  34. Jin, C.; Allen-Zhu, Z.; Bubeck, S.; Jordan, M.I. Is Q-learning provably efficient? arXiv 2018, arXiv:1807.03765.
  35. Van Hasselt, H.; Guez, A.; Silver, D. Deep reinforcement learning with double Q-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30.
  36. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
Figure 1. The network structure of CUAVs coexisting with PUs.
Figure 2. Occupancy state transition diagram of PU channel m.
Figure 3. Structure of one time slot for the joint channel sensing and access protocol.
Figure 4. IL-Q-UCB-H of CUAV n for joint sensing and access.
Figure 5. IL-DDQN-UCB-H of CUAV n for joint sensing and access.
Figure 6. Evolution of the average reward and the sensing accuracy with training (N = 4, M = 5).
Figure 7. Evolution of the average reward and the sensing accuracy with training (N = 6, M = 5).
Figure 8. Evolution of the average reward and the sensing accuracy of the proposed algorithms in cooperative and non-cooperative scenarios (N = 10, M = 5).
Figure 9. Evolution of the channel utilization of the four algorithms (N = 10, M = 5).
Figure 10. Evolution of the average reward of the four algorithms with different bandwidths (N = 4, M = 5).
Figure 11. Evolution of the average reward of the four algorithms with different PU state transition probabilities (N = 4, M = 5).
Table 1. Simulation parameters.

Parameter | Value
Number of PU channels, M | 5
Number of CUAVs, N | 4, 6, 10
Channel bandwidth, B_m | 50–100 MHz
False alarm probability, P_f | 0.1 [17]
Detection probability, P_d | 0.9
Transmit power, p_t | 23 dBm [24]
Sensing time, τ_s | 0.1 ms
Transmission time, τ_t | 0.5 ms
Weights of sensing/access cost, η, μ | 0.01, 0.05
Table 2. Hyperparameters of the RL algorithms.

Hyperparameter | Value
Greedy rate, ε | 0.1
Discount factor, γ | 0.9
Learning-rate parameters, c_α, φ_α | 0.5, 0.8 [24]
UCB-H parameters, p, c | 0.01, 2 [17]
Parameters of the CNN | (2, 2, 10)
Activation function | ReLU [19]
Optimizer | Adam [36]
Batch size, B | 64
Target Q-network update period, F | 100
Experience replay size, C | 20,000
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Jiang, W.; Yu, W.; Wang, W.; Huang, T. Multi-Agent Reinforcement Learning for Joint Cooperative Spectrum Sensing and Channel Access in Cognitive UAV Networks. Sensors 2022, 22, 1651. https://doi.org/10.3390/s22041651