Article

Finite-Time Passivity and Synchronization for a Class of Fuzzy Inertial Complex-Valued Neural Networks with Time-Varying Delays

School of Information Engineering, Wuhan Business University, Wuhan 430056, China
Axioms 2024, 13(1), 39; https://doi.org/10.3390/axioms13010039
Submission received: 12 December 2023 / Revised: 29 December 2023 / Accepted: 3 January 2024 / Published: 7 January 2024
(This article belongs to the Special Issue Control Theory and Control Systems: Algorithms and Methods)
Figure 1. Blue lines show the transient behavior of Re(x_k(t)), k = 1, 2, and red lines show the transient behavior of Im(x_k(t)), k = 1, 2, of FICVNNs (65).
Figure 2. Blue lines show the transient behavior of Re(v_k(t)), k = 1, 2, and red lines show the transient behavior of Im(v_k(t)), k = 1, 2, of FICVNNs (65).
Figure 3. Blue lines show the state trajectories of Re(x_k(t)), Re(v_k(t)), k = 1, 2, and red lines show the state trajectories of Im(x_k(t)), Im(v_k(t)), k = 1, 2, of FICVNNs (65) without control.
Figure 4. Curves of the error states e_k(t), z_k(t), external input I_k(t), and output g_k(t), k = 1, 2, under controller (19).
Figure 5. Trajectories of the error states e_k(t), z_k(t), k = 1, 2, under controller (19).
Figure 6. Trajectories of the errors e_k(t), z_k(t), k = 1, 2, with controller (19).
Figure 7. Synchronization curves of the error states e_k(t), z_k(t), k = 1, 2, with controller (19).
Figure 8. PRNG produced by FICVNNs.
Figure 9. Original signals.
Figure 10. Encrypted signals.

Abstract

This article investigates finite-time passivity for fuzzy inertial complex-valued neural networks (FICVNNs) with time-varying delays. First, based on existing passivity theory, several related definitions of finite-time passivity are given. Then, by adopting a reduced-order method and separating the complex-valued parameters into real and imaginary parts, the proposed FICVNNs are transformed into first-order real-valued neural network systems. Moreover, appropriate controllers and the Lyapunov functional method are employed to establish the finite-time passivity of FICVNNs with time delays. Furthermore, some essential conditions are established to ensure finite-time synchronization of finite-time passive FICVNNs. Finally, corresponding simulations verify the feasibility of the proposed theoretical results.

1. Introduction

Neural networks, as an active branch of non-linear systems, have attracted the interest of experts and scholars due to their fruitful applications, including artificial intelligence, pattern recognition, associative memories, etc. Many valuable related results have been reported in recent years [1,2,3].
However, plenty of existing articles mainly concentrate on the first derivative of the states rather than the inertial term, i.e., the second derivative of the voltage with respect to time. Hence, based on Hopfield networks, inertial neural networks (INNs) were put forward by Babcock. As mentioned in [4], the dynamical behaviors of second-derivative neural networks can be considerably more sophisticated. In recent years, some corresponding results on inertial neural networks (INNs) have been reported [5,6,7,8,9]. Tu et al. [10] discussed global dissipativity for memristor-based neutral-type INNs using Filippov theory and the LMI approach. In [11], Shanmugasundaram et al. introduced an event-triggered impulsive control mechanism to guarantee synchronization for INNs. Zhang and Cao [12] obtained finite-time synchronization of INNs using inequality techniques. As for the asymptotic stabilization problem, Han et al. [13] used a direct method to analyze Cohen–Grossberg INNs and constructed two adaptive controllers so that the proposed model realizes asymptotic and adaptive stabilization. In [14], the authors were concerned with global exponential convergence for impulsive INNs and presented an exponential convergence ball with a specified convergence rate.
Admittedly, time delays can easily provoke undesirable and unexpected dynamical behaviors, and diverse types of time delays have been considered, including proportional delays [15,16], time-varying delays [17,18], and mixed delays [19,20]. Moreover, the evolution of neural network systems depends not only on the current state but also on the past state or the variations of the past state. From both theoretical and practical viewpoints, time delays have been deeply researched for INNs in terms of stabilization [21,22], passivity [23,24], synchronization [11,12,16,25,26], dissipativity [27,28], and so forth.
In many circumstances, complex-valued neural networks (CVNNs), in which the activation functions, connection parameters, neuron states, and so forth are all complex-valued, are required [29,30,31,32,33,34,35]. Consequently, CVNNs can be regarded as an extension of real-valued networks. The original intention of investigating CVNNs is to explore novel dynamic behaviors and to overcome problems that real-valued networks cannot describe [36]. Furthermore, inertial complex-valued neural networks (ICVNNs) have become a hot theme that has attracted researchers' attention, especially regarding stability and synchronization [37]. Tang and Jian [38] established exponential convergence for impulsive ICVNNs by developing novel delay-dependent conditions. In [39], the authors emphasized a non-reduced-order method that treats ICVNNs holistically to discuss exponential and adaptive synchronization by constructing a complex-valued feedback control input. Long, Zhang et al. studied the finite-time stabilization and fixed-time synchronization of ICVNNs by utilizing Lyapunov theory and inequality techniques and applied the theoretical results in practice [40,41].
As a typical and essential topic, fuzzy logic has been extensively employed because it can approximate non-linear functions with arbitrary accuracy, which can be viewed as a potential method by which to emulate human thinking and sensation [42]. Hence, combining fuzzy logic with neural network systems has received high attention [43]. On the other hand, as [9] reveals, stabilization and synchronization of many practical engineering systems are required within a finite time, which renders previous results on asymptotic stabilization and synchronization control inoperative. Therefore, it is vital to shorten the convergence time to achieve finite-time synchronization of complex-valued neural networks. In [44], the authors dealt with the fixed-time stabilization of fuzzy inertial neural networks (FINNs). The synchronization of FINNs has been addressed in [16,19]. Xiao et al. studied passivity and passification for FINNs on time scales, inspired by the LMI method and analytical approaches [45]. Furthermore, the authors of [46] introduced fuzzy rules into CVNNs and established a class of fuzzy inertial complex-valued neural networks (FICVNNs) to solve the adaptive synchronization problem.
Passivity is a powerful tool for investigating the internal stability of non-linear systems. It originates from circuit analysis methods and has received much attention in engineering fields and dynamical neural networks, and some related passivity problems for neural networks have been published [23,24,33,35,45,47,48,49]. The authors in [24,35,45,48,49] studied the passivity problem for different types of neural networks. Huang et al. [33] further studied passivity issues for CVNNs with coupled weights. Motivated by the above analysis and based on the existing passivity theory, when discussing FICVNNs with time-varying delays, some related questions naturally arise; for example, how can the proposed FICVNNs be guaranteed to realize finite-time passivity (FTP), finite-time input strict passivity (FTISP), and finite-time output strict passivity (FTOSP)? If this can be done, what kind of Lyapunov functional and control inputs are able to realize the corresponding passivity goal? Is there any relation between FTP and finite-time synchronization (FTS)? To the best of our knowledge, few documents have addressed the FTP and FTS of FICVNNs, which makes this work remarkable and valuable. The main contributions of this article are as follows.
First, by resorting to existing passivity definitions, three concepts of finite-time passivity are illustrated. Moreover, the neural model built in this paper involves inertial terms, complex-valued parameters, fuzzy logic, and time delays, which increases the difficulty and complexity of rendering the neural systems internally stable.
Second, compared with [24,35,45,48,49], we use effective control inputs and the appropriate Lyapunov function to gain some finite-time passivity criteria for delayed FICVNNs.
Third, based on the finite-time passivity, we further discuss the finite-time synchronization issue and provide simulations of a pseudorandom number generator application to support the feasibility of the obtained results.
The rest of the article is arranged as follows: finite-time passivity definitions and necessary lemmas are put forward in Section 2; the analytical processes and some simulations are presented in Section 3; the conclusions are given in Section 4.

2. Preliminaries

Model, Assumption, Definitions, and Lemmas

Here, C^n and R^n denote the n-dimensional complex vector space and the n-dimensional real vector space, respectively. For any w = w^R + i w^I ∈ C, i is the imaginary unit satisfying i² = −1, w^R ∈ R is the real part of w, and w^I ∈ R is the imaginary part. {1, 2, …, n} denotes the neuron index set. τ_j = sup_{t ≥ t_0} τ_j(t), t_0 ≥ 0. For any ℓ = (ℓ_1, ℓ_2, …, ℓ_n)^T ∈ R^n, ‖ℓ‖ = (Σ_{i=1}^{n} |ℓ_i|²)^{1/2}. L_k = max{|L_k^−|, |L_k^+|}, k = 1, 2, …, n.
Now, a class of FICVNNs with time-varying delays is given:
ẍ_k(t) = −a_k x_k(t) − b_k ẋ_k(t) + Σ_{j=1}^{n} c_{kj} f_j(x_j(t)) + Σ_{j=1}^{n} d_{kj} f_j(x_j(t − τ_j(t))) + ⋀_{j=1}^{n} w_{kj} f_j(x_j(t − τ_j(t))) + ⋁_{j=1}^{n} q_{kj} f_j(x_j(t − τ_j(t))),        (1)
where k = 1, 2, …, n and t ≥ t_0; x_k(t) ∈ C is the state of the kth neuron at time t; a_k and b_k are positive constants; and c_{kj}, d_{kj} ∈ C denote the feedback connection weights of system (1). f_j(·) ∈ C presents the feedback function, τ_j(t) is the time-varying delay with 0 ≤ τ_j(t) ≤ τ_j and τ̇_j(t) ≤ ζ_j < 1, and w_{kj} and q_{kj} are the fuzzy feedback MIN and MAX templates. ⋀ and ⋁ represent the fuzzy AND and OR operations. The initial conditions of FICVNNs (1) are
x_k(s) = Ω_k(s),  ẋ_k(s) = Ψ_k(s),  s ∈ [t_0 − τ_j, t_0],  k = 1, 2, …, n,        (2)
where Ω_k(s) = Ω_k^R(s) + iΩ_k^I(s), Ψ_k(s) = Ψ_k^R(s) + iΨ_k^I(s), Ω(s) = (Ω_1(s), Ω_2(s), …, Ω_n(s))^T, Ψ(s) = (Ψ_1(s), Ψ_2(s), …, Ψ_n(s))^T, and Ω(s), Ψ(s) ∈ C([t_0 − τ_j, t_0], C^n). With regard to the activation function f_k(·), we introduce the following assumption.
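To make the structure of (1) concrete, the following minimal sketch evaluates its right-hand side on real-valued data, assuming the fuzzy AND and OR templates act as the minimum and maximum operations over j, as in fuzzy cellular neural networks [42]. All numerical values and the tanh activation are illustrative and are not the paper's parameters.

```python
import numpy as np

# Sketch of one evaluation of the right-hand side of (1) on real-valued data,
# assuming the fuzzy AND/OR templates act as elementwise min/max over j (as in [42]).
def fic_rhs(x, xdot, x_delayed, a, b, C, D, W, Q, f=np.tanh):
    fx, fxd = f(x), f(x_delayed)
    acc = -a * x - b * xdot + C @ fx + D @ fxd      # linear and summed feedback terms
    for k in range(len(x)):
        acc[k] += np.min(W[k] * fxd)                # fuzzy AND (MIN) feedback term
        acc[k] += np.max(Q[k] * fxd)                # fuzzy OR (MAX) feedback term
    return acc                                      # componentwise value of x''(t)

rng = np.random.default_rng(0)
n = 2
x, xdot, x_del = rng.standard_normal((3, n))        # illustrative states
a, b = np.full(n, 0.3), np.full(n, 1.1)             # illustrative a_k, b_k
C, D, W, Q = rng.standard_normal((4, n, n))         # illustrative templates
print(fic_rhs(x, xdot, x_del, a, b, C, D, W, Q))
```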
Assumption 1.
As to the activation function f_k(x), x = ρ + iϑ, it can be divided into a real part and an imaginary part, such that f_k(x) = f_k^R(ρ) + i f_k^I(ϑ). Moreover, for any ı_1, ı_2 ∈ R, the real part f_k^R(·) and the imaginary part f_k^I(·) of the activation function f_k(·) can be characterized by
|f_k^R(·)| ≤ F_k^R,  |f_k^I(·)| ≤ F_k^I,
|f_k^R(ı_1) − f_k^R(ı_2)| ≤ η^R |ı_1 − ı_2|,
|f_k^I(ı_1) − f_k^I(ı_2)| ≤ η^I |ı_1 − ı_2|,
where F_k^R, F_k^I, η^R, η^I are non-negative constants.
For certain positive scalar  R , we make the variable transformation:
v k ( t ) = x ˙ k ( t ) + k x k ( t ) , k ,
then, we have
x ˙ k ( t ) = k x k ( t ) + v k ( t ) v ˙ k ( t ) = [ a k + k ( k b k ) ] x k ( t ) ( b k k ) v k ( t ) + j = 1 n c k j f j ( x j ( t ) ) + j = 1 n d k j f j ( x j ( t τ j ( t ) ) ) + j = 1 n w k j f j ( x j ( t τ j ( t ) ) ) + j = 1 n q k j f j ( x j ( t τ j ( t ) ) ) .
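To make the reduced-order step concrete, the following minimal numerical sketch treats a single neuron with the total feedback lumped into one function; the scalar ξ stands for the positive transformation constant of (3) (the symbol is chosen here for illustration), and the parameter values are illustrative only.

```python
import numpy as np

a, b, xi = 0.3, 1.1, 1.0            # assumed a_k, b_k and transformation scalar xi
def F(x):                            # stand-in for the total feedback acting on neuron k
    return np.tanh(x)

def second_order_rhs(x, xdot):
    # original inertial form (1): x'' = -a*x - b*x' + F(x)
    return -a * x - b * xdot + F(x)

def first_order_rhs(x, v):
    # reduced form (4): x' = -xi*x + v,
    #                   v' = -[a + xi*(xi - b)]*x - (b - xi)*v + F(x)
    xdot = -xi * x + v
    vdot = -(a + xi * (xi - b)) * x - (b - xi) * v + F(x)
    return xdot, vdot

# consistency check: v = x' + xi*x implies v' = x'' + xi*x'
x, xdot = 0.7, -0.2
v = xdot + xi * x
xd, vd = first_order_rhs(x, v)
assert np.isclose(xd, xdot)
assert np.isclose(vd, second_order_rhs(x, xdot) + xi * xdot)
print("order reduction consistent")
```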
Considering system (4) as the driver system, let  x ( t ) = ( x 1 ( t ) , x 2 ( t ) , , x n ( t ) ) T ,   v ( t ) = ( v 1 ( t ) , v 2 ( t ) , , v n ( t ) ) T A = d i a g { a 1 + 1 ( 1 b 1 ) , a 2 + 2 ( 2 b 2 ) , , a n + n ( n b n ) } ,   B = d i a g { b 1 1 , b 2 2 , , b n n } H = d i a g { 1 , 2 , n } f ( x ( t ) ) = ( f 1 ( x 1 ( t ) , f 1 ( x 1 ( t ) ,   , f n ( x n ( t ) ) T C n f ( x ( t ) ¯ ) = ( f 1 ( x 1 ( t τ 1 ( t ) ) ) , f 2 ( x 2 ( t τ 2 ( t ) ) ) , , f n ( x n ( t τ n ( t ) ) ) ) T C n C = C R + i C I D = D R + i D I C R = ( c k j R ) n × n C I = ( c k j I ) n × n D R = ( d k j R ) n × n D I = ( d k j I ) n × n W = ( w k j ) n × n Q = ( q k j ) n × n .
W f ( x ( t ) ¯ ) = ( j = 1 n w 1 j f j ( x j ( t τ j ( t ) ) ) , j = 2 n w 2 j f j ( x j ( t τ j ( t ) ) ) , , j = n n w n j f j ( x j ( t τ j ( t ) ) ) ) T ,
Q f ( x ( t ) ¯ ) = ( j = 1 n q 1 j f j ( x j ( t τ j ( t ) ) ) , j = 1 n q 2 j f j ( x j ( t τ j ( t ) ) ) , , j = 1 n q n j f j ( x j ( t τ j ( t ) ) ) ) T .
Therefore, the matrix form of system (4) can be described by
x ˙ ( t ) = H x ( t ) + v ( t ) v ˙ ( t ) = A x ( t ) B v k ( t ) + C f ( x ( t ) ) + D f ( x ( t ) ¯ ) W f ( x ( t ) ¯ ) + Q f ( x ( t ) ¯ ) ,
then, the matrix form of the response system is as follows:
u ˙ ( t ) = H u ( t ) + y ( t ) + Δ ( t ) y ˙ ( t ) = A u ( t ) B y k ( t ) + C f ( u ( t ) ) + D f ( u ( t ) ¯ ) W f ( u ( t ) ¯ ) + Q f ( u ( t ) ¯ ) + m ( t ) + I ( t ) ,
where  Δ ( t ) , m ( t )  are control schemes; that is,  Δ ( t ) = Δ R ( t ) + i Δ I ( t )  and  m ( t ) = m R ( t ) + i m I ( t ) I ( t )  represents external input which  I ( t ) = I R ( t ) + i I I ( t ) . Based on Assumption 1, system (7) can be transformed as follows:
x ˙ R ( t ) = H x R ( t ) + v R ( t ) v ˙ R ( t ) = A x R ( t ) B v k R ( t ) + C R f R ( x R ( t ) ) C I f I ( x I ( t ) ) + D R f R ( x R ( t ) ¯ ) D I f I ( x I ( t ) ¯ ) + W f R ( x R ( t ) ¯ ) + Q f R ( x R ( t ) ¯ ) , x ˙ I ( t ) = H x I ( t ) + v I ( t ) v ˙ I ( t ) = A x I ( t ) B v k I ( t ) + C R f I ( x I ( t ) ) + C I f R ( x R ( t ) ) + D R f I ( x I ( t ) ¯ ) D I f R ( x R ( t ) ¯ ) + W f I ( x I ( t ) ¯ ) + Q f I ( x I ( t ) ¯ ) ,
and response system (8) can be represented by
u ˙ R ( t ) = H u R ( t ) + y R ( t ) + Δ R ( t ) y ˙ R ( t ) = A u R ( t ) B y k R ( t ) + C R f R ( u R ( t ) ) C I f I ( u I ( t ) ) + D R f R ( u R ( t ) ¯ ) D I f I ( u I ( t ) ¯ ) + W f R ( u R ( t ) ¯ ) + Q f R ( u R ( t ) ¯ ) + m R ( t ) + I R ( t ) , u ˙ I ( t ) = H u I ( t ) + y I ( t ) + Δ I ( t ) y ˙ I ( t ) = A u I ( t ) B y k I ( t ) + C R f I ( u I ( t ) ) + C I f R ( u R ( t ) ) + D R f I ( u I ( t ) ¯ ) + D I f R ( u R ( t ) ¯ ) + W f I ( u I ( t ) ¯ ) + Q f I ( u I ( t ) ¯ ) + m I ( t ) + I I ( t ) .
Considering  e R ( t ) = u R ( t ) x R ( t ) e I ( t ) = u I ( t ) x I ( t ) z R ( t ) = y R ( t ) v R ( t ) z I ( t ) = y I ( t ) v I ( t ) F R ( e R ( t ) ) = f R ( u R ( t ) ) f R ( x R ( t ) ) F I ( e R ( t ) ) = f I ( u I ( t ) ) f I ( x I ( t ) ) F R ( e R ( t ) ¯ ) = f R ( u R ( t ) ¯ ) f R ( x R ( t ) ¯ ) = f R ( u R ( t τ ( t ) ) ) f R ( x R ( t τ ( t ) ) ) F I ( e I ( t ) ¯ ) = f I ( u I ( t ) ¯ ) f I ( x I ( t ) ¯ ) = f I ( u I ( t τ ( t ) ) ) f I ( x I ( t τ ( t ) ) ) . Through (9) and (10), we obtain the following error system
e ˙ R ( t ) = H e R ( t ) + z R ( t ) + Δ R ( t ) z ˙ R ( t ) = A e R ( t ) B z k R ( t ) + C R F R ( e R ( t ) ) C I F I ( e I ( t ) ) + D R F R ( e R ( t ) ¯ ) D I F I ( e I ( t ) ¯ ) + W F R ( e R ( t ) ¯ ) + Q F R ( e R ( t ) ¯ ) + m R ( t ) + I R ( t ) , e ˙ I ( t ) = H e I ( t ) + z I ( t ) + Δ I ( t ) z ˙ I ( t ) = A e I ( t ) B z k I ( t ) + C R F I ( e I ( t ) ) + C I F R ( e R ( t ) ) + D R F I ( e I ( t ) ¯ ) + D I F R ( e R ( t ) ¯ ) + W F I ( e I ( t ) ¯ ) + Q F I ( e I ( t ) ¯ ) + m I ( t ) + I I ( t ) .
Next, we give some necessary definitions as follows.
Definition 1
([47]). The system with output g(t) ∈ C^N and input I(t) ∈ C^N is said to obtain finite-time passivity (FTP) if there exist 0 < ε < 1 and 0 < μ ∈ R such that
U̇(t) + μ(U(t))^ε ≤ (I^R(t))^T g^R(t) + (I^I(t))^T g^I(t),
where  U ( t )  stands for a non-negative function.
Definition 2
([47]). The system with output g(t) ∈ C^N and input I(t) ∈ C^N is said to obtain finite-time input strict passivity (FTISP) if there exist 0 < ε < 1 and 0 < μ ∈ R such that
U̇(t) + μ(U(t))^ε ≤ (I^R(t))^T g^R(t) + (I^I(t))^T g^I(t) − γ_1 [(I^R(t))^T I^R(t) + (I^I(t))^T I^I(t)],
where  U ( t )  stands for a non-negative function.
Definition 3
([47]). The system with output g(t) ∈ C^N and input I(t) ∈ C^N is said to obtain finite-time output strict passivity (FTOSP) if there exist 0 < ε < 1 and 0 < μ ∈ R such that
U̇(t) + μ(U(t))^ε ≤ (I^R(t))^T g^R(t) + (I^I(t))^T g^I(t) − γ_2 [(w^R(t))^T w^R(t) + (w^I(t))^T w^I(t)],
where  U ( t )  stands for a non-negative function.
Lemma 1
([50]). It is assumed that a continuous and non-negative function U(t) satisfies
U̇(t) ≤ −μ(U(t))^ε,  t ≥ 0,  U(0) ≥ 0,
where 0 < ε < 1 and 0 < μ ∈ R; it can be determined that
(U(t))^{1−ε} ≤ (U(0))^{1−ε} − μ(1 − ε)t,  0 ≤ t ≤ T,
and
U(t) = 0,  t ≥ T;
thus, we obtain
T = (U(0))^{1−ε} / (μ(1 − ε)).
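As a quick illustration of Lemma 1, the settling-time bound can be evaluated directly; the numbers below are illustrative only and are not taken from the paper.

```python
def settling_time_bound(U0, mu, eps):
    """Settling-time bound T = U(0)^(1-eps) / (mu*(1-eps)) from Lemma 1.
    Requires 0 < eps < 1 and mu > 0."""
    assert 0 < eps < 1 and mu > 0
    return U0 ** (1 - eps) / (mu * (1 - eps))

# e.g., U(0) = 4, mu = 2, eps = 0.5  ->  T = 4^0.5 / (2*0.5) = 2.0
print(settling_time_bound(4.0, 2.0, 0.5))
```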
Remark 1.
Many compelling results about passivity have been reported [48,49]. However, these references only pay attention to the input or output passivity problems, so the system only realizes infinite-time input or output passivity. Moreover, as [24,35,47] reveal, passivity can be an effective tool for discussing infinite-time synchronization of neural systems. However, from a practical perspective, it is more reasonable for the settling time of synchronization to be finite. Hence, the research in this paper is more general and extends the existing passivity theory.
Lemma 2
([3]). For any 0 < ℓ ≤ 1 and s_i ∈ R, i = 1, 2, …, n, one has
(|s_1| + |s_2| + ⋯ + |s_n|)^ℓ ≤ |s_1|^ℓ + |s_2|^ℓ + ⋯ + |s_n|^ℓ.
Lemma 3
([5]). Suppose π(t) = (π_1(t), π_2(t), …, π_n(t))^T and Ξ(t) = (Ξ_1(t), Ξ_2(t), …, Ξ_n(t))^T are two states of system (1); then, one obtains
|⋀_{j=1}^{n} α_{kj} f_j(Ξ_j) − ⋀_{j=1}^{n} α_{kj} f_j(π_j)| ≤ Σ_{j=1}^{n} |α_{kj}| |f_j(Ξ_j) − f_j(π_j)|,
|⋁_{j=1}^{n} α_{kj} f_j(Ξ_j) − ⋁_{j=1}^{n} α_{kj} f_j(π_j)| ≤ Σ_{j=1}^{n} |α_{kj}| |f_j(Ξ_j) − f_j(π_j)|,
where α_{kj} stands for the corresponding fuzzy feedback template entry.
Remark 2.
Passivity analysis for real-valued neural networks (RVNNs), in which the activation functions, connection weights, inputs, and outputs are real-valued, has been widely studied [23,24,35,45,47,48,49]. Compared with RVNNs, CVNNs can be viewed as a more general case because of their more complex dynamic characteristics. Problems such as symmetry detection and XOR are expected to be solved easily by CVNNs but cannot be settled by RVNNs [36]. In addition, compared with [23,24,33,35,49], inertial terms and fuzzy logic are supposed to have essential impacts on the dynamical behaviors of network systems. To our knowledge, few corresponding outcomes focus on a model with all of these elements. Therefore, it is significant to devote our effort to providing a guide for this analysis.
Remark 3.
Suppose that an energy function U(·) stands for the energy stored in the system and that the energy supplied to it is bounded over finite-time intervals; then, we call the system passive. As to the storage function U(·) and supply rate (g, I), compared with [23,47], we develop FTP, FTISP, and FTOSP from the real-valued case to the complex-valued case, and the different supply rates reveal that the energy dissipated inside the system, U(t_2) − U(t_1), is not more than that provided by the external source ∫_{t_1}^{t_2} (g(t), I(t)) dt.

3. Main Results

3.1. Finite-Time Passivity

Let the controller be  Δ ( t ) = Δ R ( t ) + i Δ I ( t ) m ( t ) = m R ( t ) + i m I ( t ) , and  Δ R ( t ) Δ I ( t ) m R ( t ) m I ( t )  are constructed as follows:
Δ^R(t) = −λ^R e^R(t) − ε sign(e^R(t)) |e^R(t)|^β,
m^R(t) = −φ^R z^R(t) − ε [Σ_{j=1}^{n} 2∫_{t−τ_j(t)}^{t} (η_j^R e_j^R(s))² / (1 − ζ) ds]^{(β+1)/2} · z^R(t)/‖z^R(t)‖² − ε sign(z^R(t)) |z^R(t)|^β − G^R sign(z^R(t)),
Δ^I(t) = −λ^I e^I(t) − ε sign(e^I(t)) |e^I(t)|^β,
m^I(t) = −φ^I z^I(t) − ε [Σ_{j=1}^{n} 2∫_{t−τ_j(t)}^{t} (η_j^I e_j^I(s))² / (1 − ζ) ds]^{(β+1)/2} · z^I(t)/‖z^I(t)‖² − ε sign(z^I(t)) |z^I(t)|^β − G^I sign(z^I(t)),        (19)
in which λ^R = diag{λ_1^R, λ_2^R, …, λ_n^R} ∈ R^{n×n} and λ^I = diag{λ_1^I, λ_2^I, …, λ_n^I} ∈ R^{n×n} denote the positive definite gain matrices; 0 < ε ∈ R and 0 < β < 1; when z(t) ≠ 0,
G R = d i a g G 1 R , G 2 R , G n R , G I = d i a g G 1 I , G 2 I , G n I , | e R ( t ) | β = | e 1 R ( t ) | β , | e 2 R ( t ) | β , , | e n R ( t ) | β T , | e I ( t ) | β = | e 1 I ( t ) | β , | e 2 I ( t ) | β , , | e n I ( t ) | β T , s i g n ( e R ( t ) ) = d i a g s i g n e 1 R ( t ) , s i g n e 2 R ( t ) , , s i g n e n R ( t ) , s i g n ( e I ( t ) ) = d i a g s i g n e 1 I ( t ) , s i g n e 2 I ( t ) , , s i g n e n I ( t ) ;
|z^R(t)|^β and |z^I(t)|^β are defined in the same way as |e^R(t)|^β and |e^I(t)|^β, respectively; sign(z^R(t)) and sign(z^I(t)) are defined in the same way as sign(e^R(t)) and sign(e^I(t)), respectively.
What is more,
Δ^R(t) = −λ^R e^R(t) − ε sign(e^R(t)) |e^R(t)|^β,  m^R(t) = 0,
Δ^I(t) = −λ^I e^I(t) − ε sign(e^I(t)) |e^I(t)|^β,  m^I(t) = 0,        (20)
when  z ( t ) = 0 .
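For intuition, the following minimal sketch evaluates the state-feedback part Δ^R(t) shared by controllers (19) and (20); the gain matrix, error vector, ε, and β used here are illustrative values, not the paper's parameters.

```python
import numpy as np

# Sketch of Delta^R(t) = -lambda^R e^R(t) - eps * sign(e^R(t)) * |e^R(t)|^beta,
# with the absolute value and power taken elementwise as in (19)/(20).
def delta_R(e_R, lam, eps=0.5, beta=0.5):
    return -lam @ e_R - eps * np.sign(e_R) * np.abs(e_R) ** beta

e_R = np.array([0.4, -0.9])          # illustrative real-part error vector
lam = np.diag([5.0, 5.0])            # illustrative positive definite gain matrix
print(delta_R(e_R, lam))
```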
We define the output vector g(t) ∈ C^n of system (11) as
g ( t ) = M 1 z ( t ) + M 2 e ( t ) + M 3 I ( t ) ,
where  M 1 , M 2 , M 3 R n × n . For convenience, we let
χ R = d i a g ( η 1 R ) 2 , ( η 2 R ) 2 , ( η n R ) 2 , χ I = d i a g ( η 1 I ) 2 , ( η 2 I ) 2 , ( η n I ) 2 , Λ = d i a g 1 1 ζ 1 , 1 1 ζ 2 , , 1 1 ζ n , g ( t ) = g 1 ( t ) , g 2 ( t ) , g n ( t ) T , e R ( t ) = e 1 R ( t ) , e 2 R ( t ) , e n R ( t ) T , e I ( t ) = e 1 I ( t ) , e 2 I ( t ) , e n I ( t ) T , z R ( t ) = z 1 R ( t ) , z 2 R ( t ) , z n R ( t ) T , z I ( t ) = z 1 I ( t ) , z 2 I ( t ) , z n I ( t ) T , I ( t ) = I 1 ( t ) , I 2 ( t ) , I n ( t ) T .
Theorem 1.
The network system (11) obtains FTP under control inputs (19) and (20) if there exist
λ R = D i a g ( λ 1 R , λ 2 R , λ n R ) , λ I = D i a g ( λ 1 I , λ 2 I , λ n R ) , φ R = D i a g ( φ 1 R , φ 2 R , φ n R ) , φ I = D i a g ( φ 1 I , φ 2 I , φ n I ) , G R = D i a g ( G 1 R , G 2 R , G n R ) , G I = D i a g ( G 1 I , G 2 I , G n I ) R n × n ,
satisfying such conditions as
W F^R + Q F^R − G^R ≤ 0,  W F^I + Q F^I − G^I ≤ 0,
[Φ_1^R, δ_1^R, ϕ_1^R; (δ_1^R)^T, ϖ_1^R, Ξ_1^R; (ϕ_1^R)^T, (Ξ_1^R)^T, θ_1^R] ≤ 0   and   [Φ_1^I, δ_1^I, ϕ_1^I; (δ_1^I)^T, ϖ_1^I, Ξ_1^I; (ϕ_1^I)^T, (Ξ_1^I)^T, θ_1^I] ≤ 0,
[Π_1^R, ϕ_1^R; (ϕ_1^R)^T, θ_1^R] ≤ 0   and   [Π_1^I, ϕ_1^I; (ϕ_1^I)^T, θ_1^I] ≤ 0,
where Φ_1^R = −2H + 2χ^R + 2χ^R Λ − 2λ^R, Φ_1^I = −2H + 2χ^I + 2χ^I Λ − 2λ^I, δ_1^R = δ_1^I = E − A, ϕ_1^R = ϕ_1^I = −M_2/2, ϖ_1^R = −2B + C^R + C^I + D^R + D^I − 2φ^R, ϖ_1^I = −2B + C^R + C^I + D^R + D^I − 2φ^I, Ξ_1^R = Ξ_1^I = (2E − M_1)/2, θ_1^R = θ_1^I = −M_3, Π_1^R = −2H − 2λ^R, Π_1^I = −2H − 2λ^I, and E is the identity matrix.
Proof. 
Case 1. When  z ( t ) 0 , we build the Lyapunov function as follows:
V ( t ) = V 1 ( t ) + V 2 ( t ) ,
where
V 1 ( t ) = ( e R ( t ) ) T e R ( t ) + ( z R ( t ) ) T z R ( t ) + 2 j = 1 n t τ j ( t ) t η j R e j R ( s ) 2 1 ζ d s ,
V 2 ( t ) = ( e I ( t ) ) T e I ( t ) + ( z I ( t ) ) T z I ( t ) + 2 j = 1 n t τ j ( t ) t η j I e j I ( s ) 2 1 ζ d s .
Then, when  z ( t ) 0 , we arrange the derivative of  V 1 ( t )  as
V ˙ 1 ( t ) = 2 ( e R ( t ) ) T ( H e R ( t ) + z R ( t ) λ R e R ( t ) ε s i g n ( e R ( t ) ) × | e R ( t ) | β ) + 2 ( z R ( t ) ) T ( A e R ( t ) B z k R ( t ) + C R F R ( e R ( t ) ) C I F I ( e I ( t ) ) + D R F R ( e R ( t ) ¯ ) D I F I ( e I ( t ) ¯ ) + W F R ( e R ( t ) ¯ ) + Q F R ( e R ( t ) ¯ ) φ R z R ( t ) G R s i g n ( z R ( t ) ε s i g n ( z R ( t ) ) | z R ( t ) | β + I R ( t ) ε j = 1 n ( 2 t τ j ( t ) t η j R e j R ( s ) 2 1 ζ d s ) β + 1 2 z R ( t ) z R ( t ) 2 ) + 2 ( e R ( t ) ) T χ R Λ e R ( t ) 2 ( e R ( t ) ¯ ) T χ R e R ¯ ( t ) .
V ˙ 2 ( t ) = 2 ( e I ( t ) ) T ( H e I ( t ) + z I ( t ) λ I e I ( t ) ε s i g n ( e I ( t ) ) | e I ( t ) | β ) + 2 ( z I ( t ) ) T ( A e I ( t ) B z k I ( t ) + C R F I ( e I ( t ) ) + C I F R ( e R ( t ) ) + D R F I ( e I ( t ) ¯ ) + D I F R ( e R ( t ) ¯ ) + W F I ( e I ( t ) ¯ ) + Q F I ( e I ( t ) ¯ ) φ I z I ( t ) G I s i g n ( z I ( t ) ε s i g n ( z I ( t ) ) | z I ( t ) | β + I I ( t ) ε j = 1 n ( 2 t τ j ( t ) t ( η j R e j I ( s ) ) 2 1 ζ d s ) β + 1 2 z I ( t ) z I ( t ) 2 + I I ( t ) ) + 2 ( e I ( t ) ) T χ I Λ e I ( t ) 2 ( e I ( t ) ¯ ) T χ I e I ¯ ( t ) .
under Assumption 1, from (28), then
2 ( z R ( t ) ) T C R F R ( e R ( t ) ) = 2 k = 1 n | z R ( t ) | c k R η k R | e k R ( t ) | k = 1 n ( z R ( t ) ) 2 ( c k R ) 2 + k = 1 n ( η k R ) 2 ( e k R ( t ) ) 2 = ( z R ( t ) ) T C R z R ( t ) + ( e R ( t ) ) T χ R e R ( t ) .
In addition, according to Lemma 3, one has
2 ( z R ( t ) ) T W F R ( e R ( t ) ¯ ) 2 | ( z R ( t ) ) T | W F R ,
2 ( z R ( t ) ) T Q F R ( e R ( t ) ¯ ) 2 | ( z R ( t ) ) T | Q F R .
Moreover,
2 ( z R ( t ) ) T C I F I ( e I ( t ) ) ( z R ( t ) ) T C I z R ( t ) + ( e I ( t ) ) T χ I e I ( t ) ,
2 ( z R ( t ) ) T D R F R ( e R ( t ) ¯ ) ( z R ( t ) ) T D R z R ( t ) + ( e R ( t ) ¯ ) T χ R e R ( t ) ¯ ,
2 ( z R ( t ) ) T D I F I ( e I ( t ) ¯ ) ( z R ( t ) ) T D I z R ( t ) + ( e I ( t ) ¯ ) T χ I e I ( t ) ¯ .
Likewise, from Lemma 2, one has
( e R ( t ) ) T s i g n ( e R ( t ) ) | e R ( t ) | β = k = 1 n | e k R ( t ) | β + 1 k = 1 n ( ( e k R ( t ) ) 2 ) β + 1 2 = ( ( e R ( t ) ) T e R ( t ) ) β + 1 2 ,
( z R ( t ) ) T s i g n ( z R ( t ) ) | z R ( t ) | β ( ( z R ( t ) ) T z R ( t ) ) β + 1 2 .
Similarly,
2 ( z I ( t ) ) T C R F I ( e I ( t ) ) ( z I ( t ) ) T C R z I ( t ) + ( e I ( t ) ) T χ I e I ( t ) ,
2 ( z I ( t ) ) T C I F R ( e R ( t ) ) ( z I ( t ) ) T C I z I ( t ) + ( e R ( t ) ) T χ R e R ( t ) ,
2 ( z I ( t ) ) T D R F I ( e I ( t ) ¯ ) ( z I ( t ) ) T D R z I ( t ) + ( e I ( t ) ¯ ) T χ I e I ( t ) ¯ ,
2 ( z I ( t ) ) T D I F R ( e R ( t ) ¯ ) ( z I ( t ) ) T D I z I ( t ) + ( e R ( t ) ¯ ) T χ R e R ( t ) ¯ ,
( e I ( t ) ) T s i g n ( e I ( t ) ) | e I ( t ) | β ( ( e I ( t ) ) T e I ( t ) ) β + 1 2 ,
( z I ( t ) ) T s i g n ( z I ( t ) ) | z I ( t ) | β ( ( z I ( t ) ) T z I ( t ) ) β + 1 2 .
What is more,
2 ( z I ( t ) ) T W F I ( e I ( t ) ¯ ) 2 | ( z I ( t ) ) T | W F I ,
2 ( z I ( t ) ) T Q F I ( e I ( t ) ¯ ) 2 | ( z I ( t ) ) T | Q F I .
Because of (30)–(35), it is arranged by
V ˙ 1 ( t ) ( e R ( t ) ) T ( 2 H 2 λ R + 2 χ R + 2 χ R Λ ) e R ( t ) + ( e R ( t ) ) T [ 2 ( E A ) ] z R ( t ) + ( z R ( t ) ) T ( 2 B + C R + C I + D R + D I 2 φ R ) z R ( t ) + 2 | ( z R ( t ) ) T | ( W F R + Q F R G R ) + 2 ( z R ( t ) ) T I R ε j = 1 n 2 t τ j ( t ) t η j R e j R ( s ) 2 1 ζ d s β + 1 2 2 ε ( ( e R ( t ) ) T e R ( t ) ) β + 1 2 2 ε ( ( z R ( t ) ) T z R ( t ) ) β + 1 2 ,
and
V ˙ 2 ( t ) ( e I ( t ) ) T ( 2 H 2 λ I + 2 χ I + 2 χ I Λ ) e I ( t ) + ( e I ( t ) ) T [ 2 ( E A ) ] z I ( t ) + ( z I ( t ) ) T ( 2 B + C R + C I + D R + D I 2 φ I ) z I ( t ) + 2 | ( z I ( t ) ) T | ( W F I + Q F I G I ) + 2 ( z I ( t ) ) T I I ε j = 1 n 2 t τ j ( t ) t η j I e j I ( s ) 2 1 ζ d s β + 1 2 2 ε ( ( e I ( t ) ) T e I ( t ) ) β + 1 2 2 ε ( ( z I ( t ) ) T z I ( t ) ) β + 1 2 .
Furthermore,
V ˙ ( t ) I R ( t ) T g R ( t ) + I I ( t ) T g I ( t )
Γ R ( t ) T Φ 1 R δ 1 R ϕ 1 R ( δ 1 R ) T ϖ 1 R Ξ 1 R ( ϕ 1 R ) T ( Ξ 1 R ) T θ 1 R Γ R ( t ) 2 ε e R ( t ) T e R ( t ) β + 1 2
2 ε z R ( t ) T z R ( t ) β + 1 2 ε j = 1 n 2 t τ j ( t ) t η j R e j R ( s ) 2 1 ζ d s β + 1 2
+ Γ I ( t ) T Φ 1 I δ 1 I ϕ 1 I ( δ 1 I ) T ϖ 1 I Ξ 1 I ( ϕ 1 I ) T ( Ξ 1 I ) T θ 1 I Γ I ( t ) 2 ε e I ( t ) T e I ( t ) β + 1 2
2 ε z I ( t ) T z I ( t ) β + 1 2 ε j = 1 n 2 t τ j ( t ) t η j I e j I ( s ) 2 1 ζ d s β + 1 2
+ 2 | z R ( t ) T | ( W F R + Q F R G R ) + 2 | z I ( t ) T | ( W F I + Q F I G I )
2 ε ( e R ( t ) T e R ( t ) + z R ( t ) T z R ( t ) + e I ( t ) T e I ( t ) + z I ( t ) T z I ( t ) + j = 1 n ( t τ j ( t ) t η j R e j R ( s ) ) 2 1 ζ d s + j = 1 n t τ j ( t ) t η j R e j R ( s ) ) 2 1 ζ d s β + 1 2 = 2 ε V 1 ( t ) + V 2 ( t ) β + 1 2 = 2 ε ( V ( t ) ) β + 1 2 ,
where
Γ R ( t ) = e R ( t ) , z R ( t ) , ( I R ( t ) T a n d Γ I ( t ) = e I ( t ) , z R ( t ) , ( I I ( t ) T .
Consequently, it concludes that
I R ( t ) T g R ( t ) + I I ( t ) T g I ( t ) V ˙ ( t ) + 2 ε ( V ( t ) ) β + 1 2 = V ˙ ( t ) + ε ˜ ( V ( t ) ) β ˜ .
where  ε ˜ = 2 ε β ˜ = β + 1 2 0 < ε ˜ R  and  0 < β ˜ < 1 .
Case 2. When  z ( t ) = 0 , the Lyapunov function is
V ˜ ( t ) = e R ( t ) T e R ( t ) + e I ( t ) T e I ( t ) .
Arranging the derivative of  V ˜ ( t ) , one has
V ˜ ˙ ( t ) = 2 e R ( t ) T ( H e R ( t ) λ R e R ( t ) ε s i g n ( e R ( t ) ) | e R ( t ) | β ) + 2 e I ( t ) T ( H e I ( t ) λ I e I ( t ) ε s i g n ( e I ( t ) ) | e I ( t ) | β ) e R ( t ) T ( 2 H 2 λ R ) e R ( t ) + e I ( t ) T ( 2 H 2 λ I ) e I ( t ) 2 ε e R ( t ) T e R ( t ) β + 1 2 2 ε e I ( t ) T e I ( t ) β + 1 2 .
Moreover,
V ˜ ˙ ( t ) I R ( t ) T g R ( t ) + I I ( t ) T g I ( t ) = V ˜ ˙ ( t ) ( I R ( t ) T M 2 e R ( t ) + I R ( t ) T M 3 I R ( t ) ) + I I ( t ) T M 2 e I ( t ) + I I ( t ) T M 3 I I ( t ) )
R ( t ) T Π 1 R ϕ 1 R ( ϕ 1 R ) T θ 1 R I ( t ) 2 ε e R ( t ) T e R ( t ) β + 1 2
+ I ( t ) T Π 1 I ϕ 1 I ( ϕ 1 I ) T θ 1 I I ( t ) 2 ε e I ( t ) T e I ( t ) β + 1 2
2 ε e R ( t ) T e R ( t ) + e I ( t ) T e I ( t ) β + 1 2 = V ˙ ( t ) + ε ˜ ( V ˜ ( t ) ) β ˜ ,
where  R ( t ) = e R ( t ) , ( I R ( t ) T  and  I ( t ) = e I ( t ) , ( I I ( t ) T .
Based on the above analysis, according to Definition 1, system (11) can realize FTP under controller (19). At the time instants t when z(t) = 0, we can also infer that system (11) achieves FTP by controller (20). Therefore, system (11) can reach FTP under control schemes (19) and (20). □
Theorem 2.
Under the condition of Theorem 1, the network (11) can realize FTISP through controllers (19) and (20) if there are
W F^R + Q F^R − G^R ≤ 0,  W F^I + Q F^I − G^I ≤ 0,
[Φ_1^R, δ_1^R, ϕ_1^R; (δ_1^R)^T, ϖ_1^R, Ξ_1^R; (ϕ_1^R)^T, (Ξ_1^R)^T, θ_2^R] ≤ 0   and   [Φ_1^I, δ_1^I, ϕ_1^I; (δ_1^I)^T, ϖ_1^I, Ξ_1^I; (ϕ_1^I)^T, (Ξ_1^I)^T, θ_2^I] ≤ 0,
[Π_1^R, ϕ_1^R; (ϕ_1^R)^T, θ_2^R] ≤ 0   and   [Π_1^I, ϕ_1^I; (ϕ_1^I)^T, θ_2^I] ≤ 0,
where  Φ 1 R ,   Φ 1 I ,   δ 1 R ,   ϕ 1 R ,   ϖ 1 R ,   ϖ 1 R ,   Ξ 1 R ,   Π 1 I ,   Π 1 R   have the same meanings as in Theorem 1, and  θ 2 R = θ 2 I = γ 1 E + θ 1 R .
Proof. 
Building the same Lyapunov function as (25), combining with (46) and (47), we can carry out
V ˙ ( t ) ( I R ( t ) ) T g R ( t ) + ( I I ( t ) ) T g I ( t ) + γ 1 ( I R ( t ) ) T I R ( t ) + ( I I ( t ) ) T I I ( t ) ( e R ( t ) ) T ( 2 H 2 λ R + 2 χ R + 2 χ R Λ ) e R ( t ) + ( e R ( t ) ) T [ 2 ( E A ) ] z R ( t ) + ( z R ( t ) ) T ( 2 B + C R + C I + D R + D I 2 φ R ) z R ( t ) + 2 | ( z R ( t ) ) T | ( W F R + Q F R G R ) + 2 ( z R ( t ) ) T I R ε j = 1 n 2 t τ j ( t ) t η j R e j R ( s ) 2 1 ζ d s β + 1 2 2 ε ( ( e R ( t ) ) T e R ( t ) ) β + 1 2 2 ε ( ( z R ( t ) ) T z R ( t ) ) β + 1 2 + ( e I ( t ) ) T ( 2 H 2 λ I + 2 χ I + 2 χ I Λ ) e I ( t ) + ( e I ( t ) ) T [ 2 ( E A ) ] z I ( t ) + ( z I ( t ) ) T ( 2 B + C R + C I + D R + D I 2 φ I ) z I ( t ) + 2 | ( z I ( t ) ) T | ( W F I + Q F I G I ) + 2 ( z I ( t ) ) T I I ε j = 1 n 2 t τ j ( t ) t η j I e j I ( s ) 2 1 ζ d s β + 1 2 2 ε ( ( e I ( t ) ) T e I ( t ) ) β + 1 2 2 ε ( ( z I ( t ) ) T z I ( t ) ) β + 1 2 + γ 1 ( I R ( t ) ) T I R ( t ) + ( I I ( t ) ) T I I ( t ) [ I R ( t ) T M 1 z R ( t ) + I R ( t ) T M 2 e R ( t ) + I R ( t ) T M 3 I R ( t ) + I I ( t ) T M 1 z I ( t ) + I I ( t ) T M 2 e I ( t ) + I I ( t ) T M 3 I I ( t ) ]
Γ R ( t ) T Φ 1 R δ 1 R ϕ 1 R ( δ 1 R ) T ϖ 1 R Ξ 1 R ( ϕ 1 R ) T ( Ξ 1 R ) T θ 2 R Γ R ( t ) 2 ε e R ( t ) T e R ( t ) β + 1 2
2 ε z R ( t ) T z R ( t ) β + 1 2 ε j = 1 n 2 t τ j ( t ) t η j R e j R ( s ) 2 1 ζ d s β + 1 2
+ Γ I ( t ) T Φ 1 I δ 1 I ϕ 1 I ( δ 1 I ) T ϖ 1 I Ξ 1 I ( ϕ 1 I ) T ( Ξ 1 I ) T θ 2 I Γ I ( t ) 2 ε e I ( t ) T e I ( t ) β + 1 2
2 ε z I ( t ) T z I ( t ) β + 1 2 ε j = 1 n 2 t τ j ( t ) t η j I e j I ( s ) 2 1 ζ d s β + 1 2
+ 2 | z R ( t ) T | ( W F R + Q F R G R ) + 2 | z I ( t ) T | ( W F I + Q F I G I )
2 ε ( e R ( t ) T e R ( t ) + z R ( t ) T z R ( t ) + e I ( t ) T e I ( t ) + z I ( t ) T z I ( t ) + j = 1 n t τ j ( t ) t η j R e j R ( s ) 2 1 ζ d s + j = 1 n t τ j ( t ) t η j R e j R ( s ) 2 1 ζ d s ) β + 1 2 = 2 ε ( V ( t ) ) β + 1 2 = ε ˜ ( V ( t ) ) β ˜ .
Then, one attains
I R ( t ) T g R ( t ) + I I ( t ) T g I ( t ) γ 1 ( I R ( t ) ) T I R ( t ) + ( I I ( t ) ) T I I ( t ) V ˙ ( t ) + ε ˜ ( V ( t ) ) β ˜ ,
where  ε ˜ = 2 ε β ˜ = β + 1 2 0 < ε ˜ R 0 < β ˜ < 1 . Consequently, system (11) can achieve FTISP under controller (19). When  z ( t ) = 0 , the proving procedure has a resemblance to Theorem 1 and is omitted here. So, based on Definition 2, the system (11) can obtain FTISP through controller (19) and (20). □
Theorem 3.
Under the condition of Theorem 1, the network (11) can realize FTOSP through controllers (19) and (20) if there are
W F^R + Q F^R − G^R ≤ 0,  W F^I + Q F^I − G^I ≤ 0,
[Φ_2^R, δ_2^R, ϕ_2^R; (δ_2^R)^T, ϖ_2^R, Ξ_2^R; (ϕ_2^R)^T, (Ξ_2^R)^T, θ_3^R] ≤ 0   and   [Φ_2^I, δ_2^I, ϕ_2^I; (δ_2^I)^T, ϖ_2^I, Ξ_2^I; (ϕ_2^I)^T, (Ξ_2^I)^T, θ_3^I] ≤ 0,
[Π_2^R, ϕ_2^R; (ϕ_2^R)^T, θ_3^R] ≤ 0   and   [Π_2^I, ϕ_2^I; (ϕ_2^I)^T, θ_3^I] ≤ 0,
where  Φ 2 R = Φ 1 R + γ 2 M 2 T M 2 ,   Φ 2 I = Φ 2 I + γ 2 M 2 T M 2 ,   δ 2 R = δ 1 I = δ 1 R + γ 2 M 1 T M 2 2 ,   ϕ 2 R = ϕ 1 I = ϕ 1 R + γ 2 M 2 T M 3 M 2 2 ,   ϖ 2 R = ϖ 1 R + γ 2 M 1 T M 1 ,   ϖ 2 I = ϖ 1 I + γ 2 M 1 T M 1 ,   Ξ 2 R = Ξ 2 I = Ξ 1 R + γ 2 M 1 T M 3 M 1 2 ,     θ 3 R = θ 3 I = θ 1 R + γ 2 M 3 T M 3 M 3 .   Π 2 R = Π 1 R + γ 2 M 2 T M 2 ,   Π 2 I = Π 1 I + γ 2 M 2 T M 2 .
Proof. 
Considering the same Lyapunov function as (25), combined with (46) and (47), it follows that
V ˙ ( t ) ( I R ( t ) ) T g R ( t ) + ( I I ( t ) ) T g I ( t ) + γ 2 ( w R ( t ) ) T w R ( t ) + ( w I ( t ) ) T w I ( t ) ( e R ( t ) ) T ( 2 H 2 λ R + 2 χ R + 2 χ R Λ ) e R ( t ) + ( e R ( t ) ) T [ 2 ( E A ) ] z R ( t ) + ( z R ( t ) ) T ( 2 B + C R + C I + D R + D I 2 φ R ) z R ( t ) + 2 | ( z R ( t ) ) T | ( W F R + Q F R G R ) + 2 ( z R ( t ) ) T I R ε j = 1 n 2 t τ j ( t ) t η j R e j R ( s ) 2 1 ζ d s β + 1 2 2 ε ( ( e R ( t ) ) T e R ( t ) ) β + 1 2 2 ε ( ( z R ( t ) ) T z R ( t ) ) β + 1 2 + ( e I ( t ) ) T ( 2 H 2 λ I + 2 χ I + 2 χ I Λ ) e I ( t ) + ( e I ( t ) ) T [ 2 ( E A ) ] z I ( t ) + ( z I ( t ) ) T ( 2 B + C R + C I + D R + D I 2 φ I ) z I ( t ) + 2 | ( z I ( t ) ) T | ( W F I + Q F I G I ) + 2 ( z I ( t ) ) T I I ε j = 1 n 2 t τ j ( t ) t η j I e j I ( s ) 2 1 ζ d s β + 1 2 2 ε ( ( e I ( t ) ) T e I ( t ) ) β + 1 2 2 ε ( ( z I ( t ) ) T z I ( t ) ) β + 1 2 [ I R ( t ) T M 1 z R ( t ) + I R ( t ) T M 2 e R ( t ) + I R ( t ) T M 3 I R ( t ) + I I ( t ) T M 1 z I ( t ) + I I ( t ) T M 2 e I ( t ) + I I ( t ) T M 3 I I ( t ) ] + [ γ 2 z R ( t ) T M 1 T M 1 z R ( t ) + γ 2 z R ( t ) T M 1 T M 2 e R ( t ) + γ 2 z R ( t ) T M 1 T M 3 I R ( t ) + γ 2 e R ( t ) T M 2 T M 2 e R ( t ) + γ 2 e R ( t ) T M 2 T M 3 I R ( t ) + γ 2 I R ( t ) T M 3 T M 3 I R ( t ) + γ 2 z I ( t ) T M 1 T M 1 z I ( t ) + γ 2 z I ( t ) T M 1 T M 2 e I ( t ) + γ 2 z I ( t ) T M 1 T M 3 I I ( t ) + γ 2 e I ( t ) T M 2 T M 2 e I ( t ) + γ 2 e I ( t ) T M 2 T M 3 I I ( t ) + γ 2 I I ( t ) T M 3 T M 3 I I ( t ) ]
Γ R ( t ) T Φ 2 R δ 2 R ϕ 2 R ( δ 2 R ) T ϖ 2 R Ξ 2 R ( ϕ 2 R ) T ( Ξ 2 R ) T θ 3 R Γ R ( t ) 2 ε e R ( t ) T e R ( t ) β + 1 2
2 ε z R ( t ) T z R ( t ) β + 1 2 ε j = 1 n 2 t τ j ( t ) t η j R e j R ( s ) 2 1 ζ d s β + 1 2
+ Γ I ( t ) T Φ 2 I δ 2 I ϕ 2 I ( δ 2 I ) T ϖ 2 I Ξ 2 I ( ϕ 2 I ) T ( Ξ 2 I ) T θ 3 I Γ I ( t ) 2 ε e I ( t ) T e I ( t ) β + 1 2
2 ε z I ( t ) T z I ( t ) β + 1 2 ε j = 1 n 2 t τ j ( t ) t η j I e j I ( s ) 2 1 ζ d s β + 1 2
+ 2 | z R ( t ) T | ( W F R + Q F R G R ) + 2 | z I ( t ) T | ( W F I + Q F I G I )
2 ε ( e R ( t ) T e R ( t ) + z R ( t ) T z R ( t ) + e I ( t ) T e I ( t ) + z I ( t ) T z I ( t ) + j = 1 n ( t τ j ( t ) t η j R e j R ( s ) 2 1 ζ d s + j = 1 n t τ j ( t ) t η j R e j R ( s ) 2 1 ζ d s β + 1 2 = 2 ε ( V ( t ) ) β + 1 2 = ε ˜ ( V ( t ) ) β ˜ ,
where  ε ˜ = 2 ε β ˜ = β + 1 2 0 < ε ˜ R 0 < β ˜ < 1 . Then, one derives
( I R ( t ) ) T g R ( t ) + ( I I ( t ) ) T g I ( t ) + γ 2 ( w R ( t ) ) T w R ( t ) + ( w I ( t ) ) T w I ( t ) V ˙ ( t ) + ε ˜ ( V ( t ) ) β ˜ .
Consequently, the system (11) can achieve FTOSP under controller (19). When z(t) = 0, the proving procedure resembles that of Theorem 1 and is omitted here. So, based on Definition 3, the system (11) can obtain FTOSP through controllers (19) and (20). □

3.2. Finite-Time Synchronization

Theorem 4.
Suppose that  U ( t ) [ 0 , + ] [ 0 , + ]  is a differentiable continuous function which has
ω ( x ( t ) 2 ) U ( t ) ,
where  ω : [ 0 , + ] [ 0 , + ]  stands for a strictly monotonically continuous increasing function, for  r > 0 ω ( r )  is positive with  ω ( 0 ) = 0 .  If network system (11) obtains FTP (FTISP, FTOSP) through control inputs (19) and (20), the networks (9) and (10) reach FTS based on controllers (19) and (20).
Proof. 
If system (11) realizes FTP under controllers (19) and (20), there exist  0 < ε ˜ R  and  0 < β ˜ < 1 , such that
I^R(t)^T g^R(t) + I^I(t)^T g^I(t) ≥ U̇(t) + ε̃(U(t))^β̃.
Considering  I ( t ) = 0 , we have
U̇(t) ≤ −ε̃(U(t))^β̃.
From Lemma 1, we obtain U(t) = 0 for t ≥ T, where T = (U(0))^{1−β̃} / (ε̃(1 − β̃)).
Because of
ω ( x ( t ) 2 ) U ( t ) ,
one attains
ω ( x ( t ) 2 ) U ( t ) = 0 ,
where  t T . Then, we can derive  x ( t ) 2 = 0 . Namely, the system (9) and (10) arrive at FTS under controller (19) and (20). Similarly, when system (11) obtains FTISP and FTOSP, we can deduce that system (9) and (10) reach FTS under control inputs (19) and (20). □
Corollary 1.
If there exist
λ R = D i a g ( λ 1 R , λ 2 R , λ n R ) , λ I = D i a g ( λ 1 I , λ 2 I , λ n R ) , φ R = D i a g ( φ 1 R , φ 2 R , φ n R ) , φ I = D i a g ( φ 1 I , φ 2 I , φ n I ) , G R = D i a g ( G 1 R , G 2 R , G n R ) , G I = D i a g ( G 1 I , G 2 I , G n I ) R n × n ,
satisfying such conditions as
W F^R + Q F^R − G^R ≤ 0,  W F^I + Q F^I − G^I ≤ 0,
[Φ_1^R, δ_1^R, ϕ_1^R; (δ_1^R)^T, ϖ_1^R, Ξ_1^R; (ϕ_1^R)^T, (Ξ_1^R)^T, θ_1^R] ≤ 0   and   [Φ_1^I, δ_1^I, ϕ_1^I; (δ_1^I)^T, ϖ_1^I, Ξ_1^I; (ϕ_1^I)^T, (Ξ_1^I)^T, θ_1^I] ≤ 0,
[Π_1^R, ϕ_1^R; (ϕ_1^R)^T, θ_1^R] ≤ 0   and   [Π_1^I, ϕ_1^I; (ϕ_1^I)^T, θ_1^I] ≤ 0,
where Φ_1^R, Φ_1^I, δ_1^R, ϕ_1^R, ϖ_1^R, ϖ_1^I, Ξ_1^R, Π_1^I, Π_1^R, and θ_1^R are the same as in Theorem 1; then, FICVNN (1) achieves finite-time synchronization under controllers (19) and (20).
Remark 4.
Commonly, it is not easy to deal with passivity and synchronization in real-valued neural network systems, let alone to solve the problem of FTP and FTS with complex-valued parameters. Furthermore, owing to the inertial terms and fuzzy logic, traditional methods cannot directly handle the FTP and FTS of FICVNNs. To overcome these difficulties, in this paper, suitable control inputs and novel Lyapunov functionals are divided into real and imaginary parts to ensure that the FICVNNs achieve passivity and synchronization within a finite time interval.

3.3. Example

Example 1.
Consider the delayed FICVNNs as follows:
ẍ_k(t) = −a_k x_k(t) − b_k ẋ_k(t) + Σ_{j=1}^{2} c_{kj} f_j(x_j(t)) + Σ_{j=1}^{2} d_{kj} f_j(x_j(t − τ_j(t))) + ⋀_{j=1}^{2} w_{kj} f_j(x_j(t − τ_j(t))) + ⋁_{j=1}^{2} q_{kj} f_j(x_j(t − τ_j(t))).        (65)
Through Formulae (3) and (4), the matrix form of system (65) is demonstrated by
x ˙ ( t ) = H x ( t ) + v ( t ) v ˙ ( t ) = A x ( t ) B v k ( t ) + C f ( x ( t ) ) + D f ( x ( t ) ¯ ) W f ( x ( t ) ¯ ) + Q f ( x ( t ) ¯ ) ,
where  A = d i a g ( 0.3 , 0.2 ) B = d i a g ( 1.1 , 1.2 ) H = d i a g ( 1 , 1 ) C = ( c k j ) 2 × 2 c 11 = 0.6 1.2 i c 12 = 8.1 + 5.1 i c 21 = 3.4 1.9 i c 22 = 550 550 i D = ( d k j ) 2 × 2 d 11 = 1.8 + 1.3 i d 12 = 3.9 6.9 i d 21 = 0.3 4.3 i d 12 = 536 536 i W = ( w k j ) 2 × 2 w 11 = 0.5 w 12 = 1 w 21 = 0.1 w 22 = 1 Q = ( q k j ) 2 × 2 w 11 = 0.1 w 12 = 0.2 w 21 = 0.3 w 22 = 0.1 τ j ( t ) = e t 1 + e t η R = η I = 1 f j ( · ) = tanh ( R e ( · ) ) + i tanh ( I m ( · ) ) j = 1 , 2 . The initial conditions of the system (65) are selected as  x 1 ( 0 ) = 10 + 10 i v 1 ( 0 ) = 2.5 + 0.9 i x 2 ( 0 ) = 0.6 + 0.2 i v 1 ( 0 ) = 0.4 + 6 i . Then, Figure 1, Figure 2 and Figure 3 show the trajectories of states  x k ( t ) , v k ( t ) , k = 1 , 2  without control. Moreover, consider system (66) as a drive system; the response system is
u ˙ ( t ) = H u ( t ) + y ( t ) + Δ ( t ) y ˙ ( t ) = A u ( t ) B y k ( t ) + C f ( u ( t ) ) + D f ( u ( t ) ¯ ) W f ( u ( t ) ¯ ) + Q f ( u ( t ) ¯ ) + m ( t ) + I ( t ) ,
where  Δ ( t ) , m(t) are controllers given in formula (19), I(t) is external input,  I 1 ( t ) = 3.9 cos ( t ) + 5.4 cos ( t ) i I 2 ( t ) = 5.2 cos ( t ) + 9.8 cos ( t ) i . We select  M 1 M 2 , and  M 3  as follows:
M 1 = 4 0.1 2 0.5 , M 2 = 2 0.2 3 0.2 , M 3 = 32 11 18 9 .
The rest of parameters are the same as in system (66). In addition, the parameters in (19) are selected as  ε = 0.5 β = 0.5 ζ = 0.1 χ = d i a g ( 0.1 + 0.6 i , 1 + 0.8 i ) , and  G = d i a g ( 70 + 27 i , 60 + 5 i ) . Take  λ = d i a g ( 5 + 2.6 i , 5 + 2.6 i )  and  φ = d i a g ( 9 + 15.6 i , 9 + 15.6 i ) , which satisfy the condition of Theorem 1. Through Theorem 1, the network in (65) obtains FTP under controller (19). Take  γ 1 = 0.2 λ = d i a g ( 12 + 8.3 i , 12 + 8.3 i ) , and  φ = d i a g ( 18 + 0.5 i , 18 + 0.5 i ) , which satisfy the condition of Theorem 2; (65) achieves FTISP under controller (19). Take  γ 2 = 0.8 λ = d i a g ( 21.4 + 9.1 i , 21.4 + 9.1 i ) , and  φ = d i a g ( 15 + 0.9 i , 15 + 0.9 i ) , and one can satisfy the condition of Theorem 3. In terms of Theorem 3, system (65) can realize the FTOSP under controller (19). Above all, the simulation of dynamical changes for state error  e ( t ) z ( t ) , input  I ( t ) , and output  g ( t )  are given in Figure 4.
Let  e j R ( t ) = u j R ( t ) x j R ( t ) e j I ( t ) = u j I ( t ) x j I ( t ) z j R ( t ) = y j R ( t ) v j R ( t ) z j I ( t ) = y j I ( t ) v j I ( t ) j = 1 , 2 . Figure 5 denotes the trajectories of state error  R e ( e j ( t ) ) , R e ( z j ( t ) ) j = 1 , 2  and  I m ( e j ( t ) ) , I m ( z j ( t ) ) j = 1 , 2  with controller (19) when (65) is finite-time passive. The state trajectories of state error  R e ( e j ( t ) ) R e ( z j ( t ) ) j = 1 , 2  and  I m ( e j ( t ) ) I m ( z j ( t ) ) j = 1 , 2 , respectively, are illustrated in Figure 6.
According to Corollary 1, the network (65) with the above parameters can realize FTS under controller (19), and the settling time is computed as T* = 5.904. As Figure 7 reveals, when the time reaches 5.904, the corresponding simulation curves of the state errors Re(e_j(t)), Re(z_j(t)), j = 1, 2, and Im(e_j(t)), Im(z_j(t)), j = 1, 2, tend to 0. This demonstrates that network (65) can achieve finite-time synchronization.
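For reference, a settling time of this kind follows the Lemma 1 bound with ε̃ = 2ε and β̃ = (β + 1)/2. The sketch below shows how such a value would be evaluated, using a hypothetical value for the initial Lyapunov functional V(0), which the paper does not report.

```python
# Sketch of evaluating the finite-time synchronization settling time of Theorem 4 /
# Corollary 1, reusing the Lemma 1 bound with eps_tilde = 2*eps, beta_tilde = (beta+1)/2.
# V0 is a hypothetical initial value of the Lyapunov functional, not taken from the paper.
def synchronization_settling_time(V0, eps=0.5, beta=0.5):
    eps_t, beta_t = 2 * eps, (beta + 1) / 2      # here eps_t = 1.0, beta_t = 0.75
    return V0 ** (1 - beta_t) / (eps_t * (1 - beta_t))

print(synchronization_settling_time(4.0))        # illustrative V(0)
```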
Example 2.
The application example of FICVNNs (65) relates to the pseudorandom number generator (PRNG) [43]. Let a sequence of pseudorandom numbers be k(t̆) = ϑ(o_1(t̆), o_2(t̆)), t̆ ∈ [t̆_start, t̆_end], where [t̆_start, t̆_end] stands for the operating time interval; then, one has
ϑ(o_1(t̆), o_2(t̆)) = 1 if o_1(t̆) ≤ o_2(t̆), and ϑ(o_1(t̆), o_2(t̆)) = 0 if o_1(t̆) > o_2(t̆),
where
o_1(t̆) = x_1^R(t̆) / max_{t̆ ∈ [t̆_start, t̆_end]} x_2^R(t̆),
o_2(t̆) = x_2^I(t̆) / max_{t̆ ∈ [t̆_start, t̆_end]} x_2^I(t̆).
Consequently, using the same parameters as in Example 1 and owing to the chaotic features of the FICVNNs, the PRNG output is shown in Figure 8. Figure 9 shows the original transmission signal s(t). Then, through the transformation p(t) = s(t) ⊕ k(t), we obtain the encrypted signal in Figure 10.
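A minimal sketch of this PRNG-based encryption pipeline is given below, under stated assumptions: the chaotic samples are placeholders for simulated trajectories of (65), each sequence is normalized by its own maximum, and the combination p(t) = s(t) ⊕ k(t) is taken as a bitwise XOR of binary sequences.

```python
import numpy as np

# Sketch of the PRNG and encryption scheme of Example 2 (assumptions noted above).
rng = np.random.default_rng(seed=1)
x1_R = rng.standard_normal(1000)            # placeholder for Re(x_1(t)) samples from (65)
x2_I = rng.standard_normal(1000)            # placeholder for Im(x_2(t)) samples from (65)

o1 = x1_R / np.abs(x1_R).max()              # assumed normalization by own maximum
o2 = x2_I / np.abs(x2_I).max()
k = (o1 <= o2).astype(np.uint8)             # pseudorandom bit sequence k(t)

s = (rng.standard_normal(1000) > 0).astype(np.uint8)   # placeholder binary signal s(t)
p = s ^ k                                   # encrypted signal p(t) = s(t) XOR k(t)
assert np.array_equal(p ^ k, s)             # XOR with k(t) again recovers s(t)
```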

4. Conclusions

This article investigated the finite-time passivity and finite-time synchronization of the proposed FICVNNs with time delays. By transforming the second-order complex-valued model into first-order real-valued differential systems, the Lyapunov functional method and some novel controllers were explored to guarantee the FTP, FTISP, and FTOSP of FICVNNs. Furthermore, based on the finite-time passive FICVNNs, finite-time synchronization was investigated. Finally, some numerical simulations were provided to confirm the theoretical results.
Some existing works have discussed the passive properties of neural network systems, which can maintain a system's internal stability. For example, the infinite-time passivity or infinite-time synchronization of neural networks is generally considered by resorting to passivity theory [24,35,49]. Compared with the above literature, the neural model built in this paper, with inertial terms, complex-valued parameters, fuzzy logic, and time delays, enriches and expands previous results. It is expected to lay a foundation for designing novel controllers for fixed-time or predefined-time synchronization of neural networks [9,51]. Compared with [33], the related results can also be developed to fit inertial neural networks whose system parameters switch in a state-dependent manner. This is an essential and valuable direction for investigating various kinds of neural networks with more control methods in the future.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares that they have no known competing financial interest or personal circumstances that could have appeared to influence the work reported in this manuscript.

References

  1. Xiao, J.Y.; Li, Y.T.; Wen, S.P. Mittag-Leffler synchronization and stability analysis for neural networks in the fractional-order multi-dimension field. Knowl.-Based Syst. 2021, 231, 107404. [Google Scholar] [CrossRef]
  2. Soltani, A.; Abadi, S. A novel control system for synchronizing chaotic systems in the presence of communication channel time delay; case study of Genesio-Tesi and Coullet systems. Nonlinear Anal. Hybrid Syst. 2023, 50, 101408. [Google Scholar]
  3. Huang, X.Q.; Lin, W.; Yang, B. Global finite-time stabilization of a class of uncertain nonlinear systems. Automatica 2005, 41, 881–888. [Google Scholar] [CrossRef]
  4. Wheeler, D.W.; Schieve, W.C. Stability and chaos in an inertial two-neuron system. Physica D 1997, 105, 267–284. [Google Scholar] [CrossRef]
  5. Jian, J.G.; Duan, L. Finite-time synchronization for fuzzy neutral-type inertial neural networks with time-varying coefficients and proportional delays. Fuzzy Sets Syst. 2020, 381, 51–67. [Google Scholar] [CrossRef]
  6. Rakkiyappan, R.; Premalatha, S.; Chandrasekar, A.; Cao, J.D. Stability and synchronization analysis of inertial memristive neural networks with time delays. Cogn. Neurodyn. 2016, 10, 437–451. [Google Scholar] [CrossRef]
  7. Prakash, M.; Balasubramaniam, P.; Lakshmanan, S. Synchronization of Markovian jumping inertial neural networks and its applications in image encryption. Neural Netw. 2016, 83, 86–93. [Google Scholar] [CrossRef]
  8. Zhang, W.; Huang, T.; He, X.; Li, C. Global exponential stability of inertial memristor-based neural networks with time-varying delays and impulses. Neural Netw. 2017, 95, 102–109. [Google Scholar] [CrossRef]
  9. Han, J.; Chen, G.C.; Hu, J.H. New results on anti-synchronization in predefined-time for a class of fuzzy inertial neural networks with mixed time delays. Neurocomputing 2022, 495, 26–36. [Google Scholar] [CrossRef]
  10. Tu, Z.W.; Cao, J.; Alsaedi, A.; Alsaadi, F. Global dissipativity of memristor-based neutral type inertial neural networks. Neural Netw. 2017, 88, 125–133. [Google Scholar] [CrossRef]
  11. Shanmugasundaram, S.; Udhayakumar, K.; Gunasekaran, D.; Rakkiyappan, R. Event-triggered impulsive control design for synchronization of inertial neural networks with time delays. Neurocomputing 2022, 483, 322–332. [Google Scholar] [CrossRef]
  12. Zhang, Z.Q.; Cao, J.D. Novel finite-time synchronization criteria for inertial neural networks with time delays via integral inequality method. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1476–1485. [Google Scholar] [CrossRef] [PubMed]
  13. Han, S.Y.; Hu, C.; Yu, J.; Jiang, H.J.; Wen, S.P. Stabilization of inertial Cohen-Grossberg neural networks with generalized delays: A direct analysis approach. Chaos Solitons Fractals 2021, 142, 110432. [Google Scholar] [CrossRef]
  14. Wan, P.; Jian, J. Global convergence analysis of impulsive inertial neural networks with time-varying delays. Neurocomputing 2017, 245, 68–76. [Google Scholar] [CrossRef]
  15. Alimi, A.M.; Aouiti, C.; Assali, E.A. Finite-time and fixed-time synchronization of a class of inertial neural networks with multi-proportional delays and its application to secure communication. Neurocomputing 2019, 332, 29–43. [Google Scholar] [CrossRef]
  16. Duan, L.Y.; Li, J.M. Fixed-time synchronization of fuzzy neutral-type BAM memristive inertial neural networks with proportional delays. Inf. Sci. 2021, 576, 522–541. [Google Scholar] [CrossRef]
  17. Kong, F.C.; Ren, Y.; Sakthivel, R. Delay-dependent criteria for periodicity and exponential stability of inertial neural networks with time-varying delays. Neurocomputing 2021, 419, 261–272. [Google Scholar] [CrossRef]
  18. Zheng, C.C.; Yu, J.; Jiang, H.J. Fixed-time synchronization of discontinuous competitive neural networks with time-varying delays. Neural Netw. 2022, 153, 192–203. [Google Scholar] [CrossRef]
  19. Cao, Y.T.; Wang, S.Q.; Guo, Z.Y.; Huang, T.W.; Wen, S.P. Stabilization of memristive neural networks with mixed time-varying delays via continuous/periodic event-based control. J. Frankl. Inst. 2020, 357, 7122–7138. [Google Scholar] [CrossRef]
  20. Kumar, R.; Das, S. Exponential stability of inertial BAM neural network with time-varying impulses and mixed time-varying delays via matrix measure approach. Commun. Nonlinear Sci. Numer. Simul. 2020, 81, 105016. [Google Scholar] [CrossRef]
  21. Zhang, G.D.; Zeng, Z.G. Stabilization of second-order memristive neural networks with mixed time delays via non-reduced order. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 700–706. [Google Scholar] [CrossRef] [PubMed]
  22. Han, J.; Chen, G.C.; Zhang, G.D. Exponential stabilization of fuzzy inertial neural networks with mixed delays. In Proceedings of the 3rd International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China, 8–11 November 2021; pp. 1–6. [Google Scholar]
  23. Xiao, J.; Zeng, Z.G. Finite-time passivity of neural networks with time varying delay. J. Frankl. Inst. 2020, 357, 2437–2456. [Google Scholar] [CrossRef]
  24. Zhang, G.D.; Shen, Y.; Yin, Q.; Sun, J.W. Passivity analysis for memristor-based recurrent neural networks with discrete and distributed delays. Neural Netw. 2015, 61, 49–58. [Google Scholar] [CrossRef] [PubMed]
  25. Wang, L.M.; Zeng, K.; Hu, C.; Zhou, Y.J. Multiple finite-time synchronization of delayed inertial neural networks via a unified control scheme. Knowl.-Based Syst. 2022, 236, 107785. [Google Scholar] [CrossRef]
  26. Liu, Z.Y.; Ge, Q.B.; Li, Y.; Hu, J.H. Finite-time synchronization of memristor-based recurrent neural networks with inertial items and mixed delays. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 2701–2711. [Google Scholar] [CrossRef]
  27. Wu, K.; Jian, J.G. Non-reduced order strategies for global dissipativity of memristive neutral-type inertial neural networks with mixed time-varying delays. Neurocomputing 2021, 436, 174–183. [Google Scholar] [CrossRef]
  28. Tu, Z.W.; Cao, J.D.; Hayat, T. Matrix measure based dissipativity analysis for inertial delayed uncertain neural networks. Neural Netw. 2016, 75, 47–55. [Google Scholar] [CrossRef] [PubMed]
  29. Chen, X.; Zhao, Z.; Song, Q.; Hu, J. Multistability of complex-valued neural networks with time-varying delays. Appl. Math. Comput. 2017, 294, 18–35. [Google Scholar] [CrossRef]
  30. Ding, X.; Cao, J.; Alsaedi, A.; Alsaadi, F.; Hayat, T. Robust fixed-time synchronization for uncertain complex-valued neural networks with discontinuous activation functions. Neural Netw. 2017, 90, 42–55. [Google Scholar] [CrossRef]
  31. Zhu, S.; Liu, D.; Yang, C.Y.; Fu, J. Synchronization of memristive complex-valued neural networks with time delays via pinning control method. IEEE Trans. Cybern. 2020, 50, 3806–3815. [Google Scholar] [CrossRef]
  32. Udhayakumar, K.; Rakkiyappan, R.; Rihan, F.A.; Banerjee, S. Projective Multi-Synchronization of Fractional-order Complex-valued Coupled Multi-stable Neural Networks with Impulsive Control. Neurocomputing 2022, 467, 392–405. [Google Scholar] [CrossRef]
  33. Huang, Y.L.; Wu, F. Finite-time passivity and synchronization of coupled complex-valued memristive neural networks. Inf. Sci. 2021, 580, 775–800. [Google Scholar] [CrossRef]
  34. Li, H.L.; Hu, C.; Cao, J.D.; Jiang, H.J.; Alsaedi, A. Quasi-projective and complete synchronization of fractional-order complex-valued neural networks with time delays. Neural Netw. 2019, 118, 102–109. [Google Scholar] [CrossRef] [PubMed]
  35. Wang, J.L.; Wu, H.N.; Huang, T.W. Passivity-based synchronization of a class of complex dynamical networks with time-varying delays. Automatica 2015, 56, 105–112. [Google Scholar] [CrossRef]
  36. Nitta, T. Solving the XOR problem and the detection of symmetry using a single complex-valued neuron. Neural Netw. 2003, 16, 1101–1105. [Google Scholar] [CrossRef] [PubMed]
  37. Wei, X.F.; Zhang, Z.Y.; Lin, C.; Chen, J. Synchronization and anti-synchronization for complex-valued inertial neural networks with time-varying delays. Appl. Math. Comput. 2021, 403, 126194. [Google Scholar] [CrossRef]
  38. Tang, Q.; Jian, J.G. Global exponential convergence for impulsive inertial complex-valued neural networks with time-varying delays. Math. Comput. Simul. 2019, 159, 39–56. [Google Scholar] [CrossRef]
  39. Yu, J.; Hu, C.; Jiang, H.J.; Wang, L.M. Exponential and adaptive synchronization of inertial complex-valued neural networks: A non-reduced order and non-separation approach. Neural Netw. 2020, 124, 55–59. [Google Scholar] [CrossRef]
  40. Long, C.Q.; Zhang, G.D.; Hu, J.H. Fixed-time synchronization for delayed inertial complex-valued neural networks. Appl. Math. Comput. 2021, 405, 126272. [Google Scholar] [CrossRef]
  41. Long, C.Q.; Zhang, G.D.; Zeng, Z.G.; Hu, J.H. Finite-time stabilization of complex-valued neural networks with proportional delays and inertial terms: A non-separation approach. Neural Netw. 2022, 148, 86–95. [Google Scholar] [CrossRef]
  42. Yang, T.; Yang, L.; Wu, C.; Chua, L. Fuzzy cellular neural networks: Theory. In Proceedings of the IEEE International Workshop on Cellular Neural Networks and Applications, Seville, Spain, 24–26 June 1996; pp. 181–186. [Google Scholar]
  43. Han, J.; Chen, G.C.; Zhang, G.D. Anti-synchronization control of fuzzy inertial neural networks with distributed time delays. In Proceedings of the International Conference on Neuromorphic Computing (ICNC), Wuhan, China, 11–14 October 2021; pp. 99–103. [Google Scholar]
  44. Aouiti, C.; Hui, Q.; Jallouli, H.; Moulay, E. Fixed-time stabilization of fuzzy neural-type inertial neural networks with time-varying delay. Fuzzy Sets Syst. 2021, 411, 48–76. [Google Scholar] [CrossRef]
  45. Xiao, Q.; Huang, T.W.; Zeng, Z.G. Passivity and passification of fuzzy memristive inertial neural networks on time scales. IEEE Trans. Fuzzy Syst. 2018, 26, 3342–3353. [Google Scholar] [CrossRef]
  46. Li, X.F.; Huang, T.W. Adaptive synchronization for fuzzy inertial complex-valued neural networks with state-dependent coefficients and mixed delays. Fuzzy Sets Syst. 2021, 411, 174–189. [Google Scholar] [CrossRef]
  47. Wang, J.L.; Xu, M.; Wu, H.N.; Huang, T.W. Finite-time passivity of coupled neural networks with multiple weights. IEEE Trans. Netw. Sci. Eng. 2018, 5, 184–196. [Google Scholar] [CrossRef]
  48. Wang, J.L.; Wu, H.N.; Huang, T.W.; Ren, S.Y. Passivity and synchronization of linearly coupled reaction-diffusion neural networks with adaptive coupling. IEEE Trans. Cybern. 2015, 45, 1942–1952. [Google Scholar] [CrossRef]
  49. Wu, A.; Zeng, Z. Passivity analysis of memristive neural networks with different memductance functions. Commun. Nonlinear Sci. Numer. Simul. 2014, 19, 274–285. [Google Scholar] [CrossRef]
  50. Tang, Y. Terminal sliding mode control for rigid robots. Automatica 1998, 34, 51–56. [Google Scholar] [CrossRef]
  51. Han, J.; Chen, G.; Wang, L.; Zhang, G.; Hu, J. Direct approach on fixed-time stabilization and projective synchronization of inertial neural networks with mixed time delays. Neurocomputing 2023, 535, 97–106. [Google Scholar] [CrossRef]
Figure 1. Blue lines show the transient behavior of the variables Re(x_k(t)), k = 1, 2, and red lines show the transient behavior of the variables Im(x_k(t)), k = 1, 2, of FICVNNs (65).
Figure 2. Blue lines show the transient behavior of the variables Re(v_k(t)), k = 1, 2, and red lines show the transient behavior of the variables Im(v_k(t)), k = 1, 2, of FICVNNs (65).
Figure 3. Blue lines show the state trajectories of the variables Re(x_k(t)), Re(v_k(t)), k = 1, 2, and red lines show the state trajectories of the variables Im(x_k(t)), Im(v_k(t)), k = 1, 2, of FICVNNs (65) without control.
Figure 4. The curves of the error states e_k(t), z_k(t), external input I_k(t), and output g_k(t), k = 1, 2, under controller (19).
Figure 5. Trajectories of the error states e_k(t), z_k(t), k = 1, 2, under controller (19).
Figure 6. Trajectories of the errors e_k(t), z_k(t), k = 1, 2, with controller (19).
Figure 7. The synchronization curves of the error states e_k(t), z_k(t), k = 1, 2, with controller (19).
Figure 8. PRNG produced by FICVNNs.
Figure 9. Original signals.
Figure 10. Encrypted signals.
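Figures 8–10 illustrate a PRNG-based signal-encryption application of the FICVNNs. As a rough, hypothetical illustration only, and not the paper's actual algorithm, the following Python sketch shows one common way a sampled chaotic trajectory such as Re(x_k(t)) could be quantized into a byte keystream and XORed with a sampled signal; the function names (keystream_from_states, xor_encrypt), the quantization rule, and the placeholder trajectory are all assumptions introduced for this example.

```python
import numpy as np

def keystream_from_states(states, n_bytes):
    """Quantize a sampled trajectory (e.g., Re(x_k(t)) of the FICVNNs) into a
    byte keystream. The mapping below, which keeps only the fine-scale detail
    of the samples, is purely illustrative."""
    x = np.asarray(states, dtype=float)
    frac = np.abs(x * 1e4) % 1.0              # discard coarse dynamics, keep fine detail
    key = np.floor(frac * 256).astype(np.uint8)
    reps = int(np.ceil(n_bytes / key.size))   # repeat keystream if signal is longer
    return np.tile(key, reps)[:n_bytes]

def xor_encrypt(signal_bytes, key):
    """XOR the byte-encoded signal with the keystream; XOR-ing twice recovers it."""
    return np.bitwise_xor(signal_bytes, key)

# Toy usage: a sine wave as the "original signal", encrypted with a keystream
# derived from a stand-in chaotic trajectory (placeholder, not a true FICVNN state).
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)
signal_bytes = np.floor((signal + 1.0) / 2.0 * 255).astype(np.uint8)  # 8-bit quantization
chaotic_states = np.cos(37.0 * t) * np.exp(np.sin(91.0 * t))          # placeholder for Re(x_k(t))

key = keystream_from_states(chaotic_states, signal_bytes.size)
encrypted = xor_encrypt(signal_bytes, key)
decrypted = xor_encrypt(encrypted, key)
assert np.array_equal(decrypted, signal_bytes)
```

Because XOR is an involution, applying the same keystream a second time recovers the original samples, so the receiver only needs to reproduce the keystream to decrypt.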