© 2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).
OPEN ACCESS
A major need for automatic classification of digital signal formats has emerged over the last few decades. Automatic classification is of high importance in military and civil applications and in communication systems. Recognition of the received signal's modulation can be defined as a transitional stage between signal detection and demodulation. System performance can be improved by removing irrelevant or weak features and retaining only strong, relevant features for classification. This study presents a hybrid intelligent system for digital signal type recognition at signal-to-noise ratios from -2 to 10 dB, covering the six most important digital modulation schemes (2PSK, 4PSK, 8PSK, 2FSK, 4FSK, and 8FSK). Both high-order moments (HOMs) and high-order cumulants (HOCs) have been utilized. The system is composed of three primary modules: a feature extraction module, an optimization module using the Spotted Hyena Optimizer (SHO), applied for the first time to this type of digitally modulated signal, and an SVM classifier. Simulation results confirm the suggested system's high recognition accuracy, even at low SNRs. The combination of the SVM classifier and the SHO algorithm achieved 93% classification accuracy, greater than the 89% accuracy achieved with SVM without optimization.
classification, Spotted Hyena Optimizer (SHO) algorithm, SVM algorithm
The middle step between signal detection and demodulation is called automatic modulation recognition. From the variety of possible modulations, this stage determines the received signal's modulation type. Automatic modulation recognition finds extensive use in both military and commercial contexts. Recognition of digital modulation is especially important these days because of the rise of digital modulation in both civil and military applications. It is common practice to extract certain features of the received signal and use them for automatic modulation recognition. Selecting the right features is crucial to improving recognition efficiency. An automatic modulation classification system typically functions using one of two approaches: the Decision Theoretic (DT) technique or the Pattern Recognition (PR) method [1]. The primary drawbacks of DT methods are their highly complex computations and their lack of robustness against model mismatches. The DT technique uses probabilistic hypothesis-testing arguments to formulate the recognition problem. Because of these concerns, DT methods are ineffective when dealing with different kinds of digital signals. PR techniques, in contrast, comprise two main subsystems: the classifier subsystem and the feature extraction subsystem. The former determines the class memberships of the signals, whereas the latter extracts the features. PR methods do not need special care, so they can be used with ease [2].
In the study [2], various modulation schemes, including 4FSK, 2FSK, 4ASK, 2ASK, 4PSK, 16QAM, 2PSK, 64QAM, and 4QAM, were analyzed under Gaussian noise levels ranging from -5 dB to 20 dB. The results demonstrated an improvement in the accuracy of modulation type identification using a support vector system.
Amudha et al. [3] employed a neural network (NN) algorithm to classify ten modulated signals. Additionally, a hybrid algorithm was applied, integrating the modified artificial bee colony (MABC) method for infiltration detection prediction.
In the study [4], a dual decision tree model was trained on features extracted using a high-order stacking tool. The findings showed that, at an SNR of -5 dB, an average accuracy of more than 91% could be achieved in recognizing modulation signals.
Rajendran et al. [5] proposed a data-driven model that does not rely on expert features, such as high-order moments, for automatic modulation classification. Within an SNR range of 0 dB to 20 dB, the model achieved an accuracy of 90%.
In the study [6], two DL models based on convolutional neural networks (CNNs) were used. The findings demonstrated the DL-based method's significant performance advantage and the experimental viability of its application to modulation classification.
In this work, the digitally modulated signals of interest (2PSK, 4PSK, 8PSK, 2FSK, 4FSK, and 8FSK) are classified with an SVM whose features are optimized using SHO. Selecting the right feature set is one of the primary challenges; combining numerous features for classification typically increases efficiency. The primary objective of this study is to enhance the characteristics of the embedded signals by eliminating the weak signal characteristics and maintaining only the strong ones using the SHO optimization algorithm. This increases the system's accuracy in identifying and detecting the signal type, and classification accuracy is compared both prior to and following optimization. The paper's general structure is as follows: following the introduction, Section 2 examines the feature extraction and optimization methods, Sections 3 and 4 present the proposed system and the classifier, Section 5 presents the findings, and Section 6 provides the study's conclusions.
2.1 Feature extraction
A typical pattern recognition system first performs certain preprocessing operations and then frequently extracts distinct attributes, or features, from the raw data set to minimize its size. The potential incapacity to use the raw data directly brings feature extraction into the picture. In the signal recognition domain, selecting the best features helps the classifier identify more digital signal types while also lowering its complexity. Since different kinds of digital signals have different characteristics, it can be difficult to find the right properties in a signal to identify them (especially among higher-order moments and cumulants). Our research indicates that statistical features offer a precise means of differentiating between the various types of digital signals under consideration [7].
2.1.1 Higher order statistical features
Features are utilized for identifying differentiating qualities of data, such as higher-order statistical quantities like moments and cumulants. Moments are a way to "measure the shape" of a set of points, whereas cumulants are essentially an alternative way of describing a distribution's moments [8]. The following higher-order statistics were chosen as classification features: mean, standard deviation, skewness, and kurtosis. These statistics were chosen because they capture the main characteristics of the data distribution and contribute to improving classification accuracy.
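As an illustration, these four statistics can be computed with NumPy alone. This is a minimal sketch, not the paper's MATLAB implementation; the function name `statistical_features` and the use of excess kurtosis are our own choices:

```python
import numpy as np

def statistical_features(x):
    """Mean, standard deviation, skewness, and excess kurtosis of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma = x.std()
    z = (x - mu) / sigma                   # standardized samples
    skewness = float(np.mean(z ** 3))      # third standardized moment
    kurt = float(np.mean(z ** 4) - 3.0)    # excess kurtosis: 0 for a Gaussian
    return np.array([mu, sigma, skewness, kurt])
```

Each received signal then contributes one such feature vector to the data-set matrix fed to the classifier.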
2.1.2 Moments
One way to describe the moments of a probability density function is as an expected value model. Eq. (1) specifies the $i$-th moment for digital signals of finite length [7].
$\mu_i=\sum_{k=1}^{N}\left(s_k-\mu\right)^i f\left(s_k\right)$ (1)
where N is the data length, $s_k$ the random variable, the subscript k an integer-valued index, and μ the random variable's average value.
Assuming that the signal has 0-mean (μ= 0), therefore Eq. (1) becomes Eq. (2):
$\mu_i=\sum_{k=1}^{N} s_k^i f\left(s_k\right)$ (2)
The random variable’s auto-moment is Eq. (3):
$E_{S, p+q, p}=E\left[S^p\left(S^*\right)^q\right]$ (3)
where, S stands for a discrete random variable, p is the number of non-conjugated terms, q is the number of conjugated terms, and p+q is the moment order [7].
The definition of the k-th moment for finite-length discrete signals is given by Sadkhan-Smieee et al. [9].
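A sample estimate of the auto-moment in Eq. (3) can be sketched in Python (the paper's experiments used MATLAB); `auto_moment` is a hypothetical helper name:

```python
import numpy as np

def auto_moment(s, p, q):
    """Sample estimate of the (p+q)-th order auto-moment E[S^p (S*)^q]
    for a zero-mean complex discrete signal, following Eq. (3)."""
    s = np.asarray(s, dtype=complex)
    return np.mean(s ** p * np.conj(s) ** q)
```

For ideal unit-energy 2PSK symbols, for example, both $E_{S,2,0}$ and $E_{S,2,1}$ evaluate to 1, while for 4PSK the non-conjugated second-order moment vanishes.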
2.1.3 Cumulants
Cumulants can be defined as statistical features. Suppose Eq. (4) is the characteristic function of a random variable S with zero mean:
$f(t)=E\left[e^{j t s}\right]$ (4)
Expanding the logarithm of Eq. (4) in a Taylor series, we get Eq. (5):
$g(t)=\log \left\{E\left[e^{j t s}\right]\right\}=\sum_{n=1}^{\infty} k_n \frac{(j t)^n}{n!}$ (5)
where $k_n$ is the n-th cumulant and t is time.
The n-th order cumulant can be related to the n-th order moment; therefore, Eq. (6) can be written as:
$C_{S, p+q, p}=\operatorname{CUM}[\underbrace{s(t), \ldots, s(t)}_{p}, \underbrace{s^{*}(t), \ldots, s^{*}(t)}_{q}]$ (6)
Therefore, cumulants can be derived from moments as in Eq. (7):
$\operatorname{CUM}\left[S_{1}, \ldots, S_{N}\right]=\sum_{v}(-1)^{q-1}(q-1)!\, E\left[\prod_{j \in v_{1}} S_{j}\right] \cdots E\left[\prod_{j \in v_{q}} S_{j}\right]$ (7)
Table 1. Statistical moments [10]

| Moment Order | Moment | Expression |
|--------------|--------|------------|
| 2 | $\mathrm{E}_{\mathrm{S}, 2,2}$ | $\mathrm{E}\left[a^2-b^2\right]$ |
| 2 | $\mathrm{E}_{\mathrm{S}, 2,1}$ | $\mathrm{E}\left[a^2+b^2\right]$ |
| 4 | $\mathrm{E}_{\mathrm{S}, 4,4}$ | $\mathrm{E}\left[a^4-6 a^2 b^2+b^4\right]$ |
| 4 | $\mathrm{E}_{\mathrm{S}, 4,3}$ | $\mathrm{E}\left[a^4-b^4\right]$ |
| 4 | $\mathrm{E}_{\mathrm{S}, 4,2}$ | $\mathrm{E}\left[a^4+2 a^2 b^2+b^4\right]$ |
| 6 | $\mathrm{E}_{\mathrm{S}, 6,6}$ | $\mathrm{E}\left[a^6+15 a^2 b^4-15 a^4 b^2-b^6\right]$ |
| 6 | $\mathrm{E}_{\mathrm{S}, 6,5}$ | $\mathrm{E}\left[a^6-5 a^2 b^4-5 a^4 b^2+b^6\right]$ |
| 6 | $\mathrm{E}_{\mathrm{S}, 6,4}$ | $\mathrm{E}\left[a^6-a^2 b^4-a^4 b^2-b^6\right]$ |
| 6 | $\mathrm{E}_{\mathrm{S}, 6,3}$ | $\mathrm{E}\left[a^6+a^2 b^4-3 a^4 b^2-b^6\right]$ |
| 8 | $\mathrm{E}_{\mathrm{S}, 8,8}$ | $\mathrm{E}\left[a^8-28 a^2 b^6-28 a^6 b^2+70 a^4 b^4+b^8\right]$ |
| 8 | $\mathrm{E}_{\mathrm{S}, 8,7}$ | $\mathrm{E}\left[a^8+14 a^2 b^6-14 a^6 b^2-b^8\right]$ |
| 8 | $\mathrm{E}_{\mathrm{S}, 8,6}$ | $\mathrm{E}\left[a^8-4 a^2 b^6-4 a^6 b^2-10 a^4 b^4+b^8\right]$ |
| 8 | $\mathrm{E}_{\mathrm{S}, 8,5}$ | $\mathrm{E}\left[a^8-2 a^2 b^6-4 a^6 b^2-b^8\right]$ |
| 8 | $\mathrm{E}_{\mathrm{S}, 8,4}$ | $\mathrm{E}\left[a^8+4 a^2 b^6+4 a^6 b^2+6 a^4 b^4+b^8\right]$ |
Note: Since the cumulant depends on moments, it is possible to derive the cumulants in terms of moments.
The summation is conducted over the partitions $v=\{v_1, \ldots, v_q\}$ of the indices 1, 2, ..., n, where q denotes the number of elements in the partition [11]. From the relationship between HOCs and HOMs, the second-order to eighth-order cumulants and the corresponding expressions of the moments can be easily derived. Table 1 displays the most important moments.
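As a concrete instance of Eq. (7), the widely used zero-mean relations $C_{40}=M_{40}-3M_{20}^2$ and $C_{42}=M_{42}-|M_{20}|^2-2M_{21}^2$ express fourth-order cumulants through moments. The sketch below assumes these standard relations; the helper names are our own:

```python
import numpy as np

def auto_moment(s, p, q):
    """Sample estimate of E[S^p (S*)^q] for a zero-mean complex signal."""
    s = np.asarray(s, dtype=complex)
    return np.mean(s ** p * np.conj(s) ** q)

def fourth_order_cumulants(s):
    """C40 and C42 from moments, using the standard zero-mean relations
    C40 = M40 - 3*M20^2 and C42 = M42 - |M20|^2 - 2*M21^2."""
    m20 = auto_moment(s, 2, 0)
    m21 = auto_moment(s, 1, 1)      # E[s s*] = E[|s|^2]
    m40 = auto_moment(s, 4, 0)
    m42 = auto_moment(s, 2, 2)      # E[s^2 (s*)^2]
    c40 = m40 - 3 * m20 ** 2
    c42 = m42 - abs(m20) ** 2 - 2 * m21 ** 2
    return c40, c42
```

For ideal unit-energy 2PSK symbols these relations give the textbook value $C_{40}=-2$, which is one reason fourth-order cumulants discriminate well between modulation orders.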
2.2 Optimization feature extraction
In order to achieve optimal results, irrelevant or weak features should be removed from the system so that only strong, relevant features remain. Consequently, the system can identify the modulated signals with greater accuracy. The hybrid intelligent system developed in this work for the recognition of digitally modulated signals is verified by comparing the outcomes of applying the SHO feature optimization algorithm versus not applying it. The SHO algorithm simulates the hyenas' control of the most important hunting elements. It starts with different sets of features (represented by "hyenas"), and then evaluates their performance using an objective function (such as the classification model). It moves toward the best set of features while exploring new candidates. The process continues until an optimal set is reached that improves the model's performance and retains only the most influential features.
The system suggested in the present study consists of 2 phases:
Phase 1: An additive white Gaussian noise (AWGN) channel was used for generating the modulated signals (2PSK, 4PSK, 8PSK, 2FSK, 4FSK, and 8FSK) with signal-to-noise ratios (SNRs) of -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10 dB. For feature extraction (FE), HOMs and HOCs were employed. MATLAB programs were created to complete these tasks. The extracted HOCs and HOMs were stored in a data-set matrix. The FE output was then optimized using SHO.
Phase 2: The outputs of the first stage are used as inputs to the SVM for classifying the signal and predicting its kind. Figure 1 shows the suggested system's diagram.
Figure 1. Schematic diagram of the suggested approach
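Phase 1 can be sketched in Python (the paper used MATLAB); the helper names, the restriction to PSK, and the handful of HOM magnitudes used as features are our own simplifications:

```python
import numpy as np

rng = np.random.default_rng(1)

def psk_signal(order, n_symbols):
    """Baseband M-PSK symbol sequence with unit power."""
    phases = 2 * np.pi * rng.integers(0, order, n_symbols) / order
    return np.exp(1j * phases)

def awgn(signal, snr_db):
    """Add complex white Gaussian noise at the requested SNR (unit-power signal)."""
    noise_power = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (
        rng.standard_normal(len(signal)) + 1j * rng.standard_normal(len(signal)))
    return signal + noise

def features(s):
    """A few HOM magnitudes E[s^p (s*)^q] forming one row of the data-set matrix."""
    mom = lambda p, q: np.mean(s ** p * np.conj(s) ** q)
    return np.array([abs(mom(2, 0)), abs(mom(1, 1)), abs(mom(4, 0)), abs(mom(2, 2))])

# One feature row per (modulation, SNR) pair over the SNR grid of Phase 1
dataset = [features(awgn(psk_signal(m, 2000), snr))
           for m in (2, 4, 8) for snr in range(-2, 11)]
```

In the full system FSK signals and cumulant features would be added in the same way, and the resulting matrix is what SHO prunes before classification.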
4.1 SHO
SHO can be defined as a bio-inspired metaheuristic optimization algorithm modeled on the social behavior of spotted hyenas, the largest of the hyena species (the others being the brown hyena, the striped hyena, and the aardwolf). Spotted hyenas are expert hunters that usually hunt and live in groups, with social networks of more than 100 members. The four primary steps of the SHO algorithm mimic the searching, encircling, hunting, and attacking actions of spotted hyenas. The social bonds and cooperative nature of spotted hyenas provide the fundamental idea of this algorithm. The three fundamental processes of SHO are encircling prey, hunting, and attacking prey. Each of these steps is modeled and carried out analytically [12].
Encircling prey:
Other search agents adjust their positions in response to the optimal solution, which is deemed the target prey. The mathematical model of this behavior is provided in the study [12]:
$D_h=\left|B \cdot P_p(x)-P(x)\right|$ (8)

$P(x+1)=P_p(x)-E \cdot D_h$ (9)

where $D_h$ is the distance between the prey and the hyena, x the current iteration, $P_p$ the position vector of the prey, P the position vector of a hyena, and B and E coefficient vectors.
Hunting: The hunting strategy of SHO can be specified in the following way [12]:

$D_h=\left|B \cdot P_h-P_k\right|$ (10)

$P_k=P_h-E \cdot D_h$ (11)

$C_h=P_k+P_{k+1}+\cdots+P_{k+N}$ (12)

$P(x+1)=\frac{C_h}{N}$ (13)

where $P_h$ is the position of the best search agent, $P_k$ the positions of the other hyenas, and $C_h$ the cluster of N optimal solutions.
Attacking prey:
Algorithm 1: Spotted Hyena Optimizer [12]

1: procedure SHO
2: Input: spotted hyena population $p_i$ (i = 1, 2, 3, ..., n)
3: Output: the optimal search agent
4: Initialize h, B, E and N
5: Evaluate the fitness of each search agent
6: $p_h \leftarrow$ identify the best search agent
7: $C_h \leftarrow$ form a group (cluster) of the optimal solutions
8: while x < max iterations do
9:     for each search agent do
10:        Update the current agent's position using Eq. (10)
11:    end for
12:    Update h, B, E and N
13:    Ensure search agents stay within the given search space and adjust if necessary
14:    Compute the fitness of each search agent
15:    Update $p_h$ if a better solution is found compared to the previous optimum
16:    Update group $C_h$ based on $p_h$
17:    x ← x + 1
18: end while
19: return $p_h$
20: end procedure
The mathematical formulation of the attack on the prey is provided by Eq. (13), in which the search agents move toward the centre of the cluster $C_h$.
Searching for prey: The B and E vectors must be evaluated in order to find a feasible solution. The SHO algorithm can avoid local optima and efficiently solve a wide range of high-dimensional problems. Algorithm 1 provides the pseudo-code of the SHO algorithm [12], whose encircling behavior has previously been applied to the flow shop scheduling problem [13]; in this work it is used for feature optimization.
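A much-simplified, binary feature-selection variant of the encircling update (Eqs. (8)-(9)) might look as follows. The thresholding at 0.5, the control-factor schedule, and all function names are our own illustrative assumptions, not the exact published SHO rules:

```python
import numpy as np

rng = np.random.default_rng(2)

def sho_feature_select(fitness, n_features, n_hyenas=10, max_iter=50):
    """Simplified Spotted Hyena Optimizer for binary feature selection.
    Positions live in [0, 1]^n; a feature is kept when its coordinate
    exceeds 0.5.  `fitness` maps a boolean mask to a score to maximise."""
    pos = rng.random((n_hyenas, n_features))
    score = lambda p: fitness(p > 0.5)
    best = max(pos, key=score).copy()
    best_score = score(best)
    for it in range(max_iter):
        h = 5.0 * (1 - it / max_iter)                   # control factor decays 5 -> 0
        for i in range(n_hyenas):
            B = 2 * rng.random(n_features)              # swirl factor
            E = 2 * h * rng.random(n_features) - h      # convergence factor
            D = np.abs(B * best - pos[i])               # encircling distance (Eq. 8)
            pos[i] = np.clip(best - E * D, 0.0, 1.0)    # position update (Eq. 9)
            if score(pos[i]) > best_score:
                best, best_score = pos[i].copy(), score(pos[i])
    return best > 0.5

# Toy fitness: reward agreement with a known "strong feature" pattern
target = np.array([True, True, True, False, False, False])
selected = sho_feature_select(lambda m: int(np.sum(m == target)), 6)
```

In the real system the toy fitness would be replaced by cross-validated SVM accuracy on the masked feature matrix.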
The following parameter values were used in the SHO algorithm:
(1) Population size: 50.
(2) Iterations: 1000.
(3) Exploration factor: 0.8.
(4) Exploitation factor: 0.2.
These values were chosen based on preliminary experiments.
The default parameter ranges are as follows:
(1) Number of particles: 20-100.
(2) Number of iterations: 100-1000.
(3) Learning rate: 0.1-0.9.
(4) Decline rate: 0.1-0.9.
The parameter values leading to the best results in terms of speed, stability, accuracy, and complexity were determined by experimentation, using the following analysis tools:
(1) Analysis tools in MATLAB.
(2) Statistical analysis libraries in Python.
(3) The grid search technique, used to determine the optimal parameter values.
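A grid search over such parameter combinations can be sketched generically; the `grid_search` helper and the toy objective below are hypothetical:

```python
import itertools

def grid_search(evaluate, grid):
    """Exhaustive grid search: evaluate every parameter combination and
    return the best one.  `grid` maps parameter names to candidate values."""
    names = list(grid)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)          # e.g. validation accuracy of the tuned run
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective favouring a large population and many iterations
grid = {"population": [20, 50, 100], "iterations": [100, 500, 1000]}
best, score = grid_search(lambda p: p["population"] + p["iterations"] / 10, grid)
```

In practice `evaluate` would run the SHO + SVM pipeline once per combination, which is why the grid is kept small.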
4.2 Support Vector Machine (SVM)
The SVM classifier, a supervised learning algorithm based on statistical learning theory, was introduced in 1995 [14]. The major objective of this approach is to use a training dataset to identify the hyperplane that best divides two classes. SVMs represent a group of related supervised learning approaches utilized for classification and regression, for example in medical diagnostics [15]. An SVM model describes instances as points mapped in space such that examples from different categories are separated by as wide a gap as feasible [16]. SVM can handle numerous continuous as well as categorical variables and provides regression and classification algorithms. The dimension of the classified items has no direct bearing on the efficiency of an SVM-based classification system [17, 18]. By converting the input space into a higher-dimensional space with the use of special non-linear functions referred to as kernels, the algorithm achieves great discriminative power. Evidently, for a given volume of data, selecting the optimal kernel function and parameter values is crucial [19-21]. Moreover, by default, all attributes are normalized. In the second stage of the proposed classification system, an SVM classifier was used. A random search was used to determine the optimal values of key parameters such as C and gamma [22]. The performance of the classifier was evaluated using cross-validation, and the parameters giving the best accuracy on the validation set (or the lowest error value) were selected [23].
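A random search over C and gamma, as described above, might be sketched as follows (pure Python, with a hypothetical validation objective standing in for cross-validated SVM accuracy):

```python
import random

def random_search(evaluate, n_trials=50, seed=0):
    """Random search over SVM hyper-parameters C and gamma, sampled
    log-uniformly; keeps the pair with the best validation score.
    `evaluate(C, gamma)` is assumed to return a validation accuracy."""
    rng = random.Random(seed)
    best = (None, float("-inf"))
    for _ in range(n_trials):
        C = 10 ** rng.uniform(-2, 3)        # C in [0.01, 1000]
        gamma = 10 ** rng.uniform(-4, 1)    # gamma in [1e-4, 10]
        score = evaluate(C, gamma)
        if score > best[1]:
            best = ((C, gamma), score)
    return best

# Hypothetical smooth objective peaking near C = 10, gamma = 0.1
obj = lambda C, gamma: -((C - 10) ** 2 / 100 + (gamma - 0.1) ** 2)
(params, score) = random_search(obj)
```

Random search covers wide log-scaled ranges with far fewer trials than a full grid, which is why it is a common choice for C and gamma.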
In this section, the SHO algorithm and SVM are used for classifying six types of modulated signals (2PSK, 4PSK, 8PSK, 2FSK, 4FSK, and 8FSK) within an SNR level ranging from -2 to 10 dB, and these outcomes are contrasted with those of classifying the signals without optimizing their features.
SVM classification accuracy is reported for every kind of signal in Tables 2 and 3. For every signal type, Table 3 shows a higher success rate in recognizing signals following the application of the SHO algorithm, indicating greater system efficiency.
1- Classification criteria of the suggested algorithm without optimization are shown in Table 2 and Figure 2, with the accuracy reported for every signal type; the overall accuracy ratio of the suggested method was 89%.
Table 3 and Figure 3 show that the SHO algorithm has a greater success rate in recognizing signals, which translates to more system efficiency, with a classification accuracy ratio of 93% over the same SNR range (-2 to 10 dB).
Table 2. The proposed algorithm's classification criteria without optimization
| Signal | Recall | F-Measure | Accuracy | TP Rate | FP Rate | MCC | ROC Area |
|--------|--------|-----------|----------|---------|---------|-----|----------|
| 2PSK | 90 | 90 | 90 | 90 | 1 | 89 | 95 |
| 4PSK | 90 | 90 | 90 | 90 | 1 | 85 | 93 |
| 8PSK | 90 | 90 | 90 | 90 | 1 | 86 | 94 |
| 2FSK | 80 | 80 | 85 | 85 | 2 | 89 | 92 |
| 4FSK | 90 | 90 | 90 | 90 | 1 | 88 | 98 |
| 8FSK | 90 | 90 | 90 | 90 | 1 | 84 | 95 |
Table 3. Results of the suggested algorithm's classification criteria with the optimization
| Signal | Recall | F-Measure | Accuracy | TP Rate | FP Rate | MCC | ROC Area |
|--------|--------|-----------|----------|---------|---------|-----|----------|
| 2PSK | 95 | 95 | 95 | 95 | 1 | 93 | 97 |
| 4PSK | 90 | 90 | 90 | 90 | 1 | 89 | 95 |
| 8PSK | 94 | 94 | 94 | 94 | 1 | 92 | 96 |
| 2FSK | 96 | 96 | 96 | 96 | 1 | 91 | 98 |
| 4FSK | 95 | 95 | 95 | 95 | 1 | 91 | 98 |
| 8FSK | 90 | 90 | 90 | 90 | 1 | 88 | 92 |
Figure 2. Classification criteria of the proposed algorithms without optimization
Figure 3. The classification accuracy of SVM with SHO
2- Results of accuracy with the use of SVM with SHO are shown in Table 3 and Figure 3, where the accuracy ratio of the suggested method was 93%.
Additional tests were performed under different signal-to-noise ratios (SNRs). Recognition rate curves were calculated at low and high SNRs to analyze the stability and performance of the algorithm in varying noise environments.
In the present study, six digitally modulated signal types were generated in MATLAB within an SNR level ranging from -2 to 10 dB. Following the extraction of the signals' statistical characteristics (moments, cumulants), the features were optimized.
With SHO feature optimization and the SVM classifier, the proposed method achieved a maximum rating accuracy of approximately 93%, even at low SNR levels. This means increased efficiency of the proposed system, and hence increased accuracy, compared to using the same classifier on the same types of signals without the optimizer. Considering the results obtained when applying the SHO algorithm, the fundamental mechanism that contributes to this improvement can be analyzed. The algorithm relies on iteratively improving solutions by combining good solutions. It balances exploring new solutions (diversity) against improving existing solutions (intensification), which helps it avoid falling into local optima. Also, continuously adjusting the parameters during the process allows the algorithm to adapt to the characteristics of the problem and solve it more efficiently. This analysis shows how SHO can achieve better performance compared to other algorithms.
[1] Azarbad, M., Hakimi, S., Ebrahimzadeh, A. (2012). Automatic recognition of digital communication signal. International Journal of Energy, Information and Communications, 3(4): 21-33.
[2] Su, W., Xu, J.L., Zhou, M. (2008). Real-time modulation classification based on maximum likelihood. IEEE Communications Letters, 12(11): 801-803. https://doi.org/10.1109/LCOMM.2008.081107
[3] Amudha, P., Karthik, S., Sivakumari, S. (2015). A hybrid swarm intelligence algorithm for intrusion detection using significant features. The Scientific World Journal, 2015(1): 574589. https://doi.org/10.1155/2015/574589
[4] Almaspour, S., Moniri, M.R. (2016). Automatic modulation recognition and classification for digital modulated signals based on ANN algorithms. Journal of Multidisciplinary Engineering Science and Technology (JMEST), 3(12): 6230-6235.
[5] Rajendran, S., Meert, W., Giustiniano, D., Lenders, V., Pollin, S. (2018). Deep learning models for wireless signal classification with distributed low-cost spectrum sensors. IEEE Transactions on Cognitive Communications and Networking, 4(3): 433-445. https://doi.org/10.1109/TCCN.2018.2835460
[6] Peng, S., Jiang, H., Wang, H., Alwageed, H., Zhou, Y., Sebdani, M.M., Yao, Y.D. (2018). Modulation classification based on signal constellation diagrams and deep learning. IEEE Transactions on Neural Networks and Learning Systems, 30(3): 718-727. https://doi.org/10.1109/TNNLS.2018.2850703
[7] Sun, X., Su, S., Huang, Z., Zuo, Z., Guo, X., Wei, J. (2019). Blind modulation format identification using decision tree twin support vector machine in optical communication system. Optics Communications, 438: 67-77. https://doi.org/10.1016/j.optcom.2019.01.025
[8] Bagga, J., Tripathi, N. (2013). Automatic modulation classification using statistical features in fading environment. International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, 2(8): 3701-3709.
[9] Sadkhan-Smieee, S.B., Hameed, A.Q., Hamed, H.A. (2015). Digitally modulated signals recognition based on adaptive neural-fuzzy inference system (ANFIS). International Journal of Advancements in Computing Technology, 7(5): 57-65.
[10] Hatzichristos, G., Fargues, M.P. (2021). A hierarchical approach to the classification of digital modulation types in multipath environments. In Conference Record of Thirty-Fifth Asilomar Conference on Signals, Systems and Computers (Cat. No. 01CH37256), Pacific Grove, CA, USA, pp. 1494-1498.
[11] Cheng, L., Liu, J. (2014). An optimized neural network classifier for automatic modulation recognition. TELKOMNIKA Indonesian Journal of Electrical Engineering, 12(2): 1343-1352. https://doi.org/10.11591/telkomnika.v12i2.3930
[12] Mzili, T., Mzili, I., Riffi, M.E., Dhiman, G. (2023). Hybrid genetic and spotted hyena optimizer for flow shop scheduling problem. Algorithms, 16(6): 265.
[13] Ghafori, S., Gharehchopogh, F.S. (2022). Advances in spotted hyena optimizer: A comprehensive survey. Archives of Computational Methods in Engineering, 29(3): 1569-1590. https://doi.org/10.1007/s11831-021-09624-4
[14] Giveki, D., Salimi, H., Bahmanyar, G., Khademian, Y. (2012). Automatic detection of diabetes diagnosis using feature weighted support vector machines based on mutual information and modified cuckoo search. arXiv preprint arXiv:1201.2173. https://doi.org/10.48550/arXiv.1201.2173
[15] Kumari, V.A., Chitra, R. (2013). Classification of diabetes disease using support vector machine. International Journal of Engineering Research and Applications, 3(2): 1797-1801.
[16] Saeed, H.H., Alazzawia, A. (2023). Analysis of the most important concepts related to social distancing as a result of COVID-19 pandemic: A review. Academic Science Journal, 1(2): 19-33. https://doi.org/10.24237/ASJ.01.02.620B
[17] Cervantes, J., Garcia-Lamont, F., Rodríguez-Mazahua, L., Lopez, A. (2020). A comprehensive survey on support vector machine classification: Applications, challenges and trends. Neurocomputing, 408: 189-215. https://doi.org/10.1016/j.neucom.2019.10.118
[18] Semakula, J., Corner-Thomas, R.A., Morris, S.T., Blair, H.T., Kenyon, P.R. (2021). Application of machine learning algorithms to predict body condition score from liveweight records of mature Romney ewes. Agriculture, 11(2): 162. https://doi.org/10.3390/agriculture11020162
[19] Yu, W., Liu, T., Valdez, R., Gwinn, M., Khoury, M.J. (2010). Application of support vector machine modeling for prediction of common diseases: The case of diabetes and pre-diabetes. BMC Medical Informatics and Decision Making, 10: 16. https://doi.org/10.1186/1472-6947-10-16
[20] El Moutaouakil, K., Roudani, M., Ouhmid, A., Zhilenkov, A., Mobayen, S. (2024). Decomposition and symmetric kernel deep neural network fuzzy support vector machine. Symmetry, 16(12): 1585. https://doi.org/10.3390/sym16121585
[21] Du, K.L., Jiang, B., Lu, J., Hua, J., Swamy, M.N.S. (2024). Exploring kernel machines and support vector machines: principles, techniques, and future directions. Mathematics, 12(24): 3935. https://doi.org/10.3390/math12243935
[22] Lin, S.W., Lee, Z.J., Chen, S.C., Tseng, T.Y. (2008). Parameter determination of support vector machine and feature selection using simulated annealing approach. Applied Soft Computing, 8(4): 1505-1512. https://doi.org/10.1016/j.asoc.2007.10.012
[23] Varoquaux, G., Colliot, O. (2023). Evaluating machine learning models and their diagnostic value. In Machine Learning for Brain Disorders, Humana, New York, pp. 601-630. https://doi.org/10.1007/978-1-0716-3195-9_20