Search Results (9)

Search Parameters:
Keywords = Haralick characteristics

20 pages, 5700 KiB  
Article
Relating Macroscopic PET Radiomics Features to Microscopic Tumor Phenotypes Using a Stochastic Mathematical Model of Cellular Metabolism and Proliferation
by Hailey S. H. Ahn, Yas Oloumi Yazdi, Brennan J. Wadsworth, Kevin L. Bennewith, Arman Rahmim and Ivan S. Klyuzhin
Cancers 2024, 16(12), 2215; https://doi.org/10.3390/cancers16122215 - 13 Jun 2024
Viewed by 786
Abstract
Cancers can manifest large variations in tumor phenotypes due to genetic and microenvironmental factors, which has motivated the development of quantitative radiomics-based image analysis with the aim of robustly classifying tumor phenotypes in vivo. Positron emission tomography (PET) imaging can be particularly helpful in elucidating the metabolic profiles of tumors. However, the relatively low resolution, high noise, and limited availability of PET data make it difficult to study the relationship between microenvironment properties and the metabolic tumor phenotype seen in the images. Most previously proposed digital PET phantoms of tumors are static, have an over-simplified morphology, and lack the link to the cellular biology that ultimately governs tumor evolution. In this work, we propose a novel method to investigate the relationship between microscopic tumor parameters and PET image characteristics based on the computational simulation of tumor growth. We use a hybrid, multiscale, stochastic mathematical model of cellular metabolism and proliferation to generate simulated cross-sections of tumors in vascularized normal tissue on a microscopic level. The generated longitudinal tumor growth sequences are converted to PET images with realistic resolution and noise. By changing the biological parameters of the model, such as the blood vessel density and the conditions for necrosis, distinct tumor phenotypes can be obtained. The simulated cellular maps were compared to real histology slides of SiHa and WiDr xenografts imaged with Hoechst 33342 and pimonidazole. As an example application of the proposed method, we simulated six tumor phenotypes that contain various amounts of hypoxic and necrotic regions induced by a lack of oxygen and glucose, including phenotypes that are distinct on the microscopic level but visually similar in PET images. 
We computed 22 standardized Haralick texture features for each phenotype, and identified the features that could best discriminate the phenotypes with varying image noise levels. We demonstrated that “cluster shade” and “difference entropy” are the most effective and noise-resilient features for microscopic phenotype discrimination. Longitudinal analysis of the simulated tumor growth showed that radiomics analysis can be beneficial even in small lesions with a diameter of 3.5–4 resolution units, corresponding to 8.7–10.0 mm in modern PET scanners. Certain radiomics features were shown to change non-monotonically with tumor growth, which has implications for feature selection for tracking disease progression and therapy response. Full article
(This article belongs to the Special Issue PET/CT in Cancers Outcomes Prediction)
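The two features the study singles out, cluster shade and difference entropy, follow directly from the gray-level co-occurrence matrix (GLCM) definitions. Below is a minimal NumPy sketch (not the authors' code): `glcm`, `cluster_shade`, and `difference_entropy` are illustrative helpers, and the 8-level quantization and single (1, 0) pixel offset are assumptions for the example.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            P[i, j] += 1
            P[j, i] += 1  # symmetric accumulation
    return P / P.sum()

def cluster_shade(P):
    i, j = np.indices(P.shape)
    mu = (i * P).sum() + (j * P).sum()  # mu_x + mu_y
    return ((i + j - mu) ** 3 * P).sum()

def difference_entropy(P):
    levels = P.shape[0]
    # p_{x-y}(k): probability mass of gray-level difference |i - j| = k
    diff = np.abs(np.subtract.outer(range(levels), range(levels)))
    p_diff = np.array([P[diff == k].sum() for k in range(levels)])
    nz = p_diff[p_diff > 0]
    return -(nz * np.log2(nz)).sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(32, 32))
P = glcm(img)
print(cluster_shade(P), difference_entropy(P))
```

A flat, homogeneous region gives zero difference entropy and zero cluster shade, which is why both features respond to the textural heterogeneity that distinguishes the simulated phenotypes.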
Figure 1. (A) Components of the hybrid mathematical model used in the simulation. The model uses a combination of one agent grid and two PDE grids for oxygen and glucose, as shown. The dashed vertical lines illustrate the link between the agent and PDE grids. Blocks of different color represent different types of agents, and arrows indicate diffusion processes in the PDE grids. Blood vessels are the sources of nutrients in the simulation. (B) Flowchart of the main algorithm for tumor growth simulation, including the steps of simulation initialization, cell type determination, and the processes involved in the change of nutrient concentrations. [O2] is the oxygen concentration at the cell location, and n_diff is the number of diffusion steps required to reach steady state for each cell step.

Figure 2. Agent maps illustrating the simulated tumor phenotypes. Columns 1 and 2 contain two longitudinal growth snapshots for each phenotype. Columns 3 and 4 show zoomed-in regions of the agent maps (as indicated by yellow and red squares).

Figure 3. Simulated tumor phenotypes compared to real microscopic images of subcutaneous tumor xenografts. (A) Phenotype A compared to SiHa human cervical squamous cell carcinoma. (B) Phenotype B compared to WiDr human colorectal adenocarcinoma. The darkest regions within the tumor xenograft show necrosis. (C) Phenotype D compared to SiHa human cervical squamous cell carcinoma. (D) Phenotype E compared to WiDr human colorectal adenocarcinoma.

Figure 4. Synthetic PET images produced for the simulated tumor phenotypes, noise free and with 5%, 10%, and 15% image noise levels. The agent grids were converted to images of expected FDG uptake, forward-projected into sinogram space with added Poisson noise, and reconstructed with a resolution of 2.4 mm FWHM. Noise was regulated by adjusting the simulated acquisition time.

Figure 5. Together, GLCM difference entropy and GLCM cluster shade (top row) were able to identify all six phenotypes with image noise up to 15%. The pair of GLCM difference entropy and GLCM cluster tendency (bottom row) represents a good alternative. SS, silhouette score; CHC, Calinski–Harabasz criterion.

Figure 6. A representative set of Haralick features plotted as functions of tumor size (longitudinal tumor growth).
23 pages, 6506 KiB  
Article
Selection of the Discriminating Feature Using the BEMD's BIMF for Classification of Breast Cancer Mammography Image
by Fatima Ghazi, Aziza Benkuider, Fouad Ayoub and Khalil Ibrahimi
BioMedInformatics 2024, 4(2), 1202-1224; https://doi.org/10.3390/biomedinformatics4020066 - 9 May 2024
Viewed by 905
Abstract
Mammogram images are useful in identifying diseases such as breast cancer, one of the deadliest cancers affecting adult women around the world. Computational image analysis and machine learning techniques can help experts identify abnormalities in these images. In this work we present a new system to help diagnose and analyze breast mammogram images. The system applies a method for the Selection of the Most Discriminant Attributes of images preprocessed by BEMD ("SMDA-BEMD"), which entails picking the most pertinent traits from the collection of variables that characterize the state under study. Attribute reduction is performed by a transformation of the data, also called feature extraction, in which Haralick attributes are extracted from gray-level co-occurrence matrices ("GLCM"); this reduction replaces the initial data set with a new, reduced set constructed from features extracted from images decomposed using Bidimensional Empirical Multimodal Decomposition ("BEMD"), for discrimination of breast mammogram images (healthy and pathological). This decomposition splits an image into several Bidimensional Intrinsic Mode Function ("BIMF") modes and a residue. The results obtained show that mammographic images can be represented in a relatively compact space by selecting the most discriminating features with a supervised method, and that healthy and pathological mammographic images can then be differentiated with high reliability; these findings demonstrate how successful the suggested strategy is at detecting tumors. The BEMD technique is used as preprocessing on the mammographic images. The suggested methodology yields consistent results, establishes the discrimination threshold for mammography images (healthy and pathological), and improves the classification rate (98.6%) compared to existing state-of-the-art techniques in the field. 
This approach is tested and validated on mammographic medical images from the Kenitra-Morocco reproductive health reference center (CRSRKM), which contains breast mammographic images of normal and pathological cases. Full article
(This article belongs to the Special Issue Feature Papers on Methods in Biomedical Informatics)
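The abstract does not spell out the supervised criterion behind the Jf discrimination threshold, but a common way to rank "most discriminant" features between two classes is the per-feature Fisher ratio. The sketch below is a generic illustration under that assumption, with synthetic data; `fisher_ratio` and the toy arrays are hypothetical names, not the paper's code.

```python
import numpy as np

def fisher_ratio(healthy, pathological):
    """Per-feature Fisher discriminant ratio: larger = more discriminant."""
    m1, m2 = healthy.mean(axis=0), pathological.mean(axis=0)
    v1, v2 = healthy.var(axis=0), pathological.var(axis=0)
    return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(40, 5))       # 40 samples x 5 features
pathological = rng.normal(0.0, 1.0, size=(40, 5))
pathological[:, 2] += 3.0                          # feature 2 carries the signal
scores = fisher_ratio(healthy, pathological)
best = int(np.argmax(scores))                      # index of most discriminant feature
```

Features whose ratio exceeds a chosen cutoff would form the reduced set used for classification; the actual threshold in the paper is established from the Jfs min / Jfp max interval shown in its Figure 9.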
Figure 1. The flowchart of the Bidimensional Empirical Multimodal Decomposition (BEMD) algorithm.

Figure 2. Bidimensional Empirical Multimodal Decomposition (BEMD) of the signal S(x,y).

Figure 3. Co-occurrence matrix directions.

Figure 4. Haralick features for images decomposed by Bidimensional Empirical Multimodal Decomposition (BEMD).

Figure 5. The architecture of the methods suggested for the diagnosis of breast cancer through the identification of the most distinctive features of images decomposed using Bidimensional Empirical Multimodal Decomposition (BEMD).

Figure 6. Example breast mammogram images: healthy (c,d) and pathological (a,b), from the reference center for reproductive health in Kenitra, Morocco (CRSRKM).

Figure 7. Pathological and healthy images decomposed by the Bidimensional Empirical Multimodal Decomposition (BEMD) method.

Figure 8. Projection of observations extracted from healthy and pathological mammography images obtained from the most discriminating BIMF level, from reconstructed images, and from original images: (a) categorization results for healthy and cancerous images reconstructed after decomposition; (b) categorization results for the original images; (c) categorization results for images decomposed by BEMD.

Figure 9. The interval between the minimum value of Jf for healthy images (Jfs min) and the maximum value of Jf for pathological images (Jfp max).

Figure 10. ROC curve comparison using SVM between SMDA-BEMD, SMDA-Reconstructed Image, and SMDA-Original.

Figure 11. Classification results for breast mammographic images shown as point clouds, one for healthy (blue) and the other for cancerous (red): (a) SMDA-Reconstructed Image, (b) SMDA-Original Images, (c) proposed methodology (SMDA-BEMD).

Figure 12. Classification effectiveness quantified by the area under the ROC curve (AUC) for the existing methods and the proposed methodology.
14 pages, 3146 KiB  
Article
The Effect of Primary Aldosteronism on Carotid Artery Texture in Ultrasound Images
by Sumit Kaushik, Bohumil Majtan, Robert Holaj, Denis Baručić, Barbora Kološová, Jiří Widimský and Jan Kybic
Diagnostics 2022, 12(12), 3206; https://doi.org/10.3390/diagnostics12123206 - 17 Dec 2022
Viewed by 1576
Abstract
Primary aldosteronism (PA) is the most frequent cause of secondary hypertension. Early diagnosis of PA is essential to avoid the long-term negative effects of elevated aldosterone concentrations on the cardiovascular and renal systems. In this work, we study the texture of the carotid artery vessel wall in longitudinal ultrasound images in order to automatically distinguish between PA and essential hypertension (EH). The texture is characterized using 140 Haralick and 10 wavelet features evaluated in a region of interest in the vessel wall, followed by the XGBoost classifier. Carotid ultrasound studies were carried out on 33 patients aged 42–72 years with PA, 52 patients with EH, and 33 normotensive controls. For the most clinically relevant task of distinguishing the PA and EH classes, we achieved a classification accuracy of 73% as assessed by a leave-one-out procedure. This result is promising compared to the 57% prediction accuracy using clinical characteristics alone or the 63% accuracy using a combination of clinical characteristics and intima-media thickness (IMT) parameters. If the accuracy is improved and the method incorporated into standard clinical procedures, this could eventually lead to an improvement in the early diagnosis of PA and consequently improve the clinical outcome for these patients in the future. Full article
(This article belongs to the Special Issue Advances in Vascular Imaging)
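With only 33 + 52 patients, the leave-one-out protocol matters as much as the classifier. The study uses XGBoost; the sketch below illustrates the leave-one-out loop itself with a simple 1-nearest-neighbour stand-in (an assumption for the example, not the paper's model), on synthetic feature vectors.

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out accuracy with a 1-nearest-neighbour classifier."""
    n = len(y)
    correct = 0
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # hold the i-th sample out of training
        correct += (y[int(np.argmin(d))] == y[i])
    return correct / n

rng = np.random.default_rng(2)
# two synthetic "texture feature" clusters standing in for PA vs. EH
Xa = rng.normal(0.0, 0.5, size=(30, 4))
Xb = rng.normal(3.0, 0.5, size=(30, 4))
X = np.vstack([Xa, Xb])
y = np.array([0] * 30 + [1] * 30)
acc = loo_accuracy(X, y)
```

Each sample is classified by a model that never saw it, which is why leave-one-out gives a nearly unbiased accuracy estimate on small cohorts like this one.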
Figure 1. (a) Longitudinal B-mode image of the carotid artery showing clear interfaces for measurement of intima-media thickness at the far wall of the common carotid artery and carotid bifurcation. CB, carotid bifurcation; CCA, common carotid artery; ECA, external carotid artery; ICA, internal carotid artery; TFD, tip of the flow divider. (b) The Meijer's arc allows a standardized scan of the left and right carotid arteries at predefined angles.

Figure 2. Block diagram of the proposed methodology. The new parts of our method are framed in green, while steps done previously or only for the experimental comparison have a thin frame. ROI, region of interest; IMT, intima-media thickness.

Figure 3. Example longitudinal carotid artery ultrasound images: healthy controls (C), patients with essential hypertension (EH), and primary aldosteronism (PA). The intima-media region of interest (ROI) on the left and right side is marked in yellow.

Figure 4. ROC curves for using texture features to distinguish between (a) PA versus EH and (b) PA + EH versus controls. C, controls; EH, essential hypertension; PA, primary aldosteronism.
22 pages, 8813 KiB  
Article
Multiplicative Long Short-Term Memory with Improved Mayfly Optimization for LULC Classification
by Andrzej Stateczny, Shanthi Mandekolu Bolugallu, Parameshachari Bidare Divakarachari, Kavithaa Ganesan and Jamuna Rani Muthu
Remote Sens. 2022, 14(19), 4837; https://doi.org/10.3390/rs14194837 - 28 Sep 2022
Cited by 13 | Viewed by 1872
Abstract
Land Use and Land Cover (LULC) monitoring is crucial for global transformation, sustainable land control, urban planning, urban growth prediction, and the establishment of climate regulations for long-term development. Remote sensing images have become increasingly important in many environmental planning and land use surveys in recent times. LULC is evaluated in this research using the Sat 4, Sat 6, and Eurosat datasets. Various spectral feature bands are involved, but surprisingly little consideration has been given to these characteristics in deep learning models. Due to the wide availability of RGB models in computer vision, this research mainly utilized the RGB bands. Once pre-processing is carried out on the images of the selected dataset, hybrid feature extraction is performed using Haralick texture features, a histogram of oriented gradients, a local Gabor binary pattern histogram sequence, and Harris corner detection. After that, the Improved Mayfly Optimization (IMO) method is used to choose the optimal features. IMO-based feature selection has several advantages, including a high learning rate and computational efficiency. After the optimal features are selected, the LULC classes are classified using a multi-class classifier known as the Multiplicative Long Short-Term Memory (mLSTM) network. The main functionality of the multiplicative LSTM classifier is to recall appropriate information over a long duration. To accomplish an improved result in LULC classification, a higher amount of remote sensing data should be processed. The simulation outcomes demonstrate that the proposed IMO-mLSTM efficiently classifies the LULC classes in terms of classification accuracy, recall, and precision. When compared with ConvNet and AlexNet, the proposed IMO-mLSTM method accomplished accuracies of 99.99% on Sat 4, 99.98% on Sat 6, and 98.52% on the Eurosat datasets. 
Full article
(This article belongs to the Special Issue New Advancements in Remote Sensing Image Processing)
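The abstract names the classifier but not the exact mLSTM equations. A common formulation (due to Krause et al., assumed here, not confirmed by the source) inserts a multiplicative intermediate state m_t = (W_mx x_t) ⊙ (W_mh h_{t-1}) that replaces h_{t-1} in the standard LSTM gates. One step of that variant, with randomly initialized illustrative weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlstm_step(x, h, c, W):
    """One multiplicative LSTM step (Krause et al. formulation, assumed)."""
    m = (W["mx"] @ x) * (W["mh"] @ h)        # multiplicative intermediate state
    i = sigmoid(W["ix"] @ x + W["im"] @ m)   # input gate
    f = sigmoid(W["fx"] @ x + W["fm"] @ m)   # forget gate
    o = sigmoid(W["ox"] @ x + W["om"] @ m)   # output gate
    g = np.tanh(W["cx"] @ x + W["cm"] @ m)   # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(3)
n_in, n_h = 6, 4                             # toy feature and hidden sizes
W = {k: rng.normal(scale=0.1, size=(n_h, n_in if k.endswith("x") else n_h))
     for k in ["mx", "mh", "ix", "im", "fx", "fm", "ox", "om", "cx", "cm"]}
h, c = np.zeros(n_h), np.zeros(n_h)
h, c = mlstm_step(rng.normal(size=n_in), h, c, W)
```

Because m_t depends multiplicatively on both the input and the previous hidden state, the transition function changes per input, which is the property the abstract credits with recalling appropriate information over long durations.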
Graphical abstract

Figure 1. Overview of satellite image classification.

Figure 2. Sat 4 and Sat 6 databases.

Figure 3. Eurosat database.

Figure 4. Graphical depiction of multiplicative LSTM.

Figure 5. Performance analysis of precision and recall on Sat 4.

Figure 6. Performance analysis of accuracy on Sat 4.

Figure 7. Performance analysis of precision and recall on the Sat 4 dataset.

Figure 8. Performance analysis of accuracy on various classes.

Figure 9. Visual results of the Sat 4 dataset.

Figure 10. Confusion matrix for the Sat 4 dataset.

Figure 11. Performance analysis of precision and recall on the Sat 6 dataset.

Figure 12. Performance analysis of accuracy on Sat 6.

Figure 13. Visual results of the Sat 6 dataset.

Figure 14. Confusion matrix for the Sat 6 dataset.

Figure 15. Performance of precision, recall, and accuracy on Eurosat.

Figure 16. Visual analysis of Eurosat.

Figure 17. Confusion matrix for the Eurosat dataset.

Figure 18. Comparative analysis of accuracy with existing classifiers [22,23,24,29].

Figure 19. Comparative analysis of accuracy with existing DBN [28].
17 pages, 2406 KiB  
Article
An Adaptive Learning Model for Multiscale Texture Features in Polyp Classification via Computed Tomographic Colonography
by Weiguo Cao, Marc J. Pomeroy, Shu Zhang, Jiaxing Tan, Zhengrong Liang, Yongfeng Gao, Almas F. Abbasi and Perry J. Pickhardt
Sensors 2022, 22(3), 907; https://doi.org/10.3390/s22030907 - 25 Jan 2022
Cited by 6 | Viewed by 2771
Abstract
Objective: As an effective depiction of lesion heterogeneity, texture information extracted from computed tomography has become increasingly important in polyp classification. However, variation and redundancy among multiple texture descriptors make it challenging to integrate them into a general characterization. Considering these two problems, this work proposes an adaptive learning model to integrate multi-scale texture features. Methods: To mitigate feature variation, the whole feature set is geometrically split into several independent subsets that are ranked by a learning evaluation measure after preliminary classifications. To reduce feature redundancy, a bottom-up hierarchical learning framework is proposed to ensure a monotonic increase of classification performance while integrating these ranked sets selectively. Two types of classifiers, traditional (random forest + support vector machine) and convolutional neural network (CNN)-based, are employed to perform the polyp classification under the proposed framework, with extended Haralick measures and gray-level co-occurrence matrices (GLCM) as inputs, respectively. Experimental results are based on a retrospective dataset of 63 polyp masses (defined as greater than 3 cm in largest diameter), including 32 adenocarcinomas and 31 benign adenomas, from adult patients undergoing first-time computed tomography colonography who had corresponding histopathology of the detected masses. Results: We evaluate the performance of the proposed models by the area under the curve (AUC) of the receiver operating characteristic curve. The proposed models show encouraging performance, with an AUC score of 0.925 for the traditional classification method and an AUC score of 0.902 for the CNN. The proposed adaptive learning framework significantly outperforms nine well-established classification methods, including six traditional methods and three deep learning ones, by a large margin. 
Conclusions: The proposed adaptive learning model can combat the challenges of feature variation through a multiscale grouping of feature inputs, and of feature redundancy through a hierarchical sorting of these feature groups. The improved classification performance against comparative models demonstrates the feasibility and utility of this adaptive learning procedure for feature integration. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Imaging and Visual Sensing)
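The bottom-up framework ranks feature subsets and then integrates them only while classification performance keeps increasing. One possible reading of that rule, sketched with a toy Fisher-style score standing in for the preliminary classifier (all names and the scoring choice are assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(4)
y = np.array([0] * 20 + [1] * 20)          # toy labels for 40 samples

def score(F):
    """Stand-in for a preliminary classifier's evaluation measure."""
    F = np.atleast_2d(F)
    m = F[y == 1].mean(axis=0) - F[y == 0].mean(axis=0)
    v = F[y == 1].var(axis=0) + F[y == 0].var(axis=0) + 1e-12
    return float((m ** 2 / v).max())        # best single-feature Fisher ratio

def greedy_group_fusion(groups):
    """Integrate ranked feature groups only while the score keeps increasing."""
    order = sorted(groups, key=score, reverse=True)   # rank by standalone score
    selected, best = [order[0]], score(order[0])
    for g in order[1:]:
        s = score(np.hstack(selected + [g]))
        if s > best:                                  # monotonic-increase rule
            selected.append(g)
            best = s
    return np.hstack(selected), best

g_informative = rng.normal(0, 1, (40, 3))
g_informative[y == 1, 0] += 3.0             # one group carries the signal
g_noise = rng.normal(0, 1, (40, 3))
fused, best = greedy_group_fusion([g_noise, g_informative])
```

The redundant or noisy group is rejected because adding it does not raise the score, which mirrors how the framework keeps performance monotonically non-decreasing during integration.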
Figure 1. Illustration of co-occurrence matrix (CM) calculation in 2D/3D images: (a) CM parameters in 2D images; (b) CM parameters in 3D images; and (c) a GLCM example of a 2D case with direction 0° and displacement = 1. The left is a gray image, and the right is its GLCM.

Figure 2. Visualization of the information the CNN learnt from each subgroup: (a) G1, (b) G2, and (c) G3. The first column is the original GLCM. The corresponding label (0 for benign and 1 for malignant) and the model score of the malignancy risk are listed on top. The remaining two columns are the interpretations of the model prediction on the two classes. Red cells show entries that push the model's decision toward that class, while blue pixels pull the prediction away.

Figure 3. The flowchart of the multi-level learning model for fusion of multi-scale feature sets.

Figure 4. The flowchart of the feature selection step for the baseline and the complement in the multi-layer learning model.

Figure 5. Network structure of FSFS-CNN.

Figure 6. Flowchart of data acquisition and preparation for these experiments.

Figure 7. Three sample CT slices from selected polyp masses. The green contour around the polyp shows the segmentation. Air voxels from the lumen below −450 HU are removed post-segmentation and are highlighted red in the images. Images show sample polyps with pathologies (a) adenocarcinoma, (b) villous adenoma, and (c) villous adenoma.

Figure 8. The trends of three AUC score curves of polyp classification, their maximums, and their partitions over 63 polyps via the forward-step feature selection method: (a) D1, (b) D2, and (c) D3.

Figure 9. ROC curves of proposed and comparative methods.
11 pages, 638 KiB  
Article
Texture-Based Analysis of 18F-Labeled Amyloid PET Brain Images
by Alexander P. Seiffert, Adolfo Gómez-Grande, Eva Milara, Sara Llamas-Velasco, Alberto Villarejo-Galende, Enrique J. Gómez and Patricia Sánchez-González
Appl. Sci. 2021, 11(5), 1991; https://doi.org/10.3390/app11051991 - 24 Feb 2021
Viewed by 1717
Abstract
Amyloid positron emission tomography (PET) brain imaging with radiotracers like [18F]florbetapir (FBP) or [18F]flutemetamol (FMM) is frequently used for the diagnosis of Alzheimer’s disease. Quantitative analysis is usually performed with standardized uptake value ratios (SUVR), which are calculated by normalizing to a reference region. However, the reference region can present high variability in longitudinal studies. Texture features based on the grey-level co-occurrence matrix, also called Haralick features (HF), are evaluated in this study to discriminate between amyloid-positive and amyloid-negative cases. A retrospective study cohort of 66 patients with amyloid PET images (30 [18F]FBP and 36 [18F]FMM) was selected, and SUVRs and 6 HFs were extracted from 13 cortical volumes of interest. Mann–Whitney U-tests were performed to analyze differences in the features between amyloid-positive and amyloid-negative cases. Receiver operating characteristic (ROC) curves were computed and their area under the curve (AUC) was calculated to study the discriminatory capability of the features. SUVR proved to be the most significant feature among all tests, with AUCs between 0.692 and 0.989. All HFs except correlation also showed good performance. AUCs of up to 0.949 were obtained with the HFs. These results suggest the potential use of texture features for the classification of amyloid PET images. Full article
(This article belongs to the Special Issue Recent Advances in Biomedical Image Processing)
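The Mann–Whitney U statistic and the ROC AUC reported here are two views of the same quantity: AUC = U / (n_pos · n_neg), i.e., the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch, with made-up illustrative SUVR values (not data from the study):

```python
import numpy as np

def auc_mann_whitney(neg, pos):
    """ROC AUC via the Mann-Whitney U statistic: wins plus half-credit for ties."""
    neg, pos = np.asarray(neg, float), np.asarray(pos, float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(neg) * len(pos))

suvr_neg = [1.02, 1.08, 1.11, 1.15, 1.20]   # illustrative amyloid-negative SUVRs
suvr_pos = [1.18, 1.35, 1.42, 1.55, 1.60]   # illustrative amyloid-positive SUVRs
auc = auc_mann_whitney(suvr_neg, suvr_pos)  # 24 of 25 pairs ordered correctly -> 0.96
```

This rank-based computation needs no threshold sweep, which is convenient when comparing many features (SUVR plus six HFs over 13 volumes of interest) at once.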
Figure 1. Receiver operating characteristic (ROC) curves representing the discriminatory capacity of the global standardized uptake value ratios (SUVR) and Haralick features (HFs) in separating Aβ+ and Aβ− cases. ROC curves for the three study groups: (a) all patients; (b) amyloid PET images acquired with [18F]FBP; (c) amyloid PET images acquired with [18F]FMM.
15 pages, 1465 KiB  
Article
CT Radiomics in Colorectal Cancer: Detection of KRAS Mutation Using Texture Analysis and Machine Learning
by Víctor González-Castro, Eva Cernadas, Emilio Huelga, Manuel Fernández-Delgado, Jacobo Porto, José Ramón Antunez and Miguel Souto-Bayarri
Appl. Sci. 2020, 10(18), 6214; https://doi.org/10.3390/app10186214 - 7 Sep 2020
Cited by 14 | Viewed by 2854
Abstract
In this work, by using descriptive techniques, the texture characteristics of the CT (computed tomography) images of patients with colorectal cancer were extracted and, subsequently, classified as KRAS+ or KRAS−. This was accomplished by using different classifiers, such as Support Vector Machine (SVM), Gradient Boosting Machine (GBM), Neural Networks (NNET), and Random Forest (RF). Texture analysis can provide a quantitative assessment of tumour heterogeneity by analysing both the distribution of and relationship between the pixels in the image. The objective of this research is to demonstrate that CT-based radiomics can predict the presence of mutation in the KRAS gene in colorectal cancer. This is a retrospective study of 47 patients from the University Hospital, with a confirmatory pathological analysis of KRAS mutation. The highest accuracy and kappa achieved were 83% and 64.7%, respectively, with a sensitivity of 88.9% and a specificity of 75.0%, achieved by the NNET classifier using texture feature vectors combining the wavelet transform and Haralick coefficients. Being able to identify the genetic expression of a tumour without having to perform either a biopsy or a genetic test is a great advantage, because it avoids invasive procedures that involve complications and may introduce biases in the sample. It also leads towards a more personalized and effective treatment. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)
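The winning feature vector combines a wavelet transform with Haralick coefficients. As a sketch of the wavelet half, here is a single-level 2D Haar decomposition in NumPy whose subbands could then feed a GLCM/Haralick extractor; the averaging normalization and the energy statistics are illustrative choices, not necessarily the authors' exact pipeline.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform (LL, LH, HL, HH subbands)."""
    a = img.astype(float)
    # filter along rows: pairwise averages (low-pass) and differences (high-pass)
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0
    # filter along columns
    ll = (lo[::2] + lo[1::2]) / 2.0   # approximation
    lh = (lo[::2] - lo[1::2]) / 2.0   # horizontal detail
    hl = (hi[::2] + hi[1::2]) / 2.0   # vertical detail
    hh = (hi[::2] - hi[1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(64, 64))
ll, lh, hl, hh = haar2d(img)
# simple multi-resolution texture descriptor: mean energy per subband
features = [float((b ** 2).mean()) for b in (ll, lh, hl, hh)]
```

The detail subbands isolate heterogeneity at a coarser scale than the raw pixels, which is why wavelet and Haralick features are complementary in a combined descriptor.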
Figure 1. "Large-original" is the first image to be obtained. This image belongs to a KRAS+ patient included in the study. The tumour under study is highlighted with red arrows and can also be seen in greater detail in the adjacent images.

Figure 2. "Large-delimited" is the second image to be obtained. In the two adjacent small images, the delimited tumour can be seen in detail. The delimitation line is highlighted between the two red arrows in the most enlarged image.

Figure 3. "Small-delimited". To obtain this image, starting from the "large-delimited" image, the zoom is enlarged, focusing on the delimited tumour. In the two adjacent small images, the delimited tumour can be seen in detail. In the enlarged image the delimitation is highlighted between the two red arrows.

Figure 4. "Small-original". In the two adjacent small images, the tumour can be seen in detail. In this case, the images have no delimitation.
19 pages, 8171 KiB  
Article
A Method for the Assessment of Textile Pilling Tendency Using Optical Coherence Tomography
by Joanna Sekulska-Nalewajko, Jarosław Gocławski and Ewa Korzeniewska
Sensors 2020, 20(13), 3687; https://doi.org/10.3390/s20133687 - 1 Jul 2020
Cited by 29 | Viewed by 7672
Abstract
Pilling is caused by friction pulling and fuzzing the fibers of a material. Pilling is normally evaluated by visually counting the pills on a flat fabric surface. Here, we propose an objective method of pilling assessment based on the textural characteristics of the fabric shown in optical coherence tomography (OCT) images. The pilling layer is first identified above the fabric surface. The percentage of protruding fiber pixels and Haralick’s textural features are then used as pilling descriptors. Principal component analysis (PCA) is employed to select strongly correlated features and then reduce the feature space dimensionality. The first principal component is used to quantify the intensity of fabric pilling. The results of experimental studies confirm that this method can determine the intensity of pilling. Unlike traditional methods of pilling assessment, it can also detect pilling in its early stages. The approach could help to prevent overestimation of the degree of pilling, thereby avoiding unnecessary procedures such as mechanical removal of entangled fibers. However, the research covered a narrow group of fabrics; wider conclusions about the usefulness and limitations of this method can be drawn only after examining fabrics of different thicknesses and fiber compositions. Full article
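The processing chain sketched in this abstract (binarize the B-scan, locate the dense fabric layer via a horizontal Hough transform, then measure the fraction of protruding fiber pixels above it) can be illustrated in miniature. For horizontal lines, the Hough accumulator at 90° reduces to a per-row count of foreground pixels; the paper's detection level k·H(z0) is approximated here by k·max H(z). All names and the threshold rule are simplifications for illustration, not the authors' exact algorithm:

```python
def row_profile(binary_scan):
    """H(z): foreground-pixel count per row of a binarized B-scan.
    For horizontal lines this equals the Hough accumulator at 90 degrees."""
    return [sum(row) for row in binary_scan]

def fabric_layer_limits(binary_scan, k=0.5):
    """Return (z1, z2): first and one-past-last rows where H(z) reaches
    k * max H(z) -- a simplified stand-in for the paper's detection level."""
    H = row_profile(binary_scan)
    thr = k * max(H)
    above = [z for z, h in enumerate(H) if h >= thr]
    return above[0], above[-1] + 1

def pilling_fraction(binary_scan, z1, z2):
    """Fraction f_P of fiber (value 1) pixels inside rows z1..z2-1,
    i.e. the layer above the detected fabric surface (row 0 = top)."""
    layer = binary_scan[z1:z2]
    total = sum(len(row) for row in layer)
    fibers = sum(sum(row) for row in layer)
    return fibers / total if total else 0.0
```

On a synthetic scan whose bottom rows are solid fabric, the sparse rows above it yield a small f_P that grows as more loose fibers appear, which is the behavior the descriptor is meant to capture.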
(This article belongs to the Section Optical Sensors)
Figure 1
<p>Martindale Abrasion and Pilling Tester TF210 [<a href="#B33-sensors-20-03687" class="html-bibr">33</a>].</p>
Figure 2
<p>Acquiring images of textile fabric (<b>a</b>) The functional modules of Spark OCT <math display="inline"><semantics> <mrow> <mn>1300</mn> <mspace width="0.277778em"/> <mi>nm</mi> </mrow> </semantics></math> system; (<b>b</b>) the stack of acquired B-scans equivalent to volumetric data.</p>
Figure 3
<p>Flowchart of the proposed algorithm.</p>
Figure 4
<p>Layers inside the B-scan and their Hough transform in the horizontal direction <span class="html-italic">x</span> (at an angle of <math display="inline"><semantics> <msup> <mn>90</mn> <mo>°</mo> </msup> </semantics></math>): <math display="inline"><semantics> <msub> <mi>L</mi> <mi>P</mi> </msub> </semantics></math>—pilling layer; <math display="inline"><semantics> <msub> <mi>L</mi> <mi>F</mi> </msub> </semantics></math>—fabric layer; <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>z</mi> <mi>P</mi> </msub> </mrow> </semantics></math>—height of <math display="inline"><semantics> <msub> <mi>L</mi> <mi>P</mi> </msub> </semantics></math> layer; <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>z</mi> <mi>F</mi> </msub> </mrow> </semantics></math>—height of <math display="inline"><semantics> <msub> <mi>L</mi> <mi>F</mi> </msub> </semantics></math> layer; <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>z</mi> <mo>)</mo> </mrow> </semantics></math>—the Hough transform of a B-scan in the horizontal direction; <math display="inline"><semantics> <msub> <mi>z</mi> <mn>0</mn> </msub> </semantics></math>—the middle line position of the fabric layer <math display="inline"><semantics> <msub> <mi>L</mi> <mi>F</mi> </msub> </semantics></math>; <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>z</mi> <mn>2</mn> </msub> </mrow> </semantics></math>—limits of the pilling layer <math display="inline"><semantics> <msub> <mi>L</mi> <mi>P</mi> </msub> </semantics></math>.</p>
Figure 5
<p>Illustration of pilling layer extraction steps for a knitwear B-scan: (<b>a</b>) example B-scan image; (<b>b</b>) the image from figure (<b>a</b>) binarized using the Otsu method with the detected knitwear surface line; (<b>c</b>) the horizontal Hough transform <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>z</mi> <mo>)</mo> </mrow> </semantics></math> of the binary image in figure (<b>b</b>) with the line of the material surface detection level <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>·</mo> <mi>H</mi> <mo>(</mo> <msub> <mi>z</mi> <mn>0</mn> </msub> <mo>)</mo> </mrow> </semantics></math> (Equation (<a href="#FD6-sensors-20-03687" class="html-disp-formula">6</a>)); (<b>d</b>) the image from figure (<b>a</b>) with the pilling layer detected between horizontal lines.</p>
Figure 6
<p>Knitwear surfaces after the standardized Martindale pilling test: 0 W indicate fabrics without laser treatment; 14–18 W indicate fabrics after laser ablation.</p>
Figure 7
<p>Exemplary two-dimensional OCT images (B-scans) of knitwear cross-sections taken from volumetric data, illustrating the appearance of a layer above the fabric in the absence of pilling (control samples of <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>2</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>1</mn> </mrow> </semantics></math> after standardized abrasion test), in the early stages of pilling (all tested fabrics after manual tests), and during the pilling phase after standardized abrasion test (<math display="inline"><semantics> <mrow> <mi>F</mi> <mn>2</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>3</mn> </mrow> </semantics></math>). 1—loose fabric fibers, 2—cut fiber fragments, 3—a pill attached to the fabric surface.</p>
Figure 8
<p>Selected texture feature variability due to fabric abrasion: <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>2</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>3</mn> </mrow> </semantics></math>—types of fabric tested; <math display="inline"><semantics> <mrow> <msub> <mi>H</mi> <mn>1</mn> </msub> <mspace width="0.222222em"/> <mrow> <mo>(</mo> <mi>e</mi> <mi>n</mi> <mi>e</mi> <mi>r</mi> <mi>g</mi> <mi>y</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>H</mi> <mn>2</mn> </msub> <mspace width="0.222222em"/> <mrow> <mo>(</mo> <mi>c</mi> <mi>o</mi> <mi>n</mi> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>s</mi> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>H</mi> <mn>4</mn> </msub> <mspace width="0.222222em"/> <mrow> <mo>(</mo> <mi>v</mi> <mi>a</mi> <mi>r</mi> <mi>i</mi> <mi>a</mi> <mi>n</mi> <mi>c</mi> <mi>e</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>H</mi> <mn>5</mn> </msub> <mspace width="0.222222em"/> <mrow> <mo>(</mo> <mi>h</mi> <mi>o</mi> <mi>m</mi> <mi>o</mi> <mi>g</mi> <mi>e</mi> <mi>n</mi> <mi>e</mi> <mi>i</mi> <mi>t</mi> <mi>y</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>H</mi> <mn>6</mn> </msub> <mspace width="0.222222em"/> <mrow> <mo>(</mo> <mi>s</mi> <mi>u</mi> <mi>m</mi> <mspace width="0.222222em"/> <mi>a</mi> <mi>v</mi> <mi>e</mi> <mi>r</mi> <mi>a</mi> <mi>g</mi> <mi>e</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>H</mi> <mn>9</mn> </msub> <mspace width="0.222222em"/> <mrow> <mo>(</mo> <mi>e</mi> <mi>n</mi> <mi>t</mi> <mi>r</mi> <mi>o</mi> <mi>p</mi> <mi>y</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>—Haralick 
feature types; <math display="inline"><semantics> <msub> <mi>f</mi> <mi>P</mi> </msub> </semantics></math>—fraction of fiber pixels; <math display="inline"><semantics> <mrow> <mi>N</mi> <mi>P</mi> </mrow> </semantics></math>—not pilled (control) sample; <math display="inline"><semantics> <mrow> <mi>T</mi> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>T</mi> <mn>2</mn> </mrow> </semantics></math>—manual tests; <math display="inline"><semantics> <mrow> <mi>A</mi> <mi>T</mi> </mrow> </semantics></math>—apparatus test.</p>
Figure 9
<p>Score plot for <span class="html-italic">PC</span>1 and <span class="html-italic">PC</span>2 for different fabric abrasion tests, where: <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>2</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>3</mn> </mrow> </semantics></math>—tested fabrics; <math display="inline"><semantics> <mrow> <mi>N</mi> <mi>P</mi> </mrow> </semantics></math>—not pilled (control) sample; <math display="inline"><semantics> <mrow> <mi>T</mi> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>T</mi> <mn>2</mn> </mrow> </semantics></math>—manual tests; <math display="inline"><semantics> <mrow> <mi>A</mi> <mi>T</mi> </mrow> </semantics></math>—apparatus test. A complete set of Haralick’s texture features (<math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math>–<math display="inline"><semantics> <msub> <mi>H</mi> <mn>13</mn> </msub> </semantics></math>) calculated for the OCT images of the fabrics was used for the analysis.</p>
Figure 10
<p>Score plot of strongly correlated textural features in <math display="inline"><semantics> <msub> <mi>L</mi> <mi>P</mi> </msub> </semantics></math> layer of OCT images cast on the plane of principal components <math display="inline"><semantics> <mrow> <mi>P</mi> <mi>C</mi> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>P</mi> <mi>C</mi> <mn>2</mn> </mrow> </semantics></math> for different fabric abrasion tests. In addition to the original textile samples, fabrics subjected to laser ablation were also used for PCA analysis, marked in the plot with the suffix a. (<math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>H</mi> <mn>2</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>H</mi> <mn>4</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>H</mi> <mn>5</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>H</mi> <mn>9</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>f</mi> <mi>P</mi> </msub> </semantics></math>)—coordinates of the original feature space. <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>2</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>3</mn> </mrow> </semantics></math>—tested fabrics; <math display="inline"><semantics> <mrow> <mi>N</mi> <mi>P</mi> </mrow> </semantics></math>—not pilled (control) sample; <math display="inline"><semantics> <mrow> <mi>T</mi> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>T</mi> <mn>2</mn> </mrow> </semantics></math>—manual tests; <math display="inline"><semantics> <mrow> <mi>A</mi> <mi>T</mi> </mrow> </semantics></math>—apparatus test.</p>
Figure 11
<p>Plots of the <math display="inline"><semantics> <mrow> <mi>P</mi> <mi>C</mi> <mn>1</mn> </mrow> </semantics></math> component in the function of different pilling tests for various fabric types: <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>2</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>F</mi> <mn>3</mn> </mrow> </semantics></math>—tested fabrics; <math display="inline"><semantics> <mrow> <mi>F</mi> <msup> <mn>1</mn> <mi>a</mi> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>F</mi> <msup> <mn>2</mn> <mi>a</mi> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>F</mi> <msup> <mn>3</mn> <mi>a</mi> </msup> </mrow> </semantics></math>—tested fabrics after the laser ablation; <math display="inline"><semantics> <mrow> <mi>N</mi> <mi>P</mi> </mrow> </semantics></math>—not pilled (control) sample; <math display="inline"><semantics> <mrow> <mi>T</mi> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>T</mi> <mn>2</mn> </mrow> </semantics></math>—manual tests; <math display="inline"><semantics> <mrow> <mi>A</mi> <mi>T</mi> </mrow> </semantics></math>—apparatus test.</p>
16 pages, 3151 KiB  
Article
Efficacy of Quantitative Muscle Ultrasound Using Texture-Feature Parametric Imaging in Detecting Pompe Disease in Children
by Hong-Jen Chiou, Chih-Kuang Yeh, Hsuen-En Hwang and Yin-Yin Liao
Entropy 2019, 21(7), 714; https://doi.org/10.3390/e21070714 - 22 Jul 2019
Cited by 7 | Viewed by 3763
Abstract
Pompe disease is a hereditary neuromuscular disorder attributed to acid α-glucosidase deficiency, and accurately identifying this disease is essential. Our aim was to discriminate normal muscles from neuropathic muscles in children affected by Pompe disease using a texture-feature parametric imaging method that simultaneously considers microstructure and macrostructure. The study included 22 children aged 0.02–54 months with Pompe disease and six healthy children aged 2–12 months with normal muscles. For each subject, transverse ultrasound images of the bilateral rectus femoris and sartorius muscles were obtained. Gray-level co-occurrence matrix-based Haralick’s features were used for constructing parametric images and identifying neuropathic muscles: autocorrelation (AUT), contrast, energy (ENE), entropy (ENT), maximum probability (MAXP), variance (VAR), and cluster prominence (CPR). Stepwise regression was used in feature selection. The Fisher linear discriminant analysis was used for combination of the selected features to distinguish between normal and pathological muscles. The VAR and CPR were the optimal feature set for classifying normal and pathological rectus femoris muscles, whereas the ENE, VAR, and CPR were the optimal feature set for distinguishing between normal and pathological sartorius muscles. The two feature sets were combined to discriminate between children with and without neuropathic muscles affected by Pompe disease, achieving an accuracy of 94.6%, a specificity of 100%, a sensitivity of 93.2%, and an area under the receiver operating characteristic curve of 0.98 ± 0.02. The CPR for the rectus femoris muscles and the AUT, ENT, MAXP, and VAR for the sartorius muscles exhibited statistically significant differences in distinguishing between the infantile-onset Pompe disease and late-onset Pompe disease groups (p < 0.05). 
Texture-feature parametric imaging can be used to quantify and map tissue structures in skeletal muscles and distinguish between pathological and normal muscles in children or newborns. Full article
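This abstract combines the selected texture features with Fisher linear discriminant analysis. A minimal two-class, two-feature sketch of the Fisher criterion is shown below; the data are made-up toy vectors, not the study's muscle measurements, and the study's full pipeline additionally involves stepwise feature selection:

```python
def fisher_lda_2d(class_a, class_b):
    """Fisher linear discriminant for two classes of 2-D feature vectors.
    Returns the weight vector w = Sw^{-1} (m_a - m_b), where Sw is the
    pooled within-class scatter matrix; projecting samples onto w
    maximizes between-class separation relative to within-class spread."""
    def mean(xs):
        n = len(xs)
        return [sum(x[0] for x in xs) / n, sum(x[1] for x in xs) / n]

    def scatter(xs, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x in xs:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    ma, mb = mean(class_a), mean(class_b)
    Sa, Sb = scatter(class_a, ma), scatter(class_b, mb)
    Sw = [[Sa[i][j] + Sb[i][j] for j in range(2)] for i in range(2)]
    # Invert the 2x2 pooled scatter matrix explicitly.
    det = Sw[0][0] * Sw[1][1] - Sw[0][1] * Sw[1][0]
    inv = [[Sw[1][1] / det, -Sw[0][1] / det],
           [-Sw[1][0] / det, Sw[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]
```

Classifying a new sample then amounts to projecting its feature vector onto w and thresholding the scalar, which is how a feature set such as {variance, cluster prominence} can be collapsed into a single discriminant score.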
(This article belongs to the Special Issue Entropy in Image Analysis II)
Figure 1
<p>Texture-feature parametric imaging of a normal rectus femoris muscle in a 12 month old boy. (<b>a</b>) Original B-mode image, (<b>b</b>) extracted rectus femoris muscle region (indicated by the white dashed line) in the B-mode image, (<b>c</b>) autocorrelation image, (<b>d</b>) contrast image, (<b>e</b>) energy image, (<b>f</b>) entropy image, (<b>g</b>) maximum probability image, (<b>h</b>) variance image, and (<b>i</b>) cluster prominence image. F: femur bone reflection, VI: vastus intermedius muscle.</p>
Figure 2
<p>Texture-feature parametric imaging of a pathological rectus femoris muscle in a 10 day old boy with infantile-onset Pompe disease. (<b>a</b>) Original B-mode image, (<b>b</b>) extracted rectus femoris muscle region (indicated by the white dashed line) in the B-mode image, (<b>c</b>) autocorrelation image, (<b>d</b>) contrast image, (<b>e</b>) energy image, (<b>f</b>) entropy image, (<b>g</b>) maximum probability image, (<b>h</b>) variance image, and (<b>i</b>) cluster prominence image.</p>
Figure 3
<p>Texture-feature parametric imaging of a normal sartorius muscle in a 12 month old boy. (<b>a</b>) Original B-mode image, (<b>b</b>) extracted sartorius muscle region (indicated by the white dashed line) in the B-mode image, (<b>c</b>) autocorrelation image, (<b>d</b>) contrast image, (<b>e</b>) energy image, (<b>f</b>) entropy image, (<b>g</b>) maximum probability image, (<b>h</b>) variance image, and (<b>i</b>) cluster prominence image.</p>
Figure 4
<p>Texture-feature parametric imaging of a pathological sartorius muscle in a five month old boy with late-onset Pompe disease. (<b>a</b>) Original B-mode image, (<b>b</b>) extracted sartorius muscle region (indicated by the white dashed line) in the B-mode image, (<b>c</b>) autocorrelation image, (<b>d</b>) contrast image, (<b>e</b>) energy image, (<b>f</b>) entropy image, (<b>g</b>) maximum probability image, (<b>h</b>) variance image, and (<b>i</b>) cluster prominence image.</p>
Figure 5
<p>Box plots of the distributions of the seven parameters for normal rectus femoris muscles and pathological rectus femoris muscles affected by Pompe disease. (<b>a</b>) AUT: autocorrelation; (<b>b</b>) CON: contrast; (<b>c</b>) ENE: energy; (<b>d</b>) ENT: entropy; (<b>e</b>) MAXP: maximum probability; (<b>f</b>) VAR: variance; (<b>g</b>) CPR: cluster prominence; *** <span class="html-italic">p</span> &lt; 0.001.</p>
Figure 6
<p>Box plots of the distributions of the seven parameters for normal sartorius muscles and pathological sartorius muscles affected by Pompe disease. (<b>a</b>) AUT: autocorrelation; (<b>b</b>) CON: contrast; (<b>c</b>) ENE: energy; (<b>d</b>) ENT: entropy; (<b>e</b>) MAXP: maximum probability; (<b>f</b>) VAR: variance; (<b>g</b>) CPR: cluster prominence; * <span class="html-italic">p</span> &lt; 0.05; ** <span class="html-italic">p</span> &lt; 0.01; and *** <span class="html-italic">p</span> &lt; 0.001.</p>
Figure 7
<p>Receiver operating characteristic (ROC) curves of each feature set. F1: comprising the variance and cluster prominence for rectus femoris muscles. F2: comprising the energy, variance, and cluster prominence for sartorius muscles. F3: constituting a combination of F1 and F2.</p>