TELKOMNIKA JOURNAL
  • Universitas Ahmad Dahlan, 4th Campus, 9th Floor, LPPI Room
    Jl. Ringroad Selatan, Kragilan, Tamanan, Banguntapan, Bantul, Yogyakarta, Indonesia 55191
  • +62 (274) 563515, 511830, 379418, 371120
The number of studies on COVID-19 has rapidly grown in recent years. The pandemic has caused widespread disruption, including in renewable energy research. This study examines the impact of the COVID-19 pandemic on renewable energy research through a bibliometric analysis of the Scopus indexing database from 2020 to 2023. The bibliometric approach is used to analyze authors, affiliations, publication sources, keywords, thematic maps, and trending topics. The analysis focuses on a comprehensive overview of the published research and identifies areas where future research is needed. The results show that the number of studies on COVID-19's effect on renewable energy continues to grow and that a solid collaboration network exists. China has a significant presence in studies of the impact of COVID-19 on renewable energy, including organization and university affiliations, numbers of single- and multiple-country publications, funding sponsors, and most-cited countries. The findings also reveal niche themes, research trends, and shifts in thematic evolution. These findings contribute to a better understanding of the impact of the COVID-19 pandemic on renewable energy research and point to new directions for future work.
This paper reports the development and switching-controller design of an inverted pendulum system (IPS) platform. The Euler-Lagrange approach is first used to model the dynamics of the IPS, taking into account the impact of friction forces during its movements. The paper then derives a switching control method that swings the pendulum rod into the neighborhood of the equilibrium point and stabilizes it there. The implemented switching controller consists of: i) a nonlinear swing-up control, which brings the pendulum to the vertical position, and ii) a linear stabilizing control, which keeps the pendulum rod vertical within a neighborhood of the vertical axis. The nonlinear controller is constructed using Lyapunov's method, while the linear controller is designed within the linear quadratic regulator (LQR) framework. Simulation and experimental results are presented to show the effectiveness of the proposed switching controller.
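As a rough illustration of the stabilizing half of such a switching scheme, the sketch below computes an LQR gain for a linearized cart-pendulum model; the physical parameters and weight matrices are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative cart-pendulum parameters (not from the paper)
M, m, l, g = 0.5, 0.2, 0.3, 9.81   # cart mass, pendulum mass, length, gravity

# Linearization about the upright equilibrium, state = [x, dx, theta, dtheta]
A = np.array([[0, 1, 0, 0],
              [0, 0, -m * g / M, 0],
              [0, 0, 0, 1],
              [0, 0, (M + m) * g / (M * l), 0]])
B = np.array([[0], [1 / M], [0], [-1 / (M * l)]])

Q = np.diag([10, 1, 100, 1])   # state weights (a tuning choice)
R = np.array([[1.0]])          # input weight

P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P         # LQR gain: u = -K x, used when the
                                       # pendulum is near the upright position
print("LQR gain:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```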
This study aims to design and evaluate a wireless electronic stethoscope that can transmit heart sounds using a Bluetooth HC-05 module and a Bluetooth 5.0 transmitter. This novel design contributes to the remote diagnosis and monitoring of heart conditions, especially for patients with infectious diseases. The heart sound signals are captured using a condenser microphone, amplified, filtered, and converted to digital data by a microcontroller. The data are then transmitted by the Bluetooth HC-05 to a receiving module and by the Bluetooth 5.0 transmitter to a headset. Quality-of-service parameters such as throughput, delay, and packet loss ratio (PLR) of the data transmission are measured at different distances. The results show that the wireless electronic stethoscope can transmit heart sound data with a small PLR of 0.10% and a throughput of 1002.5 bps. The study concludes that the wireless electronic stethoscope is an effective and useful device for examining heart conditions remotely without compromising the functionality of the device.
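For readers reproducing the quality-of-service measurements, a minimal sketch of how throughput, delay, and PLR can be derived from send/receive logs is shown below; the packet records and sizes are hypothetical.

```python
def qos_metrics(sent, received):
    """Compute throughput (bps), mean delay (s), and packet loss ratio (%)
    from lists of (timestamp_s, size_bytes) records. A simplified sketch;
    a real measurement would match packets by sequence number."""
    plr = 100.0 * (len(sent) - len(received)) / len(sent)
    span = received[-1][0] - received[0][0]              # measurement window
    throughput = sum(size * 8 for _, size in received) / span
    # Mean one-way delay, pairing packets in arrival order
    delay = sum(r[0] - s[0] for s, r in zip(sent, received)) / len(received)
    return throughput, delay, plr

# Example: 20-byte frames sent every 0.16 s, one packet lost in transit
sent = [(i * 0.16, 20) for i in range(100)]
received = [(t + 0.02, n) for t, n in sent[:99]]
print(qos_metrics(sent, received))
```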
This research proposed and verified a novel method for realizing end-fire radial line slot array (RLSA) antennas. The method uses high beam-squint values in the design of the slot pairs to shift the antenna's beam toward the end-fire direction. Furthermore, identical slot pairs are placed in the antenna's background to squint the beam further toward the end-fire direction. Using this method, forty multibeam end-fire RLSA antennas were modeled and simulated to determine the most efficient model for fabrication. The accuracy of the simulations was confirmed through measurements of the fabricated prototype, which show good agreement with the simulation results and confirm the validity of the proposed method. The results show that it is possible to design an antenna with four end-fire beams with a gain of 8 dBi, directions of 0°, 90°, 180°, and 270° in azimuth, and a beamwidth of about 20°. The antenna also showed low reflection and a bandwidth of about 500 MHz, which is suitable for Wi-Fi applications.
The humanoid robot soccer system faces a notable challenge in object detection: it concentrates primarily on identifying the ball and often neglects crucial elements like opposing robots and goals, resulting in on-field collisions and imprecise ball shooting. This study comparatively evaluates three you only look once (YOLO) real-time object detection variants: YOLOv8, YOLOv7, and YOLO-NAS. A dataset of 2104 annotated images covering the classes ball, goalpost, and robot was curated from Roboflow and robot-captured images. The dataset was partitioned into training, validation, and testing sets, and each YOLO model underwent extensive fine-tuning over 100 epochs on this custom dataset, leveraging the pre-trained common objects in context (COCO) model. Performance was assessed using mean average precision (mAP) and inference speed. YOLOv8 achieved the highest accuracy with a mAP of 0.92, while YOLOv7 showed the fastest inference speed of 24 ms on the Jetson Nano platform. Balancing accuracy and speed, YOLO-NAS emerged as the optimal choice and is therefore recommended for object detection in humanoid soccer robots, regardless of team affiliation. Future research should focus on enhancing object detection through advanced training techniques, model architectures, and sensor fusion for improved performance in dynamic environments, potentially optimizing through scenario-specific fine-tuning.
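A minimal fine-tuning sketch in the spirit of the experiment, using the Ultralytics YOLOv8 API; the dataset file soccer.yaml is hypothetical and stands in for the curated ball/goalpost/robot dataset.

```python
# pip install ultralytics
from ultralytics import YOLO

# Start from COCO-pretrained weights, as the abstract describes
model = YOLO("yolov8n.pt")

# 'soccer.yaml' is a hypothetical dataset file listing the
# ball / goalpost / robot classes and the train/val/test splits
model.train(data="soccer.yaml", epochs=100, imgsz=640)

metrics = model.val()          # evaluate on the validation split
print(metrics.box.map50)       # mAP@0.5, the headline metric above
```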
A novel technique utilizing a convolutional autoencoder (CAE) is introduced to enhance the spatial resolution of multispectral (MS) images while mitigating spectral distortion. First, an original panchromatic (PAN) image is reconstructed from its spatially degraded version. The relationship between the original PAN image and its degraded version is then used to reconstruct the high-resolution MS image; in addition, an intensity component of the MS image, obtained using adaptive intensity-hue-saturation (AIHS), is reconstructed using the same relationship. Two remote sensing datasets are adopted, and the effect of the patch size and overlapping pixels on spectral and spatial distortion is considered. After training the CAE, the low-resolution MS image and its intensity component are given to the trained network as input to obtain an MS image and intensity component with better detail. Finally, the fused image is obtained using a component substitution (CS) framework. Experimental findings corroborate that the proposed method yields superior outcomes compared with several existing approaches, demonstrating advantages in both objective metrics and visual fidelity.
Floods are among the most common natural disasters, often resulting in substantial financial losses to property and possessions and adversely affecting human lives. Implementing preventive measures is therefore crucial, giving inhabitants ample time to evacuate vulnerable areas before flood events occur. To address the flood problem, numerous scholars have put forth solutions such as fuzzy system models and suitable infrastructure. However, applying a fuzzy system often results in a loss of interpretability of the fuzzy rules. To address this issue, we propose reframing the optimization problem by incorporating stage costs alongside the terminal cost. Results show that the proposed hybrid of fuzzy logic and neural networks (NNs) can mitigate the loss of interpretability. The proposed method was also employed in a flood early-detection system integrated with the social media platform Twitter. The proposed concepts are validated through case studies, showcasing their effectiveness in tasks such as XOR-classification problems.
Technology's increasing role in everyday life has pushed the evolution of the internet of things (IoT), which now permeates industries like information technology, agribusiness, and transportation. Critical concerns in IoT security include platform diversity and issues with authentication and authorization. Critical vulnerabilities identified by researchers include unencrypted communications, compromised interfaces, and compromised access control processes. Narrowband IoT (NB-IoT) has emerged in response: built on the cellular network, this technology is designed for improved security and efficiency, operating within fourth-generation mobile networks and leveraging essential network components. The current study focuses on NB-IoT vulnerabilities, particularly in the radio segment, which is notably exposed. The research utilized the open-source tool OpenLTE and hardware such as software-defined radio (SDR) in a setting with active NB-IoT sensors on an LTE network. This included deploying a test listening tool and a laboratory-based IMSI catcher to intercept active device communications in a testbed. The results highlight significant vulnerabilities: sensors were deactivated following simulated network attacks with a rogue eNodeB and tracking area update (TAU) messages, revealing the technology's susceptibility to connection failure.
This research examines the efficacy of random search (RS) in hyperparameter tuning, comparing its performance to baseline methods, namely manual search and grid search. Our analysis spans several deep learning (DL) architectures: multilayer perceptron (MLP), convolutional neural network (CNN), and AlexNet, implemented on the prominent benchmark datasets Modified National Institute of Standards and Technology (MNIST) and Canadian Institute for Advanced Research-10 (CIFAR-10). The evaluation adopts a multi-objective framework, navigating the delicate trade-offs between conflicting performance metrics, including accuracy, F1-score, and model parameter size. The primary objective of employing a multi-objective evaluation framework is to improve understanding of how these performance metrics interact and influence each other; in real-world scenarios, DL models often need to strike a balance between such conflicting objectives. This research adds to the growing body of knowledge on hyperparameter tuning for DL models and serves as a reference point for practitioners seeking to optimize their DL architectures. The results of our analysis provide insights into the intricate balancing act required during hyperparameter fine-tuning and contribute to the ongoing advancement of best practices in optimizing DL models.
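As a hedged sketch of the RS baseline, the snippet below runs scikit-learn's RandomizedSearchCV over an MLP; the search ranges and the digits stand-in dataset are our assumptions, not the paper's setup.

```python
from scipy.stats import loguniform, randint
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)   # small stand-in for MNIST

# Search distributions for a few MLP hyperparameters (illustrative ranges)
space = {
    "hidden_layer_sizes": [(64,), (128,), (64, 64)],
    "alpha": loguniform(1e-5, 1e-1),            # L2 regularization strength
    "learning_rate_init": loguniform(1e-4, 1e-1),
    "batch_size": randint(32, 257),
}

search = RandomizedSearchCV(
    MLPClassifier(max_iter=200), space,
    n_iter=20, scoring="f1_macro", cv=3, random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```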
Collaborative filtering (CF) is a widely used method in recommender systems. CF works by analyzing rating patterns from previous users to produce recommendations matched to their interests. However, it faces a crucial problem, sparsity: a condition where much of the rating data is empty, which degrades the quality of the recommendations produced. To address this problem, this study compares imputation methods including mean, min, max, and k-nearest neighbor imputation (KNNI). The steps taken include imputation of the empty data, followed by similarity calculations using cosine similarity, and evaluation using root mean square error (RMSE). The experimental results show that the mean method performs best, with an average similarity value of 0.99 and an RMSE of 0.98.
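A small sketch of the pipeline described above, mean imputation versus KNNI followed by cosine similarity, on a toy rating matrix; the held-out ground-truth ratings are hypothetical.

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny user-item rating matrix with missing entries (np.nan = not rated)
R = np.array([[5, 3, np.nan, 1],
              [4, np.nan, np.nan, 1],
              [1, 1, np.nan, 5],
              [np.nan, 1, 5, 4]], dtype=float)

# Mean imputation baseline vs. KNNI, as compared in the study
mean_filled = np.where(np.isnan(R), np.nanmean(R, axis=0), R)
knn_filled = KNNImputer(n_neighbors=2).fit_transform(R)

# User-user similarities on the imputed matrix
print(cosine_similarity(knn_filled))

# RMSE against held-out true ratings (hypothetical ground truth)
truth = {(0, 2): 4.0, (1, 1): 3.0}
err = [knn_filled[i, j] - v for (i, j), v in truth.items()]
print("RMSE:", np.sqrt(np.mean(np.square(err))))
```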
Diabetes is one of the deadliest chronic diseases because most sufferers do not realize they have it. More accurate prediction of diabetes is needed to reduce the risk of adverse outcomes for sufferers. This research optimizes the decision tree (DT) classification method for diabetes prediction. Optimization covers the splitting criterion, the data split, particle swarm optimization (PSO), and parameter optimization to find the most accurate diabetes forecast. The splitting criteria compared are the gain ratio (GR), information gain (IG), and Gini index (GI). The data are split into training and testing sets in three ratios: 70:30, 80:20, and 90:10. PSO and parameter optimization are applied to increase the accuracy value. The processed data are taken from the UCI machine learning repository, with 520 records and 17 attributes (1 class/label attribute). From the experiments, the GI criterion with a 90:10 split obtained the greatest accuracy of 98.08%, and its combination with PSO resulted in an accuracy of 97.66%. Meanwhile, parameter optimization with a 90:10 split combined with the GR criterion resulted in the highest accuracy of 97.90%.
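A minimal sketch of the criterion/split grid described above using scikit-learn; note that scikit-learn ships Gini and entropy (information gain) but not gain ratio, and the breast cancer dataset stands in for the UCI diabetes data.

```python
from itertools import product
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in for the UCI diabetes data

# Try each splitting criterion against each train:test ratio
for criterion, test_size in product(["gini", "entropy"], [0.30, 0.20, 0.10]):
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, test_size=test_size, random_state=42, stratify=y)
    clf = DecisionTreeClassifier(criterion=criterion, random_state=42)
    acc = clf.fit(Xtr, ytr).score(Xte, yte)
    print(f"{criterion:8s} split {1-test_size:.0%}:{test_size:.0%} acc={acc:.4f}")
```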
The increase in signature forgery cases can be attributed to forged signatures slipping past manual signature verification. Researchers have developed various machine learning and deep learning methods to verify the authenticity of signatures, one of which uses convolutional neural networks (CNNs). This research aims to develop a mobile application for handwritten signature verification using a CNN architecture with a batch normalization layer added. Our proposed method achieved a verification accuracy of 86.36%, with a 0.061 false acceptance rate (FAR), 0.303 false rejection rate (FRR), and 0.182 equal error rate (EER), and is compact enough to be embedded in smartphones. However, further development of the CNN model and its integration with mobile applications is still needed.
High accuracy in breast cancer classification contributes to the effectiveness of early breast cancer detection. This study aimed to improve multiview convolutional neural network (MVCNN) performance for classifying breast cancer based on combined mediolateral oblique (MLO) and craniocaudal (CC) views. The main contribution of this study is a system consisting of an effective image pre-processing method that builds datasets using background removal and image enhancement, together with a simple pre-processing stage for the classifier that requires no separate feature extraction process. Furthermore, classifier performance was improved by combining the dataset pre-processing techniques and evaluating the best hyperparameters for the MVCNN architecture. The digital database for screening mammography (DDSM) dataset was used for evaluation. The best result from the proposed method achieved accuracy, precision, sensitivity, and specificity of 98.63%, 97.29%, 100%, and 97.29%, respectively. The evaluation results demonstrate the capability to improve classification performance, and the proposed method can be applied to breast cancer detection.
Emotions are mental states that can be categorized into positive and negative feelings; stress is an example of a negative emotion. Research has demonstrated that acute and chronic stress can change physiological variables such as heart rate variability (HRV) and electroencephalography (EEG). This research aims at the early prevention and management of stress through a detection system that is comfortable to use, reliable, and accurate. The Einthoven triangle rule was used to gather electrocardiogram (ECG) signals, while EEG signals were obtained from Fp1 and F3 connected to a Mikromedia 7 board with the STM32F746ZG chipset. Various parameters were examined, including ECG signals in the time domain, frequency domain, and non-linear analysis, and EEG signals in the frequency domain. Healthy subjects aged 18-23 underwent different stress-inducing stages, with stress levels validated through the STAI-Y1 questionnaire. To process the HRV and EEG features, Pearson's correlation function (PCF) was employed to select appropriate features for the classification method. The proposed classification method is an artificial neural network (ANN) with stratified K-fold cross-validation, which yielded a stress-level output accuracy of 95%. Additionally, evaluation against the STAI-Y1 questionnaire results indicated a similarity score of 90.91%. This research has potential applications for individuals experiencing stress, providing a valuable tool for stress detection.
This review provides a concise overview of key transformer-based language models, including bidirectional encoder representations from transformers (BERT), generative pre-trained transformer 3 (GPT-3), robustly optimized BERT pretraining approach (RoBERTa), a lite BERT (ALBERT), text-to-text transfer transformer (T5), generative pre-trained transformer 4 (GPT-4), and XLNet. These models have significantly advanced natural language processing (NLP) capabilities, each bringing unique contributions to the field. We delve into BERT's bidirectional context understanding, GPT-3's versatility with 175 billion parameters, and RoBERTa's optimization of BERT. ALBERT emphasizes model efficiency, T5 introduces a text-to-text framework, and GPT-4 excels in multimodal tasks. Safety considerations are highlighted, especially in GPT-4. Additionally, XLNet's permutation-based training achieves bidirectional context understanding. The motivations, advancements, and challenges of these models are explored, offering insights into the evolving landscape of large-scale language models.
Cauliflower is a popular winter crop in Bangladesh. However, cauliflower plants are vulnerable to several diseases that can reduce productivity and degrade quality. Manual monitoring of these diseases takes considerable effort and time, so automatic classification of diseased cauliflower through computer vision techniques is essential. This study retrieved ten statistical and gray-level co-occurrence matrix (GLCM)-based features from a cauliflower image dataset by applying a variety of image processing techniques. Afterwards, the SelectKBest method with the analysis of variance F-value (ANOVA F-value) was used to identify the most important attributes for classifying diseased cauliflower. Based on the ANOVA F-value, the top N (5 ≤ N ≤ 9) most dominant attributes are used to train and test five machine learning (ML) models. Finally, different performance metrics were used to evaluate the effectiveness of the employed ML models. The bagging classifier achieved the highest accuracy of 82.35% and outperformed the other ML classifiers on the remaining performance metrics as well.
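The feature-ranking step can be sketched with scikit-learn's SelectKBest and ANOVA F-value; the wine dataset here is a stand-in for the extracted GLCM feature table.

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_wine(return_X_y=True)   # stand-in for the GLCM feature table

# Rank features by ANOVA F-value and try the top-N subsets, as in the study
for n in range(5, 10):
    pipe = make_pipeline(SelectKBest(f_classif, k=n),
                         BaggingClassifier(random_state=0))
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"top-{n} features: accuracy={score:.4f}")
```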
The rise of social media platforms has increased the flow and dissemination of information, but it has also made generating and spreading rumors easier. Rumor detection requires understanding the context and semantics of text, dealing with the evolving nature of rumors, and processing vast amounts of data in real time. Deep learning (DL)-based techniques exhibit higher accuracy in detecting rumors on social media than many traditional machine learning approaches. This study presents a systematic review of DL approaches to rumor detection, analyzing datasets, pre-processing methods, feature taxonomy, and frequently used DL methods. For feature selection, we categorize features into three areas: text-based, user-based, and propagation-based. We also survey trends in DL models for rumor detection and classify them by model structure into convolutional neural networks (CNN), recurrent neural networks (RNN), graph neural networks (GNN), and other methods. The review offers insights into effective algorithms and strategies, aiming to guide researchers, developers, social media users, and governments in detecting and preventing the spread of false information. The study contributes to enhancing research in this field and identifies potential areas for future exploration.
Date palm trees grow in many tropical regions of the world and produce dates. Each variety can be differentiated by the shape, texture, size, and colour of its fruits. People have difficulty visualising and recognising the types of date fruits because there are many varieties and species. An Android-based mobile application is proposed to help users quickly identify dates from their images and expand their knowledge of dates. The date fruit species classification application categorises nine varieties of date fruits: Ajwa, Medjool, Rutab, Nabtat Ali, Meneifi, Galaxy, Sugaey, Shaishe, and Sokari. The classification, based on transfer learning from a pre-trained neural network, achieved a 94.2% accuracy rate. The mobile application features a user-friendly graphical interface that makes it easy to use and understand, and users can learn about different date fruit varieties and improve knowledge retention through a mini game. The application's usability, usefulness, and interface design were confirmed through a user acceptance survey.
The transition from an error-prone, slower, high-volume legacy system such as a monolith to a faster, lighter, less error-prone microservices-based system is not always simple. Microservices are independently deployable and allow for better team autonomy. In this work, several migration efforts from a legacy monolithic system to a purely distributed microservices-based system were tested and deployed while tracking two DevOps measures: software build-and-deployment time, and latency, in both the monolithic and microservices settings. Several real-time projects were used to measure performance and the time taken to execute the experiments. To measure total build-and-deployment time and latency, the industry-recommended tools Jenkins, Prometheus, and JMeter were installed. It was observed that building and deploying 10 microservices to containers took a total of 7 seconds, whereas 10 monolithic applications took almost 260 seconds to build and deploy to the application server. When increasing the request rate up to 3,000 requests per second, the response time of the monolithic applications degraded while that of the microservices stayed the same. The main conclusion is that microservices' response times are rarely affected by the request rate.
This paper discusses software metrics and their impact on software defect prediction in the NASA metrics data program (MDP) dataset. The NASA MDP dataset contains four categories of software metrics: Halstead, McCabe, LoC, and misc. However, no study has shown which metrics help increase the area under the curve (AUC) on the NASA MDP dataset. This study utilizes 12 modules from the NASA MDP dataset and tests them against 14 combinations of software metrics drawn from the four existing categories. Classification is then performed using the k-nearest neighbor (kNN) method. The research concludes that software metrics have a significant impact on the AUC value, with the LoC+McCabe+misc combination driving the improvement in AUC. The combination with the most negative impact on AUC is McCabe alone, and the Halstead metrics also play a role in decreasing the performance of the other metrics.
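A hedged sketch of evaluating metric combinations with kNN and AUC; the module table and column grouping below are hypothetical stand-ins for the NASA MDP modules.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

np.random.seed(0)
# Hypothetical NASA-MDP-style module table; column groups are illustrative
df = pd.DataFrame({
    "loc_total": np.random.randint(10, 500, 200),
    "mccabe_cyclomatic": np.random.randint(1, 30, 200),
    "misc_branch_count": np.random.randint(1, 60, 200),
    "defective": np.random.randint(0, 2, 200),
})
groups = {
    "LoC+McCabe+misc": ["loc_total", "mccabe_cyclomatic", "misc_branch_count"],
    "McCabe": ["mccabe_cyclomatic"],
}
# Cross-validated probabilities -> AUC, per metric combination
for name, cols in groups.items():
    proba = cross_val_predict(KNeighborsClassifier(), df[cols],
                              df["defective"], cv=5, method="predict_proba")
    print(name, "AUC:", round(roc_auc_score(df["defective"], proba[:, 1]), 3))
```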
This study aims to identify research gaps and future trends, and to provide a framework for the next generation of research assessing how much big data (BD) is employed in hospitality and tourism research. The study is based on a comprehensive quantitative evaluation of the relevant literature: Scopus- and Web of Science (WoS)-listed academic works. Submissions were assessed on the following criteria: an overview of the study's subject matter, including its theoretical and conceptual framework, data sources, data type and quantity, data collection methods, and data analysis methodologies. The research shows that the use of BD in hospitality and tourism management has increased in recent years, with massive volumes of data analyzed using analytical methods. The scope of this investigation is deliberately broad. Furthermore, this research contributes an in-depth and systematic assessment of the extent to which scholars in hospitality/tourism know about and work on business intelligence and BD. This is the first complete survey of the literature on this topic in hospitality and tourism.
This empirical investigation examines the influence of machine learning (ML) algorithms on cross-project defect prediction, using the AEEEM dataset as a foundation. The primary objective is to discern the nuanced influences of various algorithms on predictive performance, with the F1 score as the evaluation criterion. Four ML algorithms are assessed: random forest (RF), support vector machines (SVM), k-nearest neighbors (KNN), and logistic regression (LR); this choice reflects their prevalence in the software defect prediction literature and their diversity. Through rigorous experimentation and analysis, the investigation unveils compelling evidence affirming the superiority of RF over its counterparts. The F1 score is used as the evaluation metric because it captures the delicate balance between precision and recall that is essential in defect prediction scenarios. The nuanced examination of algorithmic efficacy provides practical insights for developers and practitioners navigating the challenges of cross-project defect prediction. By leveraging the rich and diverse AEEEM dataset, this study ensures a comprehensive exploration of algorithmic influences across varied software projects. The findings not only contribute to the academic discourse on defect prediction but also offer practical guidance for real-world application, emphasizing the pivotal role of RF in enhancing predictive accuracy and reliability.
We report the design, fabrication, and experimental results of an optical wavelength demultiplexer for a new wireless-optical signal converter for beyond-5G/6G mobile communication systems. The demultiplexer is based on a lithium niobate (LiNbO3) multimode interference (MMI) coupler and is intended for an advanced electro-optic modulator (EOM) capable of converting space division multiplexing (SDM) wireless signals to wavelength division multiplexing (WDM) optical signals. The designed MMI coupler displays a splitting ratio better than -13 dB in both the O and L bands, and the experimental results align well with the simulation. The use of the MMI coupler in the EOM enables direct conversion of SDM wireless signals to WDM optical signals without any additional power supply. The characteristics of the constructed MMI coupler can be modified by controlling the applied voltage of the device.
Over the past years, remote sensing, radar, and imaging applications have all made use of ultra-wideband (UWB) technology. This study undertakes an extensive analysis of a tree-shaped monopole antenna tailored for UWB systems. The antenna has a partial ground plane and a circular radiating patch, to which two ear-shaped extensions have been added to increase bandwidth. The substrate consists of FR-4 material with a dielectric constant of 4.3. To achieve optimal impedance matching for UWB systems, the antenna is fed via a coplanar waveguide (CPW). The design has a simple structure, small size, and easy integration with the substrate, with dimensions of 54 mm × 36 mm × 1.6 mm. All simulation results presented in this article were generated using computer simulation technology (CST) software. The monopole antenna exhibits an impressive impedance bandwidth of 9.6 GHz (146.68%), spanning from 1.99 GHz to 11.56 GHz. Furthermore, the simulated UWB circular monopole antenna exhibits omnidirectional radiation characteristics, a peak gain of 8 dB, a directivity of 8.2 dBi at 5 GHz, and a remarkable radiation efficiency of 97%. With these attributes, the proposed monopole UWB antenna shows significant potential for ground penetrating radar (GPR) applications.
This research designs, analyzes, and studies a 2.45 GHz rectangular microstrip patch antenna (RMPA). The antenna design uses Rogers RT5880 (lossy) substrate material with a dielectric permittivity of 2.2, a thickness of 1.5 mm, and a loss tangent of 0.0009. The antenna was designed and simulated using computer simulation technology (CST) Studio 2019 software, and the plots were created with OriginPro software. The simulation results showed a return loss (S11), voltage standing wave ratio (VSWR), gain, directivity, bandwidth, efficiency, and surface current of -45.992 dB, 1.0101, 6.115 dBi, 6.534 dBi, 70.8 MHz, 93.59%, and 49.9 A/m, respectively. This paper aims to improve return loss while keeping the VSWR near the ideal value of 1. Besides boosting antenna gain, directivity, and efficiency, the design can be used in future wireless applications, including mobile phones and wireless LANs. The proposed antenna outperforms earlier designs, demonstrating that the research has improved performance.
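The starting dimensions of such a patch can be reproduced from the standard transmission-line-model design equations (Balanis); the sketch below applies them to the 2.45 GHz/RT5880 parameters quoted above, though the paper's final optimized dimensions may differ.

```python
import math

def patch_dimensions(f0, eps_r, h):
    """Transmission-line-model equations for a rectangular microstrip
    patch: returns width and length in metres."""
    c = 299_792_458.0
    W = c / (2 * f0) * math.sqrt(2 / (eps_r + 1))              # patch width
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / (
        (eps_eff - 0.258) * (W / h + 0.8))                     # fringing extension
    L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL             # patch length
    return W, L

# 2.45 GHz patch on Rogers RT5880 (eps_r = 2.2, h = 1.5 mm), as in the paper
W, L = patch_dimensions(2.45e9, 2.2, 1.5e-3)
print(f"W = {W*1e3:.2f} mm, L = {L*1e3:.2f} mm")
```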
Annual reports serve as vital instruments for government ministries and agencies, enabling transparency and accountability in managing state budgets (APBN) and activities, thereby fulfilling a crucial role in public accountability, particularly in the context of sustainable development goal (SDG) 14. However, due to their extensive nature, it becomes imperative to conduct topic modeling analysis to discern trends and topics within these reports. In this study, latent Dirichlet allocation (LDA), a prominent topic modeling technique, is employed to analyze the annual reports of the Ministry of Marine Affairs and Fisheries (KKP) Indonesia from 2015 to 2022. Utilizing the coherence score as an evaluation metric, we assess the quality of the topic models for each report year. Our findings underscore the consistent emphasis on fisheries and marine-related initiatives, emphasizing their relevance to SDG 14 and Indonesia's maritime landscape. Ultimately, this study offers valuable insights to inform strategic planning and decision-making processes within the KKP, contributing to the advancement of SDG 14 and promoting sustainable development in Indonesia's fisheries and marine sectors.
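A minimal sketch of the LDA-plus-coherence workflow using gensim; the tokenized documents below are hypothetical stand-ins for the preprocessed KKP report text.

```python
# pip install gensim
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

# Hypothetical pre-tokenized report sentences (real input: the KKP reports)
docs = [["fisheries", "export", "tuna", "quota"],
        ["marine", "conservation", "coral", "reef"],
        ["aquaculture", "shrimp", "production", "export"],
        ["illegal", "fishing", "patrol", "vessel"]]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus, num_topics=2, id2word=dictionary,
               random_state=0, passes=10)

# c_v coherence, the kind of score used to compare models across years
cm = CoherenceModel(model=lda, texts=docs,
                    dictionary=dictionary, coherence="c_v")
print("coherence:", round(cm.get_coherence(), 3))
```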
In this research, we present comprehensive industrial and innovation results on using an artificial neural network (ANN) method to improve the performance of microstrip patch antennas for 5G, indoor-outdoor, and Ku-band uses. To determine whether an antenna is appropriate, this article discusses multiple methods, one of which is simulation using validated software such as the high frequency structure simulator (HFSS) and Altair Feko. The antenna is constructed on a Rogers RT5880 substrate with a dielectric constant of 2.2 and a loss tangent of 0.0009; its dimensions are 17.1053 mm in length and 16 mm in width. Despite its small size, it boasts an impressive maximum efficiency of almost 90% and a gain of approximately 8 dB. As indicators of ANN model performance, we report the R-squared value (99%), the mean square error (MSE) of approximately 0.0015, and the confidence interval (99%). The ANN models are the most accurate and have the lowest error rate when predicting efficiency and gain. The suggested antenna is a promising contender for the targeted Ku-band, indoor/outdoor, and 5G uses, as verified by the clustering of computer simulation technology (CST), HFSS, and Altair Feko simulation results with the measured outcomes and the predictions of the ANN approach.
This article provides a detailed evaluation of cutting-edge artificial intelligence (AI) approaches and metaheuristic algorithms for optimizing wind turbine location inside wind farms. The growing need for renewable energy sources has fueled an increase in research towards efficient and sustainable wind farm designs. To address this challenge, various AI techniques, including genetic algorithms (GA), particle swarm optimization (PSO), simulated annealing, artificial neural networks (ANNs), convolutional neural networks (CNNs), and reinforcement learning, have been explored in combination with metaheuristic algorithms. The goal is to discover optimal sites for turbine placement based on a variety of parameters such as energy output, cost-effectiveness, environmental impact, and geographical restrictions. The paper examines the advantages and disadvantages of each strategy and highlights current breakthroughs in the area. This assessment adds to continuing efforts to optimize wind farm design and promote the use of clean and sustainable energy sources by offering significant insights into current advances.
This article proposes a distributed secondary control scheme based on the voltage-shifting method for standalone direct current (DC) microgrids, to enforce proportional power-sharing under imbalanced feeder line impedances and to compensate for the DC bus voltage error caused by droop control. The secondary control is implemented on local controllers to increase reliability, and a low-bandwidth communication network is used to exchange the converters' information. Using this information, the reference voltage for the primary control is adjusted to compensate for the influence of the droop control and the line impedances. The appropriate voltage-shifting terms are determined by the delta iteration method. The proposed control is evaluated in simulations using PLECS software.
The background of this research is the use of an open-loop system, without feedback, that builds torque at the target position. As a theoretical basis, the rotation of a servo motor is managed by converting direct current to alternating current while adjusting the input rate. The methodology is, in simple terms, a flowchart of code that runs from firing the transistors to managing the rotation of a three-phase alternating current (AC) servo motor. The result is a transistor firing scheme based on space vector pulse width modulation (PWM) that fires the three-phase AC servo motor as commanded; for position control, only the transistors for the target position are fired, building torque there. Firing the transistors to manage the rotation of the servo motor in this way allows its motion to be faster, more accurate, smoother, and steadier. In conclusion, the researcher fires the transistors only at the target position, so that torque is generated only at the target position and the servo motor's motion proceeds from the start position directly to the target position. This firing scheme could also be applied to high-velocity motion systems such as guided missile control.
The grid-feeding inverter is the most popular choice for implementing a distributed generator (DG) in a photovoltaic (PV) system. Its role is to deliver active power with zero reactive power while connected to the grid. In most cases, the inverter is disconnected as soon as voltage sag disturbances occur, an operation known as islanding mode. Recently, grid code regulations have been upgraded to allow the inverter to remain connected to the grid and inject a certain amount of reactive power during voltage sags, provided it meets grid code requirements. However, this can increase the current injected by the inverter, which may cause overcurrent and damage the inverter if the current exceeds its capability. To control peak current during voltage sag disturbances, the proposed grid-feeding inverter detects the voltage sag and calculates proportionally injected active and reactive power. Once the disturbance has cleared, the inverter is restored to normal operation. Prototype experiments have validated the ability of this system to control peak current during voltage sags while protecting the inverter.
Security challenges over the years have led to the need for improvements to traditional security approaches, which led to the advent of biometrics. Recently, among biometric approaches, the sclera has been an area of immense study due to its accuracy; however, segmentation of the sclera has been a limiting factor in the application of this biometric trait. Several approaches have been proposed in the literature, but segmentation accuracy still needs to improve. This study proposes the use of the circular Hough transform and a modified run-data-based algorithm. The study also presents a sclera recognition system using the compound local binary pattern for feature extraction and the Manhattan distance for classification. The system produced a segmentation accuracy of 99.9% on the sclera blood vessels, periocular and iris (SBVPI) database and 100% on a manually captured sclera database, and a recognition accuracy of 99.98% on the SBVPI database and 99.99% on the manually captured database.
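The iris-localization step via the circular Hough transform can be sketched with OpenCV as below; the image path and Hough parameters are illustrative assumptions, and the modified run-data-based algorithm itself is not reproduced here.

```python
import cv2
import numpy as np

# Hypothetical eye image path; the paper's data are SBVPI and a manual set
img = cv2.imread("eye.png")
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)

# Circular Hough transform to locate the iris boundary; the region
# outside the circle is the sclera candidate area
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=30,
                           minRadius=30, maxRadius=120)
if circles is not None:
    x, y, r = (int(v) for v in circles[0, 0])
    mask = np.full(gray.shape, 255, np.uint8)
    cv2.circle(mask, (x, y), r, 0, -1)       # exclude the iris disc
    sclera_candidate = cv2.bitwise_and(gray, gray, mask=mask)
    cv2.imwrite("sclera_candidate.png", sclera_candidate)
```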
Brain tumor disease has become a research topic in both segmentation and classification. For classification, the tumor types are generally grouped into high-grade glioma (HGG) and low-grade glioma (LGG). In this research, we propose a method for classifying these two tumor types, HGG and LGG, using a convolutional neural network (CNN) trained and tested on the 2018 and 2019 brain tumor segmentation (BraTS) datasets, which have four modalities, namely fluid-attenuated inversion recovery (FLAIR), T1, T1ce, and T2, totaling 2048 images. The CNN algorithm was chosen because it can directly receive a magnetic resonance image (MRI) as input, with feature extraction built into the classification algorithm. By forming a simple CNN architecture with only three convolutional layers and an input layer taking a full MRI image of 240×240×3, we obtained a relatively high accuracy of 94.14%, arguably better than similar methods that use more complicated architectures.
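A sketch of a three-convolutional-layer CNN matching the architecture outline above, written with Keras; the filter counts and dense-layer width are our assumptions since the abstract does not specify them.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A 3-convolutional-layer network for 240x240x3 MRI slices; filter
# counts are assumptions, only the depth and input size come from the text
model = keras.Sequential([
    layers.Input(shape=(240, 240, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # HGG vs. LGG
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```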
This study focuses on enhancing office security through a smart door system, designed to protect sensitive documents and critical data. Emphasizing exclusive access for authorized personnel, the system integrates advanced biometric authentication, predominantly facial recognition. The project's aim is to optimize face recognition using convolutional neural network (CNN) techniques, identifying the best preprocessing methods and hyperparameter settings. A significant aspect of the research involves developing a smart door system with remote authentication and control capabilities via internet connectivity. Employing transfer learning with MobileNet V2, the study presents a compact model tailored for the Raspberry Pi platform. The model utilizes a dataset with five facial recognition classes and an additional class for unknown faces, ensuring a diverse representation. The trained model achieved a high accuracy (0.9729) and low loss (0.09). System evaluation revealed an overall accuracy of 0.96, perfect recall (1.00), and a precision of 0.897. These results demonstrate the system's efficacy in secure access control, making it a viable solution for contemporary office environments.
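The transfer-learning setup can be sketched with Keras as below: a frozen MobileNet V2 backbone with a small classification head for the five known faces plus the unknown class; the input size and head layers are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Frozen MobileNet V2 backbone with a small head for 6 classes:
# five known faces plus one "unknown" class, as described above
base = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(6, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```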
The internet of things (IoT) has revolutionized connectivity and introduced significant security challenges. In this context, intrusion detection systems (IDS) play a crucial role in detecting attacks in IoT environments. Bot-IoT datasets often face class imbalance issues, with the attack class having significantly more samples than the normal class, and addressing this imbalance is essential to enhance IDS performance. The study evaluates various techniques, including an imbalance-ratio technique we call the imbalance ratio formula (IRF) for controlling imbalanced data, and compares IRF with oversampling techniques such as the synthetic minority oversampling technique (SMOTE) and adaptive synthetic sampling (ADASYN). This research also incorporates the extreme gradient boosting (XGBoost) ensemble model to improve IDS performance in dealing with multiclass imbalance in Bot-IoT datasets. Through in-depth analysis, we identify the strengths and weaknesses of each method. This study aims to guide researchers and practitioners working on IDS in high-risk IoT environments. The proposed IRF, when integrated with the XGBoost algorithm, has been demonstrated to achieve comparable accuracy of 99.9993% while reducing training time to be, on average, at least two times faster than the other state-of-the-art ensemble methods.
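The oversampling baselines that IRF is compared against can be sketched with imbalanced-learn and XGBoost; the synthetic data below stands in for Bot-IoT, and the paper's own IRF is not reproduced here.

```python
# pip install imbalanced-learn xgboost
from collections import Counter
from imblearn.over_sampling import SMOTE, ADASYN
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Synthetic stand-in for Bot-IoT: one class dwarfs the other
X, y = make_classification(n_samples=20_000, weights=[0.99, 0.01],
                           random_state=0)
print("before:", Counter(y))

# Rebalance with each oversampler, then train the XGBoost model
for sampler in (SMOTE(random_state=0), ADASYN(random_state=0)):
    Xr, yr = sampler.fit_resample(X, y)
    clf = XGBClassifier(n_estimators=200, eval_metric="logloss")
    clf.fit(Xr, yr)
    print(type(sampler).__name__, "resampled:", Counter(yr))
```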
Consistency, scalability, and local stability properties ensure that a model or method produces reliable and predictable outcomes, and tools such as Shapash help users understand how a model makes its decisions. With a machine learning (ML) system, healthcare experts can identify individuals at higher risk and implement interventions to reduce the occurrence and severity of disease. ML has achieved high prediction accuracy, although the accuracy of the predictions depends on the quality and quantity of the data used for training. Despite the wide application and high accuracy of different ML systems for disease prediction, the explanation of their predictive outcome is even more important to healthcare professionals, patients, and even the developers; however, most ML systems do not explain their outcomes. To address the explainability issue, techniques such as local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) have been proposed in recent years. Furthermore, the consistency, local stability, and approximation of such explanations remain open research topics in ML. This study investigated the consistency, stability, and approximation of LIME and SHAP in predicting heart disease (HD). The results suggest that LIME and SHAP generate similar explanations (distance=0.35) compared with the active coalition of variables (ACV) explanation (distance=0.43).
Diabetic retinopathy (DR) is the leading cause of blindness among adults and has no visible symptoms; early detection is the key to preventing vision loss. Computer-aided deep learning using convolutional neural networks (CNN) has recently gained momentum for DR diagnosis, as it can significantly reduce cost while making diagnosis more accessible. In this work, we present a fully automated framework, DR network (DRNET), that fuses image texture features and deep learning features to train the CNN model. The framework aggregates predictions from three CNN models using ensemble learning, yielding more precise and accurate DR diagnosis than a standalone CNN. To strengthen the confidence of medical practitioners in accepting automated DR diagnosis, we extend the DRNET framework to produce model uncertainty scores and explainability maps along with the classification results.
This research addresses a significant problem in credit risk analysis in the banking sector caused by class imbalance: the model's inability to accurately identify risks in the "Charged Off" class. As a solution, we propose a stacked ensemble approach that utilizes the synthetic minority over-sampling technique (SMOTE) to balance the class distribution. Experiments were conducted by applying SMOTE to the training data before training the credit model using extreme gradient boosting (XGBoost) and random forest (RF) algorithms in a stacked ensemble. The results show significant improvements in precision, recall, and F1-score after applying SMOTE to the unbalanced classes, with the updated model achieving a striking accuracy of 0.97 on the resampled training data. This research clearly identifies class imbalance as a major challenge in credit risk analysis; applying SMOTE within a stacked ensemble proved effective in improving model performance, making a valuable contribution to the development of more reliable credit models for better risk management and revenue generation in financial institutions.
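A minimal sketch of the described pipeline, SMOTE applied to the training split only, then an XGBoost-plus-RF stacked ensemble; the synthetic data is a hypothetical stand-in for the loan records.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Imbalanced stand-in for the loan data ("Charged Off" as the rare class)
X, y = make_classification(n_samples=10_000, weights=[0.9, 0.1],
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# SMOTE on the training split only, then the stacked ensemble
Xres, yres = SMOTE(random_state=0).fit_resample(Xtr, ytr)
stack = StackingClassifier(
    estimators=[("xgb", XGBClassifier(eval_metric="logloss")),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(Xres, yres)
print(classification_report(yte, stack.predict(Xte)))
```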
A dynamic model of countering phishing attacks is considered, with cryptocurrency exchanges (CCE) and/or their clients taken as an example of a phishing victim. Unlike similar models, this one assumes that the state dynamics of the phishing victim and the attacker (phisher) are given by a system of differential equations. The peculiarity of the model is that it represents a bilinear differential game of quality, for which methods for solving linear differential games are not applicable; in addition, the absence of functional restrictions on the players' strategies (even non-measurable functions are allowed) precludes the traditional approach. Solving these games makes it possible to form payoff matrices, which become part of the training set for artificial neural networks (ANNs). Such a collaboration of models makes it possible to accurately build an anti-phishing strategy, minimizing the costs of both a potential phishing victim and the defense side when building a secure system of communication with CCE clients. The neuro-game approach makes it possible to predict the process of countering phishing in terms of the costs to both parties under different strategies.
The rapid development of technology has led to various advancements, including the ability to conduct online payments through applications. Companies providing digital transaction payment services require a payment gateway system as an intermediary for online transactions: the system sits between merchants with digital wallets and banks, involving the company, merchants, and banks in its development. It includes essential features like bill payment, user credit card verification, and transaction checking, customized to meet the specific requirements of merchants and to adhere to security standards. In this study, we incorporated the Rivest-Shamir-Adleman (RSA) algorithm and the advanced encryption standard (AES) to secure the payment gateway system, adopting the agile methodology in the development process. We tested both user acceptance and system performance. The results show that the system is accepted by users, fulfills their needs, executes well, and performs adequately when handling multiple transactions concurrently. The outcome of this study can serve as valuable input for a company building its own system, providing insights into algorithm implementation techniques and the system workflow.
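A common way to combine the two algorithms, which may or may not match the system's exact scheme, is hybrid encryption: AES encrypts the transaction payload and RSA wraps the AES session key. A sketch with PyCryptodome:

```python
# pip install pycryptodome
from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes

# Hybrid scheme: AES encrypts the payload, RSA protects the AES key
key_pair = RSA.generate(2048)

payload = b'{"merchant_id": "M-001", "amount": 150000}'  # hypothetical bill
session_key = get_random_bytes(16)

# Encrypt the payload with AES-GCM (authenticated encryption)
aes = AES.new(session_key, AES.MODE_GCM)
ciphertext, tag = aes.encrypt_and_digest(payload)

# Encrypt the session key with the receiver's RSA public key
wrapped_key = PKCS1_OAEP.new(key_pair.publickey()).encrypt(session_key)

# Receiver side: unwrap the key, then decrypt and verify the payload
session_key2 = PKCS1_OAEP.new(key_pair).decrypt(wrapped_key)
aes2 = AES.new(session_key2, AES.MODE_GCM, nonce=aes.nonce)
assert aes2.decrypt_and_verify(ciphertext, tag) == payload
```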
Computer methods in biomechanics are computational techniques for solving biomechanics problems. RULA, REBA, NIOSH, and OWAS are methods for analyzing work risks caused by incorrect posture; each uses a worksheet applied to a single posture. For dynamic work, a computer method is needed that can quickly evaluate every change in posture. Virtual reality and Kinect, as dynamic work analysis methods, cannot be applied preventively, yet preventive measures are needed to protect workers from work-related risks. For this reason, computer simulation methods are needed to design work stations and work processes so that the analysis can be carried out preventively. The S-Task Simulation Builder (S-TSB) framework provides a dynamic work analysis solution using simulation. Its three processes are surveying, creating simulations, and processing posture data. Software validation was performed using black-box testing, and the results were as expected. Testing the dynamic working model produces results as graphs, making it easy to compare each change. Using simulation also saves design costs, since optimization can be done simply by changing the simulated work station data and/or work process.
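S-TSB itself is not shown here; as a loose illustration of per-frame posture scoring in a dynamic simulation, the sketch below bins two hypothetical joint angles into risk scores per frame so that successive postures can be graphed and compared. The angle thresholds are simplified placeholders, not the published RULA/REBA tables.

```python
# Illustrative per-frame posture risk scoring for dynamic work analysis.
# Thresholds are simplified placeholders, not the RULA/REBA worksheets.
def segment_score(angle_deg: float, bins: list[tuple[float, int]]) -> int:
    """Return the score of the first bin whose upper bound covers the angle."""
    for upper, score in bins:
        if angle_deg <= upper:
            return score
    return bins[-1][1]

TRUNK_BINS = [(5, 1), (20, 2), (60, 3), (180, 4)]       # hypothetical bins
UPPER_ARM_BINS = [(20, 1), (45, 2), (90, 3), (180, 4)]  # hypothetical bins

def frame_risk(trunk_deg: float, arm_deg: float) -> int:
    return segment_score(trunk_deg, TRUNK_BINS) + segment_score(arm_deg, UPPER_ARM_BINS)

# Score each simulated frame of a work cycle and compare posture changes.
frames = [(3, 15), (25, 50), (65, 95)]  # (trunk, upper-arm) angles per frame
print([frame_risk(t, a) for t, a in frames])  # -> [2, 6, 8]
```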
In this study, we introduce an innovative approach that combines convolutional neural networks (CNN) with an attention mechanism (AM) to achieve precise emotion detection from speech data within the context of e-learning. Our primary objective is to leverage the strengths of deep learning through CNN and harness the focus-enhancing abilities of attention mechanisms. This fusion enables our model to pinpoint crucial features within the speech signal, significantly enhancing emotion classification performance. Our experimental results validate the efficacy of the approach, with the model achieving a 90% accuracy rate in emotion recognition. In conclusion, our research introduces a method for emotion detection that synergizes a CNN and an AM, with the potential to benefit various sectors.
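As a minimal PyTorch sketch of the idea, assuming log-mel spectrogram input, the model below applies convolutions and then a learned softmax attention over time frames; the layer sizes and the number of emotion classes are hypothetical, not the paper's architecture.

```python
# Minimal CNN + attention classifier over speech features (e.g., log-mel
# frames). Layer sizes and the 4 emotion classes are hypothetical.
import torch
import torch.nn as nn

class CNNAttention(nn.Module):
    def __init__(self, n_mels: int = 64, n_classes: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        feat = 64 * (n_mels // 4)          # channels x reduced mel bins
        self.attn = nn.Linear(feat, 1)     # one attention weight per time frame
        self.head = nn.Linear(feat, n_classes)

    def forward(self, x):                  # x: (batch, 1, n_mels, time)
        h = self.conv(x)                   # (batch, 64, n_mels//4, time)
        h = h.flatten(1, 2).transpose(1, 2)       # (batch, time, feat)
        w = torch.softmax(self.attn(h), dim=1)    # attention over time frames
        ctx = (w * h).sum(dim=1)           # weighted sum: the focused context
        return self.head(ctx)

logits = CNNAttention()(torch.randn(8, 1, 64, 100))
print(logits.shape)  # torch.Size([8, 4])
```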
In software defect prediction, noisy attributes and high-dimensional data remain a critical challenge. This paper introduces a novel approach, multi correlation-based feature selection (MCFS), which seeks to address these challenges. MCFS integrates two feature selection techniques, correlation-based feature selection (CFS) and correlation matrix-based feature selection (CMFS), to reduce data dimensionality and eliminate noisy attributes. CFS and CMFS are applied independently to filter the datasets, and a weighted average of their outcomes is computed to determine the optimal feature selection. This not only reduces data dimensionality but also mitigates the impact of noisy attributes. To further enhance predictive performance, the paper leverages the particle swarm optimization (PSO) algorithm as a feature selection mechanism, specifically targeting improvements in the area under the curve (AUC). The proposed method is evaluated on 12 benchmark datasets from the NASA metrics data program (MDP) corpus, known for their noisy attributes, high dimensionality, and imbalanced class records. The findings demonstrate that MCFS outperforms CFS and CMFS, yielding an average AUC of 0.891 and confirming its efficacy in advancing classification performance for software defect prediction with k-nearest neighbors (KNN) classification.
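The exact CFS and CMFS formulas are not reproduced in the abstract; the sketch below uses simple relevance and redundancy correlations as stand-ins, merges them with a weighted average, and scores a KNN on the selected subset. In the paper, PSO would further tune the selection toward a higher AUC; here the weight and subset size are fixed for brevity.

```python
# Rough sketch: combine two correlation-based feature scores with a
# weighted average, then evaluate KNN AUC on the selected subset. The
# scores are simplified stand-ins for CFS/CMFS; the dataset is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)

# Feature-class correlation (relevance) and mean feature-feature
# correlation (redundancy) as CFS/CMFS-style stand-ins.
relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
corr = np.abs(np.corrcoef(X, rowvar=False))
redundancy = (corr.sum(axis=1) - 1) / (X.shape[1] - 1)

alpha = 0.5                                    # weight between the two scores
score = alpha * relevance + (1 - alpha) * (1 - redundancy)
selected = np.argsort(score)[-10:]             # keep the 10 best features

Xtr, Xte, ytr, yte = train_test_split(X[:, selected], y, random_state=0)
knn = KNeighborsClassifier().fit(Xtr, ytr)
print(roc_auc_score(yte, knn.predict_proba(Xte)[:, 1]))
```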
An important facet of disaster mitigation is identifying regions by their lack of preparedness for combating disaster, so that organizations can lay down appropriate risk management strategies and guidelines to minimize disaster losses. The technique for order of preference by similarity to ideal solution (TOPSIS) is a popular multi-criteria decision-making (MCDM) method deployed for ranking alternatives against multiple pre-specified criteria. However, the method's efficiency in ranking regions by multiple criteria for disaster management is far from the ground truth. The authors propose a novel intelligent method, HCF-TOPSIS, an extension of traditional TOPSIS, to deliver an efficient ranking mechanism for the regional safety assessment of disaster-affected regions. HCF-TOPSIS capitalizes on entropy (H), closeness (C), and farness (F) metrics to obtain efficient ranking scores for the disaster-affected regions. Extensive experimentation validates the claim and demonstrates the superiority of HCF-TOPSIS over existing TOPSIS variants. The proposed research offers many benefits, especially to governments and stakeholders intending to take appropriate actions to contain disasters.
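HCF-TOPSIS's closeness/farness fusion is not spelled out in the abstract; the sketch below shows only the entropy-weighted TOPSIS core it extends, on a hypothetical regions-by-criteria matrix where higher values mean better preparedness.

```python
# Minimal entropy-weighted TOPSIS ranking sketch. The decision matrix
# (regions x preparedness criteria) is hypothetical; only the standard
# TOPSIS core with entropy (H) weights is reproduced here.
import numpy as np

def entropy_topsis(D: np.ndarray) -> np.ndarray:
    """Rank rows of D (higher criteria values = better) by closeness."""
    P = D / D.sum(axis=0)                                # column-wise share
    H = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(D))
    w = (1 - H) / (1 - H).sum()                          # entropy weights
    V = w * D / np.linalg.norm(D, axis=0)                # weighted, normalised
    d_best = np.linalg.norm(V - V.max(axis=0), axis=1)   # farness from ideal
    d_worst = np.linalg.norm(V - V.min(axis=0), axis=1)  # farness from anti-ideal
    return d_worst / (d_best + d_worst)                  # closeness score

regions = np.array([[7, 0.3, 5], [4, 0.9, 8], [6, 0.5, 6]], dtype=float)
print(entropy_topsis(regions))   # higher closeness = better prepared
```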
In this paper, a cryptographic algorithm is proposed for passive ultra-high-frequency (UHF) radio frequency identification (RFID) systems. The algorithm relies on the advanced encryption standard (AES) as its fundamental encryption technique, augmented by two supplementary steps: generating a random key and randomizing the data. These steps add an extra level of security to the encryption process against attacks. The developed architecture has been optimized to minimize hardware resource consumption while achieving faster execution. The algorithm has been simulated, synthesized, and implemented on an XtremeDSP starter kit equipped with Xilinx's Spartan-3A DSP 1800A edition, where it serves to encrypt and decrypt user data on a passive RFID tag. The main objective is to make the algorithm difficult to break because of its multiple steps. The experimental results showed that the speed, functionality, and cost of encryption and decryption make this a practical solution, providing a satisfactory level of security for today's communication systems and other electronic data transfer processes where security is required.
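The abstract names the two supplementary steps without detailing them; the sketch below illustrates the pattern with a per-message random key and a byte permutation standing in for the data-randomization stage, both placeholders for the paper's actual constructions.

```python
# Hedged sketch: AES plus the two named extra steps. The byte permutation
# is a placeholder for the paper's unspecified data-randomization step.
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_tag_data(data: bytes):
    key = AESGCM.generate_key(bit_length=128)                 # step 1: random key
    perm = np.random.permutation(len(data)).astype(np.int64)  # step 2: randomize
    shuffled = bytes(data[i] for i in perm)
    nonce = os.urandom(12)
    return key, nonce, perm, AESGCM(key).encrypt(nonce, shuffled, None)

def decrypt_tag_data(key, nonce, perm, ct) -> bytes:
    shuffled = AESGCM(key).decrypt(nonce, ct, None)
    out = bytearray(len(shuffled))
    for dst, src in enumerate(perm):                          # invert the shuffle
        out[src] = shuffled[dst]
    return bytes(out)

key, nonce, perm, ct = encrypt_tag_data(b"EPC:0123456789")
assert decrypt_tag_data(key, nonce, perm, ct) == b"EPC:0123456789"
```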
The rapid advancement of digitalization has significantly impacted many aspects of the accounting profession, particularly taxation. Tax digitization offers numerous advantages, including streamlined tax processes, reduced administrative burdens, increased efficiency, and enhanced data security. While tax practitioners in advanced economies have embraced digitalization, their Malaysian counterparts are still in the early stages of transitioning to a modern digital system. This situation prompted the researchers to examine factors that could accelerate the adoption of tax digitalization among Malaysian tax practitioners. Drawing on the unified theory of acceptance and use of technology (UTAUT), this study investigates the adoption of tax digitalization through performance expectancy, effort expectancy, social influence, and facilitating conditions. The researchers distributed 200 questionnaires to Malaysian tax practitioners, of which only 142 proceeded to further analysis. Results from multiple regression using partial least squares structural equation modelling (PLS-SEM) indicate that all variables, effort and performance expectancy, social influence, and facilitating conditions, exhibit a significant relationship with tax digitalization adoption. These findings provide valuable insights for policymakers, tax authorities, and professional bodies in developing strategies and initiatives to promote the adoption of tax digitalization among practitioners. Embracing digitalization is crucial for transforming the profession and fostering efficiency, sustainability, and resilience.
This article introduces an innovative circular and compact ultra-wideband (UWB) radiator designed specifically for 5G microwave applications. The antenna incorporates a "TU"-shaped ground plane on its reverse side, with strip lines feeding the circular element on the front side. It exhibits impressive characteristics, including an impedance bandwidth of 107% and a return loss of -32 dB. Its operational frequency range spans from 2.4 GHz to 11 GHz, centered at 6.7 GHz. Extensive simulations were conducted using CST Microwave Studio software to validate its performance. The antenna's physical dimensions are 0.12 λ × 0.08 λ × 0.012 λ relative to its wavelength. Furthermore, the antenna demonstrates exceptional stability in its polar patterns and maintains high efficiency, achieving a gain of 3.75 dBi with an efficiency rating of 84.5%. These attributes make it suitable for a wide range of applications, including Wi-Fi, 5G, WLAN, and various other microwave communication scenarios.
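For reference, the electrical dimensions translate into physical size at the 6.7 GHz centre frequency; the short computation below is plain arithmetic using only the numbers quoted above.

```python
# Convert the quoted electrical size (0.12λ x 0.08λ x 0.012λ) into
# millimetres at the 6.7 GHz centre frequency; plain arithmetic only.
c = 3e8                               # speed of light, m/s
wavelength_mm = c / 6.7e9 * 1e3       # ~44.8 mm at 6.7 GHz
for factor in (0.12, 0.08, 0.012):
    print(f"{factor} λ = {factor * wavelength_mm:.2f} mm")
# 0.12λ ≈ 5.37 mm, 0.08λ ≈ 3.58 mm, 0.012λ ≈ 0.54 mm
```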
Wireless communication networks could become quicker and more dependable as sixth generation (6G) antennas develop. One challenging development in the field of wearable technology is the wearable textile antenna, which requires flexible building materials, primarily textiles with planar structures. This study concentrates on the design and specification of microstrip rectangular patch antennas that use a variety of fabrics as the substrate, such as lycra, polyester, and washed cotton. By using two-dimensional (2D) materials in the terahertz (THz) range, the study supports the construction of suitable wearable antennas, and investigating 2D materials such as tungsten disulfide in antenna design may significantly advance materials science and engineering. The proposed antennas' resonance frequencies are 1.1254 THz for polyester substrates, 4.4019 THz for washed cotton, and 2.9861 THz for lycra substrates. For the lycra, polyester, and washed cotton substrates, the measured return losses were -44.92 dB, -38.17 dB, and -20.75 dB, respectively. This work could lead to new technologies and materials, such as tungsten disulfide, with far-reaching uses beyond wearable electronics and significant benefits for society.
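As a sketch of how such patches are typically sized, the snippet below applies the standard rectangular-patch design equations (transmission-line model); the relative permittivity and substrate height are illustrative placeholders, not the measured properties of the fabrics in this study.

```python
# Standard rectangular-patch design equations, sketched for a target THz
# resonance. The permittivity and substrate height are placeholders, not
# the paper's measured fabric values.
import math

def patch_dimensions(f_hz: float, eps_r: float, h_m: float) -> tuple[float, float]:
    c = 3e8
    W = c / (2 * f_hz) * math.sqrt(2 / (eps_r + 1))             # patch width
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / W) ** -0.5
    dL = 0.412 * h_m * ((eps_eff + 0.3) * (W / h_m + 0.264)) / \
         ((eps_eff - 0.258) * (W / h_m + 0.8))                  # fringing extension
    L = c / (2 * f_hz * math.sqrt(eps_eff)) - 2 * dL            # patch length
    return W, L

W, L = patch_dimensions(1.1254e12, eps_r=1.9, h_m=50e-6)  # hypothetical polyester
print(f"W = {W * 1e6:.1f} um, L = {L * 1e6:.1f} um")
```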
In this study, a hybrid unipolar-bipolar (U-B) optical code division multiple access (OCDMA) scheme for free-space optical communication was proposed. The codeword assignment introduces a quasi-polarized code that can carry both the unipolar and the bipolar section. The model was studied using OptiSystem simulations. According to the simulation results, the proposed hybrid U-B OCDMA can correctly decode the original optical signal from its matching encoder. The hybrid U-B OCDMA system was further tested under turbulence conditions. The simulations show that the Walsh-zero cross-correlation (Walsh-ZCC) code performs better than all other codes for the unipolar segment, whereas the Walsh-Hadamard (W-H) code performs best for the bipolar section. The simulations also showed that, under turbulence, the Walsh-ZCC code suffered the greatest performance deterioration.
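To illustrate the bipolar code family involved, the sketch below generates Walsh-Hadamard codewords and decodes a synchronous sum by correlation; the unipolar ZCC construction and the optical/turbulence layers are not reproduced.

```python
# Walsh-Hadamard codewords and matched-filter decoding of a synchronous
# multi-user sum; a baseband illustration only, no optical channel model.
import numpy as np
from scipy.linalg import hadamard   # rows of H are Walsh-Hadamard codes

H = hadamard(8)                      # 8 orthogonal bipolar codewords
bits = np.array([1, -1, 1])          # BPSK data for users 0..2
tx = sum(b * H[u] for u, b in zip((0, 1, 2), bits))   # synchronous sum

# Matched-filter decode: correlate with each user's own codeword.
for u in (0, 1, 2):
    print(u, np.sign(tx @ H[u]))     # recovers 1, -1, 1
```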
In B5G/6G networks, the deployment of small cells has increased to keep up with the growth of mobile traffic, which in turn increases the number of handovers (HOs) between cells. Ping-pong handover (PPHO) and radio link failure (RLF) are the two major problems that may occur during HO. The challenge is therefore to set the handover control parameters (HCPs) carefully so that HO decisions suit the environmental constraints. In this paper, we propose an adaptive HCP algorithm that adjusts to those constraints; the proposed algorithm is immune to RLF and significantly reduces the amount of PPHO compared to other works. In the simulation results, our proposed model is evaluated using two frequency plans. With frequency plan 1, the user mean throughput increased from 270 kbps to 281 kbps when the serving cell was fully loaded; with frequency plan 2, it increased from 6 Mbps to 20 Mbps under the same condition. In addition, the amount of ping-pong handover between overlapping small cells decreased and does not exceed one PPHO, compared to another model from the literature.
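As a toy illustration of adapting an HCP to conditions, the function below lowers the handover hysteresis when the serving cell is heavily loaded and raises it otherwise; the A3-style trigger form is standard, but the load term and thresholds are hypothetical, not the proposed algorithm.

```python
# Illustrative A3-style handover trigger with a load-adaptive hysteresis.
# The load scaling and base values are hypothetical placeholders.
def should_handover(rsrp_serving_dbm: float, rsrp_target_dbm: float,
                    serving_load: float, base_hys_db: float = 3.0) -> bool:
    # Lower the hysteresis when the serving cell is heavily loaded so users
    # escape congested cells sooner; raise it when lightly loaded to
    # suppress ping-pong between overlapping small cells.
    hys = base_hys_db * (1.5 - serving_load)      # load in [0, 1]
    return rsrp_target_dbm > rsrp_serving_dbm + hys

print(should_handover(-95, -93, serving_load=0.9))  # True:  hys = 1.8 dB
print(should_handover(-95, -93, serving_load=0.2))  # False: hys = 3.9 dB
```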
