
 
 

AI, VR, and Visual Computing in Mechatronics and Robotics

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 31 May 2025 | Viewed by 672

Special Issue Editors


Guest Editor
Department of Mechatronics and Machine Dynamics, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
Interests: computer vision; parallel robots; micro- and mini-robots; design of mechatronic systems; CAD; CAM; flexible systems manufacturing; mechanisms and dynamics of machines

Guest Editor
Department of Computer Science, Virginia Commonwealth University, Richmond, VA 23284, USA
Interests: applied research in computational intelligence algorithms such as artificial neural networks, fuzzy logic systems, and unsupervised learning techniques, applied in areas including energy, cyber security, human–machine interfacing, intelligent control systems, software-defined networks, robotics/mechatronics, and visualization

Guest Editor
Faculty of Mechanical Engineering, University of Niš, 18000 Niš, Serbia
Interests: robotics/mechatronics; Industry 4.0; energy; rail track detection; machine vision; computer vision; vision-based obstacle detection; machine learning; dataset generation for obstacle detection in railways

Guest Editor
Department of Design Engineering and Robotics, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
Interests: robotics; predictive maintenance; robotics safety; machine vision; computer vision; vision-based obstacle detection; machine learning; robot-assisted medicine

Special Issue Information

Dear Colleagues,

This Special Issue focuses on recent advances in the integration of AI, VR, and visual computing in mechatronics and robotics technologies. We invite submissions that highlight cutting-edge research and developments in these fields, showcasing the latest innovations in computer vision, artificial intelligence, and immersive technologies within mechatronics systems. Topics of interest include AI-driven image processing, VR-enhanced simulations, and sensor integration, with applications spanning robotics, sports, additive manufacturing, CAD/CAM systems, UAVs, and more. These technologies are revolutionizing fields like sports training, industrial automation, and autonomous systems, opening new possibilities for innovation and efficiency.

Both theoretical and experimental studies are welcome, as well as comprehensive reviews and survey articles.

Topics of interest for this Special Issue include, but are not limited to, the following:

  • Recent advances in AI, VR, and visual computing in mechatronics and robotics;
  • AI and computer vision for obstacle detection and autonomous navigation;
  • VR applications in robotics and sports training;
  • AI-powered vision inspection and robot programming in serial and parallel robots;
  • Action recognition in realistic sports scenarios using AI and computer vision;
  • 3D machine vision and VR in additive manufacturing;
  • AI-enhanced performance indices of robotic systems;
  • AI and visual computing for robotics in medical applications;
  • Model-based control of mechatronic systems using AI and visual computing;
  • Vision-based control systems in mechatronics driven by AI;
  • Practical implementations of AI, VR, and visual computing in mechatronics and robotics;
  • AI and visual computing applications in mechanical systems;
  • Computer vision and AI for CAD/CAM engineering;
  • Autonomous vehicles and UAVs utilizing AI, VR, and visual computing.

We encourage contributions that push the boundaries of AI, VR, and visual computing, showcasing their transformative impact on mechatronics, robotics, and related fields.

Dr. Sergiu Dan Stan
Prof. Dr. Milos Manic
Dr. Milos Simonovic
Prof. Dr. Bogdan Mocan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI
  • VR
  • computer vision
  • mechatronics
  • robotics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

17 pages, 1478 KiB  
Article
FFMT: Unsupervised RGB-D Point Cloud Registration via Fusion Feature Matching with Transformer
by Jiacun Qiu, Zhenqi Han, Lizhaung Liu and Jialu Zhang
Appl. Sci. 2025, 15(5), 2472; https://doi.org/10.3390/app15052472 - 25 Feb 2025
Viewed by 216
Abstract
Point cloud registration is a fundamental problem in computer vision and 3D computing, aiming to align point cloud data from different sensors or viewpoints into a unified coordinate system. In recent years, the rapid development of RGB-D sensor technology has greatly facilitated the acquisition of RGB-D data. In previous unsupervised point cloud registration methods based on RGB-D data, there has often been an overemphasis on matching local features, while the potential value of global information has been overlooked, thus limiting the improvement in registration performance. To address this issue, this paper proposes a self-attention-based global information attention module, which learns the global context of fused RGB-D features and effectively integrates global information into each individual feature. Furthermore, this paper introduces alternating self-attention and cross-attention layers, enabling the final feature fusion to achieve a broader global receptive field, thereby facilitating more precise matching relationships. We conduct extensive experiments on the ScanNet and 3DMatch datasets, and the results show that, compared to the previous state-of-the-art methods, our approach reduces the average rotation error by 26.9% and 32% on the ScanNet and 3DMatch datasets, respectively. Our method also achieves state-of-the-art performance on other key metrics.
(This article belongs to the Special Issue AI, VR, and Visual Computing in Mechatronics and Robotics)
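The abstract states that, after correspondences are refined, the final transformation is computed with a least-squares method. The paper's own implementation is not reproduced here, but the standard closed-form solution for this step is the Kabsch/SVD algorithm; the following is a minimal NumPy sketch (function name and structure are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns R (3x3 rotation) and t (3,) such that src @ R.T + t ~ dst.
    """
    # Center both point sets on their centroids.
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    P = src - c_src
    Q = dst - c_dst

    # SVD of the cross-covariance matrix (Kabsch algorithm).
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (determinant -1) solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Given matched 3D points, the recovered R and t minimize the sum of squared residuals over the correspondences; the determinant guard prevents the solution from collapsing into a reflection.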
Figure 1. The pipeline of our Fusion Feature Matching with Transformer (FFMT). It takes a pair of RGB-D images as input and estimates the relative pose between them. The RGB-D images first pass through a feature extraction and fusion module. Both the RGB branch and the point cloud branch adopt U-Net-like structures, using a CNN and KPConv, respectively, for feature extraction, with a multi-scale bidirectional feature fusion mechanism between the two branches. The extracted features are flattened into 1D vectors, combined with positional encoding, and processed through a series of alternating self-attention and cross-attention layers. Coarse correspondences are then determined using Lowe's ratio and refined through several RANSAC iterations to obtain precise matches. Finally, the estimated transformation is computed with a least-squares method. The entire model is trained end-to-end with differentiable rendering.
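The Figure 1 caption mentions that coarse correspondences are selected with Lowe's ratio test before RANSAC refinement. As an illustration only (a brute-force sketch, not the paper's code; the 0.8 threshold is an assumed default), the test keeps a match only when the nearest descriptor is clearly closer than the second nearest:

```python
import numpy as np

def lowe_ratio_matches(desc_a, desc_b, ratio=0.8):
    """Coarse correspondences via Lowe's ratio test.

    desc_a: (N, D) and desc_b: (M, D) feature descriptors, M >= 2.
    A match (i, j) is kept only when the nearest neighbour of
    desc_a[i] in desc_b is clearly better than the second nearest.
    """
    # Pairwise Euclidean distances, shape (N, M).
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        nearest, second = row[order[0]], row[order[1]]
        if nearest < ratio * second:  # ambiguous matches are discarded
            matches.append((i, int(order[0])))
    return matches
```

In a full pipeline these putative matches would then be fed to RANSAC to reject the remaining outliers.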
Figure 2. Details of our method. (a) Detailed illustration of bidirectional feature fusion. (b) Self-attention layer. (c) Cross-attention layer. (d) Transformer encoder layer. (e) Linear attention layer with O(N) complexity.
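Panel (e) of Figure 2 refers to a linear attention layer with O(N) complexity. One common formulation (kernelized attention in the style of Katharopoulos et al.; whether FFMT uses exactly this variant is an assumption) replaces the N×N attention matrix with a kernel feature map, so the keys and values are summarized once and the cost grows linearly in sequence length:

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Linear attention in O(N) with respect to sequence length N.

    Q, K: (N, d) queries/keys; V: (N, d_v) values.
    The feature map phi(x) = elu(x) + 1 keeps entries positive,
    so the normalizer below is well defined.
    """
    def phi(x):
        return np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1

    Qp, Kp = phi(Q), phi(K)
    # (d, d_v) key/value summary: computed once, cost O(N * d * d_v).
    KV = Kp.T @ V
    # Per-query normalizer, cost O(N * d).
    Z = Qp @ Kp.sum(axis=0)
    return (Qp @ KV) / (Z[:, None] + eps)
```

Because the weights are positive and normalized, each output row is (up to the eps term) a convex combination of the value rows, just as in softmax attention, but without materializing the quadratic attention matrix.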
Figure 3. Rotation/translation accuracy and error at different frame gaps. On the left, pr5 denotes the accuracy of PointMBF at rotation errors below 5°, or5 the accuracy of our method at rotation errors below 5°, prm the mean rotation error of PointMBF, and orm the mean rotation error of our method. On the right, pt5, ot5, ptm, and otm denote the analogous quantities for translation accuracy (errors below 5 cm) and mean translation error.