CN110288528B - An image stitching system and method for micro-nano visual observation - Google Patents
- Publication number
- CN110288528B (application CN201910555841.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- micro
- motion platform
- nano
- coordinate system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
Abstract
The invention discloses an image stitching system and method for micro-nano visual observation. The system comprises: a micro-nano motion platform for holding the observation object, arranged beneath the microscope objective, the micro-nano motion platform further being provided with one or more grating sensors; an image acquisition device, mounted on the microscope eyepiece, for capturing images of the observation object; and a computing device, communicatively connected with the micro-nano motion platform and the image acquisition device, which controls the movement of the micro-nano motion platform, acquires an image of the observation object through the image acquisition device after each movement such that images acquired in two adjacent acquisitions partially overlap, and simultaneously acquires the position information of the micro-nano motion platform transmitted by the grating sensors; each frame is then registered and stitched according to the position information of the micro-nano motion platform at the moment it was captured. The invention uses the micro-nano motion platform as the carrier of the observation object and records the motion of each image through the grating sensors, and can therefore meet the accuracy requirements of micro-nano-scale image registration.
Description
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to an image stitching system and method for micro-nano visual observation.
Background Art
The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.
The core of image stitching technology is image registration. As early as 1992, Lisa Gottesfeld Brown of Cambridge University summarized the basic theory and main methods of image registration across various fields, including medical image analysis, remote sensing data processing, computer vision, and pattern recognition.
In 1996, Richard Szeliski (Microsoft Corporation) proposed a motion-based panoramic image stitching model that performs registration by solving for the geometric transformation between images using the Levenberg-Marquardt iterative nonlinear minimization method (the L-M algorithm). Because this method is effective, converges quickly, and can handle images related by translation, rotation, affine and other transformations, it has become a classic algorithm in the field of image stitching, and Richard Szeliski has since been regarded as a founder of the field; his theory has become a classic framework that many researchers still study today. In 2000, Shmuel Peleg (member, IEEE), Benny Rousso, Alex Rav-Acha and Assaf Zomet further improved on Szeliski's work and proposed an adaptive image stitching model, which adaptively selects the stitching model according to the motion of the camera and completes stitching by dividing the images into narrow strips for multiple projections. This result promoted the further development of image stitching technology, and adaptivity has since become a new research hotspot in the field. At the same time, developments in other registration algorithms have also been applied to image stitching. In addition to the classic algorithms above, there are two other main approaches: the phase correlation method and image registration based on geometric features.
The phase correlation method was first proposed by Kuglin and Hines in 1975; it is scene-independent and can accurately align images related by a pure two-dimensional translation. Later, De Castro and Morandi found that the Fourier transform can be used to determine rotational alignment in the same way that it determines translational alignment; in 1996, Reddy and Chatterji improved De Castro's algorithm and greatly reduced the number of transforms required. The translation vector between two images can be computed directly from the phase of their cross power spectrum. Applying the Fourier transform to image registration is another research achievement in the field, and with the introduction of the fast Fourier transform algorithm and the mature use of the Fourier transform in signal processing, image stitching technology has developed accordingly.
Image registration based on geometric features is another research hotspot in image stitching. In 1994, Blaszka obtained several low-level feature models, such as edge, corner and vertex models, through two-dimensional Gaussian blur filtering. On this basis, more and more researchers began to study stitching methods based on the low-level features in images. In 1997, Zoghlami I., Faugeras O. and Deriche R. proposed an image alignment algorithm based on a geometric corner model, since the corner model provides more information than coordinate points; in 1999, Bao P. and Xu D. proposed using the wavelet transform to extract an edge-preserving visual model for image alignment, while Nielsen F. proposed a matching method based on the optimization of geometric point features. In 2000, Kang E., Cohen I. and Medioni G. proposed a stitching method based on high-level image features, which uses a feature-image relationship graph for image alignment. From exploiting the low-level features of images to later exploiting their high-level features, the analysis and understanding of images has deepened, and research on image stitching technology has gradually matured.
At present, research on image stitching at home and abroad mainly targets general-purpose algorithms, and the stitching of images observed with micro-nano-scale vision has not yet been studied. Image registration in existing stitching technology is mainly based on information provided by the image itself, typically obtained from correlation or from geometric image features, and its error is generally at the pixel level. In micro-nano vision, however, a pixel-level error is relatively large, so neither of these two approaches is suitable for image stitching in micro-nano vision.
Summary of the Invention
To overcome the above shortcomings of the prior art, the present invention provides an image stitching system and method for micro-nano visual observation, which uses the measurement information of the grating sensors inside the micro-nano motion platform as the information source for image registration, thereby achieving a high-precision registration process and ultimately giving the image stitching method for micro-nano visual observation very high stitching accuracy.
To achieve the above object, one or more embodiments of the present invention provide the following technical solutions:
An image stitching system for micro-nano visual observation, comprising:
a micro-nano motion platform for holding the observation object, arranged beneath the microscope objective so that the objective is aligned with the observation object, the micro-nano motion platform further being provided with one or more grating sensors;
an image acquisition device, mounted on the microscope eyepiece, for capturing images of the observation object; and
a computing device, communicatively connected with the micro-nano motion platform and the image acquisition device, which controls the movement of the micro-nano motion platform, acquires an image of the observation object through the image acquisition device after each movement such that observation-object images acquired in two adjacent acquisitions partially overlap, and simultaneously acquires the position information of the micro-nano motion platform transmitted by the grating sensors; each frame is then registered and stitched according to the position information of the micro-nano motion platform at the moment it was captured.
An image stitching method for micro-nano visual observation: when performing micro-nano-scale visual observation, the observation object is fixed on a micro-nano motion platform and images are collected by an image acquisition device on the eyepiece; complete image data of the observation object is collected by moving the micro-nano motion platform, with observation-object images acquired in two adjacent acquisitions partially overlapping, and the position information of the micro-nano motion platform is acquired at each movement;
the method comprises: registering and stitching each frame of images according to the position information of the micro-nano motion platform at the moment each frame was captured.
One or more of the above technical solutions have the following beneficial effects:
The invention uses the micro-nano motion platform as the carrier of the object observed under the microscope and a camera placed at the eyepiece; multiple images of the observed object are obtained by driving the micro-nano motion platform, while the grating sensors of the platform record its position at the moment each image is captured. This position information serves as the information source for image registration, enabling high-precision registration and ultimately giving the image stitching method for micro-nano visual observation very high stitching accuracy.
Brief Description of the Drawings
The accompanying drawings, which form a part of the present invention, are provided to facilitate further understanding of the invention; the exemplary embodiments of the invention and their descriptions are used to explain the invention and do not constitute an improper limitation of the invention.
Fig. 1 is a schematic diagram of the working principle of the image stitching system for micro-nano visual observation in one or more embodiments of the present invention;
Fig. 2 shows the steady-state motion accuracy of the micro-nano motion platform in one or more embodiments of the present invention;
Fig. 3 shows the marker used in the calibration experiment in one or more embodiments of the present invention;
Fig. 4 shows the relationship between the coordinate systems in the calibration experiment in one or more embodiments of the present invention;
Fig. 5 is a schematic diagram of the image stitching algorithm in one or more embodiments of the present invention;
Fig. 6 is a schematic diagram of the hat function in one or more embodiments of the present invention;
Fig. 7 shows the set of images captured by the camera in one or more embodiments of the present invention;
Fig. 8 shows the stitched image obtained in the experiment in one or more embodiments of the present invention.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless otherwise specified, all technical and scientific terms used herein have the same meaning as commonly understood by those of ordinary skill in the art to which the invention belongs.
It should also be noted that the terminology used herein is for the purpose of describing specific embodiments only and is not intended to limit the exemplary embodiments of the present invention. As used herein, unless the context clearly indicates otherwise, the singular forms are intended to include the plural forms as well; furthermore, it should be understood that the terms "comprising" and/or "including", when used in this specification, indicate the presence of features, steps, operations, devices, components and/or combinations thereof.
The embodiments of the invention and the features of the embodiments may be combined with one another without conflict.
Embodiment 1
This embodiment provides an image stitching system for micro-nano visual observation, as shown in Fig. 1, comprising:
a camera, mounted on the eyepiece of the microscope, for capturing images of the observation object, the captured images being transmitted to the host computer; and
a micro-nano motion platform, the objective of the microscope being located directly above it; the observation object is fixed on the upper surface of the micro-nano motion platform, the objective of the microscope is aligned with the observation object, and the micro-nano motion platform is driven to move along a set path so as to capture a series of images of the observation object.
The control system of the micro-nano motion platform is a Simulink xPC system, which collects in real time the readings of the two grating sensors built into the platform, thereby obtaining the position of the observation object and providing the conditions for closed-loop control of the platform. The Simulink xPC target machine is connected to the host computer via a network cable and communicates with it.
The two built-in grating sensors of the micro-nano motion platform are installed parallel to the X direction and the Y direction respectively, perpendicular to each other, and sense the position of the platform during its movement along the X and Y directions. The measurement accuracy of the gratings is 2 nm, and the position control error in the final experiment is within the range of -5 nm to +5 nm, as shown in Fig. 2.
A gold-plated film is provided between the upper surface of the micro-nano platform and the observation object and serves as the background of the observation object, which ensures the cleanliness of the background and prevents interference with the image processing.
The microscope and the micro-nano platform are both enclosed in a glass cover, which reduces contamination of the reference object by fine airborne particles.
During image acquisition, the camera must not shake, the amount of incoming light and the light intensity must remain stable, and the acquisition environment must be highly clean. The height and levelness of the microscope lens can both be finely adjusted, which makes it possible to find the marker more quickly while ensuring good focus quality. The microscope uses a 50× objective, and the camera model is MER-531-20GM/C-P.
The host computer obtains the motion information fed back by the control system of the micro-nano motion platform together with the image sequence of the observation object, and executes the following registration method.
The image stitching method for micro-nano visual observation stitches a very large image from the acquired original images and the position information between them; the process is shown in Fig. 5, and the specific steps are as follows:
Step (1): acquire the image sequence of the observation object and select the first frame of the sequence as the reference image.
Step (2): denoise the input images with a Gaussian function and perform image equalization.
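A minimal sketch of this preprocessing step; the patent does not name an image-processing library, so OpenCV and the kernel size and sigma below are assumptions chosen for illustration:

```python
import cv2

def preprocess(frame_gray, ksize=5, sigma=1.0):
    """Gaussian denoising followed by histogram equalization (illustrative parameters).

    frame_gray is assumed to be an 8-bit single-channel (grayscale) image.
    """
    denoised = cv2.GaussianBlur(frame_gray, (ksize, ksize), sigma)
    return cv2.equalizeHist(denoised)
```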
Step (3): transform the position information recorded by the grating sensors into the image coordinate system according to the relationship between the image coordinate system and the micro-nano motion platform coordinate system obtained in the calibration experiment.
Step (4): establish the translation transformation model between images, as shown below:
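The model itself appears only as an image in the original publication; a standard rotation-plus-translation form consistent with the parameter names defined below (with m_2 and m_5 as the translation components of a six-parameter affine model) would be:

\[
\begin{bmatrix} x' \\ y' \end{bmatrix} =
\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix} +
\begin{bmatrix} m_2 \\ m_5 \end{bmatrix}
\]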
where θ is the rotation angle of the image, m2 and m5 are the translation amounts, x and y are the horizontal and vertical coordinates of a pixel in the current image, and x′ and y′ are the coordinates of that pixel after transformation into the coordinate system of the reference image.
Step (5): register the images; the position of each frame relative to the reference image, as recorded by the gratings, is used as the input for the translation amounts m2 and m5, and the pixels of each frame are then transformed into the coordinate system of the reference image according to the established image translation transformation model.
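A minimal sketch of this registration step, assuming the rotation between frames is negligible (pure translation, as in the experiment) and that the grating displacements have already been converted to pixel offsets via the calibration and unit conversion of step (3); the function and variable names are illustrative:

```python
import numpy as np

def place_frames(frames, offsets_px, canvas_shape):
    """Paste each frame onto a common canvas at its grating-derived pixel offset.

    frames       : list of 2-D numpy arrays (grayscale frames)
    offsets_px   : list of (dx, dy) pixel offsets of each frame relative to the
                   reference frame, assumed non-negative with respect to the
                   canvas origin
    canvas_shape : (height, width) of the output mosaic
    """
    canvas = np.zeros(canvas_shape, dtype=np.float64)
    for frame, (dx, dy) in zip(frames, offsets_px):
        h, w = frame.shape
        r0, c0 = int(round(dy)), int(round(dx))
        # Overlapping pixels are simply overwritten here; blending of the
        # overlap region is deferred to the weighted-average fusion of step (6).
        canvas[r0:r0 + h, c0:c0 + w] = frame
    return canvas
```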
Step (6): fuse the registered images using the weighted-average fusion method, in which the pixel values in the overlapping region are not simply superimposed but are weighted before being averaged. The specific process is as follows: let f denote the fused image and f1 and f2 the two images to be stitched; then:
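The fusion formula is likewise rendered only as an image in the original; a common piecewise form of weighted-average fusion consistent with this description is:

\[
f(x,y) =
\begin{cases}
f_1(x,y), & (x,y) \in f_1 \setminus f_2 \\
w_1 f_1(x,y) + w_2 f_2(x,y), & (x,y) \in f_1 \cap f_2 \\
f_2(x,y), & (x,y) \in f_2 \setminus f_1
\end{cases}
\]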
Here w1 and w2 are the weights of the corresponding pixels in the overlapping region of the first and second images, respectively, and satisfy w1 + w2 = 1, 0 < w1 < 1, 0 < w2 < 1. Choosing appropriate weights produces a smooth transition in the overlapping region and eliminates stitching seams. The invention uses the hat-function weighted average method to select suitable weights: pixels in the central region of an image are given higher weights and pixels near the image edges are given lower weights, with the weight function being a hat function:
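The hat function itself is not reproduced in the text (it is plotted in Fig. 6); a commonly used form that assigns higher weight near the image centre and lower weight near the edges, consistent with this description, is for example:

\[
w_i(x,y) \propto \left(1 - \left|\frac{2x}{\mathrm{width}_i} - 1\right|\right)\left(1 - \left|\frac{2y}{\mathrm{height}_i} - 1\right|\right)
\]

with w1 and w2 then normalized in the overlap so that w1 + w2 = 1.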
where widthi and heighti denote the width and height of the i-th image; the hat function w(x) is shown in Fig. 6.
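A minimal sketch of the weighted-average fusion with hat-function weights, assuming already-registered frames placed on a common canvas; the triangular weight used here is one plausible reading of the hat function in Fig. 6:

```python
import numpy as np

def hat_weight(h, w):
    """Per-pixel hat-function weight: 1 at the image centre, falling toward 0 at the edges."""
    wy = 1.0 - np.abs(2.0 * np.arange(h) / (h - 1) - 1.0)
    wx = 1.0 - np.abs(2.0 * np.arange(w) / (w - 1) - 1.0)
    return np.outer(wy, wx)

def fuse(canvas_shape, frames, offsets_px):
    """Accumulate hat-weighted frames and normalize, so overlapping regions blend smoothly."""
    acc = np.zeros(canvas_shape, dtype=np.float64)
    wsum = np.zeros(canvas_shape, dtype=np.float64)
    for frame, (dx, dy) in zip(frames, offsets_px):
        h, w = frame.shape
        r0, c0 = int(round(dy)), int(round(dx))
        wgt = hat_weight(h, w)
        acc[r0:r0 + h, c0:c0 + w] += wgt * frame
        wsum[r0:r0 + h, c0:c0 + w] += wgt
    # Pixels covered by at least one frame are the weighted average; uncovered pixels stay 0.
    return np.where(wsum > 0, acc / np.maximum(wsum, 1e-12), 0.0)
```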
The specific steps for acquiring the image sequence of the observation object in step (1) are as follows:
Step (1.1): at the initial position of the micro-nano motion platform, acquire the first frame captured by the camera as the reference frame.
Step (1.2): drive the micro-nano motion platform to the next position along the set trajectory, acquire the image captured by the camera (this image must overlap with the previous frame), and record the grating sensor readings at this moment.
Step (1.3): determine whether the image has reached the specified boundary position; if so, image acquisition is complete, otherwise repeat step (1.2).
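A minimal sketch of this acquisition loop; the `stage` and `camera` objects and their methods are hypothetical placeholders for the Simulink xPC stage interface and the camera driver, which the patent does not specify at the code level, and the boundary check of step (1.3) is folded into a pre-planned trajectory:

```python
def acquire_sequence(stage, camera, trajectory):
    """Move the platform through the planned positions, grabbing one overlapping frame per stop.

    stage.move_to(x, y)    -- hypothetical: command the platform (closed-loop, grating feedback)
    stage.read_gratings()  -- hypothetical: return the (x, y) grating readings in nanometres
    camera.grab()          -- hypothetical: return one frame from the eyepiece camera
    trajectory             -- list of (x, y) targets chosen so adjacent frames overlap,
                              ending at the specified boundary position
    """
    frames, positions = [], []
    for x, y in trajectory:
        stage.move_to(x, y)
        frames.append(camera.grab())        # the frame grabbed at the first position is the reference frame
        positions.append(stage.read_gratings())
    return frames, positions
```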
Since the coordinate system of the motion information recorded by the grating sensors differs from the image coordinate system, the two coordinate systems must be calibrated. In step (3), the relationship between the image coordinate system and the micro-nano motion platform coordinate system is obtained through the following calibration process:
Step (3.1): place the marker (shown in Fig. 3) on the micro-nano motion platform and drive the platform slowly along the X axis while the camera records a sequence of images Ix of the marker at 100 frames per second.
Step (3.2): drive the platform slowly along the Y axis while the camera records a sequence of images Iy of the marker at 100 frames per second.
Step (3.3): binarize the image sequences Ix and Iy and extract the centroid coordinates of the marker; plotting these coordinates in a rectangular coordinate system yields two orthogonal straight lines (as shown in Fig. 4), and the angle α between the orthogonal coordinate system formed by these two lines and the X-O-Y rectangular coordinate system in which they are plotted is the angle between the image coordinate system and the micro-nano motion platform coordinate system. The marker used in this embodiment is cross-shaped, and its centroid is taken as the intersection of the cross.
Step (3.4): the physical pixel size of the camera is known to be 4.8 μm and the magnification of the microscope objective used is 50×; the actual platform displacement corresponding to an image translation of one pixel is the physical pixel size divided by the microscope magnification, i.e. in this experiment one pixel corresponds to a platform movement of 4.8/50 = 0.096 μm, which gives the unit conversion between the motion information recorded by the gratings and the pixel coordinate system.
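A minimal sketch of the calibration computation of steps (3.3)-(3.4), assuming the per-frame marker centroids have already been extracted from the binarized sequences Ix and Iy; the least-squares line fit is an assumption, since the patent only states that the plotted centroids form two orthogonal lines, and the sign convention of the rotation depends on how the axes are defined:

```python
import numpy as np

PIXEL_SIZE_UM = 4.8                            # physical pixel size of the camera (step (3.4))
MAGNIFICATION = 50.0                           # microscope objective magnification
UM_PER_PIXEL = PIXEL_SIZE_UM / MAGNIFICATION   # 0.096 micrometres of stage travel per image pixel

def axis_angle(centroids_x):
    """Angle alpha between the image axes and the platform axes.

    centroids_x : Nx2 array of marker centroids (in pixels) recorded while the
                  platform moves along its X axis; a straight line is fitted
                  through the track and its inclination is returned in radians.
    """
    xs, ys = centroids_x[:, 0], centroids_x[:, 1]
    slope, _ = np.polyfit(xs, ys, 1)           # least-squares line through the centroid track
    return np.arctan(slope)

def stage_nm_to_pixels(dx_nm, dy_nm, alpha):
    """Convert a grating displacement (nm) into a pixel offset in the image coordinate system."""
    dx_px = dx_nm * 1e-3 / UM_PER_PIXEL
    dy_px = dy_nm * 1e-3 / UM_PER_PIXEL
    c, s = np.cos(alpha), np.sin(alpha)
    return c * dx_px - s * dy_px, s * dx_px + c * dy_px
```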
Experimental results:
The nano motion platform was driven along the trajectory shown in Table 1, with a control accuracy of ±5 nm (as shown in Fig. 2); the nine original images collected from the camera are shown in Fig. 7. The images were stitched with the algorithm described above, producing the result shown in Fig. 8. The stitching in Fig. 8 is of good quality, with no visible edge artifacts and smooth transitions.
Table 1
Embodiment 2
The purpose of this embodiment is to provide an image stitching method for micro-nano visual observation.
To achieve the above purpose, this embodiment discloses an image stitching method for micro-nano visual observation: when performing micro-nano-scale visual observation, the observation object is fixed on a micro-nano motion platform and images are collected by an image acquisition device on the eyepiece; complete image data of the observation object is collected by moving the micro-nano motion platform, with observation-object images acquired in two adjacent acquisitions partially overlapping, and the position information of the micro-nano motion platform is acquired at each movement;
the method comprises: registering and stitching each frame of images according to the position information of the micro-nano motion platform at the moment each frame was captured.
The steps involved in Embodiment 2 correspond to those of Embodiment 1; for the specific implementation, see the relevant description of Embodiment 1. The term "computer-readable storage medium" should be understood to include a single medium or multiple media comprising one or more instruction sets, and should also be understood to include any medium capable of storing, encoding or carrying an instruction set for execution by a processor so as to cause the processor to perform any of the methods of the present invention.
One or more of the above embodiments have the following technical effects:
The invention uses the micro-nano motion platform as the carrier of the object observed under the microscope and a camera placed at the eyepiece; multiple images of the observed object are obtained by driving the micro-nano motion platform, while the grating sensors of the platform record its position at the moment each image is captured. This position information serves as the information source for image registration, enabling high-precision registration and ultimately giving the image stitching method for micro-nano visual observation very high stitching accuracy.
Those skilled in the art should understand that the modules or steps of the present invention described above can be implemented by a general-purpose computing device; alternatively, they can be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Although the specific embodiments of the present invention have been described above in conjunction with the accompanying drawings, they do not limit the protection scope of the present invention; those skilled in the art should understand that various modifications or variations that can be made on the basis of the technical solutions of the present invention without creative effort remain within the protection scope of the present invention.
Claims (4)
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910555841.1A | 2019-06-25 | 2019-06-25 | An image stitching system and method for micro-nano visual observation
Publications (2)
Publication Number | Publication Date |
---|---|
CN110288528A CN110288528A (en) | 2019-09-27 |
CN110288528B true CN110288528B (en) | 2020-12-29 |
Family
ID=68005715
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant