CN116912715A - A UAV visual servo control method and system for wind turbine blade inspection - Google Patents
- Publication number: CN116912715A
- Application number: CN202310708227.0A
- Authority: CN (China)
- Prior art keywords: wind turbine, camera, point, image, coordinates
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/11—Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
- G06F17/13—Differential equations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
Abstract
Description
Technical field
The invention belongs to the technical field of UAV inspection, and in particular relates to a UAV visual servo control method and system for wind turbine blade inspection.
Background
As a technologically mature renewable energy source, wind power has been deployed at scale worldwide. However, the surface layer of wind turbine blades wears easily, leading to damage such as sand holes and scratches. Through automated UAV inspection, details of the blade surface can be captured and analyzed at close range. Detecting blade defects early can substantially reduce later maintenance costs, so designing a UAV inspection method for wind turbine blades that is autonomous, stable, adaptable and economical is of great significance.
Research on UAVs autonomously completing tasks has produced many results, and a number of autonomous inspection schemes have been proposed: some use a positioning system for position control based on the turbine's appearance and prior pose data; some perform inspection after local three-dimensional reconstruction using various 3D sensors such as millimeter-wave radar, lidar, ultrasonic sensors and multi-camera rigs; and some inspect directly after linearly fitting the blade attitude. Most of these methods rely on prior data and additional sensors. However, wind turbines are usually installed in harsh environments, which poses a great challenge to flight stability, and the additional sensors are often expensive and difficult to maintain.
To improve the adaptability of UAVs in complex environments, reduce maintenance costs and increase reliability, it is of great significance to use machine-vision methods to design a UAV inspection method for wind turbine blades that is autonomous, stable, adaptable and economical.
Summary of the invention
The present invention addresses the problems of existing automatic UAV inspection of wind turbine blades: reliance on prior information, a low degree of automation, and a lack of in-flight adjustment during blade inspection, which results in low inspection stability and low-quality inspection images. It provides a UAV visual servo control method and system for wind turbine blade inspection. Using the UAV's camera, panoramic views of the wind turbine blades are captured from different angles together with the camera pose at each shot; the coordinates of the turbine's key points are solved from these multiple observations, and the start point, end point and direction of the blade inspection path are planned. Finally, reinforcement-learning-based visual servoing brings the UAV close to the blade so that it flies along the blade surface and takes photographs. Without requiring blade length or orientation information, the method autonomously locates the turbine's key points, performs high-precision visual tracking, and corrects inspection-route deviations caused by the blades' irregular shape and short gusts of wind.
To achieve the above objectives, the technical solution adopted by the present invention is a UAV visual servo control method for wind turbine blade inspection, comprising the following steps:
S1, position information acquisition: using the UAV's camera, capture panoramic views of the wind turbine blades from different angles and record the camera pose at each shot; at least three sets of panoramic views and camera poses are taken, from different positions, and each view contains the entire blades of the turbine;
S2, solving the turbine key-point coordinates: from the sets of panoramic views and camera poses obtained in step S1, solve for the coordinates of the turbine's key points, the key points being the blade-tip points of the turbine; the equation of the space line Li associated with a key point is

X = Xi + xi·t, Y = Yi + yi·t, Z = Zi + zi·t

where t is the line parameter; xi, yi and zi are the x, y and z components of the 3D direction vector of line Li; and Xi, Yi and Zi are the X, Y and Z coordinates of the camera when it acquired the image;
S3, path planning: using the turbine key-point coordinates solved in step S2, plan the start point, end point and direction of the UAV inspection path for the blades; the planning requirement is that the largest local image of a blade occupies between 50% and 75% of the camera's field of view;
S4, trajectory execution: following the path obtained in step S3, perform visual servoing with a reinforcement-learning method so that the UAV stays close to the blade, flies along the blade surface and takes photographs.
As an improvement of the present invention, step S2 specifically comprises:
S21: using a semantic segmentation model, separate the pixels belonging to the wind turbine blades from the panoramic view to obtain a panoramic segmentation map of the blades;
S22: from the panoramic segmentation map obtained in step S21, extract the pixel coordinates of the points at the three blade tips as the pixel coordinates of the key points;
S23: from the pixel coordinates extracted in step S22, the camera pose at the time each image was taken, and the camera parameters, solve for the spatial coordinates of the blade key points. Specifically: plane S1 is the image plane containing point C when the camera shoots from point A; point B on that plane is the center of the captured image, i.e. the intersection of the optical axis of the camera at A with plane S1; for an image of A rows and B columns, the pixel coordinates of point B in the image are row A/2, column B/2; and the camera's orientation is the direction of vector AB;
Plane S2 is the image plane containing point C when the camera shoots from point D; point C lies on the intersection line of planes S1 and S2, and its pixel coordinates on plane S1 are row a1, column b1. In the pixel coordinate system, the pixel distance from the camera center to the image plane is

L = B / (2·tan(θ/2))
where θ is the camera's FOV angle. Taking camera position A as the origin, vector AC is the direction vector of line L1, and its value is

AC = ( L·cos W − (b1 − B/2)·sin W,  L·sin W + (b1 − B/2)·cos W,  A/2 − a1 )

where a1 is the row index of point C on plane S1, b1 is its column index, and W is the heading angle of the camera in the local space coordinate system o-xyz;
Since the line is known to pass through point A, whose coordinates are (X0, Y0, Z0), the parametric equation of line Li is

X = X0 + x1·t, Y = Y0 + y1·t, Z = Z0 + z1·t

where t is the parameter and Li is the space line associated with a key point of the turbine; the intersection of several such space lines is the spatial position of the key point.
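In practice the reconstructed lines will not meet exactly because of pixel noise, so the intersection can be taken as the least-squares point closest to all the lines. A minimal numpy sketch under that interpretation (the camera positions and target point are illustrative, not from the patent):

```python
import numpy as np

def closest_point_to_lines(origins, directions):
    """Least-squares point nearest to a set of 3D lines.

    Line i passes through origins[i] with direction directions[i].
    Minimizes sum_i || (I - d_i d_i^T)(p - o_i) ||^2 over p.
    """
    origins = np.asarray(origins, dtype=float)
    d = np.asarray(directions, dtype=float)
    d /= np.linalg.norm(d, axis=1, keepdims=True)   # unit directions
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, di in zip(origins, d):
        P = np.eye(3) - np.outer(di, di)            # projector onto line's normal plane
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Three hypothetical camera positions whose rays all point at one blade tip:
target = np.array([10.0, 5.0, 80.0])
cams = np.array([[0.0, 0.0, 0.0], [40.0, 0.0, 10.0], [0.0, 40.0, 20.0]])
rays = target - cams                                # ideal, noise-free directions
p = closest_point_to_lines(cams, rays)
```

With noise-free rays the solver recovers the tip exactly; with noisy rays it returns the point minimizing the summed squared distances to the lines.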
As an improvement of the present invention, step S3 specifically comprises:
S31: from the three-dimensional coordinates of the key points, determine the turbine's facing direction and the coordinates of the blade center point;
S32: from the facing direction and blade center point obtained in step S31, determine the start and end points of the visual servo at a fixed distance from the front and back faces of the blades, and give the inspection direction; the fixed distance is determined jointly by the blade dimensions and the camera's FOV angle.
As another improvement of the present invention, the turbine facing direction in step S31 is solved as follows: take the difference of the three-dimensional coordinates of any two of the three blade tips to obtain one vector, take the difference of the other pair of tips to obtain a second vector, and take the cross product of the two vectors to obtain the normal of the rotor plane, i.e. the turbine's facing direction;
The blade center point coordinates are solved by averaging the three-dimensional coordinates of the three blade tips.
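The cross-product and averaging steps above can be sketched in a few lines of numpy; the tip coordinates below are illustrative (three tips lying in the x = 5 plane):

```python
import numpy as np

def rotor_plane_and_center(tips):
    """Normal of the plane through the three blade-tip points (the turbine's
    facing direction, up to sign) and the centroid of the tips."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in tips)
    normal = np.cross(p2 - p1, p3 - p2)   # cross product of two tip-difference vectors
    normal /= np.linalg.norm(normal)
    center = (p1 + p2 + p3) / 3.0         # mean of the three tip coordinates
    return normal, center

tips = [(5.0, 0.0, 120.0), (5.0, -34.6, 60.0), (5.0, 34.6, 60.0)]
n, c = rotor_plane_and_center(tips)
```

The sign of the normal depends on the ordering of the tips, so the implementation must still pick the side facing the UAV.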
As yet another improvement of the present invention, step S4 specifically comprises:
S41: feed the captured image to an image segmentation algorithm to separate the blade image from the background image;
S42: compress the segmented image and feed it as the state into a reinforcement-learning model for training; the model's output action is a one-dimensional correction value perpendicular to the inspection direction; execute the correction action and compute the reward function;
S43: update the reinforcement-learning parameters from the reward value, state and action to optimize the model; when the reward value reaches a threshold, training is complete, and the trained correction model is used for visual servoing to execute the trajectory.
As a further improvement of the present invention, the reward function in step S42 is specifically:
w = −(d² + k·s)
where w is the reward value, d is the offset in the visual servo, s is the number of servo steps, and k is a coefficient.
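A minimal sketch of this reward; the optional lost-target penalty value is illustrative, since the patent leaves its magnitude system-dependent:

```python
def reward(d, s, k=0.01, lost_target=False, lost_penalty=-1000.0):
    """Reward from the formula w = -(d**2 + k*s).

    d: lateral offset in the visual servo
    s: servo step count (k*s penalizes overly long flights)
    lost_target: when the camera loses the blade, a large negative
    penalty is added (illustrative value, not fixed by the patent).
    """
    w = -(d ** 2 + k * s)
    if lost_target:
        w += lost_penalty
    return w
```

The quadratic term drives the offset toward zero quickly, while the linear step term discourages trajectories that take too long to converge.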
As a further improvement of the present invention, in step S1 the edges of images captured by the UAV camera are cropped by 5% before use, and the camera is calibrated with a calibration target; the camera FOV value used thereafter is the value modified by the cropping and calibration.
To achieve the above objectives, the present invention also adopts the following technical solution: a UAV visual servo control system for wind turbine blade inspection, comprising a computer program which, when executed by a processor, implements the steps of any of the methods described above.
Compared with the prior art, the present invention has the following beneficial effects. It proposes a UAV visual servo control method and system for wind turbine blade inspection with a high degree of autonomy, good robustness during inspection, and high-quality captured images. With the blade length, blade orientation and blade parking angle unknown a priori, the invention autonomously computes the turbine's blade key points, derives the desired inspection path of each blade from those key points, and, using the images captured by the camera, continuously corrects the UAV position through visual servoing during inspection. When the blade shape varies considerably or short gusts occur, the UAV position can be corrected and high-quality blade photographs captured. Compared with existing automatic UAV turbine-inspection methods, it offers a higher level of autonomy, applicability and robustness, and has high engineering value.
Description of the drawings
Figure 1 is a flow chart of the steps of the method of the present invention;
Figure 2 is a schematic diagram of monocular-camera spatial key-point localization in step S2 of the present invention;
Figure 3 is a schematic diagram of visual servoing during inspection with the method of the present invention.
Detailed description
The present invention is further clarified below with reference to the drawings and specific embodiments. It should be understood that the following embodiments are only intended to illustrate the invention and not to limit its scope.
Embodiment 1
A UAV visual servo control method for wind turbine blade inspection, as shown in Figure 1, comprises the following steps:
S1, position information acquisition: the UAV flies to no fewer than three different positions in front of the wind turbine blades, captures panoramic views of the blades from different angles, and records the camera pose at each shot.
S2, solving the turbine key-point coordinates: from no fewer than three captured images and the corresponding camera poses, a solving algorithm computes the coordinates of the three blade-tip points used as key points. This step specifically comprises:
S21: using a semantic segmentation model, separate the pixels belonging to the wind turbine blades from the background to obtain a panoramic segmentation map of the blades;
S22: from the panoramic segmentation map, and exploiting the turbine's distinctive outline, extract the pixel coordinates of the points at the three blade tips as the pixel coordinates of the key points;
S23: from the extracted pixel coordinates, the camera pose at the time each image was taken, and the camera parameters, solve for the spatial coordinates of the blade key points. In the local space coordinate system o-xyz, point A(X0, Y0, Z0) and point D are camera shooting positions in space. With a camera resolution of A rows by B columns (in pixels, likewise below), a camera FOV angle of θ degrees, and the camera at point A(X0, Y0, Z0) shooting horizontally, the positive x-axis is 0 degrees and the heading angle is W. Plane S1 is the image plane containing point C when the camera shoots from point A; point B on that plane is the center of the captured image, i.e. the intersection of the optical axis of the camera at A with plane S1, and its pixel coordinates in the image are row A/2, column B/2; the camera's orientation is therefore the direction of vector AB. Plane S2 is the image plane containing point C when the camera shoots from point D; point C is the target point to be solved and lies on the intersection line of planes S1 and S2; its pixel coordinates on plane S1 are row a1, column b1. In the pixel coordinate system, the pixel distance from the camera center to the image plane is

L = B / (2·tan(θ/2))
Taking camera position A as the origin, vector AC is the direction vector of line L1, and its value is given by the following expression, where W is the heading angle of the camera in the local space coordinate system o-xyz and the remaining parameters follow from the camera's linear imaging model:

AC = ( L·cos W − (b1 − B/2)·sin W,  L·sin W + (b1 − B/2)·cos W,  A/2 − a1 )
Since the line is known to pass through point A, whose coordinates are (X0, Y0, Z0), the parametric equation of line L is

X = X0 + x1·t, Y = Y0 + y1·t, Z = Z0 + z1·t  (t is the parameter)
L1 is the space line associated with one turbine key point in one picture. Solving for the same key point in the other pictures in the same way ultimately yields no fewer than three non-parallel space-line equations; the spatial position of the key point is the intersection of these space lines.
S3, path planning: from the coordinates of the three turbine key points, plan the start point, end point and direction of the UAV inspection path for the blades. This step specifically comprises:
S31: from the three-dimensional coordinates of the three key points, the turbine's facing direction and the blade center point coordinates can be obtained.
S32: from the blade center point coordinates, the blade-tip coordinates and the turbine's facing direction, determine the start and end points of the visual servo at a certain distance from each of the six faces (front and back of the three blades), and give the inspection direction. The distance to the blades is determined jointly by the blade dimensions and the camera's FOV angle, ensuring that the largest local image of a blade occupies between 50% and 75% of the camera's field of view.
S4, trajectory execution: following the path obtained in step S3, perform visual servoing with a reinforcement-learning method so that the UAV stays close to the blade, flies along the blade surface and takes photographs. This step specifically comprises:
S41: feed the captured image to an image segmentation algorithm to separate the blade image from the background;
S42: compress the segmented image and feed it as the state into a reinforcement-learning model for training; the output action of the reinforcement learning is a one-dimensional correction value perpendicular to the inspection direction; execute the correction action and compute the reward function, designed as follows:
w = −(d² + k·s)
where w is the reward value, d is the offset in the visual servo, s is the number of servo steps, and k is a coefficient; the term k·s penalizes overly long flight times caused by the servo algorithm when the direction estimate is inaccurate. When the servo camera loses the target, the reinforcement-learning episode ends; to penalize the agent's decisions in that situation, the reward function additionally assigns a negative number of large absolute value, whose specific value depends on the parameters of the system.
S43: update the reinforcement-learning parameters from the reward value, state and action to obtain a better control strategy. When the reward value reaches a threshold, training is considered complete, and the trained correction model is used for visual servoing, reducing error during inspection and achieving flight that follows the blade surface.
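Steps S41 to S43 can be sketched as a single servo episode. All sensing and actuation callables below are placeholders, and the state compression (a column-occupancy profile), the coefficient 0.01 and the lost-target penalty are illustrative assumptions, not the patent's actual model:

```python
import numpy as np

def servo_episode(policy, get_mask, apply_correction, max_steps=200):
    """One visual-servo episode along a blade.

    policy:           maps a compressed state vector to a scalar correction
                      perpendicular to the inspection direction (S42)
    get_mask:         returns the current binary blade mask (H x W), or None
                      when the blade has left the field of view
    apply_correction: moves the UAV by the correction plus one step forward,
                      returning the remaining lateral offset d
    Returns the (state, action, reward) transitions and the total reward.
    """
    transitions, total = [], 0.0
    for s in range(max_steps):
        mask = get_mask()
        if mask is None:              # target lost: large penalty, episode ends
            total += -1000.0
            break
        state = mask.mean(axis=0)     # crude data compression: column occupancy
        action = policy(state)        # one-dimensional correction value
        d = apply_correction(action)
        r = -(d ** 2 + 0.01 * s)      # reward w = -(d^2 + k*s)
        transitions.append((state, action, r))
        total += r
    return transitions, total

# Dry run with dummy stand-ins: constant mask, zero policy, perfect tracking
demo_mask = np.ones((4, 4))
trans, total = servo_episode(lambda st: 0.0, lambda: demo_mask,
                             lambda a: 0.0, max_steps=3)
```

The transitions collected here are what the parameter update in S43 would consume, whatever concrete RL algorithm is used.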
Embodiment 2
A UAV visual servo control method for wind turbine blade inspection: the UAV flies to three different positions in front of the wind turbine blades, captures panoramic views of the blades from different angles, and records the camera pose at each shot; "panoramic" means the turbine's entire blades must be included, so that the three blade-tip key points can subsequently be solved. From the three pictures and the corresponding camera poses, a solving algorithm computes the coordinates of the three turbine key points; from these, the start and end points and direction of the visual servo are planned; and a reinforcement-learning method performs visual servoing so that the UAV stays close to the blades, flies along the blade shape and takes photographs. The method specifically comprises the following steps:
S1: the UAV flies to three different positions in front of the wind turbine blades, captures panoramic views of the blades from different angles, and records the camera pose at each shot.
In this embodiment, the UAV carries an RTK receiver to obtain latitude and longitude, and a 9-axis IMU to obtain heading. Using the conversion between the latitude-longitude-altitude coordinate system and the East-North-Up coordinate system, an ENU frame is established with the take-off point as the origin and the aircraft heading at take-off as the positive direction; this frame is the local space coordinate system of this example. The camera is attached to the UAV through a two-axis gimbal, and all camera poses are expressed in this frame.
In this embodiment, the turbine is 80 m tall, the blades are 40 m long, and the FOV angle of the camera carried by the UAV is 90 degrees. Based on this information, the UAV flies to a point 45 m in front of the blades to capture the panoramic images of the blades.
S2: from the three pictures and the corresponding camera poses, a solving algorithm computes the coordinates of the three turbine key points.
The panoramic blade image is segmented with a semantic segmentation algorithm; the whole image is binarized, with blade pixels set to 255 (white) and all other pixels set to 0 (black). In this binarized image, take the topmost white pixels and use the mean of their coordinates as the first key point; take the leftmost white pixels and use the mean of their coordinates as the second key point; and take the rightmost white pixels and use the mean of their coordinates as the third key point. Using the turbine's outline, noise points are excluded and special cases are checked for, such as a point lying at the image edge, eliminating interference data that are not key points;
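The topmost/leftmost/rightmost extraction described above can be sketched with numpy; the toy mask is illustrative, and the patent's noise and edge-case filtering is omitted:

```python
import numpy as np

def tip_keypoints(mask):
    """Extract three blade-tip keypoints from a binary segmentation mask
    (255 = blade, 0 = background): the mean coordinates of the topmost,
    leftmost and rightmost white pixels."""
    rows, cols = np.nonzero(mask)
    def mean_at(sel):
        return (rows[sel].mean(), cols[sel].mean())
    top = mean_at(rows == rows.min())       # first key point
    left = mean_at(cols == cols.min())      # second key point
    right = mean_at(cols == cols.max())     # third key point
    return top, left, right

# Toy 100x100 mask with single white pixels at three "tips":
m = np.zeros((100, 100), dtype=np.uint8)
m[5, 50] = m[80, 2] = m[80, 97] = 255
top, left, right = tip_keypoints(m)
```

On a real mask each extreme is a small cluster of pixels, which is exactly why the description averages their coordinates rather than taking a single pixel.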
Based on the extracted pixel coordinates, the camera pose at capture time, and the camera parameters, the spatial coordinates of the blade key points are solved. As shown in Figure 2, in the local spatial coordinate system o-xyz, point A(X0, Y0, Z0) and point D are two camera shooting positions. Let the camera resolution be A rows by B columns (in pixels, likewise below) and the camera FOV be θ degrees. With the camera at point A(X0, Y0, Z0) shooting horizontally, the heading angle is W, measured from the positive x-axis (0 degrees). Plane S1 is the image plane containing point C when the camera shoots from position A; point B on this plane is the image center, i.e., the intersection of the optical axis of the camera at A with plane S1, with pixel coordinates row A/2, column B/2. The camera's viewing direction is therefore the direction of vector AB. Plane S2 is the image plane containing point C when the camera shoots from position D; point C is the target point to be solved and lies on the intersection line of planes S1 and S2. If point C has pixel coordinates row a1, column b1 on plane S1, then in the pixel coordinate system the pixel distance f from the camera center to the image plane is given by f = (B/2) / tan(θ/2).
In this embodiment, with a camera resolution of 1024×1024 pixels and a camera FOV of 90 degrees, the pixel distance is f = (1024/2) / tan(45°) = 512 pixels.
With camera position A as the origin, vector AB is the direction vector of line L1. Its components are determined by the heading angle W of the camera in the local spatial coordinate system o-xyz, with the remaining parameters obtained from the camera's linear (pinhole) imaging model.
Since the line is known to pass through point A with coordinates (X0, Y0, Z0), the parametric equation of line Li is (x, y, z) = (X0, Y0, Z0) + t·vi, where vi is the direction vector; taking i = 1 gives the equation of line L1.
Here t is the parameter, and L1 is the spatial line corresponding to a given turbine key point in one image. Applying the same procedure to the same key point in the other two images yields three non-parallel spatial lines; the spatial position of the key point is their common intersection. Repeating this for all three key points gives the spatial positions of the three blade key points.
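The construction of each line and the recovery of a key point from three such lines can be sketched as below. The camera-axis conventions and the least-squares intersection (which tolerates lines that nearly, but not exactly, meet) are assumptions of this sketch, not taken from the patent:

```python
import math
import numpy as np

def pixel_ray(a1, b1, A, B, fov_deg, W_deg):
    """Direction of the line through the camera center and pixel (row a1, col b1)
    for a horizontally held camera with heading W (degrees from the +x axis).
    Camera frame assumed: x forward (optical axis), y left, z up."""
    f = (B / 2) / math.tan(math.radians(fov_deg) / 2)   # pixel focal distance
    d_cam = np.array([f, B / 2 - b1, A / 2 - a1], dtype=float)
    W = math.radians(W_deg)
    rot = np.array([[math.cos(W), -math.sin(W), 0.0],
                    [math.sin(W),  math.cos(W), 0.0],
                    [0.0, 0.0, 1.0]])
    d = rot @ d_cam
    return d / np.linalg.norm(d)

def nearest_point_to_lines(origins, dirs):
    """Least-squares point closest to a set of lines (origin + t * unit dir);
    for three well-posed lines this is their (near-)intersection."""
    S = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
        S += P
        b += P @ o
    return np.linalg.solve(S, b)
```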
S3: Using the solved turbine key-point coordinates, plan the start point, end point, and direction of the UAV inspection path along the turbine blades.
As shown in Figure 3, the turbine's facing direction and the coordinates of the blade center point can be computed from the three-dimensional coordinates of the three key points.
In this example, the orientation is obtained by numbering the three points: the key point of the upper blade is point 1, that of the left blade is point 2, and that of the right blade is point 3. Subtracting the coordinates of point 2 from those of point 1 gives one vector, and subtracting point 3 from point 1 gives another; the cross product of the two vectors yields the direction vector perpendicular to the plane formed by the three points. The center point is simply the mean of the three points.
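This orientation-and-center computation can be sketched directly (an illustrative reconstruction):

```python
import numpy as np

def turbine_orientation_and_center(p1, p2, p3):
    """p1: upper-blade key point, p2: left, p3: right (3-D points).
    Returns the unit normal of the rotor plane and the mean of the three points."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    v12 = p1 - p2
    v13 = p1 - p3
    normal = np.cross(v12, v13)
    normal = normal / np.linalg.norm(normal)
    center = (p1 + p2 + p3) / 3.0
    return normal, center
```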
From the turbine center-point coordinates, the blade-tip coordinates, and the turbine's facing direction, the start and end points of the visual servo, together with the inspection direction, are determined at a fixed standoff distance from each of the six surfaces (the front and back of the three blades). The standoff distance is determined jointly by the blade size and the camera FOV, ensuring that the largest partial image of the blade occupies between 50% and 75% of the camera's field of view.
In this example, the camera resolution is 1024×1024 pixels, the camera FOV is 90 degrees, the blade length is 40 meters, and the distance between the UAV and the turbine blade is set to 2.5 meters.
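The relation between blade size, FOV, and standoff distance implied above can be sketched as follows. The 3-meter blade-section width and 60% fill fraction are assumed values chosen to reproduce the 2.5-meter figure, not values stated in the patent:

```python
import math

def standoff_distance(blade_width, fov_deg, fill_fraction):
    """Distance at which a blade section of width blade_width fills
    fill_fraction of the horizontal field of view (pinhole camera model):
    visible width at distance d is 2 * d * tan(fov/2)."""
    visible_per_meter = 2 * math.tan(math.radians(fov_deg) / 2)
    return blade_width / (fill_fraction * visible_per_meter)
```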
S4: Feed the captured images into the image segmentation algorithm to separate the close-up blade image from the background.
A semantic segmentation algorithm performs binary segmentation of the blade image: pixels belonging to the blades are set to 255 (white) and all remaining pixels to 0 (black).
After data compression, the segmented image is fed as the state into the reinforcement learning model for training. The action output by the reinforcement learning agent is a one-dimensional correction-vector value perpendicular to the inspection direction; the correction is applied and the reward is computed. The reward function is designed as follows:
w=-(d2+k·s)w=-(d 2 +k·s)
Here w is the reward value, d is the offset in the visual servo, s is the number of servo steps, and k is a coefficient; the term k·s penalizes overly long flight times caused by the servo algorithm when the direction estimate is inaccurate. When the servo camera loses its target, the reinforcement-learning episode ends; to penalize the agent's decisions in this situation, the reward function additionally receives a negative number of large absolute value, with the exact value determined by the system's specific parameters.
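A minimal sketch of this reward, with the target-loss penalty handled as an additive term (an assumption; the patent does not fix how the penalty is applied):

```python
def reward(d, s, k=1.0, target_lost=False, lost_penalty=-90000.0):
    """Reward w = -(d^2 + k*s); d: lateral offset, s: servo step count.
    The large extra penalty on target loss ends the episode; the default
    -90000 is the value used in this example and should be tuned per system."""
    w = -(d ** 2 + k * s)
    if target_lost:
        w += lost_penalty
    return w
```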
In this example, a 1024×1024-pixel camera is used for inspection, and the penalty coefficient is set to -90000.
The reinforcement learning algorithm used in this example can be DDPG. The input state variable is the compressed image information of several consecutive frames, and the output correction variable is a vector perpendicular to the inspection path; the input is set to four consecutive frames.
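The four-frame compressed state described here can be sketched as follows. The 64×64 block-average downsampling is an assumed form of the data compression, not specified in the patent:

```python
from collections import deque

import numpy as np

class FrameStack:
    """Keep the last n compressed (downsampled) binary frames as the RL state."""
    def __init__(self, n=4, size=(64, 64)):
        self.n = n
        self.size = size
        self.frames = deque(maxlen=n)

    def compress(self, mask):
        """Block-average downsample a binary mask to self.size, scaled to [0, 1]."""
        h, w = mask.shape
        sh, sw = h // self.size[0], w // self.size[1]
        small = mask[:sh * self.size[0], :sw * self.size[1]] \
            .reshape(self.size[0], sh, self.size[1], sw).mean(axis=(1, 3))
        return small / 255.0

    def push(self, mask):
        """Add a frame; pad with copies at episode start. Returns (n, H, W) state."""
        f = self.compress(mask)
        while len(self.frames) < self.n - 1:
            self.frames.append(f)
        self.frames.append(f)
        return np.stack(self.frames)
```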
The reinforcement-learning parameters are updated from the reward value, state, and action to obtain a better control policy.
When the reward value reaches a threshold, training is considered complete. The trained correction model is then used for visual servoing, reducing errors during the inspection so that the path closely follows the blade shape.
Embodiment 3
This embodiment differs from Embodiments 1 and 2 in that the UAV in this method can also carry a UWB-based positioning device. The local spatial coordinate system can then be established directly from the UWB positioning frame, with no need to convert from geodetic coordinates to the ENU frame.
In addition, because the inspection camera carried by the UAV has significant distortion at the image edges, using the raw images directly would introduce positioning error. Therefore, during flight, the outer 5% of each captured image, where distortion is greatest, is cropped away, and the camera is calibrated with a calibration target. In this example, the pixel coordinates used for positioning are those corrected by the calibration model, and the camera's FOV value is adjusted to reflect the cropping and calibration.
In summary, this method determines the coordinates of turbine-blade key points using the UAV's onboard sensors and inspection camera, without relying on prior data such as blade scale and shape. Based on the determined key-point coordinates, the method uses visual servoing to correct the inspection trajectory while flying along the blades, solving problems such as unstable imaging accuracy and missing detail in dynamic environments. In the feedback-control loop of the visual servo, the method uses reinforcement learning, achieving better control performance and wider adaptability than traditional manually tuned PID control. Compared with existing technical solutions, the present invention has higher autonomy and robustness.
It should be noted that the above merely illustrates the technical idea of the present invention and does not limit its scope of protection. Those of ordinary skill in the art may make improvements and refinements without departing from the principle of the present invention, and all such improvements and refinements fall within the protection scope of the claims of the present invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310708227.0A CN116912715A (en) | 2023-06-15 | 2023-06-15 | A UAV visual servo control method and system for wind turbine blade inspection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116912715A true CN116912715A (en) | 2023-10-20 |
Family
ID=88354051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310708227.0A Pending CN116912715A (en) | 2023-06-15 | 2023-06-15 | A UAV visual servo control method and system for wind turbine blade inspection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116912715A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117536797A (en) * | 2023-10-24 | 2024-02-09 | 华能安徽怀宁风力发电有限责任公司 | Unmanned aerial vehicle-based fan blade inspection system and method |
CN117536797B (en) * | 2023-10-24 | 2024-05-31 | 华能安徽怀宁风力发电有限责任公司 | Unmanned aerial vehicle-based fan blade inspection system and method |
US12276256B1 (en) | 2023-10-24 | 2025-04-15 | Huaneng Anhui Huaining Wind Power Generation Co., Ltd. | Wind turbine blade inspection system and method based on unmanned aerial vehicle |
CN117212077A (en) * | 2023-11-08 | 2023-12-12 | 云南滇能智慧能源有限公司 | Wind wheel fault monitoring method, device and equipment of wind turbine and storage medium |
CN117212077B (en) * | 2023-11-08 | 2024-02-06 | 云南滇能智慧能源有限公司 | Wind wheel fault monitoring method, device and equipment of wind turbine and storage medium |
CN117893933A (en) * | 2024-03-14 | 2024-04-16 | 国网上海市电力公司 | Unmanned inspection fault detection method and system for power transmission and transformation equipment |
CN117893933B (en) * | 2024-03-14 | 2024-05-24 | 国网上海市电力公司 | A method and system for unmanned inspection fault detection of power transmission and transformation equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111462135B (en) | Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation | |
US11897606B2 (en) | System and methods for improved aerial mapping with aerial vehicles | |
CN112164015B (en) | Monocular vision autonomous inspection image acquisition method, device and power inspection UAV | |
CN116912715A (en) | A UAV visual servo control method and system for wind turbine blade inspection | |
CN110908401B (en) | A UAV autonomous inspection method for unknown tower structure | |
CN107844750B (en) | Water surface panoramic image target detection and identification method | |
CN108803668B (en) | Intelligent inspection unmanned aerial vehicle nacelle system for static target monitoring | |
WO2020014909A1 (en) | Photographing method and device and unmanned aerial vehicle | |
CN109940603B (en) | A point-to-point error compensation control method for an inspection robot | |
CN106873619B (en) | Processing method of flight path of unmanned aerial vehicle | |
CN113793270B (en) | A geometric correction method for aerial images based on UAV attitude information | |
CN106530239B (en) | Low-altitude tracking method for small unmanned rotorcraft moving target based on large field of view bionic fisheye | |
CN105187723A (en) | Shooting processing method for unmanned aerial vehicle | |
CN111709994B (en) | Autonomous unmanned aerial vehicle visual detection and guidance system and method | |
CN110083177A (en) | A kind of quadrotor and control method of view-based access control model landing | |
CN113525631A (en) | An underwater terminal docking system and method based on optical vision guidance | |
CN109764864B (en) | A method and system for indoor UAV pose acquisition based on color recognition | |
CN116202489A (en) | Method and system for co-locating power transmission line inspection machine and pole tower and storage medium | |
CN117115271A (en) | Binocular camera external parameter self-calibration method and system in unmanned aerial vehicle flight process | |
CN116824080A (en) | Method for realizing SLAM point cloud mapping of power transmission corridor based on multi-sensor fusion | |
CN112802109A (en) | Method for generating automobile aerial view panoramic image | |
CN110223233B (en) | Unmanned aerial vehicle aerial photography image building method based on image splicing | |
CN108803652A (en) | A kind of autonomous tracking control method of rotor craft | |
Greatwood et al. | Perspective correcting visual odometry for agile mavs using a pixel processor array | |
CN112037274B (en) | A method for determining the viewpoint of a multi-rotor UAV based on sunlight conditions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||