
CN113112478B - Pose recognition method and terminal equipment

Info

Publication number: CN113112478B (application CN202110404780.6A)
Authority: CN (China)
Prior art keywords: point cloud, data, pose, laser, regional
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN113112478A (application publication)
Inventors: 李强, 毕艳飞, 柴黎林, 李贝
Current and original assignee: Shenzhen Ubtech Technology Co., Ltd.
Application filed by Shenzhen Ubtech Technology Co., Ltd.
Priority to CN202110404780.6A

Classifications

    • G06T 7/0002 Image analysis: inspection of images, e.g. flaw detection
    • G06F 16/29 Information retrieval: geographical information databases
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 2207/10028 Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/30241 Subject of image: trajectory


Abstract

The invention relates to the technical field of device control and provides a pose recognition method and a terminal device. The method comprises the following steps: acquiring pose data to be identified of the terminal device, the pose data comprising position data and attitude data; acquiring a regional point cloud map centered on the position data; acquiring, through a built-in laser radar, the laser point cloud data corresponding to the actual pose of the terminal device; determining a confidence score for the pose data according to the laser point cloud data and the regional point cloud map; and generating a pose recognition result based on the confidence score of the pose data. The invention can determine the reliability of the currently recognized pose data and can greatly improve the robustness of the terminal device in scenes where the environment changes easily.

Description

Pose recognition method and terminal equipment
Technical Field
The invention belongs to the technical field of equipment control, and particularly relates to a pose recognition method and terminal equipment.
Background
With the continuous development of intelligence and automation, intelligent robots are being applied in more and more fields, such as household cleaning, automatic delivery and route navigation, greatly improving the convenience and intelligence of users' daily lives. During the operation of an intelligent robot, which depends on navigation and self-mapping technologies, one of the important factors affecting the accuracy of navigation and map construction is how accurately the robot's pose is identified.
In existing pose recognition technology, pose data are generally obtained through sensors, and the pose of the intelligent robot is recognized from the relative positions of key markers in the scene. However, in indoor scenes such as homes, the positions of key markers change easily, or new obstacles appear on the original travel path, which affects the pose recognition of the intelligent robot, reduces the reliability of pose recognition, and reduces robustness in scenes where the environment changes easily.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a pose recognition method and a terminal device, so as to solve the problems of existing pose recognition technology: low reliability of pose recognition and low robustness in scenes where the environment changes easily.
A first aspect of an embodiment of the present invention provides a method for identifying a pose, which is applied to a terminal device, and includes:
acquiring pose data to be identified of the terminal device; the pose data comprise position data and attitude data;
acquiring an area point cloud map with the position data as a center;
acquiring corresponding laser point cloud data under the actual pose of the terminal equipment through a built-in laser radar;
determining a confidence score corresponding to the pose data according to the laser point cloud data and the regional point cloud map;
and generating a pose recognition result based on the confidence score of the pose data.
A second aspect of an embodiment of the present invention provides a pose recognition device, including:
the pose data acquisition unit is used for acquiring pose data to be identified of the terminal device; the pose data comprise position data and attitude data;
the regional point cloud map acquisition unit is used for acquiring a regional point cloud map with the position data as a center;
the laser point cloud data acquisition unit is used for acquiring corresponding laser point cloud data under the actual pose of the terminal equipment through a built-in laser radar;
the confidence score determining unit is used for determining confidence scores corresponding to the pose data according to the laser point cloud data and the regional point cloud map;
and the pose recognition result generating unit is used for generating a pose recognition result based on the confidence score of the pose data.
A third aspect of the embodiments of the present invention provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the first aspect when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the first aspect.
The pose recognition method and the terminal device provided by the embodiment of the invention have the following beneficial effects:
when pose recognition is required, pose data to be identified of the terminal device are acquired, and the regional point cloud map corresponding to the position data in the pose data is determined. Meanwhile, when the terminal device acquires the pose data, it can collect, through a built-in laser radar, the laser point cloud data corresponding to its current pose. By comparing the regional point cloud map with the laser point cloud data, a confidence score for the current pose data can be determined, and the corresponding pose recognition result is obtained from that confidence score, which indicates how reliable the currently identified pose data are. When the reliability is high, a normal response operation can be executed based on the identified pose data, for example controlling the terminal device to travel along a preset track; when the reliability is low, an abnormal response operation can be executed, for example re-identifying the pose data of the terminal device or updating the map. This can greatly improve the robustness of the terminal device in scenes where the environment changes easily.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an implementation of a pose recognition method according to a first embodiment of the present invention;
fig. 2 is a schematic diagram of laser point cloud data and a regional point cloud map according to an embodiment of the present invention;
fig. 3 is a flowchart of a specific implementation of a pose recognition method S104 according to a second embodiment of the present invention;
fig. 4 is a flowchart of a specific implementation of a pose recognition method S1041 provided in a third embodiment of the present invention;
fig. 5 is a flowchart of a specific implementation of a pose recognition method S101 and S105 according to a fourth embodiment of the present invention;
fig. 6 is a flowchart of a specific implementation of a pose recognition method S102 according to a fifth embodiment of the present invention;
fig. 7 is a block diagram of a pose recognition device according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
When pose recognition is required, pose data to be identified of the terminal device are acquired, and the regional point cloud map corresponding to the position data in the pose data is determined. Meanwhile, when the terminal device acquires the pose data, it can collect, through a built-in laser radar, the laser point cloud data corresponding to its current pose. By comparing the regional point cloud map with the laser point cloud data, a confidence score for the current pose data can be determined, and the corresponding pose recognition result is obtained from that confidence score, which indicates how reliable the currently identified pose data are. When the reliability is high, a normal response operation can be executed based on the identified pose data, for example controlling the terminal device to travel along a preset track; when the reliability is low, an abnormal response operation can be executed, for example re-identifying the pose data of the terminal device or updating the map. This solves the problems of low reliability of pose recognition and low robustness in scenes where the environment changes easily.
In the embodiments of the present invention, the execution body of the flow is a terminal device, which includes, but is not limited to, movable devices such as intelligent robots, intelligent model cars and unmanned aerial vehicles. In one possible implementation, the execution body of the flow may instead be another electronic device that establishes a communication connection with the terminal device, such as a computer, a smartphone or a tablet capable of performing the pose recognition task. In that case, the terminal device sends its pose data to the electronic device, the electronic device outputs a pose recognition result for the terminal device and sends it back, and the terminal device executes the corresponding response operation based on that result; alternatively, the electronic device may itself determine the response operation based on the pose recognition result and send the corresponding control instruction to the terminal device, so as to control it to execute the corresponding action. In the following embodiments, the execution body of the flow is described by taking the terminal device as an example.
Fig. 1 shows a flowchart of implementation of a pose recognition method according to a first embodiment of the present invention, which is described in detail below:
in S101, pose data to be identified of the terminal device are acquired; the pose data include position data and attitude data.
In this embodiment, the terminal device may determine its current pose data through a built-in data acquisition module. These pose data are in a to-be-identified state: because the internal data acquisition module may have deviations, the pose data may be invalid or abnormally recognized, so their reliability needs to be assessed, that is, the pose recognition result corresponding to the pose data is output through S102 to S105. For example, the data acquisition module may include a positioning module and an attitude module. The positioning module may be a global positioning system (GPS) module; if the terminal device determines its position based on other principles, for example based on wireless fidelity (WiFi) signals or Bluetooth Low Energy signals, the positioning module may instead be a WiFi communication module or a Bluetooth Low Energy module, which determines the position data of the terminal device by searching for the wireless signals of the corresponding wireless devices and their signal strengths. The attitude module may include one or more sensors and determine the attitude data of the terminal device from the values they feed back; such sensors include, but are not limited to, gyroscopes, acceleration sensors and angular velocity sensors. If the terminal device is an intelligent robot with several movable joints, so that the robot has several degrees of freedom when changing its own attitude, a corresponding sensor can be configured on each movable joint, and the terminal device can determine its current attitude data based on the sensing values fed back by the sensors of the movable joints.
In one possible implementation, the position data indicate an absolute position of the terminal device, for example the longitude and latitude of its location. Optionally, the position data may instead indicate a relative position: for example, if the scene contains a number of calibration objects, the collected position data may be the distance values between the terminal device and each calibration object, in which case the number of calibration objects may be three or more.
In one possible implementation, the attitude data include, but are not limited to, information that can represent the attitude of the terminal device, such as attitude angles and the orientation of each face of the device. If the terminal device includes several movable joints, each with several degrees of freedom, the angle of each movable joint in each of its degrees of freedom can be determined first, and the attitude data of the terminal device can then be determined from those joint angles.
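As an illustration of the data involved, the following sketch bundles position data and attitude data the way S101 describes them; the field names and types are assumptions for illustration, not structures prescribed by the patent.

```python
from dataclasses import dataclass, field
from typing import Tuple

@dataclass
class PoseData:
    """Pose data to be identified: position data plus attitude data."""
    x: float    # position data, e.g. map-frame coordinates
    y: float
    yaw: float  # attitude angle in radians
    # optional per-joint angles for robots with multiple degrees of freedom
    joint_angles: Tuple[float, ...] = field(default_factory=tuple)
```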
In one possible implementation, the terminal device may be configured with triggering conditions for pose recognition; if it detects that any of the pre-configured triggering conditions is currently satisfied, it executes S101 to S105 to determine its pose. For example, the triggering condition may be a time trigger: the terminal device may be configured with multiple trigger moments for pose recognition, with equal time intervals between them (i.e., a preset trigger period) or unequal ones. The interval between trigger moments can be set according to the obstacle density of the scene. If the obstacle density is high, i.e., the environment is complex, the pose of the terminal device needs to be confirmed frequently to achieve accurate control, so the interval should be short; conversely, if the obstacle density is low, i.e., the environment is simple, the pose does not need to be confirmed frequently, and the interval can be longer.
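The patent does not fix a formula for this adaptation; the sketch below simply assumes the trigger interval shrinks as obstacle density grows, with illustrative constants.

```python
def trigger_interval(obstacle_density: float,
                     base_interval_s: float = 5.0,
                     min_interval_s: float = 0.5) -> float:
    """Assumed inverse relationship: the denser the obstacles (complex
    environment), the shorter the pose recognition trigger period."""
    return max(min_interval_s, base_interval_s / (1.0 + obstacle_density))
```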
In one possible implementation, the triggering condition for pose recognition may also be an event trigger. For example, if the terminal device collides with an obstacle while travelling based on a preset map, its pose is abnormal, since it would only hit the obstacle after deviating from the preset track; at that moment the triggering condition can be deemed satisfied and the pose of the terminal device is determined.
In S102, a regional point cloud map centered on the location data is acquired.
In this embodiment, after determining the current pose data, the terminal device needs to determine their reliability, that is, to calculate the corresponding confidence score. To this end, it extracts the position data from the pose data and obtains the regional point cloud map centered on that position. The regional point cloud map is stored in advance, either in a local memory of the terminal device, in a cloud server, or in an external memory.
In one possible implementation, the terminal device may associate a preset position with each regional point cloud map. After obtaining the current position data, it addresses based on those data, selects the preset position matching the position data from all stored preset positions, and uses the regional point cloud map of the matched preset position as the regional point cloud map corresponding to the position data.
In one possible implementation, the terminal device may have a global point cloud map corresponding to the current scene stored in advance, in which case the terminal device may intercept the regional point cloud map centered on the location data from the global point cloud map. Likewise, the global point cloud map may be stored in a local memory of the terminal device, or may be stored in a cloud server.
In one possible implementation, the regional point cloud map may be constructed by the terminal device during operation. For example, when the terminal device runs in the current scene for the first time, it can collect laser data through its built-in laser radar while travelling along a preset exploration route, obtaining laser data for all directions on the route, and then generate from all the collected laser data a global point cloud map of the current scene, which contains the regional point cloud map. Optionally, the terminal device may update the regional point cloud map while travelling: if the laser data obtained at some point match the laser data previously obtained at the corresponding position poorly, the regional point cloud map can be updated based on the current laser data.
In a possible implementation manner, the area point cloud map may be constructed by other devices outside the terminal device, where a manner of constructing the area point cloud map by the other devices is the same as the foregoing manner, and details are not repeated. In this case, the other device may send the constructed regional point cloud map to the terminal device provided in this embodiment, and of course, the other device may also upload the constructed map to a third party device, for example, the cloud server or upload the constructed map to the management device of the current scene, where the terminal device may download the regional point cloud map corresponding to the location data through the third party device.
In S103, laser point cloud data corresponding to the actual pose of the terminal device is obtained through a built-in laser radar.
In this embodiment, while collecting the pose data, the terminal device may collect, through the built-in laser radar, the laser point cloud data corresponding to the current pose. The laser point cloud data represent the relative positional relationship between each scene object in the current scene and the current pose of the terminal device. Optionally, the laser point cloud data may include the distance values between the terminal device and each scene object in each spatial dimension.
In this embodiment, since the laser point cloud data are collected by the laser radar based on the actual pose of the terminal device, they reflect that actual pose and are therefore data of high reliability.

In this embodiment, the laser radar configured in the terminal device has a certain acquisition angle and acquisition depth and therefore a certain effective range; the relative positions between the terminal device and each sampled point within this effective range are obtained so as to construct the laser point cloud data.
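A minimal sketch of how one planar radar sweep could be turned into laser point cloud data in the device frame; the scan layout (one range reading per evenly spaced beam) is an assumption, since the patent does not specify the radar's data format.

```python
import numpy as np

def scan_to_point_cloud(ranges: np.ndarray, angle_min: float,
                        angle_increment: float) -> np.ndarray:
    """Convert per-beam range readings into (x, y) points relative to the
    terminal device's actual pose; beams with no valid return are dropped,
    reflecting the radar's limited effective range."""
    angles = angle_min + angle_increment * np.arange(len(ranges))
    valid = np.isfinite(ranges) & (ranges > 0)
    return np.stack([ranges[valid] * np.cos(angles[valid]),
                     ranges[valid] * np.sin(angles[valid])], axis=1)
```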
In S104, a confidence score corresponding to the pose data is determined according to the laser point cloud data and the regional point cloud map.
In this embodiment, after acquiring the regional point cloud map determined from the pose data to be identified and the laser point cloud data corresponding to the actual pose, the terminal device may match the two sets of point cloud data and determine the confidence score of the pose data from their degree of matching. Because the regional point cloud map is determined from the pose data to be identified while the laser point cloud data are collected by the laser radar at the actual pose, a higher matching degree between them means the pose data to be identified are closer to the actual pose of the terminal device, and the confidence score is correspondingly higher; conversely, a lower matching degree means a larger difference between the pose data to be identified and the actual pose, and a correspondingly lower confidence score. Based on this, the terminal device can obtain the confidence score of the pose data to be identified from the degree of difference between the laser point cloud data and the regional point cloud map.
In one possible implementation, the terminal device may be configured with a preset confidence conversion function into which it imports the laser point cloud data and the regional point cloud map, so as to calculate the confidence score of the pose data. Optionally, the confidence conversion function contains the following modules: a point cloud data normalization module, a point cloud data matching module and a confidence score calculation module. The normalization module normalizes the laser point cloud data and the regional point cloud map, for example converting every parameter value in both to a unified dimension, and may also apply a matrix transformation to the laser point cloud data and/or the regional point cloud map so that they are brought to a consistent angle before the subsequent matching computation. The matching module receives the normalized laser point cloud data and regional point cloud map output by the normalization module and calculates the distance between them; this distance is determined from the distance values between corresponding points in the two, and may be the average of those distance values or a matrix formed from them. The confidence score calculation module then calculates the confidence score from the imported distance.
In one possible implementation, the input of the confidence score calculation module is a distance matrix formed from the distance values between corresponding points of the laser point cloud data and the regional point cloud map. In this case a distance threshold can be configured: corresponding points whose distance value exceeds the threshold are identified as abnormal points, and the confidence score is calculated from the number of abnormal points, or from the percentage of abnormal points in the total number of points.
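A minimal sketch of this abnormal-point variant, assuming the correspondence distances have already been produced by the matching module; the threshold value is illustrative.

```python
import numpy as np

def confidence_from_distances(distances: np.ndarray,
                              distance_threshold: float = 0.2) -> float:
    """Points whose correspondence distance exceeds the threshold count as
    abnormal points; confidence is the fraction of points that are normal."""
    abnormal = distances > distance_threshold
    return 1.0 - abnormal.sum() / abnormal.size

# distances between corresponding points of the two point clouds
print(confidence_from_distances(np.array([0.05, 0.10, 0.50, 0.08])))  # 0.75
```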
In one possible implementation, the regional point cloud map may be the point cloud data within a full circle (i.e., 360°) around the position data, while the laser point cloud data may be the point cloud data within a preset visual angle at the current attitude. Fig. 2 is a schematic diagram of a regional point cloud map and laser point cloud data according to an embodiment of the present application. As shown in fig. 2, the regional point cloud map is area 1 and the laser point cloud data are area 2; the regional point cloud map may thus cover a larger area than the laser point cloud data.
In S105, a pose recognition result is generated based on the confidence score of the pose data.
In this embodiment, after the confidence score is calculated, the terminal device may generate the pose recognition result corresponding to the pose data to be identified. The pose recognition result includes, but is not limited to: pose recognition normal, pose deviation, pose recognition abnormal, and so on. By outputting the pose recognition result, the terminal device not only recognizes its pose but also attaches a reliability evaluation to the recognition, so that it can perform the corresponding operation based on the result.
In one possible implementation, if the pose recognition result is that pose recognition is normal, the pose data to be identified match the actual pose of the terminal device, and the corresponding operation can be performed based on those pose data. For example, if the terminal device is travelling along a preset track, it can determine that it has not deviated from the track and keep operating under the original travel policy.

In one possible implementation, if the pose recognition result is a pose deviation, there is some deviation between the pose data to be identified and the actual pose of the terminal device, but the difference is not too large, and the attitude of the terminal device can be adjusted.

In one possible implementation, if the pose recognition result is that pose recognition is abnormal, there is a large deviation between the pose data to be identified and the actual pose of the terminal device, and the actual pose is not the expected one, so the terminal device can be controlled to move to a preset pose.
It should be noted that the response operation the terminal device executes based on the pose recognition result may be determined by the type of task it is currently executing; the possibilities are not enumerated here.
As can be seen from the above, when pose recognition is required, pose data to be identified of the terminal device are acquired, and the regional point cloud map corresponding to the position data in the pose data is determined. Meanwhile, when the terminal device acquires the pose data, it can collect, through the built-in laser radar, the laser point cloud data corresponding to its current pose. By comparing the regional point cloud map with the laser point cloud data, a confidence score for the current pose data can be determined, and the corresponding pose recognition result is obtained from that confidence score, which indicates how reliable the currently identified pose data are. When the reliability is high, a normal response operation can be executed based on the identified pose data, for example controlling the terminal device to travel along a preset track; when the reliability is low, an abnormal response operation can be executed, for example re-identifying the pose data of the terminal device or updating the map. This can greatly improve the robustness of the terminal device in scenes where the environment changes easily.
Fig. 3 shows a flowchart of a specific implementation of S104 of a pose recognition method according to a second embodiment of the present invention. Referring to fig. 3, compared with the embodiment of fig. 1, in the pose recognition method provided in this embodiment S104 includes S1041 and S1042, which are specifically described as follows:
further, the determining, according to the laser point cloud data and the regional point cloud map, a confidence score corresponding to each pose data includes:
in S1041, the laser point cloud data and the regional point cloud map are imported into a preset point cloud matching degree algorithm, and the matching degree between the laser point cloud data and the regional point cloud map is calculated.
In this embodiment, the terminal device may be configured with a point cloud matching algorithm for calculating the matching degree between any two sets of point cloud data. A higher matching degree between two sets of point cloud data means the scenes observed when they were collected are more similar, so that, for the same scene, the two can be considered to have been collected at the same pose; conversely, a lower matching degree means the observed scenes are less similar, so that, for the same scene, the two can be considered to have been collected at different poses. Based on this, in order to determine whether the pose data to be identified agree with the actual pose of the terminal device, the terminal device imports the laser point cloud data and the regional point cloud map into the matching degree algorithm and calculates the matching degree between them.
Optionally, the point cloud matching algorithm may be an iterative closest point (ICP) algorithm or a normal distributions transform (NDT) algorithm. The terminal device can evaluate the accuracy of the two algorithms in the current scene and select the more accurate one as its point cloud matching algorithm.
In S1042, the matching degree is imported into a preset evaluation function to obtain the confidence score corresponding to the pose data; the evaluation function maps the matching degree to the confidence score,

bel(x) = f[score(p)]

wherein score(p) is the matching degree, f[·] is the evaluation function, and bel(x) is the confidence score.
In this embodiment, after calculating the matching degree, the terminal device imports it into the preset evaluation function to convert it into the corresponding confidence score. In general, the higher the matching degree, the closer the identified pose data are to the actual pose of the terminal device, and the higher the confidence score. Note that the evaluation function is not linear, i.e., matching degree and confidence are in a nonlinear relationship. The reason is that the environment of the terminal device may be changeable; in a home environment, for example, furniture is easily moved, added or removed, so that even when the pose data agree with the actual pose, the laser point cloud data collected at the current pose may deviate considerably from parts of the preset regional point cloud map and the overall matching value may be low. The evaluation function therefore makes confidence scores tend to the same value over the interval of higher matching degrees, which increases fault tolerance in scenes with larger environmental changes, while over the interval of lower matching degrees it amplifies the differences between confidence scores, which improves the accuracy of identifying pose anomalies while preserving that fault tolerance.
For example, if the calculated matching degree is 60%, i.e., 0.6, the confidence score obtained after conversion by the evaluation function may be 100%.
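The exact form of f is not reproduced in this text; the sketch below assumes a piecewise shape consistent with the surrounding description, in which confidence saturates at 1 once the matching degree reaches about 0.6 (fault tolerance in changeable environments) and falls off super-linearly below that (sharper separation of poor matches).

```python
def evaluation_function(score: float, saturation: float = 0.6) -> float:
    """Map matching degree score(p) in [0, 1] to confidence bel(x).
    Assumed shape, not the patent's exact formula."""
    if score >= saturation:
        return 1.0  # high-match interval: confidence scores tend to the same value
    return (score / saturation) ** 2  # low-match interval: differences amplified

print(evaluation_function(0.6))  # 1.0, matching the 60% -> 100% example
print(evaluation_function(0.3))  # 0.25
```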
In this embodiment of the application, by calculating the matching degree between the laser point cloud data and the regional point cloud map and converting it into the corresponding confidence score, the degree of difference between the pose data to be identified and the actual pose can be determined from the matching degree of the two sets of point cloud data, and the confidence score of the pose data can be determined from that degree of difference, which improves the accuracy of the confidence score.
Fig. 4 shows a flowchart of a specific implementation of S1041 of a pose recognition method according to a third embodiment of the present application. Referring to fig. 4, compared with the embodiment of fig. 3, S1041 of the pose recognition method provided in this embodiment includes S401 to S404, which are specifically described as follows:

Further, the importing the laser point cloud data and the regional point cloud map into a preset point cloud matching degree algorithm and calculating the matching degree between the laser point cloud data and the regional point cloud map includes:
in S401, the second feature points corresponding to any N first feature points in the laser point cloud data are searched for in the regional point cloud map, and a point cloud conversion matrix is generated based on the N first feature points and the N second feature points, where N is a positive integer greater than or equal to 3.
In this embodiment, before calculating the matching degree between the laser point cloud data and the regional point cloud map, the terminal device may convert the laser point cloud data. Because the attitude at which the laser point cloud data were collected may deviate somewhat from the attitude corresponding to the regional point cloud map, the point cloud conversion matrix can be used to eliminate the error caused by this difference in acquisition attitudes and thereby improve the accuracy of recognition.
In one possible implementation, since the regional point cloud map may cover a full circle around the position data while the laser point cloud data may only cover a preset visual angle, i.e., the range of the regional point cloud map may be larger than that of the laser point cloud data, a point cloud conversion matrix needs to be determined for the laser point cloud data.
In this embodiment, to determine the point cloud conversion matrix, the terminal device needs at least three feature points from the two sets of point cloud data; from at least three correspondences, the conversion matrix in three-dimensional space can be determined. Based on this, the terminal device may pick any N (N being at least 3) first feature points from the laser point cloud data, determine the corresponding second feature points in the regional point cloud map, and generate the point cloud conversion matrix from the position coordinates of the first and second feature points.
In one possible implementation, markers are configured in the current scene, and the points of the markers in the point cloud data are taken as the feature points. Based on this, the terminal device may take the points corresponding to the markers in the laser point cloud data as the first feature points, and the points corresponding to the markers in the regional point cloud map as the second feature points.
In S402, a laser conversion matrix corresponding to the laser point cloud data is generated based on the point cloud conversion matrix.
In this embodiment, after obtaining the point cloud conversion matrix, the terminal device may apply it to the laser point cloud data to generate the laser conversion matrix. After the conversion is completed, the terminal device can quickly determine, for each point in the laser point cloud data, the corresponding point in the regional point cloud map.
In S403, the deviation distance between each first feature point in the laser conversion matrix and its corresponding second feature point in the regional point cloud map is calculated.
In this embodiment, since the terminal device knows the position of each first feature point in the laser point cloud data, it can still locate each first feature point after the matrix conversion, and the correspondence between the first feature points and their second feature points remains unchanged. The smaller the deviation distance between a first feature point and its corresponding second feature point, the more similar the two sets of point cloud data; conversely, the larger the deviation distance, the larger the difference between them. The matching degree between the two can therefore be calculated from the deviation distances between the feature points.
In one possible implementation, based on the correspondence between the first and second feature points, the terminal device may further establish the correspondence between every point in the laser conversion matrix and the regional point cloud map, and determine the matching degree between the laser point cloud data and the regional point cloud map based on all the resulting deviation distances.
In S404, the matching degree is obtained based on all the deviation distances.
In this embodiment, the terminal device may import the deviation distances between corresponding feature points into a preset matching degree calculation function to calculate the matching degree. The larger the deviation distances, the lower the matching degree; conversely, the smaller the deviation distances, the higher the matching degree.

In one possible implementation, the terminal device may calculate the mean of the deviation distances and determine the matching degree from the inverse of that mean.
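A minimal sketch of S401 to S404 under this inverse-mean variant. The SVD-based Kabsch solver for the point cloud conversion matrix is one standard choice rather than something the patent prescribes, and the clamping constant is an assumption.

```python
import numpy as np

def rigid_transform(first_pts: np.ndarray, second_pts: np.ndarray):
    """Estimate the point cloud conversion matrix (rotation R, translation t)
    from N >= 3 corresponding 3D feature points via the Kabsch method."""
    c1, c2 = first_pts.mean(axis=0), second_pts.mean(axis=0)
    H = (first_pts - c1).T @ (second_pts - c2)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c2 - R @ c1

def matching_degree(first_feats: np.ndarray, second_feats: np.ndarray) -> float:
    """Apply the conversion to the laser-side feature points (the laser
    conversion matrix step), measure the deviation distances to the map-side
    feature points, and score matching as the inverse of their mean."""
    R, t = rigid_transform(first_feats, second_feats)
    deviations = np.linalg.norm(first_feats @ R.T + t - second_feats, axis=1)
    return 1.0 / max(deviations.mean(), 1e-6)  # clamp the zero-deviation case
```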
In this embodiment of the application, the first and second feature points associated across the two sets of point cloud data are identified, the point cloud conversion matrix is generated from these feature points, and the laser point cloud data are converted to obtain the laser conversion matrix; the deviation distances between associated feature points can then be calculated and the matching degree computed from them, which improves the accuracy of the matching degree.
Fig. 5 shows a flowchart of a specific implementation of S101 and S105 of a pose recognition method according to a fourth embodiment of the present invention. Referring to fig. 5, compared with the embodiments of any one of figs. 1 to 4, in the pose recognition method provided in this embodiment S101 includes S1011, and S105 includes S1051 to S1054, which are described in detail below:
further, the acquiring pose data to be identified of the terminal device includes:
in S1011, the pose data is acquired in a preset acquisition period during the movement of the terminal device.
In this embodiment, during its movement the terminal device needs to identify its pose in real time and adjust its direction and speed of movement accordingly, so it may be configured with an acquisition period and collect its pose data periodically.
Correspondingly, the generating a pose recognition result based on the confidence score of the pose data includes:
in S1051, if the pose recognition result corresponding to any one of the acquisition periods is a pose deviation, adjusting the pose of the terminal device or updating the pose data of the acquisition period based on the laser point cloud data and the regional point cloud map.
In this embodiment, if the pose recognition result of any acquisition period is a pose deviation, i.e., the difference between the pose data to be identified and the actual pose of the terminal device is small, the terminal device may make an adjustment according to the degree of difference between the laser point cloud data and the regional point cloud map. There are two ways to adjust: one is to adjust the actual attitude of the terminal device so that it agrees with the pose data to be identified; the other is to update the pose data so that the updated pose data agree with the actual pose.

For the first way, the terminal device determines an adjustment angle and an adjustment distance and adjusts its current attitude based on them, bringing its actual attitude into agreement with the pose data to be identified.

For the second way, the terminal device may determine the center position corresponding to the laser point cloud data from the global point cloud map and update the pose data based on that center position.
In the embodiment of the application, the terminal equipment can acquire the pose data periodically in a preset acquisition period in the moving process so as to periodically identify the pose of the terminal equipment, and when the pose is deviated, the pose or the pose data of the terminal equipment can be adjusted in real time so as to improve the accuracy of the moving process.
Optionally, the generating a pose recognition result based on the confidence score of the pose data may further include:
in S1052, if the pose recognition results corresponding to M consecutive acquisition periods are pose recognition anomalies, a position loss instruction is generated, where M is greater than or equal to a preset abnormal response threshold.
In this embodiment, the terminal device may be configured with a corresponding abnormal response threshold. If the pose recognition results of several consecutive acquisition periods are all pose recognition anomalies, the pose data identified in each of those periods deviate strongly from the actual pose of the terminal device, i.e., the terminal device is not moving along the preset movement track. The terminal device can then determine that its position is currently lost and, to respond to this abnormal situation, generate a position loss instruction so that the abnormal situation can be repaired.
In S1053, the terminal device is controlled to move to a preset reset position in response to the position loss instruction.
In this embodiment, at least one reset position may be configured in the current scene of the terminal device. A guiding signal may be configured at the reset position, and the terminal device can move toward it based on the guiding signal, so that its position can be recovered. The guiding signal may be a WiFi signal, a Bluetooth signal or the like; exploiting the strong correlation between signal strength and position, the terminal device can move to the reset position.
In this embodiment of the application, when continuous pose recognition anomalies are detected for the pose data of the terminal device, the terminal device is deemed to be in a position-lost state and is moved to the reset position, so that the abnormal situation of position loss can be repaired automatically, improving robustness against such anomalies.
Optionally, the generating a pose recognition result based on the confidence score of the pose data may further include:
s1054, if the pose recognition results corresponding to the continuous Q acquisition periods are pose recognition anomalies, updating the regional point cloud map based on the laser point cloud data acquired by the acquisition periods; and Q is smaller than a preset abnormal response threshold.
In this embodiment, if the terminal device detects that the pose recognition results of Q consecutive pose data are pose recognition anomalies but the number Q of anomalies does not reach the abnormal response threshold, then after those Q acquisition periods the terminal device is again able to recognize its pose accurately, i.e., it has not lost its current position. The anomalies were probably caused by a large environmental change, such as furniture being moved, added or removed. The terminal device may therefore update the corresponding regional point cloud map based on the laser point cloud data collected in those periods, so that the regional point cloud map matches the scene after the obstacles changed.
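A minimal sketch of the per-period response logic across S1051 to S1054, tracking consecutive anomalies against the abnormal response threshold; the threshold value and the action names are assumptions for illustration.

```python
from enum import Enum, auto

class PoseResult(Enum):
    NORMAL = auto()
    DEVIATION = auto()
    ANOMALY = auto()

ABNORMAL_RESPONSE_THRESHOLD = 3  # assumed value of the preset threshold

def respond(result: PoseResult, consecutive_anomalies: int):
    """Return (action, updated consecutive-anomaly count) for one period."""
    if result is PoseResult.ANOMALY:
        consecutive_anomalies += 1
        if consecutive_anomalies >= ABNORMAL_RESPONSE_THRESHOLD:
            # M case (S1052/S1053): position lost, move to the reset position
            return "generate position-loss instruction", 0
        return "wait for next period", consecutive_anomalies
    if 0 < consecutive_anomalies < ABNORMAL_RESPONSE_THRESHOLD:
        # Q case (S1054): pose recognized accurately again after a short
        # anomalous run, so treat it as an environment change and refresh the map
        return "update regional point cloud map", 0
    if result is PoseResult.DEVIATION:
        # S1051: small deviation, adjust the attitude or update the pose data
        return "adjust pose or update pose data", 0
    return "keep travelling on the preset track", 0
```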
In this embodiment of the application, the regional point cloud map can be updated based on the laser point cloud data when the number of abnormal pose recognition results is small, achieving real-time updating of the regional point cloud map and improving the adaptability of the terminal device to changeable environments.
Fig. 6 shows a flowchart of a specific implementation of S102 of a pose recognition method according to a fifth embodiment of the present application. Referring to fig. 6, compared with the embodiments of any one of figs. 1 to 4, in the pose recognition method provided in this embodiment S102 includes S1021 to S1023, which are described in detail below:

further, the attitude data include an attitude angle of the terminal device; the acquiring the regional point cloud map centered on the position data includes:
in S1021, a center point corresponding to the location data is determined from a preset global point cloud map.
In this embodiment, the terminal device stores a global point cloud map in advance, in which each point may be associated with a preset position. The terminal device compares the currently identified position data with the preset positions associated with the points in the global point cloud map, and takes the point whose preset position matches the position data as the center point.
In S1022, the effective recognition area is determined by using the center point as the center, the attitude angle as the initial angle, and the preset angular resolution as the radius.
In this embodiment, the terminal device takes the area within the circle whose center is the center point, whose starting angle is the attitude angle identified in the attitude data, and whose radius is the preset angular resolution, and identifies this area as the effective recognition area.
In S1023, the point cloud map corresponding to the effective recognition area is intercepted from the global point cloud map as the regional point cloud map.
In this embodiment, the terminal device may extract from the global point cloud map the point cloud data corresponding to the effective recognition area and use them as the regional point cloud map.
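A minimal sketch of S1021 to S1023, assuming the global point cloud map is an (N, 2) array of planar points and that the preset resolution parameter acts as the radius of the effective recognition area, as the text describes.

```python
import numpy as np

def regional_point_cloud_map(global_map: np.ndarray,
                             position: np.ndarray,
                             radius: float) -> np.ndarray:
    """S1021: take the global-map point nearest the identified position data
    as the center point; S1022/S1023: keep every point of the global map
    inside the effective recognition radius around that center."""
    center = global_map[np.argmin(np.linalg.norm(global_map - position, axis=1))]
    within = np.linalg.norm(global_map - center, axis=1) <= radius
    return global_map[within]
```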
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Fig. 7 is a block diagram of a pose recognition device according to an embodiment of the present invention; the device includes units for executing the steps of the embodiment corresponding to fig. 1. Please refer to fig. 1 and the related description of that embodiment. For convenience of explanation, only the portions related to this embodiment are shown.
Referring to fig. 7, the pose recognition device includes:
a pose data acquisition unit 71, configured to acquire pose data to be identified of the terminal device; the pose data comprise position data and attitude angles;
a regional point cloud map acquisition unit 72 for acquiring a regional point cloud map centered on the position data;
the laser point cloud data acquisition unit 73 is used for acquiring corresponding laser point cloud data under the actual pose of the terminal equipment through a built-in laser radar;
a confidence score determining unit 74, configured to determine a confidence score corresponding to each pose data according to the laser point cloud data and the regional point cloud map;
a pose recognition result generation unit 75 for generating a pose recognition result based on the confidence score of the pose data.
Optionally, the confidence score determination unit 74 includes:
the matching degree calculation unit is used for importing the laser point cloud data and the regional point cloud map into a preset point cloud matching degree algorithm and calculating the matching degree between the laser point cloud data and the regional point cloud map;
an evaluation function importing unit, configured to import the matching degree into a preset evaluation function to obtain the confidence score corresponding to the pose data; the evaluation function maps the matching degree to the confidence score,

bel(x) = f[score(p)]

wherein score(p) is the matching degree, f[·] is the evaluation function, and bel(x) is the confidence score.
Optionally, the matching degree calculating unit includes:
the characteristic point determining unit is used for searching second characteristic points corresponding to any N first characteristic points in the laser point cloud data in the regional point cloud map, and generating a point cloud conversion matrix based on the N first characteristic points and the N second characteristic points; the N is a positive integer greater than or equal to 3;
the laser conversion matrix generation unit is used for generating a laser conversion matrix corresponding to the laser point cloud data based on the point cloud conversion matrix;
the deviation distance calculation unit is used for calculating the deviation distance between the first characteristic point in the laser conversion matrix and a second characteristic point corresponding to the first characteristic point in the regional point cloud map;
and the deviation distance conversion unit is used for obtaining the matching degree based on all the deviation distances.
Optionally, the pose data obtaining unit is specifically configured to obtain the pose data with a preset acquisition period in a moving process of the terminal device;
correspondingly, the pose recognition result generating unit 75 includes:
And the pose deviation response unit is used for adjusting the pose of the terminal equipment or updating the pose data of the acquisition period based on the laser point cloud data and the regional point cloud map if the pose recognition result corresponding to any acquisition period is the pose deviation.
Optionally, the pose recognition result generating unit 75 further includes:
a position loss instruction response unit, configured to generate a position loss instruction if the pose recognition results corresponding to M consecutive acquisition periods are all pose recognition anomalies, where M is greater than or equal to a preset abnormal response threshold;
a position loss repair unit, configured to respond to the position loss instruction by controlling the terminal device to move to a preset reset position.
Optionally, the pose recognition result generating unit 75 further includes:
a map updating unit, configured to update the regional point cloud map based on the laser point cloud data acquired in those acquisition periods if the pose recognition results corresponding to Q consecutive acquisition periods are all pose recognition anomalies, where Q is smaller than the preset abnormal response threshold.
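The two abnormal-response branches above reduce to a counter over consecutive abnormal acquisition periods: the smaller bound Q triggers a map update, while the larger bound M triggers the position loss instruction. A hedged control-flow sketch (the handlers are print stubs standing in for the real map updating and motion-control units):

```python
def update_regional_map():
    print("updating regional point cloud map from this period's laser data")

def move_to_reset_position():
    print("position loss: moving terminal device to preset reset position")

def handle_results(results, q: int, m: int):
    """Count consecutive abnormal pose recognition results; assumes q < m,
    with m at or above the abnormal response threshold."""
    abnormal_run = 0
    for result in results:          # one result per acquisition period
        if result == "abnormal":
            abnormal_run += 1
            if abnormal_run == q:   # Q consecutive anomalies: refresh the map
                update_regional_map()
            if abnormal_run == m:   # M consecutive anomalies: declare position loss
                move_to_reset_position()
                abnormal_run = 0
        else:
            abnormal_run = 0

handle_results(["ok", "abnormal", "abnormal", "abnormal", "ok"], q=2, m=3)
```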
Optionally, the pose data include an attitude angle of the terminal device; the regional point cloud map acquisition unit 72 includes:
a center point determining unit, configured to determine a center point corresponding to the position data from a preset global point cloud map;
an effective recognition area determining unit, configured to determine an effective recognition area by taking the center point as the circle center, the attitude angle as the starting angle, and a preset angle resolution as the radius;
a regional point cloud map generation unit, configured to intercept, from the global point cloud map, the point cloud map corresponding to the effective recognition area as the regional point cloud map.
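A minimal sketch of the interception step, assuming the global map is an N×3 array and using a plain metric radius around the center point (the source's reuse of a "preset angle resolution" as the radius is kept verbatim above; the attitude-angle seeding of the effective recognition area is omitted here):

```python
import numpy as np

def crop_regional_map(global_map: np.ndarray, center: np.ndarray,
                      radius: float) -> np.ndarray:
    """Intercept the regional point cloud map: keep the global-map points
    whose planar distance to the center point is within the radius of the
    effective recognition area."""
    planar_dist = np.linalg.norm(global_map[:, :2] - center[:2], axis=1)
    return global_map[planar_dist <= radius]
```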
It can be seen that the pose recognition device provided by the embodiment of the invention likewise acquires the pose data to be identified of the terminal device whenever pose recognition is required, and then determines the regional point cloud map corresponding to the position data in that pose data. Meanwhile, when the terminal device acquires the pose data, it collects the laser point cloud data corresponding to the current pose through the built-in laser radar. By comparing the regional point cloud map with the laser point cloud data, the confidence score corresponding to the current pose data is determined, and a pose recognition result is obtained from that confidence score, indicating how reliable the currently identified pose data are. When the reliability is high, a normal response operation can be executed based on the identified pose data, for example controlling the terminal device to run on a preset track; when the reliability is low, an abnormal response operation can be executed, for example re-identifying the pose data of the terminal device or updating the map. This greatly improves the robustness of the terminal device in scenes where the environment changes easily.
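Putting the units together, one acquisition period could look like the sketch below, which reuses crop_regional_map, matching_degree, and evaluation_function from the earlier sketches; the nearest-neighbour correspondence search and the 0.5 acceptance threshold are stand-ins, not the patent's method:

```python
import numpy as np

def match_feature_points(lidar_scan: np.ndarray, regional_map: np.ndarray, n: int = 3):
    """Stub correspondence search: nearest regional-map point for each of the
    first n scan points; a real system would use feature descriptors."""
    first = lidar_scan[:n]
    idx = [int(np.argmin(np.linalg.norm(regional_map - p, axis=1))) for p in first]
    return first, regional_map[idx]

def recognize_pose(position: np.ndarray, global_map: np.ndarray,
                   lidar_scan: np.ndarray) -> str:
    """One acquisition period end to end (radius and threshold are assumptions)."""
    regional_map = crop_regional_map(global_map, position, radius=5.0)
    first_pts, second_pts = match_feature_points(lidar_scan, regional_map)
    bel_x = evaluation_function(matching_degree(first_pts, second_pts))
    return "pose normal" if bel_x >= 0.5 else "pose recognition abnormal"
```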
Fig. 8 is a schematic diagram of a terminal device according to another embodiment of the present invention. As shown in fig. 8, the terminal device 8 of this embodiment includes: a processor 80, a memory 81 and a computer program 82 stored in the memory 81 and executable on the processor 80, such as a pose recognition program. The processor 80, when executing the computer program 82, implements the steps in the above-described respective pose recognition method embodiments, such as S101 to S105 shown in fig. 1. Alternatively, the processor 80, when executing the computer program 82, performs the functions of the units in the above-described device embodiments, such as the functions of the modules 71 to 75 shown in fig. 7.
By way of example, the computer program 82 may be divided into one or more units, which are stored in the memory 81 and executed by the processor 80 to implement the present invention. The one or more units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution of the computer program 82 in the terminal device 8. For example, the computer program 82 may be divided into a pose data acquisition unit, a regional point cloud map acquisition unit, a laser point cloud data acquisition unit, a confidence score determination unit, and a pose recognition result generation unit, the specific functions of each unit being as described above.
The terminal device may include, but is not limited to, the processor 80 and the memory 81. It will be appreciated by those skilled in the art that fig. 8 is merely an example of the terminal device 8 and does not constitute a limitation of it; the terminal device may include more or fewer components than illustrated, combine certain components, or use different components. For example, it may further include input-output devices, network access devices, buses, and the like.
The processor 80 may be a central processing unit (Central Processing Unit, CPU), another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 81 may be an internal storage unit of the terminal device 8, such as a hard disk or a memory of the terminal device 8. The memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the terminal device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used for storing the computer program as well as other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included in the scope of the present invention.

Claims (8)

1. A pose recognition method applied to a terminal device, characterized by comprising the following steps:
acquiring pose data to be identified of the terminal device; the pose data comprise position data and attitude data;
acquiring a regional point cloud map centered on the position data;
acquiring, through a built-in laser radar, the laser point cloud data corresponding to the actual pose of the terminal device;
determining confidence scores corresponding to the pose data according to the laser point cloud data and the regional point cloud map;
generating a pose recognition result based on the confidence score of the pose data;
the determining confidence scores corresponding to the pose data according to the laser point cloud data and the regional point cloud map comprises:
importing the laser point cloud data and the regional point cloud map into a preset point cloud matching degree algorithm, and calculating the matching degree between the laser point cloud data and the regional point cloud map;
importing the matching degree into a preset evaluation function to obtain a confidence score corresponding to the attitude data; the evaluation function is specifically:

bel(x) = f[score(p)]

wherein score(p) is the matching degree; f[score(p)] is the evaluation function; bel(x) is the confidence score;
the importing the laser point cloud data and the regional point cloud map into a preset point cloud matching degree algorithm and calculating the matching degree between the laser point cloud data and the regional point cloud map comprises:
searching, in the regional point cloud map, for second characteristic points corresponding to any N first characteristic points in the laser point cloud data, and generating a point cloud conversion matrix based on the N first characteristic points and the N second characteristic points, wherein N is a positive integer greater than or equal to 3;
generating a laser conversion matrix corresponding to the laser point cloud data based on the point cloud conversion matrix;
calculating the deviation distances between the first characteristic points transformed by the laser conversion matrix and the corresponding second characteristic points in the regional point cloud map;
and obtaining the matching degree based on all the deviation distances.
2. The identification method according to claim 1, wherein the acquiring pose data to be identified of the terminal device comprises:
acquiring the pose data at a preset acquisition period during movement of the terminal device;
correspondingly, generating a pose recognition result based on the confidence score of the pose data comprises:
if the pose recognition result corresponding to any acquisition period is a pose deviation, adjusting the pose of the terminal device or updating the pose data of the acquisition period based on the laser point cloud data and the regional point cloud map.
3. The method of claim 2, wherein the generating a pose recognition result based on the confidence score of the pose data further comprises:
if the pose recognition results corresponding to M consecutive acquisition periods are all pose recognition anomalies, generating a position loss instruction, wherein M is greater than or equal to a preset abnormal response threshold;
in response to the position loss instruction, controlling the terminal device to move to a preset reset position.
4. The method of claim 2, wherein the generating a pose recognition result based on the confidence score of the pose data further comprises:
if the pose recognition results corresponding to Q consecutive acquisition periods are all pose recognition anomalies, updating the regional point cloud map based on the laser point cloud data acquired in those acquisition periods, wherein Q is smaller than the preset abnormal response threshold.
5. The identification method according to any one of claims 1 to 4, wherein the pose data include an attitude angle of the terminal device, and the acquiring a regional point cloud map centered on the position data comprises:
determining a center point corresponding to the position data from a preset global point cloud map;
determining an effective recognition area by taking the center point as the circle center, the attitude angle as the initial angle, and a preset angle resolution as the radius;
and intercepting a point cloud map corresponding to the effective identification area from the global point cloud map as the area point cloud map.
6. A pose recognition device, characterized by comprising:
the pose data acquisition unit is used for acquiring pose data to be identified of the terminal device; the pose data comprise position data and attitude data;
the regional point cloud map acquisition unit is used for acquiring a regional point cloud map with the position data as a center;
the laser point cloud data acquisition unit is used for acquiring corresponding laser point cloud data under the actual pose of the terminal equipment through a built-in laser radar;
the confidence score determining unit is used for determining confidence scores corresponding to the pose data according to the laser point cloud data and the regional point cloud map;
a pose recognition result generating unit, configured to generate a pose recognition result based on the confidence score of the pose data;
the confidence score determination unit includes:
the matching degree calculation unit is used for importing the laser point cloud data and the regional point cloud map into a preset point cloud matching degree algorithm and calculating the matching degree between the laser point cloud data and the regional point cloud map;
an evaluation function importing unit, configured to import the matching degree into a preset evaluation function, so as to obtain a confidence score corresponding to the attitude data; the evaluation function is specifically:

bel(x) = f[score(p)]

wherein score(p) is the matching degree; f[score(p)] is the evaluation function; bel(x) is the confidence score;
the matching degree calculation unit includes:
the characteristic point determining unit is used for searching, in the regional point cloud map, for second characteristic points corresponding to any N first characteristic points in the laser point cloud data, and generating a point cloud conversion matrix based on the N first characteristic points and the N second characteristic points, wherein N is a positive integer greater than or equal to 3;
the laser conversion matrix generation unit is used for generating a laser conversion matrix corresponding to the laser point cloud data based on the point cloud conversion matrix;
the deviation distance calculation unit is used for calculating the deviation distances between the first characteristic points transformed by the laser conversion matrix and the corresponding second characteristic points in the regional point cloud map;
and the deviation distance conversion unit is used for obtaining the matching degree based on all the deviation distances.
7. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
8. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
CN202110404780.6A 2021-04-15 2021-04-15 Pose recognition method and terminal equipment Active CN113112478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110404780.6A CN113112478B (en) 2021-04-15 2021-04-15 Pose recognition method and terminal equipment

Publications (2)

Publication Number Publication Date
CN113112478A CN113112478A (en) 2021-07-13
CN113112478B true CN113112478B (en) 2023-12-15

Family

ID=76717131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110404780.6A Active CN113112478B (en) 2021-04-15 2021-04-15 Pose recognition method and terminal equipment

Country Status (1)

Country Link
CN (1) CN113112478B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116664684B (en) * 2022-12-13 2024-04-05 荣耀终端有限公司 Positioning method, electronic device and computer readable storage medium
CN118314531B (en) * 2024-06-07 2024-08-30 浙江聿力科技有限公司 Government service behavior pose monitoring management method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108732584A (en) * 2017-04-17 2018-11-02 百度在线网络技术(北京)有限公司 Method and apparatus for updating map
CN110561423A (en) * 2019-08-16 2019-12-13 深圳优地科技有限公司 pose transformation method, robot and storage medium
CN111076733A (en) * 2019-12-10 2020-04-28 亿嘉和科技股份有限公司 Robot indoor map building method and system based on vision and laser slam
CN111735439A (en) * 2019-03-22 2020-10-02 北京京东尚科信息技术有限公司 Map construction method, map construction device and computer-readable storage medium
CN112414403A (en) * 2021-01-25 2021-02-26 湖南北斗微芯数据科技有限公司 Robot positioning and attitude determining method, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064506B (en) * 2018-07-04 2020-03-13 百度在线网络技术(北京)有限公司 High-precision map generation method and device and storage medium

Also Published As

Publication number Publication date
CN113112478A (en) 2021-07-13

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant