
CN109035336A - Method for detecting position, device, equipment and storage medium based on image - Google Patents


Info

Publication number
CN109035336A
CN109035336A
Authority
CN
China
Prior art keywords
image
target organism
capture apparatus
key point
organism
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810719611.XA
Other languages
Chinese (zh)
Other versions
CN109035336B (en)
Inventor
迟至真
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810719611.XA priority Critical patent/CN109035336B/en
Publication of CN109035336A publication Critical patent/CN109035336A/en
Application granted granted Critical
Publication of CN109035336B publication Critical patent/CN109035336B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application provide an image-based position detection method, apparatus, device, and storage medium. Images captured at the same moment by multiple capture devices mounted at different orientations are obtained, the multiple capture devices being time-synchronized; the key points of a target organism and the body and head of the target organism are detected in the images captured by the multiple capture devices, and the first region position where the target organism is located in the image captured by each device is determined on the basis of the detection results; the actual three-dimensional position of the target organism is then determined on the basis of the first region position of the target organism in each device's image and the internal and external parameters of each device. Embodiments of the present application can improve the positioning accuracy of an organism's two-dimensional position in an image and of its three-dimensional position in the actual environment.

Description

Image-based position detection method, apparatus, device, and storage medium
Technical field
Embodiments of the present application relate to the field of artificial intelligence, and in particular to an image-based position detection method, apparatus, device, and storage medium.
Background technique
With the advance of social intelligence, the unattended supermarket, as a new retail format, has attracted wide attention. At present, however, the related technologies are not yet mature; in particular, how to judge a customer's position through multiple cameras and track that position continuously is a difficult problem.
The current solution mainly obtains a rectangular frame around the image region where a human body is located by means of human-body key-point detection, and locates and tracks the body through that frame. The accuracy of the frame depends heavily on the accuracy of the key points: once a key point is missed or falsely detected, the frame, and therefore the localization of the body, becomes inaccurate.
Summary of the invention
Embodiments of the present application provide an image-based position detection method, apparatus, device, and storage medium, so as to improve the positioning accuracy of an organism's two-dimensional position in an image and of its three-dimensional position in the actual environment.
A first aspect of the embodiments of the present application provides an image-based position detection method, comprising: obtaining images captured at the same moment by multiple capture devices mounted at different orientations, the multiple capture devices being time-synchronized; detecting, in the images captured by the multiple capture devices, the key points of a target organism and the body and head of the target organism, and determining, on the basis of the detection results, the first region position where the target organism is located in the image captured by each device; and determining the actual three-dimensional position of the target organism on the basis of the first region position of the target organism in the image captured by each device and the internal and external parameters of each device.
A second aspect of the embodiments of the present application provides an image-based position detection apparatus, comprising: an obtaining module, configured to obtain images captured at the same moment by multiple time-synchronized capture devices mounted at different orientations; a detection module, configured to detect, in the images captured by the multiple capture devices, the key points of a target organism and the body and head of the target organism, and to determine, on the basis of the detection results, the first region position where the target organism is located in each device's image; and a first determining module, configured to determine the actual three-dimensional position of the target organism on the basis of the first region position of the target organism in each device's image and the internal and external parameters of each device.
A third aspect of the embodiments of the present application provides a computer device, comprising: one or more processors; and a storage apparatus for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in the first aspect above.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method described in the first aspect above.
On the basis of the above aspects, embodiments of the present application obtain images captured at the same moment by multiple capture devices mounted at different orientations, detect in those images the key points of a target organism as well as its body and head, and determine from the detection results the first region position where the target organism is located in each image, so that the actual three-dimensional position of the target organism can be determined from the first region positions and the internal and external parameters of each capture device. Because the first region position of the target organism in each image is determined from both the key-point detection result and the body-and-head detection result, the inaccurate localization of the first region position caused by missed or false key-point detections, as well as the false detections caused by the small size of the head, can be avoided, thereby improving the positioning accuracy of the organism's two-dimensional position in the image and of its three-dimensional position in the actual environment.
It should be appreciated that the content described in this Summary is not intended to limit the key or important features of the embodiments of the present application, nor to limit the scope of the present application. Other features of the present application will become readily understood from the following description.
Detailed description of the invention
Fig. 1 is a flowchart of an image-based position detection method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of a method for determining a first region position provided by an embodiment of the present application;
Fig. 3 is a flowchart of one way of performing step S12 provided by an embodiment of the present application;
Fig. 4a is a schematic diagram of an image captured by a capture device, provided by an embodiment of the present application;
Fig. 4b is a schematic diagram of the result of head and body detection, provided by an embodiment of the present application;
Fig. 4c is a schematic diagram of the result of key-point detection performed on the basis of Fig. 4b;
Fig. 5 is a flowchart of another way of performing step S12 provided by an embodiment of the present application;
Fig. 6a is a schematic diagram of the distribution region of a target organism in an image, obtained by detection with a preset key-point detection model;
Fig. 6b is a schematic diagram of the head region and body region of a target organism detected with a preset head-and-body detection model;
Fig. 6c is a schematic diagram of the region of the target organism determined on the basis of Fig. 6a and Fig. 6b;
Fig. 7 is a schematic structural diagram of an image-based position detection apparatus provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of one form of the detection module 72 provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of another form of the detection module 72 provided by an embodiment of the present application.
Specific embodiment
Embodiments of the present application are described more fully below with reference to the accompanying drawings. Although certain embodiments of the present application are shown in the drawings, it should be understood that the present application may be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided so that the present application will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present application are for illustration only and are not intended to limit its scope of protection.
The terms "first", "second", "third", "fourth", and so on (if present) in the specification, claims, and drawings of the embodiments of the present application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data so used are interchangeable under appropriate circumstances, so that, for example, the embodiments described here can be implemented in orders other than those illustrated or described. Moreover, the terms "comprise" and "have" and any variations of them are intended to cover a non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such a process, method, product, or device.
In new retail scenarios such as the unattended supermarket, judging a customer's position through multiple cameras and tracking it continuously is a technical difficulty. To associate a customer with the goods the customer takes during the whole shopping process, the customer's position and motion trajectory must be obtained continuously. At present, the method for judging the position of a human body in an image is mainly to obtain, on the basis of human-body key-point detection, a rectangular frame representing the region where the body is located, and to locate and track the body according to that frame. Because the determination of the frame depends heavily on the detection accuracy of the key points, a false or missed key-point detection easily makes the frame inaccurate and thus leads to inaccurate localization of the body.
In view of the above problems in the prior art, embodiments of the present application provide an image-based position detection method that performs both key-point detection and body-and-head detection of a target organism in the images captured by multiple capture devices, combines the two detection results to determine the region of the target organism in each image, and then determines the actual three-dimensional position of the target organism according to those regions and the internal and external parameters of each capture device. This avoids the inaccurate localization of the organism in the image caused by missed or false key-point detections, as well as the false detections caused by the small size of the head, and improves the positioning accuracy of the organism's two-dimensional position in the image and of its three-dimensional position in the actual environment.
The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of an image-based position detection method provided by an embodiment of the present application. The method may be executed by an image-based position detection apparatus (hereinafter, the position detection apparatus). Referring to Fig. 1, the method includes steps S11-S13:
S11: obtain images captured at the same moment by multiple capture devices mounted at different orientations, the multiple capture devices being time-synchronized.
The multiple capture devices in this embodiment may be aimed at the same calibration object or at different calibration objects, and the position, direction, and shooting angle of each device may be set as needed. In addition, the devices may be time-synchronized by reading a network time, or by receiving a synchronization signal sent by a specific device; this embodiment imposes no specific limitation.
S12: detect, in the images captured by the multiple capture devices, the key points of a target organism and the body and head of the target organism, and determine, on the basis of the detection results, the first region position where the target organism is located in the image captured by each device.
The target organism in this embodiment may be a human body or another organism.
The name "first region position" in this embodiment is used only to distinguish the region position of the organism in the image from other positions, and carries no other meaning. When determining the first region position of the target organism in each image, the key points of the target organism may be detected with a preset first detection model, and the body and head may be detected with a preset second detection model. Preferably, both the first model and the second model are pre-trained neural network models. A key point may be any point on the organism, for example a point on a hand, an arm, or a leg, but is not limited to points at these locations.
In addition, in this embodiment the key-point detection and the body-and-head detection may be performed simultaneously or in a preset order; no specific limitation is imposed. For example, in one possible manner, body and head detection may first be performed in each image to obtain the head position and body position of the target organism, from which an approximate overall region of the target organism in each image is determined; key-point detection is then performed within that approximate region, and the first region position of the target organism in each image is determined from the two detection results. This manner not only avoids the inaccurate localization caused by missed or false key-point detections, but also reduces the computation of key-point detection and improves detection efficiency. In another possible manner, key-point detection and body-and-head detection may be performed in each image simultaneously, and the first region position of the target organism in each image is determined jointly from the distribution region of its key points and the regions of its body and head, which excludes the interference of false or missed key-point detections in determining the first region position. As an example, Fig. 2 is a schematic diagram of a method for determining a first region position provided by an embodiment of the present application. As shown in Fig. 2, in image 20, region 21 is the region where the head of the detected target organism is located, region 22 is the region where its body is located, and region 23 is the distribution region of the detected key points; the first region position 24 of the organism is then determined from regions 21, 22, and 23. This is, of course, only an illustration and not a unique limitation.
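The combination illustrated by Fig. 2 can be sketched as taking the union of the head box, the body box, and the box enclosing the detected key points. This is a minimal illustration of the idea, not the claimed implementation; all coordinates below are hypothetical:

```python
def bbox_union(*boxes):
    """Union of axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    xs0, ys0, xs1, ys1 = zip(*boxes)
    return (min(xs0), min(ys0), max(xs1), max(ys1))

def keypoint_bbox(keypoints):
    """Tightest box enclosing a set of (x, y) key points."""
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    return (min(xs), min(ys), max(xs), max(ys))

# Hypothetical detections in one camera's image (pixel coordinates):
head_box = (40, 10, 80, 50)              # analogue of region 21
body_box = (30, 50, 95, 200)             # analogue of region 22
kps = [(45, 60), (70, 120), (50, 180)]   # detected key points (region 23)

first_area = bbox_union(head_box, body_box, keypoint_bbox(kps))
print(first_area)  # (30, 10, 95, 200)
```

Combining three independent evidence sources this way means a single missed or false key point cannot, on its own, collapse or inflate the final region.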
S13: determine the actual three-dimensional position of the target organism on the basis of the first region position of the target organism in the image captured by each device and the internal and external parameters of each capture device.
The internal parameters of a capture device in this embodiment include, but are not limited to, focal length, field of view (FOV), and resolution. The external parameters of a capture device in this embodiment include, but are not limited to, coordinate position, direction, and pitch angle.
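For orientation only, the way internal parameters (focal length, principal point) and external parameters (rotation and translation) relate a real-world point to its pixel position can be sketched with a standard pinhole-camera model; the model choice and all parameter values here are assumptions, not taken from the application:

```python
def project(point_3d, fx, fy, cx, cy, R, t):
    """Project a world point into pixel coordinates.

    (fx, fy, cx, cy) are internal parameters; (R, t) are external
    parameters mapping world coordinates into camera coordinates."""
    X, Y, Z = point_3d
    # world -> camera coordinates
    xc = R[0][0] * X + R[0][1] * Y + R[0][2] * Z + t[0]
    yc = R[1][0] * X + R[1][1] * Y + R[1][2] * Z + t[1]
    zc = R[2][0] * X + R[2][1] * Y + R[2][2] * Z + t[2]
    # pinhole projection
    return (fx * xc / zc + cx, fy * yc / zc + cy)

# Hypothetical camera at the world origin looking down the +Z axis:
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u, v = project((0.0, 0.0, 2.0), 800, 800, 320, 240, I3, [0, 0, 0])
print(u, v)  # a point on the optical axis lands at the principal point: 320.0 240.0
```

Inverting this relation across several synchronized views is what allows image positions to be lifted to an actual three-dimensional position.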
When determining the actual three-dimensional position of the target organism on the basis of the first region positions of the target organism in the images captured by the capture devices and the internal and external parameters of each device, this embodiment allows several selectable methods:
In one possible manner, the actual three-dimensional position of the target organism may be determined on the basis of the vertex positions of the target organism's region in each image and the internal and external parameters of each device.
In another possible manner, the three-dimensional position of each key point may be determined on the basis of the position of that key point of the target organism in each image and the internal and external parameters of each capture device, and the three-dimensional position of the organism in real space may then be determined from the three-dimensional positions of the key points.
Those skilled in the art will of course understand that these two manners are cited merely to explain the technical solution of this embodiment clearly; they are the two most likely implementations, not all implementations.
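As one hedged sketch of the second manner above (recovering a key point's three-dimensional position from its pixel positions in two synchronized views), a ray can be back-projected from each camera and the midpoint of the rays' closest approach taken as the estimate. The camera geometry below is hypothetical and deliberately simplified (unit focal lengths, axis-aligned cameras); the application does not prescribe this particular algorithm:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach of two rays o + s*d: a simple
    two-view triangulation of a key point."""
    w0 = [o1[i] - o2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [o1[i] + s * d1[i] for i in range(3)]
    p2 = [o2[i] + t * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2 for i in range(3)]

# Two hypothetical cameras 1 m apart, both looking down +Z with unit focal
# length; the same key point appears at normalized pixel x = 0.25 in
# camera 1 and x = -0.25 in camera 2.
point = triangulate((0, 0, 0), (0.25, 0, 1), (1, 0, 0), (-0.25, 0, 1))
print(point)  # [0.5, 0.0, 2.0]
```

With noisy detections the two rays do not intersect exactly, which is why the midpoint (rather than an intersection) is used.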
Further, because scenarios such as the unattended supermarket are often concerned not only with the position of the organism but also with its motion trajectory and behavior, in order to meet the needs of practical applications, this embodiment may, after obtaining the current three-dimensional position of the target organism, also generate the motion trajectory of the target organism on the basis of its three-dimensional positions before the current moment, so as to better analyze its behavior. Alternatively, the three-dimensional position of each key point may be determined on the basis of the position of that key point of the target organism in each image and the internal and external parameters of each capture device, and the posture of the organism may be determined from the set of three-dimensional key-point positions, likewise for the purpose of better analyzing the organism's behavior.
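The trajectory generation described above amounts to accumulating the per-moment three-dimensional positions in time order; a minimal sketch, with hypothetical timestamps and positions:

```python
class Track:
    """Accumulates a target organism's 3-D position at each moment and
    exposes the resulting motion trajectory in time order."""

    def __init__(self):
        self._positions = {}  # timestamp -> (x, y, z)

    def update(self, timestamp, position_3d):
        self._positions[timestamp] = position_3d

    def trajectory(self):
        return [self._positions[t] for t in sorted(self._positions)]

track = Track()
track.update(2.0, (1.2, 0.1, 0.0))
track.update(1.0, (1.0, 0.0, 0.0))  # frames may arrive out of order
print(track.trajectory())  # [(1.0, 0.0, 0.0), (1.2, 0.1, 0.0)]
```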
In this embodiment, images captured at the same moment by multiple capture devices mounted at different orientations are obtained; the key points of a target organism and the body and head of the target organism are detected in those images; the first region position where the target organism is located in each device's image is determined on the basis of the detection results; and the actual three-dimensional position of the target organism is determined on the basis of the first region positions and the internal and external parameters of each capture device. Because this embodiment determines the first region position of the target organism in each image from both the key-point detection result and the body-and-head detection result, the inaccurate localization of the first region position caused by missed or false key-point detections, as well as the false detections caused by the small size of the head, can be avoided, thereby improving the positioning accuracy of the organism's two-dimensional position in the image and of its three-dimensional position in the actual environment.
The above embodiment is further optimized and extended below with reference to the accompanying drawings.
Fig. 3 is a flowchart of one way of performing step S12 provided by an embodiment of the present application. As shown in Fig. 3, on the basis of the embodiment of Fig. 1, the method includes steps S21-S23:
S21: detect the body and head of the target organism in the images captured by the multiple capture devices, and determine, on the basis of the region where the body of the target organism is located and the region where its head is located, the second region position where the target organism as a whole is located in each image.
S22: perform key-point detection within the second region position of each image.
S23: take the distribution position of the target organism's key points within the second region position of each image as the first region position of the target organism in that image.
Exemplary, Fig. 4 a is a kind of image schematic diagram that capture apparatus shooting obtains provided by the embodiments of the present application, image It include target organism 41 in 40.It is primarily based on preset neural network model to detect image 40, obtains target organism It is whole to obtain target organism 41 based on region 42 and region 43 for the region 43 where region 42 and body where 41 head of body Regional location (i.e. second area position) 44 where body, testing result is as shown in Figure 4 b.Further, in regional location 44 Middle carry out critical point detection obtains key point shown in Fig. 4 c, and the position for determining that key point is distributed in regional location 44 is First area position 45 where target organism 41.
Certain merely illustrative explanation of above-mentioned example rather than unique restriction of the invention.
In this embodiment, head detection and body detection are first performed in the image captured by the capture device; the approximate position of the whole target organism in the image is determined from its head position and body position; key-point detection is then performed within that approximate position; and the distribution position of the key points within the approximate position is taken as the first region position of the target organism in the image. This not only eliminates the adverse effect of false or missed key-point detections on locating the organism, but also reduces the computation of key-point detection and improves detection efficiency.
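The two-stage flow of steps S21-S23 can be sketched as: form the second region as the box enclosing the detected head and body, run key-point detection only inside that crop, and map the resulting key points back to full-image coordinates. A minimal illustration under hypothetical coordinates (the actual detectors are neural network models and are not reproduced here):

```python
def union_box(box_a, box_b):
    """Second region position (step S21): the box enclosing the detected
    head box and body box, each given as (x_min, y_min, x_max, y_max)."""
    return (min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
            max(box_a[2], box_b[2]), max(box_a[3], box_b[3]))

def map_to_image(kps_in_region, region):
    """Map key points detected inside the cropped second region (step S22)
    back to full-image pixel coordinates (step S23)."""
    x0, y0 = region[0], region[1]
    return [(x + x0, y + y0) for x, y in kps_in_region]

# Hypothetical detections in the spirit of Figs. 4a-4c:
head = (40, 10, 80, 50)
body = (30, 50, 95, 200)
region = union_box(head, body)
print(region)                                    # (30, 10, 95, 200)
print(map_to_image([(5, 5), (20, 100)], region)) # [(35, 15), (50, 110)]
```

Running the key-point detector only on the crop is what yields the reduced computation claimed for this manner.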
Fig. 5 is a flowchart of another way of performing step S12 provided by an embodiment of the present application. As shown in Fig. 5, on the basis of the embodiment of Fig. 1, the method includes steps S31-S33:
S31: detect the key points of the target organism in the images captured by the multiple capture devices, and determine the distribution region of the target organism's key points in each image.
S32: detect the body and head of the target organism in the images captured by the multiple capture devices, and determine the region positions where the head and body of the target organism are located in each image.
S33: for each image, correct the distribution region of the target organism's key points in the image on the basis of the region positions where the head and body of the target organism are located, obtaining the first region position.
As an example, suppose Fig. 6a is a schematic diagram of the distribution region of the target organism in an image obtained by detection with a preset key-point detection model, where region 61 is the distribution region of the key points, and Fig. 6b shows the head region and body region of the target organism detected with a preset head-and-body detection model, where region 62 is the region of the head in the image and region 63 is the region of the body. On the basis of Figs. 6a and 6b, the interference of falsely detected and missed key points can be excluded, yielding the region 64 of the target organism in the image as shown in Fig. 6c.
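The correction of step S33, as illustrated by Figs. 6a-6c, can be sketched as discarding key points that fall outside both the detected head region and the detected body region before forming the final box; all coordinates below are hypothetical:

```python
def inside(box, point):
    x0, y0, x1, y1 = box
    return x0 <= point[0] <= x1 and y0 <= point[1] <= y1

def corrected_region(keypoints, head_box, body_box):
    """Drop key points lying outside both detected regions (likely false
    detections), then return the tight box around the survivors."""
    kept = [p for p in keypoints
            if inside(head_box, p) or inside(body_box, p)]
    xs = [p[0] for p in kept]
    ys = [p[1] for p in kept]
    return (min(xs), min(ys), max(xs), max(ys)), kept

head = (40, 10, 80, 50)
body = (30, 50, 95, 200)
kps = [(45, 60), (70, 120), (300, 400)]  # (300, 400) is a false detection
box, kept = corrected_region(kps, head, body)
print(box)   # (45, 60, 70, 120)
print(kept)  # [(45, 60), (70, 120)]
```

The head and body boxes act as a plausibility gate, so a stray false key point no longer distorts the first region position.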
This is, of course, only an illustration and not a unique limitation of the present application.
This embodiment eliminates the adverse effect of false or missed key-point detections on locating the organism, and can also reduce the computation of key-point detection and improve detection efficiency.
Fig. 7 is a schematic structural diagram of an image-based position detection apparatus provided by an embodiment of the present application. As shown in Fig. 7, the apparatus includes:
an obtaining module 71, configured to obtain images captured at the same moment by multiple capture devices mounted at different orientations, the multiple capture devices being time-synchronized;
a detection module 72, configured to detect the key points of a target organism and the body and head of the target organism in the images captured by the multiple capture devices, and to determine, on the basis of the detection results, the first region position where the target organism is located in the image captured by each device; and
a first determining module 73, configured to determine the actual three-dimensional position of the target organism on the basis of the first region position of the target organism in the image captured by each device and the internal and external parameters of each device.
In one possible design, the first determining module 73 includes:
a second determining submodule, configured to determine the actual three-dimensional position of the target organism on the basis of the positions of the target organism's key points in the image captured by each capture device and the internal and external parameters of each device.
In another possible design, the apparatus further includes:
a generation module, configured to generate the motion trajectory of the target organism on the basis of the three-dimensional position of the target organism at the current moment and its three-dimensional positions at the moments before the current moment.
In another possible design, the apparatus further includes:
a second determining module, configured to, after the key points of the target organism are obtained by detection from the images captured by the multiple capture devices, determine the actual three-dimensional position of each key point on the target organism on the basis of the positions of the key points in the image captured by each device and the internal and external parameters of each device; and
a third determining module, configured to determine the posture of the target organism on the basis of the actual three-dimensional position of each key point on the target organism.
The apparatus provided in this embodiment can be used to execute the method of the embodiment of Fig. 1; its manner of execution and beneficial effects are similar and are not repeated here.
Fig. 8 is a schematic structural diagram of one form of the detection module 72 provided by an embodiment of the present application. As shown in Fig. 8, on the basis of the embodiment of Fig. 7, the detection module 72 includes:
a first detection submodule 721, configured to detect the body and head of the target organism in the images captured by the multiple capture devices, and to determine, on the basis of the region position where the body of the target organism is located and the region position where its head is located, the second region position where the target organism as a whole is located in each image;
a second detection submodule 722, configured to perform key-point detection within the second region position of each image; and
a first determining submodule 723, configured to take the distribution position of the target organism's key points within the second region position of each image as the first region position of the target organism in that image.
The apparatus provided in this embodiment can be used to execute the method of the embodiment of Fig. 3; its manner of execution and beneficial effects are similar and are not repeated here.
Fig. 9 is a structural schematic diagram of a detection module 72 provided by an embodiment of the present application. As shown in Fig. 9, on the basis of the embodiment of Fig. 7, the detection module 72 comprises:
a third detection sub-module 724, configured to detect the key points of the target organism in the images captured by the multiple capture apparatuses, and determine the distributed area of the key points of the target organism in each image;
a fourth detection sub-module 725, configured to detect the body and head of the target organism in the images captured by the multiple capture apparatuses, and determine the regional locations of the head and body of the target organism in each image;
a position correction sub-module 726, configured to, for each image, correct the distributed area of the key points of the target organism in the image based on the regional locations of the head and body of the target organism in the image, to obtain the first area position.
The device provided in this embodiment can be used to execute the method of the embodiment of Fig. 5; its execution manner and beneficial effects are similar and are not repeated here.
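One plausible reading of the Fig. 9 correction step — the patent does not fix the exact rule — is to discard key points that fall outside both the detected head and body regions (likely false detections) and recompute the distributed area from the survivors. A sketch under that assumption:

```python
def correct_keypoint_area(keypoints, head_box, body_box):
    """Correct the key-point distributed area using the head and body
    regions: keep only key points inside either region, then take the
    bounding box of the remaining points as the first area position."""
    def inside(pt, box):
        x, y = pt
        x1, y1, x2, y2 = box
        return x1 <= x <= x2 and y1 <= y <= y2

    kept = [p for p in keypoints
            if inside(p, head_box) or inside(p, body_box)]
    xs = [x for x, _ in kept]
    ys = [y for _, y in kept]
    return (min(xs), min(ys), max(xs), max(ys))
```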
An embodiment of the present application further provides a computer device, comprising: one or more processors; and a storage device for storing one or more programs, which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the above embodiments.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; the program, when executed by a processor, implements the method described in any of the above embodiments.
The functions described herein may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
Program code for implementing the disclosed methods may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be carried out. The program code may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Furthermore, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are contained in the above discussion, these should not be construed as limitations on the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single implementation. Conversely, various features described in the context of a single implementation may also be implemented in multiple implementations, separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.
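As a final illustration, the motion trajectory generation described in this application — accumulating the organism's per-moment 3D positions into a motion track — can be sketched as follows; the class name and the bounded-history design are assumptions, not from the patent.

```python
from collections import deque

class TrajectoryTracker:
    """Accumulates the target organism's 3D position at each moment into
    a motion track (a bounded history of recent positions)."""

    def __init__(self, max_len=1000):
        self.track = deque(maxlen=max_len)

    def update(self, position_3d):
        # Append the newly triangulated 3D position for this moment.
        self.track.append(tuple(position_3d))

    def trajectory(self):
        # Oldest-to-newest list of positions, i.e. the motion track.
        return list(self.track)
```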

Claims (14)

1. An image-based position detection method, characterized by comprising:
acquiring images captured at the same moment by multiple capture apparatuses erected at different orientations, wherein the times of the multiple capture apparatuses are synchronized;
detecting key points of a target organism and the body and head of the target organism in the images captured by the multiple capture apparatuses, and determining, based on the detection results, a first area position where the target organism is located in the image captured by each capture apparatus;
determining an actual three-dimensional position of the target organism based on the first area position of the target organism in the image captured by each capture apparatus and internal parameters and external parameters of each capture apparatus.
2. The method according to claim 1, wherein detecting the key points of the target organism and the body and head of the target organism in the images captured by the multiple capture apparatuses, and determining, based on the detection results, the first area position where the target organism is located in the image captured by each capture apparatus, comprises:
detecting the body and head of the target organism in the images captured by the multiple capture apparatuses, and determining, based on a regional location of the body of the target organism and a regional location of the head of the target organism, a second area position where the whole target organism is located in each image;
performing key point detection within the second area position of each image;
determining a distribution position of the key points of the target organism within the second area position of each image as the first area position of the target organism in each image.
3. The method according to claim 1, wherein detecting the key points of the target organism and the body and head of the target organism in the images captured by the multiple capture apparatuses, and determining, based on the detection results, the first area position where the target organism is located in the image captured by each capture apparatus, comprises:
detecting the key points of the target organism in the images captured by the multiple capture apparatuses, and determining a distributed area of the key points of the target organism in each image;
detecting the body and head of the target organism in the images captured by the multiple capture apparatuses, and determining regional locations of the head and body of the target organism in each image;
for each image, correcting the distributed area of the key points of the target organism in the image based on the regional locations of the head and body of the target organism in the image, to obtain the first area position.
4. The method according to claim 1, wherein determining the actual three-dimensional position of the target organism based on the first area position of the target organism in the image captured by each capture apparatus and the internal parameters and external parameters of each capture apparatus comprises:
determining the actual three-dimensional position of the target organism based on positions of the key points of the target organism in the image captured by each capture apparatus and the internal parameters and external parameters of each capture apparatus.
5. The method according to any one of claims 1-4, wherein after determining the actual three-dimensional position of the target organism based on the first area position of the target organism in the image captured by each capture apparatus and the internal parameters and external parameters of each capture apparatus, the method further comprises:
generating a motion trajectory of the target organism based on the three-dimensional position of the target organism at the moment and the three-dimensional positions of the target organism at each moment before the moment.
6. The method according to any one of claims 1-4, wherein the method further comprises:
after the key points of the target organism are detected in the images captured by the multiple capture apparatuses, determining an actual three-dimensional position of each key point on the target organism based on positions of the key points of the target organism in the image captured by each capture apparatus and the internal parameters and external parameters of each capture apparatus;
determining a posture of the target organism based on the actual three-dimensional position of each key point on the target organism.
7. An image-based position detection device, characterized by comprising:
an acquisition module, configured to acquire images captured at the same moment by multiple capture apparatuses erected at different orientations, wherein the times of the multiple capture apparatuses are synchronized;
a detection module, configured to detect key points of a target organism and the body and head of the target organism in the images captured by the multiple capture apparatuses, and determine, based on the detection results, a first area position where the target organism is located in the image captured by each capture apparatus;
a first determining module, configured to determine an actual three-dimensional position of the target organism based on the first area position of the target organism in the image captured by each capture apparatus and internal parameters and external parameters of each capture apparatus.
8. The device according to claim 7, wherein the detection module comprises:
a first detection sub-module, configured to detect the body and head of the target organism in the images captured by the multiple capture apparatuses, and determine, based on a regional location of the body of the target organism and a regional location of the head of the target organism, a second area position where the whole target organism is located in each image;
a second detection sub-module, configured to perform key point detection within the second area position of each image;
a first determining sub-module, configured to determine a distribution position of the key points of the target organism within the second area position of each image as the first area position of the target organism in each image.
9. The device according to claim 7, wherein the detection module comprises:
a third detection sub-module, configured to detect the key points of the target organism in the images captured by the multiple capture apparatuses, and determine a distributed area of the key points of the target organism in each image;
a fourth detection sub-module, configured to detect the body and head of the target organism in the images captured by the multiple capture apparatuses, and determine regional locations of the head and body of the target organism in each image;
a position correction sub-module, configured to, for each image, correct the distributed area of the key points of the target organism in the image based on the regional locations of the head and body of the target organism in the image, to obtain the first area position.
10. The device according to claim 7, wherein the first determining module comprises:
a second determining sub-module, configured to determine the actual three-dimensional position of the target organism based on positions of the key points of the target organism in the image captured by each capture apparatus and the internal parameters and external parameters of each capture apparatus.
11. The device according to any one of claims 7-10, wherein the device further comprises:
a generation module, configured to generate a motion trajectory of the target organism based on the three-dimensional position of the target organism at the moment and the three-dimensional positions of the target organism at each moment before the moment.
12. The device according to any one of claims 7-10, wherein the device further comprises:
a second determining module, configured to, after the key points of the target organism are detected in the images captured by the multiple capture apparatuses, determine an actual three-dimensional position of each key point on the target organism based on positions of the key points of the target organism in the image captured by each capture apparatus and the internal parameters and external parameters of each capture apparatus;
a third determining module, configured to determine a posture of the target organism based on the actual three-dimensional position of each key point on the target organism.
13. A computer device, characterized by comprising:
one or more processors;
a storage device for storing one or more programs, which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-6.
CN201810719611.XA 2018-07-03 2018-07-03 Image-based position detection method, device, equipment and storage medium Active CN109035336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810719611.XA CN109035336B (en) 2018-07-03 2018-07-03 Image-based position detection method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109035336A true CN109035336A (en) 2018-12-18
CN109035336B CN109035336B (en) 2020-10-09

Family

ID=65521507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810719611.XA Active CN109035336B (en) 2018-07-03 2018-07-03 Image-based position detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109035336B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040014955A1 (en) * 2001-12-17 2004-01-22 Carlos Zamudio Identification of essential genes of cryptococcus neoformans and methods of use
CN107292269A (en) * 2017-06-23 2017-10-24 中国科学院自动化研究所 Facial image false distinguishing method, storage, processing equipment based on perspective distortion characteristic
CN108053469A (en) * 2017-12-26 2018-05-18 清华大学 Complicated dynamic scene human body three-dimensional method for reconstructing and device under various visual angles camera


Also Published As

Publication number Publication date
CN109035336B (en) 2020-10-09

Similar Documents

Publication Publication Date Title
CN108986164A (en) Method for detecting position, device, equipment and storage medium based on image
US9152243B2 (en) Object tracking using background and foreground models
JP5924862B2 (en) Information processing apparatus, information processing method, and program
US9047507B2 (en) Upper-body skeleton extraction from depth maps
CN113449570A (en) Image processing method and device
Pirri et al. A general method for the point of regard estimation in 3D space
US20210209793A1 (en) Object tracking device, object tracking method, and object tracking program
JP2014509535A (en) Gaze point mapping method and apparatus
US11989928B2 (en) Image processing system
KR20170036747A (en) Method for tracking keypoints in a scene
US20120281918A1 (en) Method for dynamically setting environmental boundary in image and method for instantly determining human activity
JP2015014882A5 (en)
KR20180020123A (en) Asynchronous signal processing method
TW201317904A (en) Tag detecting system, apparatus and method for detecting tag thereof
Diete et al. A smart data annotation tool for multi-sensor activity recognition
Gouidis et al. Accurate hand keypoint localization on mobile devices
Ran et al. Applications of a simple characterization of human gait in surveillance
Dong et al. Vector detection network: An application study on robots reading analog meters in the wild
D'Agostini et al. An augmented reality virtual assistant to help mild cognitive impaired users in cooking a system able to recognize the user status and personalize the support
CN111681268B (en) Method, device, equipment and storage medium for detecting misidentification of optical mark point serial numbers
US20240119620A1 (en) Posture estimation apparatus, posture estimation method, and computer-readable recording medium
KR20200068709A (en) Human body identification methods, devices and storage media
CN109035336A (en) Method for detecting position, device, equipment and storage medium based on image
CN114147696A (en) Power grid inspection robot positioning system and method based on 5G and Beidou
CN113643788B (en) Method and system for determining feature points based on multiple image acquisition devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant