
CN117409044B - Intelligent object dynamic following method and device based on machine learning


Info

Publication number
CN117409044B
CN117409044B (application CN202311722469.1A)
Authority
CN
China
Prior art keywords
moving object
historical
determining
moving
image frame
Prior art date
Legal status
Active
Application number
CN202311722469.1A
Other languages
Chinese (zh)
Other versions
CN117409044A (en)
Inventor
王定华
Current Assignee
Shenzhen Kiosk Electronic Co ltd
Original Assignee
Shenzhen Kiosk Electronic Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Kiosk Electronic Co ltd filed Critical Shenzhen Kiosk Electronic Co ltd
Priority to CN202311722469.1A
Publication of CN117409044A
Application granted
Publication of CN117409044B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an intelligent object dynamic following method and device based on machine learning. The method first acquires a historical moving object image frame set and a historical moving object difference decision image frame set, and converts the historical moving object image frame set into a historical moving object position sequence. A variable movement prediction factor set is determined from the historical moving object position sequence, and an object movement deviation coefficient is determined from the variable movement prediction factor set and the moving object predicted position sequence. An object weight threshold point of the current moving object image frame is determined, and an object movement weight threshold difference is determined from the object weight threshold point and the current moving object image frame. A current moving object tracking decision domain is then determined from the object movement weight threshold difference and the object movement deviation coefficient, and the following monitoring equipment of the moving object is dynamically adjusted according to the current moving object tracking decision domain, reducing the tracking error between the following monitoring equipment and the moving object during intelligent object dynamic following.

Description

Intelligent object dynamic following method and device based on machine learning
Technical Field
The application relates to the technical field of dynamic following, in particular to an intelligent object dynamic following method and device based on machine learning.
Background
Dynamic following has been a research hotspot in computer vision, machine learning, unmanned aerial vehicles, robotics and related fields in recent years. Although target detection and tracking methods have made great progress in the prior art, real-time dynamic following of a specific target object remains challenging. Traditional target detection algorithms based on feature engineering require manually designed features, are limited by factors such as target appearance and image quality, and may fail to accurately identify a target in a complex environment. Target detection algorithms based on deep learning have stronger feature learning capability, can automatically learn features from data, and achieve higher recognition accuracy, but their higher computational complexity makes them less suitable for real-time applications.
Intelligent object dynamic following builds on target detection so that a system can automatically identify, track, and predict the movement and position of an object, person, or other target, enabling real-time monitoring and following of its dynamic behavior. However, the dynamic following process in the prior art is easily affected by factors such as target size and deformation, causing excessive tracking error between the following monitoring equipment and the moving object.
Disclosure of Invention
The application provides a machine learning-based intelligent object dynamic following method and device to address the technical problem of excessive tracking error between the following monitoring equipment and the moving object during intelligent object dynamic following.
In order to solve the technical problems, the application adopts the following technical scheme:
In a first aspect, the present application provides a machine learning-based intelligent object dynamic following method, including the steps of:
acquiring a historical moving object image frame set and a historical moving object difference decision image frame set in a machine learning database;
Converting the historical moving object image frame set into a historical moving object position sequence, determining a variable moving prediction factor set according to the historical moving object position sequence, and determining a moving object prediction position sequence corresponding to the historical moving object position sequence according to the variable moving prediction factor set;
determining an object movement deviation coefficient according to the historical movement object difference decision image frame set and the movement object prediction position sequence;
Acquiring a current moving object image frame, determining an object weight threshold point of the current moving object image frame, and determining an object moving weight threshold difference according to the object weight threshold point and the current moving object image frame;
determining a current moving object tracking decision domain according to the object moving weight threshold difference and the object moving deviation coefficient;
And dynamically adjusting the following monitoring equipment of the mobile object according to the current mobile object tracking decision domain.
In some embodiments, converting the set of historical moving object image frames into a sequence of historical moving object positions specifically includes:
determining a historical object position in each historical moving object image frame in the historical moving object image frame set;
and sorting all the historical object positions according to the shooting time of the corresponding historical moving object image frames to obtain a historical moving object position sequence.
In some embodiments, determining the object movement deviation coefficient from the historical moving object difference decision image frame set and the moving object predicted position sequence specifically comprises:
determining a moving object position in each moving object difference decision image frame;
sorting all the moving object positions according to the shooting time of the corresponding moving object difference decision image frames to obtain a moving object position sequence;
Obtaining the object movement deviation coefficient according to the moving object position sequence and the moving object predicted position sequence, wherein the object movement deviation coefficient is determined according to the following expression:

$$\beta = \frac{1}{n}\sum_{t=1}^{n}\left\lvert P_t - \hat{P}_t \right\rvert$$

where $\beta$ represents the object movement deviation coefficient, $P_t$ represents the moving object position at time $t$ in the moving object position sequence, $\hat{P}_t$ represents the moving object predicted position at time $t$ in the moving object predicted position sequence, and $n$ represents the total number of moving object positions in the moving object position sequence.
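Reading the expression above as a mean per-time-step Euclidean deviation between the actual and predicted position sequences (an assumption, since the original formula is reproduced from an image), the coefficient can be sketched in Python as:

```python
import math

def movement_deviation_coefficient(actual, predicted):
    """Mean Euclidean deviation between the moving object position
    sequence and the moving object predicted position sequence.
    Each sequence holds one (x, y) position per time step."""
    if not actual or len(actual) != len(predicted):
        raise ValueError("sequences must be non-empty and of equal length")
    total = sum(math.dist(p, q) for p, q in zip(actual, predicted))
    return total / len(actual)

# Hypothetical sequences: the predicted track drifts below the actual one.
positions = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
predictions = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)]
coefficient = movement_deviation_coefficient(positions, predictions)
```

A larger coefficient indicates that the predicted track drifts further from the observed one; here it evaluates to 2/3.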
In some embodiments, determining the object weight threshold point for the current moving object image frame specifically includes:
acquiring all the moving object color saturation groups in the moving object color saturation histogram of the current moving object image frame;
determining the total number of moving object color saturation groups in the moving object color saturation histogram;
determining the moving object distribution factor corresponding to each moving object color saturation group;
obtaining a moving object color saturation center value from all the moving object color saturation groups, the total number of moving object color saturation groups, and the moving object distribution factors corresponding to all the moving object color saturation groups;
and taking the pixel point in the current moving object image frame whose pixel value equals the moving object color saturation center value as the object weight threshold point.
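The patent does not give the center-value formula. One plausible reading, sketched below as an assumption, is a weighted average of the saturation group values with the distribution factors as weights, followed by selecting the pixel whose saturation is closest to that value (exact equality rarely occurs in practice); both function names are hypothetical:

```python
def saturation_center_value(groups, factors):
    """Weighted average of the color saturation group values,
    weighted by each group's distribution factor (assumed reading)."""
    if not groups or len(groups) != len(factors):
        raise ValueError("need one distribution factor per saturation group")
    return sum(g * f for g, f in zip(groups, factors)) / sum(factors)

def object_weight_threshold_point(saturation, center):
    """Pixel coordinate (x, y) whose saturation value is closest to
    the center value; `saturation` is a row-major 2-D list."""
    return min(
        ((x, y) for y, row in enumerate(saturation) for x in range(len(row))),
        key=lambda p: abs(saturation[p[1]][p[0]] - center),
    )

groups = [20.0, 40.0, 60.0]   # hypothetical saturation group values
factors = [1.0, 1.0, 2.0]     # hypothetical distribution factors
center = saturation_center_value(groups, factors)  # (20 + 40 + 120) / 4 = 45.0
```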
In some embodiments, determining the object movement weight threshold difference from the object weight threshold point and the current moving object image frame specifically includes:
acquiring the center point $O$ of the current moving object image frame;
determining the object weight threshold point $W$;
determining the object movement weight threshold difference according to the center point of the current moving object image frame and the object weight threshold point, wherein the object movement weight threshold difference is determined according to the following expression:

$$D = \sqrt{(x_W - x_O)^2 + (y_W - y_O)^2}$$

where $D$ represents the object movement weight threshold difference, $W$ represents the object weight threshold point, $O$ represents the center point of the current moving object image frame, $x_W$ and $y_W$ represent the positions of the object weight threshold point on the x-axis and y-axis, and $x_O$ and $y_O$ represent the positions of the center point of the current moving object image frame on the x-axis and y-axis.
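Assuming the threshold difference is the plain Euclidean distance the legend above describes, the computation is a one-liner:

```python
import math

def weight_threshold_difference(threshold_point, center_point):
    """Euclidean distance between the object weight threshold point
    and the center point of the current moving object image frame."""
    (xw, yw), (xc, yc) = threshold_point, center_point
    return math.hypot(xw - xc, yw - yc)

# Hypothetical points: threshold point offset from the frame center.
difference = weight_threshold_difference((7.0, 10.0), (4.0, 6.0))  # 5.0
```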
In some embodiments, determining the set of variable movement predictors from the sequence of historical moving object positions specifically includes:
Determining an autocorrelation prediction map of each historical moving object position in the historical moving object position sequence;
determining a variable motion predictor for each autocorrelation prediction map;
The set of all the variable motion predictors is taken as the variable motion predictor set.
In a second aspect, the present application provides a machine learning-based intelligent object dynamic following device, which includes a dynamic following unit, where the dynamic following unit includes:
an image frame set determining module, configured to acquire a historical moving object image frame set and a historical moving object difference decision image frame set in a machine learning database;
the mobile object prediction position sequence determining module is used for converting the historical mobile object image frame set into a historical mobile object position sequence, determining a variable mobile prediction factor set according to the historical mobile object position sequence, and determining a mobile object prediction position sequence corresponding to the historical mobile object position sequence according to the variable mobile prediction factor set;
the object movement deviation coefficient determining module is used for determining an object movement deviation coefficient according to the historical movement object difference decision image frame set and the movement object prediction position sequence;
the object movement weight threshold difference determining module is used for acquiring a current movement object image frame, determining an object weight threshold point of the current movement object image frame, and determining an object movement weight threshold difference according to the object weight threshold point and the current movement object image frame;
The mobile object tracking decision domain determining module is used for determining a current mobile object tracking decision domain according to the object movement weight threshold difference and the object movement deviation coefficient;
and the tracking adjustment module is used for dynamically adjusting the following monitoring equipment of the mobile object according to the current mobile object tracking decision domain.
In a third aspect, the present application provides a computer device, comprising a memory storing code and a processor configured to obtain the code and execute the above machine learning-based intelligent object dynamic following method.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the above-described intelligent object dynamic following method based on machine learning.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
In the intelligent object dynamic following method and device based on machine learning, a historical moving object image frame set and a historical moving object difference decision image frame set in a machine learning database are first acquired, and the historical moving object image frame set is converted into a historical moving object position sequence. A variable movement prediction factor set is determined from the historical moving object position sequence, and a moving object predicted position sequence corresponding to the historical moving object position sequence is determined from the variable movement prediction factor set. An object movement deviation coefficient is determined from the historical moving object difference decision image frame set and the moving object predicted position sequence. A current moving object image frame is acquired, an object weight threshold point of the current moving object image frame is determined, and an object movement weight threshold difference is determined from the object weight threshold point and the current moving object image frame. Finally, a current moving object tracking decision domain is determined from the object movement weight threshold difference and the object movement deviation coefficient, and the following monitoring equipment of the moving object is dynamically adjusted according to the current moving object tracking decision domain.
According to the scheme, a historical moving object position sequence is determined from the historical moving object image frame set, and a variable movement prediction factor set is determined from that sequence, which reduces the influence of the larger movement deviations in the historical moving object position sequence on the moving object predicted position and improves the accuracy of predicting the moving object position. The object movement deviation coefficient is determined from the variable movement prediction factor set and the historical moving object difference decision image frame set, improving the accuracy of determining the current moving object tracking decision domain. The object weight threshold point and the current moving object image frame are used to determine the object movement weight threshold difference, which highlights the distance the moving object has moved and reduces the error between the intelligent dynamic following monitoring equipment and the moving object. Finally, the current moving object tracking decision domain is determined from the object movement weight threshold difference and the object movement deviation coefficient, and the following monitoring equipment is dynamically adjusted according to this decision domain, which can reduce the tracking error between the following monitoring equipment and the moving object during intelligent object dynamic following.
Drawings
FIG. 1 is an exemplary flow chart of a machine learning-based intelligent object dynamic following method according to some embodiments of the application;
FIG. 2 is a schematic diagram of exemplary hardware and/or software of a dynamic following unit according to some embodiments of the application;
FIG. 3 is a schematic diagram of a computer device implementing a machine learning-based intelligent object dynamic following method according to some embodiments of the application.
Detailed Description
The method first acquires a historical moving object image frame set and a historical moving object difference decision image frame set in a machine learning database, and converts the historical moving object image frame set into a historical moving object position sequence. A variable movement prediction factor set is determined from the historical moving object position sequence, and a moving object predicted position sequence corresponding to the historical moving object position sequence is determined from the variable movement prediction factor set. An object movement deviation coefficient is determined from the historical moving object difference decision image frame set and the moving object predicted position sequence. A current moving object image frame is acquired, an object weight threshold point of the current moving object image frame is determined, and an object movement weight threshold difference is determined from the object weight threshold point and the current moving object image frame. A current moving object tracking decision domain is determined from the object movement weight threshold difference and the object movement deviation coefficient, and the following monitoring equipment of the moving object is dynamically adjusted according to the current moving object tracking decision domain, reducing the tracking error between the following monitoring equipment and the moving object during intelligent object dynamic following.
In order to better understand the above technical solutions, the following detailed description refers to the accompanying drawings and specific embodiments. Referring to FIG. 1, an exemplary flow chart of a machine learning-based intelligent object dynamic following method 100 according to some embodiments of the application, the method 100 generally includes the following steps:
In step 101, a historical moving object image frame set and a historical moving object difference decision image frame set in a machine learning database are acquired.
In a specific implementation, a historical moving object image frame set and a historical moving object difference decision image frame set in a machine learning database are acquired. The historical moving object image frame set is the set of all historical moving object image frames, and the historical moving object difference decision image frame set is the set of all historical moving object difference decision image frames.
In the present application, the historical moving object image frame set is used for predicting the position of the moving object; for example, images obtained by photographing the moving object from two days to thirty days before the current day may be used as the historical moving object image frames in the set. The historical moving object difference decision image frame set is used for determining the object movement deviation coefficient; in a specific implementation, images obtained by photographing the moving object on the previous day may be used as the historical moving object difference decision image frames in the set. In the present application, the frames in both sets are images obtained by photographing the moving object at a predetermined frequency within a predetermined time.
The above times and frequencies may be changed as needed and are not limited herein.
In step 102, the historical moving object image frame set is converted into a historical moving object position sequence, a variable moving prediction factor set is determined according to the historical moving object position sequence, and a moving object predicted position sequence corresponding to the historical moving object position sequence is determined according to the variable moving prediction factor set.
In some embodiments, the converting the historical moving object image frame set into the historical moving object position sequence may specifically be in the following manner:
determining a historical object position in each historical moving object image frame in the historical moving object image frame set;
and sorting all the historical object positions according to the shooting time of the corresponding historical moving object image frames to obtain a historical moving object position sequence.
In a specific implementation, the historical object position in each historical moving object image frame may be determined according to a motion segmentation method, by target detection, or by other means, without limitation. The historical object position refers to the center position of the moving object in a historical moving object image frame.
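As a minimal sketch of this step, assuming each historical image frame record carries a capture timestamp and a detected bounding box (a hypothetical input format; the patent leaves the detector unspecified), the position sequence can be built from the box centers:

```python
def to_position_sequence(frames):
    """frames: iterable of (timestamp, (x_min, y_min, x_max, y_max))
    records, one per historical moving object image frame.
    Returns the box center positions sorted by capture time."""
    centers = [
        (ts, ((x0 + x1) / 2.0, (y0 + y1) / 2.0))
        for ts, (x0, y0, x1, y1) in frames
    ]
    centers.sort(key=lambda item: item[0])  # order by shooting time
    return [pos for _, pos in centers]

# Hypothetical frames, deliberately out of capture order.
frames = [
    (2, (4.0, 4.0, 6.0, 6.0)),
    (1, (0.0, 0.0, 2.0, 2.0)),
    (3, (8.0, 8.0, 10.0, 10.0)),
]
sequence = to_position_sequence(frames)
```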
In some embodiments, the determining the set of variable motion predictors from the sequence of historical moving object positions may specifically be performed by:
Determining an autocorrelation prediction map of each historical moving object position in the historical moving object position sequence;
determining a variable motion predictor for each autocorrelation prediction map;
The set of all the variable motion predictors is taken as the variable motion predictor set.
In a specific implementation, an autocorrelation prediction graph is determined for each historical moving object position in the historical moving object position sequence. The graph takes the index of the historical moving object positions as the abscissa, increasing from zero to the total number of historical moving object positions in the sequence, and takes as the ordinate the change degree between each historical moving object position at its corresponding time point and the preceding historical moving object position. A coordinate graph is established from this abscissa and ordinate, and the corresponding values of all historical moving object positions in the sequence are plotted into it to obtain the autocorrelation prediction graph. The graph may be determined with the StatsModels library in Python, or with other software packages, without limitation. A variable movement prediction factor is then determined for each autocorrelation prediction graph: when an oblique cutoff point appears in the graph, the abscissa corresponding to that cutoff point is taken as the variable movement prediction factor.
It should be noted that, in the present application, the oblique cutoff point is the coordinate point at which the slope of the change degree curve in the autocorrelation prediction graph falls to or below a preset slope. The change-degree slope is the ratio between two coordinate points selected on the curve: the difference of their abscissas divided by the difference of their ordinates. This slope is compared with the preset slope, and when the change-degree slope is lower than or equal to the preset slope, the coordinate point with the smaller abscissa of the two is taken as the oblique cutoff point. The preset slope is 0.1 in the present application; in specific embodiments it may be chosen according to specific requirements, without limitation. The change-degree slope may be determined according to the following formula:

$$k = \frac{x_1 - x_2}{y_1 - y_2}$$

where $k$ represents the change-degree slope, $x_1$ and $x_2$ represent the coordinate values of coordinate point 1 and coordinate point 2 on the x-axis, and $y_1$ and $y_2$ represent the coordinate values of coordinate point 1 and coordinate point 2 on the y-axis.
The change degree is the position difference between one historical moving object position in the historical moving object position sequence and the preceding historical moving object position; the above step is repeated to calculate the position differences of the remaining historical moving object positions in the sequence.
In some implementations, the position difference between one historical moving object position and its preceding position may be determined according to the following expression:

$$d_i = \sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2}$$

where $d_i$ represents the position difference, $x_i$ and $y_i$ represent the coordinates on the x-axis and y-axis of the $i$-th historical moving object position in the historical moving object position sequence, and $x_{i-1}$ and $y_{i-1}$ represent the coordinates on the x-axis and y-axis of the preceding historical moving object position.
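Putting the two definitions together, the following Python sketch computes the change degrees as per-step position differences and searches for the cutoff lag. The flattening test treats the slope as vertical change per lag step (the conventional reading; the patent's transverse-to-longitudinal ratio is stated ambiguously, so this is an assumption), and all names here are hypothetical:

```python
import math

def change_degrees(positions):
    """Position difference between each historical position and its
    predecessor (the change degree on the graph's ordinate)."""
    return [math.dist(a, b) for a, b in zip(positions, positions[1:])]

def oblique_cutoff_lag(degrees, preset_slope=0.1):
    """First lag at which the change-degree curve flattens, i.e. the
    slope between successive points drops to preset_slope or below.
    Returns None when the curve never flattens."""
    for lag, (d0, d1) in enumerate(zip(degrees, degrees[1:]), start=1):
        if abs(d1 - d0) <= preset_slope:  # lag spacing is 1 between points
            return lag
    return None

# Hypothetical track: the per-step movement settles after the second step.
track = [(0.0, 0.0), (5.0, 0.0), (8.0, 0.0), (11.05, 0.0), (14.05, 0.0)]
degrees = change_degrees(track)          # [5.0, 3.0, 3.05, 3.0]
predictor = oblique_cutoff_lag(degrees)  # cutoff at lag 2
```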
In some embodiments, determining the moving object predicted position sequence corresponding to the historical moving object position sequence according to the variable movement predictor set may be specifically implemented according to the following steps:
Selecting one historical moving object position in the historical moving object position sequence;
obtaining a variable movement prediction factor corresponding to the historical movement object position in the variable movement prediction factor set;
Determining a moving object prediction position corresponding to the historical moving object position according to the historical moving object position and a variable movement prediction factor corresponding to the historical moving object position;
repeating the steps to determine the predicted position of the moving object corresponding to the residual historical moving object position in the historical moving object position sequence;
And sorting all the moving object predicted positions according to the order of the corresponding historical moving object positions in the historical moving object position sequence, and taking the sorted sequence as the moving object predicted position sequence.
Specifically, when determining the moving object predicted position corresponding to the one historical moving object position according to that position and its variable movement prediction factor, the following expression may be used:

$$\hat{P}_{t+1} = \frac{1}{p}\sum_{j=1}^{p} P_{t+1-j}$$

where $\hat{P}_{t+1}$ represents the moving object predicted position corresponding to time $t+1$, $p$ represents the variable movement prediction factor corresponding to the one historical moving object position, and $P_{t+1-j}$ represents the historical moving object position in the historical moving object position sequence $j$ days before time $t+1$.
In a specific implementation, the variable movement prediction factor determines how many past historical moving object positions in the historical moving object position sequence before time $t+1$ participate in the prediction. For example, when the variable movement prediction factor is 5, the historical moving object positions of the 5 time points before time $t+1$ are used, and the moving object predicted position is obtained from those 5 historical positions and the corresponding variable movement prediction factor. For instance, to obtain the predicted position of the moving object at 12:00:00 on 14 November with a variable movement prediction factor of 5, the predicted position may be determined from the historical moving object positions acquired at 12:00:00 on 13, 12, 11, 10 and 9 November respectively.
In the present application, the variable movement prediction factor set is the set of all variable movement prediction factors. The factors in the set may change with the different predicted positions in the moving object predicted position sequence; that is, each moving object predicted position corresponds to one variable movement prediction factor. By means of the variable movement prediction factor set, moving object positions in the moving object position sequence whose movement tracks deviate too much (for example, because the moving object is disturbed externally or by itself) can be distinguished from positions moving along a normal track, thereby improving the tracking accuracy of the moving object.
In step 103, an object movement deviation coefficient is determined from the historical movement object delta decision image frame set and the movement object predicted position sequence.
In some embodiments, determining the object movement deviation coefficient according to the historical movement object difference amount decision image frame set and the movement object prediction position sequence may specifically be as follows:
determining a moving object position in each moving object difference decision image frame;
sequencing all the moving object positions according to the shooting time order of the corresponding moving object difference decision image frames to obtain a moving object position sequence;
Obtaining the object movement deviation coefficient according to the moving object position sequence and the moving object predicted position sequence, wherein the object movement deviation coefficient can be specifically determined according to the following expression:
where $W$ represents the object movement deviation coefficient, $x_t$ represents the moving object position at time $t$ in the moving object position sequence, $p_t$ represents the predicted moving object position at time $t$ in the moving object predicted position sequence, and $n$ represents the total number of moving object positions in the moving object position sequence.
The object movement deviation coefficient is a deviation proportionality coefficient between the position of the moving object and the predicted position of the moving object, and is used for reducing errors when determining the current moving object tracking decision domain.
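The expression itself appears only as an image in the source. One plausible form consistent with the symbol descriptions, a mean deviation between observed and predicted positions, is stated here as an assumption rather than as the patented formula:

```latex
W = \frac{1}{n} \sum_{t=1}^{n} \left\lVert x_t - p_t \right\rVert
```

where $\lVert \cdot \rVert$ denotes the Euclidean distance between a position and its prediction.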
In step 104, a current moving object image frame is acquired, an object weight threshold point of the current moving object image frame is determined, and an object moving weight threshold difference is determined according to the object weight threshold point and the current moving object image frame.
The current moving object image frame is the image frame of the moving object acquired at the current time.
In some embodiments, determining the object weight threshold point of the current moving object image frame may specifically be performed by:
determining a moving object color saturation group of each pixel point in the current moving object image frame;
determining a color saturation histogram of the mobile object according to all the color saturation groups of the mobile object;
and determining an object weight threshold point according to the moving object color saturation histogram.
It should be noted that the moving object color saturation group refers to the hue angle $H$ and the shade degree $V$ of the moving object in the current moving object image frame; the hue angle $H$ takes values in the range 0 to 360 and the shade degree $V$ in the range 0 to 1. The moving object color saturation group of each pixel point in the current moving object image frame may specifically be determined according to the following expression:
where $(H_i, V_i)$ represents the $i$-th moving object color saturation group among all the moving object color saturation groups, and $R_i$, $G_i$ and $B_i$ represent the color component values of the red, green and blue channels, in the RGB color space, of the $i$-th pixel point in the current moving object image frame.
In a specific implementation, the moving object color saturation histogram is determined from all the moving object color saturation groups as follows: taking the hue angle of the moving object color saturation groups as the x-axis and the shade degree as the y-axis, a coordinate system is determined, and the hue angle and shade degree of every moving object color saturation group are plotted in this coordinate system to obtain the moving object color saturation histogram.
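The per-pixel conversion can be sketched in Python. The patent's exact expression is an image not reproduced in the text; the standard RGB-to-HSV hue (scaled to 0 to 360) and value (0 to 1) used below match the stated ranges, but they are an assumption about the patented formula:

```python
import colorsys

def color_saturation_group(r, g, b):
    """Return the (hue angle, shade degree) pair for one pixel.

    r, g, b are 8-bit channel values in [0, 255]. Uses the standard
    HSV hue and value components as stand-ins for the patent's H and V.
    """
    h, _s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (h * 360.0, v)
```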
Wherein, in some embodiments, determining the object weight threshold point according to the moving object color saturation histogram may specifically be according to the following manner, namely:
acquiring the $i$-th moving object color saturation group $(H_i, V_i)$ among all the moving object color saturation groups;
determining the total number $N$ of moving object color saturation groups in the moving object color saturation histogram;
determining the moving object distribution factor $f_i$ corresponding to each moving object color saturation group;
Obtaining a mobile object color saturation center value according to all mobile object color saturation groups, the total number of the mobile object color saturation groups and mobile object distribution factors corresponding to all mobile object color saturation groups, wherein the mobile object color saturation center value can be specifically determined according to the following formula:
where $C$ represents the moving object color saturation center value. The pixel point in the current moving object image frame whose pixel value equals the moving object color saturation center value is taken as the object weight threshold point.
In a specific implementation, the moving object distribution factor corresponding to each moving object color saturation group is determined as follows: the number of pixel points corresponding to each moving object color saturation group in the moving object color saturation histogram is determined; the number of pixel points corresponding to one group is divided by the total number of pixel points to obtain the frequency of that group, and this frequency is taken as the group's moving object distribution factor; this step is repeated until the moving object distribution factors corresponding to all the moving object color saturation groups have been determined.
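The frequency computation above, together with one plausible reading of the center value (a distribution-factor-weighted mean, an assumption since the patented formula is an image), can be sketched as:

```python
from collections import Counter

def distribution_factors(groups):
    """Frequency of each distinct (hue, shade) group among all pixels,
    i.e. the moving object distribution factor of each group."""
    total = len(groups)
    return {g: c / total for g, c in Counter(groups).items()}

def color_saturation_center(groups):
    """Assumed center value: distribution-factor-weighted mean of the
    hue angles and shade degrees. The patent's formula may differ."""
    f = distribution_factors(groups)
    hue = sum(g[0] * w for g, w in f.items())
    shade = sum(g[1] * w for g, w in f.items())
    return (hue, shade)
```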
It should be noted that, in the present application, the moving object color saturation center value represents the central value of the set of colors of the moving object in the current moving object image frame, and may be used to determine the object weight threshold point.
In some embodiments, determining the object movement weight threshold difference according to the object weight threshold point and the current moving object image frame may specifically be performed in the following manner:
acquiring the center point $O = (x_o, y_o)$ of the current moving object image frame;
determining the object weight threshold point $Q = (x_q, y_q)$;
The object movement weight threshold difference is determined according to the center point of the current movement object image frame and the object weight threshold point, and the object movement weight threshold difference is determined according to the following expression:
where $D$ represents the object movement weight threshold difference, $Q$ represents the object weight threshold point, $O$ represents the center point of the current moving object image frame, $x_q$ and $y_q$ represent the positions of the object weight threshold point on the x-axis and y-axis, and $x_o$ and $y_o$ represent the positions of the center point of the current moving object image frame on the x-axis and y-axis.
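Since the later steps use the x-axis and y-axis components of this difference separately, a minimal sketch (treating the threshold difference as the per-axis offset vector, an assumption consistent with those steps) is:

```python
def weight_threshold_difference(q, o):
    """Per-axis difference between the object weight threshold point q
    and the image-frame center point o. The two components are the
    x-axis and y-axis quantities used in the decision-domain step."""
    return (q[0] - o[0], q[1] - o[1])
```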
In step 105, a current moving object tracking decision field is determined based on the object movement weight threshold difference and the object movement deviation coefficient.
In some embodiments, determining the current moving object tracking decision field from the object movement weight threshold amount and the object movement deviation coefficient may be accomplished by:
acquiring the difference $\Delta x$ of the object movement weight threshold difference on the x-axis;
acquiring the difference $\Delta y$ of the object movement weight threshold difference on the y-axis;
determining the weight threshold difference compensation factor $\lambda$;
acquiring the object weight threshold point $Q = (x_q, y_q)$;
acquiring the object movement deviation coefficient $W$;
determining the current moving object tracking decision domain according to the differences of the object movement weight threshold difference on the x-axis and the y-axis, the weight threshold difference compensation factor, the object weight threshold point and the object movement deviation coefficient, wherein the current moving object tracking decision domain may specifically be determined according to the following formula:
where $T$ represents the current moving object tracking decision domain, $x_q$ and $y_q$ represent the values of the object weight threshold point on the x-axis and the y-axis, $\Delta x$ and $\Delta y$ represent the differences of the object movement weight threshold difference on the x-axis and the y-axis, and $c$ is a constant.
It should be noted that, in the present application, the moving object tracking decision domain is used to represent the decision value of the dynamically tracked position of the moving object, determined according to the above formula. In addition, the weight threshold difference compensation factor may be set according to experimental data on historical object movement weight threshold differences; its value ranges between 0 and 1, and it compensates the object movement weight threshold difference. The constant $c$ takes the value 1 in the present application; in specific applications it may be set according to actual data and is not limited herein.
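The decision-domain formula itself is an image in the source. One plausible form built from exactly the named inputs (the threshold point shifted by the compensated, deviation-scaled per-axis differences) is given here purely as an assumption, not as the patented expression:

```latex
T = \bigl( \, x_q + c \, \lambda \, W \, \Delta x , \;\; y_q + c \, \lambda \, W \, \Delta y \, \bigr)
```

where $\lambda \in [0, 1]$ is the weight threshold difference compensation factor, $W$ the object movement deviation coefficient, and $c$ the constant (taken as 1 in this application).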
In step 106, the following monitoring device of the mobile object is dynamically adjusted according to the current mobile object tracking decision domain.
In some embodiments, the following monitoring device of the moving object is dynamically adjusted according to the current moving object tracking decision domain as follows: the focusing point of the intelligent object dynamic following monitoring device is adjusted according to the current moving object tracking decision domain; the step of determining the current moving object tracking decision domain is then repeated, and the focusing point is adjusted again according to the newly determined decision domain. Repeating this cycle realizes the dynamic following of the intelligent object.
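The adjust-and-repeat cycle can be sketched as a loop. All three callables below are hypothetical stand-ins for the device interface (frame capture, steps 104-105 condensed into one function, and the focusing actuator), since the patent does not name a concrete API:

```python
def dynamic_follow(get_frame, decision_domain, set_focus, n_steps):
    """Sketch of step 106: repeatedly re-derive the tracking decision
    domain from the current frame and re-aim the following device."""
    for _ in range(n_steps):
        frame = get_frame()             # current moving object image frame
        point = decision_domain(frame)  # current moving object tracking decision domain
        set_focus(point)                # adjust the device's focusing point
```

In practice `n_steps` would be replaced by a condition such as "while tracking is active".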
In addition, in another aspect, in some embodiments, the present application provides a machine learning based intelligent object dynamic following device comprising a dynamic following unit. Referring to fig. 2, which is a schematic diagram of exemplary hardware and/or software of the dynamic following unit according to some embodiments of the present application, the dynamic following unit 200 comprises an image frame set determining module 201, a moving object predicted position sequence determining module 202, an object movement deviation coefficient determining module 203, an object movement weight threshold difference determining module 204, a moving object tracking decision domain determining module 205 and a tracking adjustment module 206, which are respectively described as follows:
The image frame set determining module 201, in the present application, the image frame set determining module 201 is mainly used for obtaining a historical moving object image frame set and a historical moving object difference decision image frame set in a machine learning database;
The moving object predicted position sequence determining module 202 in the present application, the moving object predicted position sequence determining module 202 is mainly configured to convert the historical moving object image frame set into a historical moving object position sequence, determine a variable moving prediction factor set according to the historical moving object position sequence, and determine a moving object predicted position sequence corresponding to the historical moving object position sequence according to the variable moving prediction factor set;
The object movement deviation coefficient determining module 203 in the present application, the object movement deviation coefficient determining module 203 is mainly configured to determine an object movement deviation coefficient according to the historical movement object difference decision image frame set and the movement object prediction position sequence;
The object movement weight threshold difference determining module 204, in the present application, is mainly used for acquiring a current moving object image frame, determining an object weight threshold point of the current moving object image frame, and determining an object movement weight threshold difference according to the object weight threshold point and the current moving object image frame;
The mobile object tracking decision domain determining module 205 is mainly used for determining a current mobile object tracking decision domain according to the object movement weight threshold difference and the object movement deviation coefficient in the application;
the tracking adjustment module 206, in the present application, the tracking adjustment module 206 is mainly configured to dynamically adjust the following monitoring device of the moving object according to the current moving object tracking decision domain.
In addition, the application also provides a computer device, which comprises a memory and a processor, wherein the memory stores codes, and the processor is configured to acquire the codes and execute the intelligent object dynamic following method based on machine learning.
In some embodiments, reference is made to FIG. 3, which is a schematic diagram of a computer device implementing a machine learning based smart object dynamic following method, according to some embodiments of the application. The machine learning based smart object dynamic following method in the above embodiments may be implemented by a computer device shown in fig. 3, which comprises at least one processor 301, a communication bus 302, a memory 303 and at least one communication interface 304.
Processor 301 may be a general purpose central processing unit (central processing unit, CPU), an application-specific integrated circuit (application-specific integrated circuit, ASIC), or one or more integrated circuits for controlling the execution of the program of the machine learning based intelligent object dynamic following method of the present application.
Communication bus 302 may include a path to transfer information between the above components.
The memory 303 may be, but is not limited to, a read-only memory (read-only memory, ROM) or other type of static storage device that can store static information and instructions, a random access memory (random access memory, RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (electrically erasable programmable read-only memory, EEPROM), a compact disc read-only memory (compact disc read-only memory, CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 303 may be stand-alone and coupled to the processor 301 via the communication bus 302. The memory 303 may also be integrated with the processor 301.
The memory 303 is used for storing program codes for executing the scheme of the present application, and the processor 301 controls the execution. The processor 301 is configured to execute program code stored in the memory 303. One or more software modules may be included in the program code. The determination of the object weight threshold points in the above embodiments may be implemented by one or more software modules in the processor 301 and in the program code in the memory 303.
Communication interface 304, using any transceiver-like device for communicating with other devices or communication networks, such as ethernet, radio access network (radio access network, RAN), wireless local area network (wireless local area networks, WLAN), etc.
In a specific implementation, as an embodiment, a computer device may include a plurality of processors, where each of the processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The computer device may be a general purpose computer device or a special purpose computer device. In a specific implementation, the computer device may be a desktop, a laptop, a web server, a personal digital assistant (personal digital assistant, PDA), a mobile handset, a tablet, a wireless terminal device, a communication device, or an embedded device. Embodiments of the application are not limited to the type of computer device.
In addition, the application further provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the intelligent object dynamic following method based on machine learning when being executed by a processor.
In summary, the intelligent object dynamic following method and device based on machine learning disclosed in the embodiments of the present application first acquire a historical moving object image frame set and a historical moving object difference decision image frame set from a machine learning database, convert the historical moving object image frame set into a historical moving object position sequence, determine a variable movement prediction factor set according to the historical moving object position sequence, and determine the moving object predicted position sequence corresponding to the historical moving object position sequence according to the variable movement prediction factor set. An object movement deviation coefficient is then determined according to the historical moving object difference decision image frame set and the moving object predicted position sequence. Next, a current moving object image frame is acquired, an object weight threshold point of the current moving object image frame is determined, and an object movement weight threshold difference is determined according to the object weight threshold point and the current moving object image frame. Finally, a current moving object tracking decision domain is determined according to the object movement weight threshold difference and the object movement deviation coefficient, and the following monitoring device of the moving object is dynamically adjusted according to the current moving object tracking decision domain, thereby reducing the error of the intelligent object in the dynamic following process.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. The intelligent object dynamic following method based on machine learning is characterized by comprising the following steps:
acquiring a historical moving object image frame set and a historical moving object difference decision image frame set in a machine learning database;
Converting the historical moving object image frame set into a historical moving object position sequence, determining a variable moving prediction factor set according to the historical moving object position sequence, and determining a moving object prediction position sequence corresponding to the historical moving object position sequence according to the variable moving prediction factor set;
determining an object movement deviation coefficient according to the historical movement object difference decision image frame set and the movement object prediction position sequence;
Acquiring a current moving object image frame, determining an object weight threshold point of the current moving object image frame, and determining an object moving weight threshold difference according to the object weight threshold point and the current moving object image frame;
determining a current moving object tracking decision domain according to the object moving weight threshold difference and the object moving deviation coefficient;
dynamically adjusting the following monitoring equipment of the mobile object according to the current mobile object tracking decision domain;
wherein determining the moving object predicted position sequence corresponding to the historical moving object position sequence according to the variable movement prediction factor set specifically comprises:
Selecting one historical moving object position in the historical moving object position sequence;
obtaining a variable movement prediction factor corresponding to the historical movement object position in the variable movement prediction factor set;
Determining a moving object prediction position corresponding to the historical moving object position according to the historical moving object position and a variable movement prediction factor corresponding to the historical moving object position;
repeating the steps to determine the predicted position of the moving object corresponding to the residual historical moving object position in the historical moving object position sequence;
Sequencing all the predicted positions of the moving object according to the sequence of the corresponding historical moving object positions in the historical moving object position sequence, and taking the sequence obtained by sequencing as a predicted position sequence of the moving object;
Wherein, the mobile object prediction position corresponding to the historical mobile object position is determined according to the historical mobile object position and the variable mobile prediction factor corresponding to the historical mobile object position, namely, the mobile object prediction position corresponding to the historical mobile object position is determined according to the following formula:
wherein $P_t$ denotes the predicted moving object position corresponding to time $t$ of the historical moving object position, $F_t$ denotes the variable movement prediction factor in the variable movement prediction factor set that corresponds to this historical moving object position, and $X_{t-i}$ denotes the historical moving object position $i$ days before time $t$ in the historical moving object position sequence.
2. The method of claim 1, wherein converting the set of historical moving object image frames into a sequence of historical moving object positions comprises:
determining a historical object position in each historical moving object image frame in the historical moving object image frame set;
and sequencing all the historical object positions according to the sequence of the shooting time of the corresponding historical moving object frames to obtain a historical moving object position sequence.
3. The method of claim 1, wherein determining an object movement deviation coefficient from the historical moving object delta decision image frame set and the moving object predicted position sequence specifically comprises:
determining a moving object position in each moving object difference decision image frame;
sequencing all the moving object positions according to the shooting time order of the corresponding moving object difference decision image frames to obtain a moving object position sequence;
Obtaining the object movement deviation coefficient according to the moving object position sequence and the moving object predicted position sequence, wherein the object movement deviation coefficient is determined according to the following expression:
wherein $W$ represents the object movement deviation coefficient, $x_t$ represents the moving object position at time $t$ in the moving object position sequence, $p_t$ represents the predicted moving object position at time $t$ in the moving object predicted position sequence, and $n$ represents the total number of moving object positions in the moving object position sequence.
4. The method of claim 1, wherein determining the object weight threshold point for the current moving object image frame specifically comprises:
determining a moving object color saturation group of each pixel point in the current moving object image frame;
determining a color saturation histogram of the mobile object according to all the color saturation groups of the mobile object;
and determining an object weight threshold point according to the moving object color saturation histogram.
5. The method of claim 4, wherein determining an object weight threshold point from the moving object color saturation histogram comprises:
acquiring the $i$-th moving object color saturation group $(H_i, V_i)$ among all the moving object color saturation groups;
Determining the total number of the color saturation groups of the moving object in the color saturation histogram of the moving object;
determining a moving object distribution factor corresponding to each moving object color saturation group;
Obtaining a mobile object color saturation center value according to all mobile object color saturation groups, the total number of the mobile object color saturation groups and mobile object distribution factors corresponding to all mobile object color saturation groups;
And taking a pixel point corresponding to a pixel value equal to the color saturation central value of the moving object in the current moving object image frame as an object weight threshold point.
6. The method of claim 1, wherein determining an object movement weight threshold amount from the object weight threshold point and a current moving object image frame specifically comprises:
acquiring the center point $O = (x_o, y_o)$ of the current moving object image frame;
determining the object weight threshold point $Q = (x_q, y_q)$;
determining the object movement weight threshold difference according to the center point of the current moving object image frame and the object weight threshold point, wherein the object movement weight threshold difference is determined according to the following expression:
wherein $D$ represents the object movement weight threshold difference, $Q$ represents the object weight threshold point, $O$ represents the center point of the current moving object image frame, $x_q$ and $y_q$ represent the positions of the object weight threshold point on the x-axis and y-axis, and $x_o$ and $y_o$ represent the positions of the center point of the current moving object image frame on the x-axis and y-axis.
7. The method of claim 1, wherein determining a set of variable movement predictors from the sequence of historical moving object positions comprises:
Determining an autocorrelation prediction map of each historical moving object position in the historical moving object position sequence;
determining a variable motion predictor for each autocorrelation prediction map;
The set of all the variable motion predictors is taken as the variable motion predictor set.
8. A smart object dynamic following device based on machine learning, which is controlled by the method of claim 1, wherein the smart object dynamic following device based on machine learning comprises a dynamic following unit, and the dynamic following unit comprises:
an image frame set determining module for acquiring a historical moving object image frame set and a historical moving object difference decision image frame set in a machine learning database;
the mobile object prediction position sequence determining module is used for converting the historical mobile object image frame set into a historical mobile object position sequence, determining a variable mobile prediction factor set according to the historical mobile object position sequence, and determining a mobile object prediction position sequence corresponding to the historical mobile object position sequence according to the variable mobile prediction factor set;
the object movement deviation coefficient determining module is used for determining an object movement deviation coefficient according to the historical movement object difference decision image frame set and the movement object prediction position sequence;
the object movement weight threshold difference determining module is used for acquiring a current movement object image frame, determining an object weight threshold point of the current movement object image frame, and determining an object movement weight threshold difference according to the object weight threshold point and the current movement object image frame;
The mobile object tracking decision domain determining module is used for determining a current mobile object tracking decision domain according to the object movement weight threshold difference and the object movement deviation coefficient;
and the tracking adjustment module is used for dynamically adjusting the following monitoring equipment of the mobile object according to the current mobile object tracking decision domain.
9. A computer device comprising a memory storing code and a processor configured to obtain the code and to perform the intelligent object dynamic following method based on machine learning as claimed in any one of claims 1 to 7.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the intelligent object dynamic following method based on machine learning according to any of claims 1 to 7.
CN202311722469.1A 2023-12-14 2023-12-14 Intelligent object dynamic following method and device based on machine learning Active CN117409044B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311722469.1A CN117409044B (en) 2023-12-14 2023-12-14 Intelligent object dynamic following method and device based on machine learning


Publications (2)

Publication Number Publication Date
CN117409044A (en) 2024-01-16
CN117409044B (en) 2024-06-14

Family

Family ID: 89492945

Family Applications (1)

Application Number: CN202311722469.1A (Active, granted as CN117409044B)
Priority Date: 2023-12-14
Filing Date: 2023-12-14
Title: Intelligent object dynamic following method and device based on machine learning

Country Status (1)

Country Link
CN (1) CN117409044B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011243229A (en) * 2011-09-05 2011-12-01 Nippon Telegr & Teleph Corp <Ntt> Object tracking device and object tracking method
CN112200830A (en) * 2020-09-11 2021-01-08 山东信通电子股份有限公司 Target tracking method and device

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6937744B1 (en) * 2000-06-13 2005-08-30 Microsoft Corporation System and process for bootstrap initialization of nonparametric color models
CN101251928A (en) * 2008-03-13 2008-08-27 上海交通大学 Kernel-Based Object Tracking Method
JP4730431B2 (en) * 2008-12-16 2011-07-20 日本ビクター株式会社 Target tracking device
JP4922472B1 (en) * 2011-09-29 2012-04-25 楽天株式会社 Information processing apparatus, information processing method, information processing apparatus program, and recording medium
US9684830B2 (en) * 2014-11-14 2017-06-20 Intel Corporation Automatic target selection for multi-target object tracking
CN105631900B (en) * 2015-12-30 2019-08-02 浙江宇视科技有限公司 A kind of wireless vehicle tracking and device
US20190066311A1 (en) * 2017-08-30 2019-02-28 Microsoft Technology Licensing, Llc Object tracking
CN110458861B (en) * 2018-05-04 2024-01-26 佳能株式会社 Object detection and tracking method and device
CN110163068B (en) * 2018-12-13 2024-12-13 腾讯科技(深圳)有限公司 Target object tracking method, device, storage medium and computer equipment
CN109816701B (en) * 2019-01-17 2021-07-27 北京市商汤科技开发有限公司 Target tracking method and device and storage medium
CN109949375B (en) * 2019-02-02 2021-05-14 浙江工业大学 Mobile robot target tracking method based on depth map region of interest
US11720375B2 (en) * 2019-12-16 2023-08-08 Motorola Solutions, Inc. System and method for intelligently identifying and dynamically presenting incident and unit information to a public safety user based on historical user interface interactions
CN113961620A (en) * 2021-10-19 2022-01-21 阳光电源股份有限公司 Method and device for determining numerical weather forecast result
CN115589528A (en) * 2022-09-30 2023-01-10 浙江吉利控股集团有限公司 A moving target tracking method and related device
CN116309719A (en) * 2023-03-16 2023-06-23 之江实验室 Target tracking method, device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN117409044A (en) 2024-01-16

Similar Documents

Publication Publication Date Title
EP3186780B1 (en) System and method for image scanning
CN107886048B (en) Target tracking method and system, storage medium and electronic terminal
WO2022141178A1 (en) Image processing method and apparatus
US11604272B2 (en) Methods and systems for object detection
CN112534469B (en) Image detection method, image detection device, image detection apparatus, and medium
US11538238B2 (en) Method and system for performing image classification for object recognition
CN112232426B (en) Training method, device and equipment of target detection model and readable storage medium
CN111757008B (en) Focusing method, device and computer readable storage medium
EP3308559A1 (en) Method and system for determining a positioning interval of a mobile terminal
CN112070682A (en) Method and device for compensating image brightness
CN110599586A (en) Semi-dense scene reconstruction method and device, electronic equipment and storage medium
CN112053383A (en) Method and device for real-time positioning of robot
CN117146739A (en) Angle measurement verification method and system for optical sighting telescope
CN109543634A (en) Data processing method, device, electronic equipment and storage medium in position fixing process
CN117409044B (en) Intelligent object dynamic following method and device based on machine learning
CN116614620B (en) High-pixel optical lens assembly equipment and control method
US12080006B2 (en) Method and system for performing image classification for object recognition
CN110647898B (en) Image processing method, image processing device, electronic equipment and computer storage medium
CN114612926A (en) Method and device for counting number of people in scene, computer equipment and storage medium
CN116012413A (en) Image feature point tracking method and device, electronic equipment and storage medium
CN117576395A (en) Point cloud semantic segmentation method and device, electronic equipment and storage medium
CN116311135A (en) Data dimension reduction method, data dimension reduction system and controller for semantic information
CN116452883A (en) Classification recognition method and system for aircraft
CN116485645A (en) Image stitching method, device, equipment and storage medium
CN113344988A (en) Stereo matching method, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant