CN113989725A - Goal segment classification method based on neural network - Google Patents
- Publication number
- CN113989725A CN113989725A CN202111321234.2A CN202111321234A CN113989725A CN 113989725 A CN113989725 A CN 113989725A CN 202111321234 A CN202111321234 A CN 202111321234A CN 113989725 A CN113989725 A CN 113989725A
- Authority
- CN
- China
- Prior art keywords
- video
- face
- goal
- player
- calculating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a goal segment classification method based on a neural network, which comprises the following steps: acquiring a first shot video; calculating, from a first detection frame, the first sole coordinate of the player in the video frame at the moment the shot leaves the player's hand; calculating a second sole coordinate in at least one second shot video from the first sole coordinate; calculating the distance values between the second sole coordinate and all second detection frames and selecting the player in the second detection frame with the smallest distance value; performing face recognition and/or color recognition on the players in the first shot video and the at least one second shot video; calculating similarities from the recognition results and casting votes for players accordingly; and classifying the goal segments of the first shot video and the at least one second shot video by player according to the voting result, thereby generating a goal highlight reel for each player. Classifying by both color features and face recognition yields higher accuracy.
Description
Technical Field
The invention belongs to the technical field of video segment classification, and particularly relates to a goal segment classification method based on a neural network.
Background
At present there are a large number of basketball fans in China, and a strong demand exists among amateur players at basketball courts: after a game, they would like a highlight video of all the shots or goals made that day, to keep as a souvenir or to share with friends.
However, existing products on the market can only extract all goal segments; they cannot classify the goal segments by player so as to generate a goal highlight reel for an individual player.
Disclosure of Invention
The invention aims to provide a goal segment classification method based on a neural network, so as to solve the technical problem that all goal segments in the prior art cannot be classified according to players.
In order to achieve the technical purpose, the technical scheme adopted by the invention is as follows:
a goal segment classification method based on a neural network comprises the following steps:
acquiring a first shot video, inputting each video frame to be identified of the first shot video into a detection model, so as to identify the first detection frames of the players in all the video frames to be identified and the video frame representing the moment the shot leaves the player's hand (the shot release), and calculating the first sole coordinate of the player in the shot-release video frame from the first detection frame;
calculating a second sole coordinate in at least one second shot video from the first sole coordinate, acquiring the second detection frames of all players in the shot-release video frame of the second shot video, calculating the distance values between the second sole coordinate and all the second detection frames, and selecting the player in the second detection frame with the smallest distance value;
and performing face recognition and/or color recognition on the players in the first shot video and the at least one second shot video, calculating similarities from the recognition results, casting votes for players according to the similarities, and classifying the goal segments of the first shot video and the at least one second shot video by player according to the voting result.
Preferably, the method further comprises the steps of:
and acquiring basketball training data, preprocessing the basketball training data, and inputting the preprocessed basketball training data into a neural network model so as to train the neural network model to obtain a detection model.
Preferably, the acquiring a first shot video and inputting each video frame to be identified of the first shot video into the detection model specifically includes the following steps:
arranging a plurality of image collectors, wherein the image collectors are used for shooting the whole basketball half court, the restricted area (the "forbidden zone", or key) and the three-point area;
the image collector is used for collecting the first shooting video and at least one second shooting video;
and acquiring all goal segments of the first shot video, wherein each goal segment comprises the video frame in which the basketball enters the basket together with several seconds before and after that frame, and inputting each video frame to be identified of the goal segments into the detection model.
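The segment extraction above can be sketched as a simple frame-window slice. The patent says only "several seconds" before and after the basket-entry frame, so the 5- and 3-second defaults below are illustrative assumptions:

```python
def goal_segment(frames, basket_idx, fps, before_s=5, after_s=3):
    """Return the sub-list of frames spanning `before_s` seconds before to
    `after_s` seconds after the basket-entry frame `basket_idx`.
    The window lengths are illustrative; the text says only 'several seconds'."""
    start = max(0, basket_idx - before_s * fps)             # clamp at video start
    end = min(len(frames), basket_idx + after_s * fps + 1)  # clamp at video end
    return frames[start:end]

# a 100-frame clip at 5 fps with the basket entered at frame 50
clip = goal_segment(list(range(100)), basket_idx=50, fps=5)
print(len(clip), clip[0], clip[-1])  # -> 41 25 65
```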
Preferably, the method for calculating the second sole coordinates in at least one second shot video according to the first sole coordinates specifically includes the following steps:
randomly acquiring a first video frame from the first shooting video and randomly acquiring a second video frame from the second shooting video;
calibrating four corner points of a basketball forbidden zone in the first video frame and the second video frame respectively, and calculating an affine transformation matrix between the first video frame and the second video frame according to the four corner points calibrated respectively;
and calculating the second sole coordinate in the second shot video from the first sole coordinate through the affine transformation matrix.
Preferably, acquiring the second detection frames of all players in the shot-release video frame of the second shot video, calculating the distance values between the second sole coordinate and all the second detection frames, and selecting the player in the second detection frame with the smallest distance value specifically comprises the following steps:
acquiring all goal segments of the second shot video, wherein each goal segment comprises the video frame in which the basketball enters the basket together with several seconds before and after that frame;
inputting the shot-release video frame of each goal segment into the detection model, so as to identify the second detection frames of all players in that frame;
and calculating the distance values between the second sole coordinate and all the second detection frames, and selecting the second detection frame with the smallest distance value, which contains the player of the goal segment.
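A minimal sketch of the distance-based selection above, assuming detection frames are given as (x1, y1, x2, y2) boxes and the sole point of a box is its bottom centre; the helper names are hypothetical:

```python
import math

def sole_point(box):
    """Bottom centre of a detection box (x1, y1, x2, y2): the player's feet."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, y2)

def nearest_box(sole_xy, boxes):
    """Return the detection box whose sole point is closest to sole_xy."""
    def dist(box):
        bx, by = sole_point(box)
        return math.hypot(bx - sole_xy[0], by - sole_xy[1])
    return min(boxes, key=dist)

second_boxes = [(100, 50, 140, 200), (300, 60, 340, 210)]
# the transformed second sole coordinate lands near the first box's feet
print(nearest_box((118.0, 198.0), second_boxes))  # -> (100, 50, 140, 200)
```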
Preferably, the method further comprises the steps of:
and acquiring a single photo of each contestant player, and calculating HSV color histogram features of the players.
Preferably, the method comprises the following steps of obtaining a single photo of each contestant player, and calculating HSV color histogram features of the players:
converting the BGR color space of the single photo into an HSV color space, and quantizing each component of the HSV color space according to a quantization table;
and combining the quantized HSV color spaces into a single-channel image according to a ratio, and calculating the color histogram characteristics according to the single-channel image to obtain and store the standard color histogram characteristics of the area where the player is located.
Preferably, the method further comprises the steps of:
and acquiring a single photo of each contestant player, detecting the face of the single photo and calculating a face feature vector through a face detection algorithm and a face recognition algorithm.
Preferably, a single photo of each contestant is obtained, the face of the single photo is detected and a face feature vector is calculated through a face detection algorithm and a face recognition algorithm, and the method specifically comprises the following steps:
converting the BGR color space of the single photo into an RGB color space, and normalizing the pixel values of the RGB color space;
detecting the single photo through a face detection algorithm, and if the face in the single photo is detected, intercepting the detected face;
and correcting the face through a face correction algorithm, identifying the corrected face through a face identification algorithm, extracting a standard face characteristic vector and storing the standard face characteristic vector.
Preferably, calculating the similarity according to the recognition result and casting votes for players according to the similarity specifically comprises the following steps:
if a human face is detected, respectively calculating the similarity between the face feature vector and each standard face feature vector, wherein the similarity formula is as follows:
Sim(X, Y) = (X · Y) / (||X|| · ||Y||)
wherein X denotes the face feature vector, Y denotes a standard face feature vector, X · Y is the dot product of the two vectors, and ||X|| · ||Y|| is the product of the moduli of the two vectors; the larger the calculated Sim(X, Y), the higher the similarity;
casting a vote for the standard face feature vector with the highest similarity, counting the number of votes obtained by the player represented by each standard face feature vector, and judging that the goal segment belongs to the player represented by the standard face feature vector with the most votes.
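The face-matching vote above can be sketched as follows, using the cosine similarity Sim(X, Y) = X·Y / (||X||·||Y||); the feature vectors here are toy values, not real face embeddings:

```python
import numpy as np

def cos_sim(x, y):
    """Sim(X, Y) = X.Y / (||X|| * ||Y||); larger means more similar."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def vote_for(face_vec, standards):
    """Vote for the player whose standard face feature vector is most similar."""
    return max(standards, key=lambda name: cos_sim(face_vec, standards[name]))

standards = {"F1": [1.0, 0.0, 0.0], "F2": [0.0, 1.0, 0.0]}
print(vote_for([0.9, 0.1, 0.0], standards))  # -> F1
```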
Preferably, calculating the similarity according to the recognition result and casting votes for players according to the similarity specifically comprises the following steps:
if no human face is detected, respectively calculating the similarity between the color histogram feature and each standard color histogram feature, wherein the similarity formula is as follows:
Sim(X, Y) = sqrt( Σ_i (X_i − Y_i)² )
wherein X represents the color histogram feature, Y represents a standard color histogram feature, X_i denotes the i-th component of X, and Y_i denotes the i-th component of Y; the smaller the calculated Sim(X, Y), the higher the similarity;
and casting a vote for the standard color histogram feature with the highest similarity, counting the number of votes obtained by the player represented by each standard color histogram feature, and judging that the goal segment belongs to the player represented by the standard color histogram feature with the most votes.
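The color-histogram branch can be sketched the same way. The distance below is the Euclidean form reconstructed from the description (smaller Sim means more similar), and the histograms are toy values:

```python
import math

def hist_dist(x, y):
    """Sim(X, Y) = sqrt(sum_i (X_i - Y_i)^2); smaller means more similar."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def vote_for(hist, standards):
    """Vote for the player whose standard color histogram is closest."""
    return min(standards, key=lambda name: hist_dist(hist, standards[name]))

standards = {"P1": [0.7, 0.2, 0.1], "P2": [0.1, 0.2, 0.7]}
print(vote_for([0.6, 0.3, 0.1], standards))  # -> P1
```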
A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method described above.
The invention has the following beneficial effects:
1. According to the invention, the second sole coordinate in at least one second shot video is calculated from the first sole coordinate, the second detection frames of all players in the shot-release video frame of the second shot video are obtained, the distance values between the second sole coordinate and all the second detection frames are calculated, and the player in the second detection frame with the smallest distance value is selected. Any point on the basketball court in any one video frame can thus be mapped to the other video frames, so the same goal segment and the corresponding player can be found across different videos more quickly and with a lighter algorithm.
2. Face recognition and/or color recognition is performed on the players in the first shot video and the at least one second shot video, similarities are calculated from the recognition results, votes are cast for players according to the similarities, and the goal segments of the first shot video and the at least one second shot video are classified by player according to the voting result, thereby generating a goal highlight reel for each player. Classifying by both color features and face recognition yields higher accuracy.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a flow diagram of a method for neural network based goal classification;
FIG. 2 is a schematic view of the position of an image collector;
the main element symbols are as follows:
1. first image collector; 2. second image collector; 3. third image collector; 4. fourth image collector.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
as shown in fig. 1, the present embodiment includes a method for classifying goal segments based on a neural network, which includes the following steps:
acquiring a first shot video, inputting each video frame to be identified of the first shot video into a detection model, so as to identify the first detection frames of the players in all the video frames to be identified and the video frame representing the moment the shot leaves the player's hand (the shot release), and calculating the first sole coordinate of the player in the shot-release video frame from the first detection frame;
calculating a second sole coordinate in at least one second shot video from the first sole coordinate, acquiring the second detection frames of all players in the shot-release video frame of the second shot video, calculating the distance values between the second sole coordinate and all the second detection frames, and selecting the player in the second detection frame with the smallest distance value;
and performing face recognition and/or color recognition on the players in the first shot video and the at least one second shot video, calculating similarities from the recognition results, casting votes for players according to the similarities, and classifying the goal segments of the first shot video and the at least one second shot video by player according to the voting result.
Specifically, the first detection frames may represent several types of players, for example a defending player or the shooting player; in this embodiment, the first detection frame identified in all the video frames to be identified is the one representing the shooting player.
Further comprising the steps of: and acquiring basketball training data, preprocessing the basketball training data, and inputting the preprocessed basketball training data into the neural network model so as to train the neural network model and obtain the detection model.
Specifically, pictures of basketball games are collected as the basketball training data. The training data are preprocessed by manually annotating bounding boxes for the players, the ball and the basket in each picture. The annotated pictures are input into the neural network model to train it until precision and recall reach a preset range, at which point training is complete and the detection model is obtained. The detection model is used to detect players, the ball and the basket.
The method comprises the following steps of acquiring a first shot video and inputting each video frame to be identified of the first shot video into the detection model: arranging a plurality of image collectors, wherein the image collectors are used for shooting the whole basketball half court, the restricted area (forbidden zone) and the three-point area;
the image collectors are used for collecting the first shot video and at least one second shot video; all goal segments of the first shot video are acquired, wherein each goal segment comprises the video frame in which the basketball enters the basket together with several seconds before and after that frame; and each video frame to be identified of the goal segments is input into the detection model.
Specifically, in this embodiment, as shown in fig. 2, the number of the image collectors is four, and the image collectors are a first image collector 1, a second image collector 2, a third image collector 3, and a fourth image collector 4, the first image collector 1 and the second image collector 2 are located on two sides of a center line of a basketball court, and the first image collector 1 and the second image collector 2 are used for shooting a half court of the entire basketball court. The third image collector 3 and the fourth image collector 4 are arranged below the basketball stand, and the third image collector 3 and the fourth image collector 4 are used for shooting a basketball forbidden zone and a three-branch zone.
The first image collector 1 is used for collecting a first shooting video, the second image collector 2 is used for collecting a second shooting video, the third image collector 3 is used for collecting a third shooting video, and the fourth image collector 4 is used for collecting a fourth shooting video. In the process of the match, the first image collector 1, the second image collector 2, the third image collector 3 and the fourth image collector 4 operate simultaneously and are used for shooting basketball courts at four different angles. In the present embodiment, the at least one second captured video described includes the second captured video, the third captured video, and the fourth captured video.
Calculating the second sole coordinate in at least one second shot video from the first sole coordinate specifically comprises the following steps: randomly acquiring a first video frame from the first shot video and randomly acquiring a second video frame from the second shot video; calibrating the four corner points of the basketball forbidden zone in the first video frame and the second video frame respectively, and calculating an affine transformation matrix between the first video frame and the second video frame according to the respectively calibrated corner points; and calculating the second sole coordinate in the second shot video from the first sole coordinate through the affine transformation matrix.
Specifically, the positions of the four corner points are the upper left corner, the lower left corner, the upper right corner and the lower right corner of the basketball forbidden zone respectively.
The formula by which the second sole coordinate in the second shot video is calculated from the first sole coordinate through the affine transformation matrix is:
(x', y', w') = [u, v, 1] · A
wherein (u, v) is a point on the basketball court captured by the first image collector 1, the matrix A is the first affine transformation matrix calculated from the four corner points of the basketball forbidden zone marked in the first video frame and the second video frame, x', y' and w' are the intermediate transformation results, and the 1 indicates that no overall scaling is performed.
The second sole coordinate is expressed as (x, y), where x = x'/w' and y = y'/w', with w' computed automatically by the system. In this way the first sole coordinate yields the second sole coordinate in the second shot video via the first affine transformation matrix.
Specifically, in the above embodiment, one third video frame is arbitrarily acquired from the third shot video, and one fourth video frame is arbitrarily acquired from the fourth shot video. The four corner points of the basketball forbidden zone are calibrated in the first video frame and the third video frame respectively, and likewise in the first video frame and the fourth video frame. The second affine transformation matrix is calculated from the four corner points calibrated in the first and third video frames, and the third affine transformation matrix from the four corner points calibrated in the first and fourth video frames. The first sole coordinate then yields the third sole coordinate in the third shot video through the above formula with the second affine transformation matrix, and the fourth sole coordinate in the fourth shot video through the above formula with the third affine transformation matrix.
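The coordinate mapping between views can be sketched as follows. The matrix A is assumed already known (in the patent it is computed from the four calibrated corner points of the forbidden zone); the example matrix here is a pure translation chosen only to make the output easy to check:

```python
import numpy as np

def map_sole(uv, A):
    """Map a court point (u, v) between camera views:
    (x', y', w') = [u, v, 1] . A, then x = x'/w', y = y'/w'."""
    xp, yp, wp = np.array([uv[0], uv[1], 1.0]) @ np.asarray(A, dtype=float)
    return (xp / wp, yp / wp)

# a translation by (10, 20) in the row-vector convention used above
A = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [10.0, 20.0, 1.0]]
print(map_sole((2.0, 3.0), A))  # -> (12.0, 23.0)
```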
Acquiring the second detection frames of all players in the shot-release video frame of the second shot video, calculating the distance values between the second sole coordinate and all the second detection frames, and selecting the player in the second detection frame with the smallest distance value specifically comprises the following steps: acquiring all goal segments of the second shot video, wherein each goal segment comprises the video frame in which the basketball enters the basket together with several seconds before and after that frame;
inputting the shot-release video frame of each goal segment into the detection model, so as to identify the second detection frames of all players in that frame; and calculating the distance values between the second sole coordinate and all the second detection frames, and selecting the second detection frame with the smallest distance value, which contains the player of the goal segment. Because the first detection frame represents the shooting player, and the second detection frame with the smallest distance value is selected, in this embodiment the player contained in the selected second detection frame is the same shooting player.
Specifically, all goal segments of the second shot video are acquired, each comprising the video frame in which the basketball enters the basket together with several seconds before and after that frame. All goal segments of the third shot video and of the fourth shot video are acquired in the same way.
The shot-release video frame of each goal segment of the second shot video is input into the detection model to identify the second detection frames of all players in that frame. Likewise, the shot-release video frames of the goal segments of the third and fourth shot videos are input into the detection model to identify the third and fourth detection frames respectively.
The distance values between the second sole coordinate and all the second detection frames are calculated, and the second detection frame with the smallest distance value is selected; it contains the player of the goal segment. The same is done between the third sole coordinate and the third detection frames for the third shot video, and between the fourth sole coordinate and the fourth detection frames for the fourth shot video.
Further comprising the steps of: and acquiring a single photo of each contesting player, and calculating HSV color histogram features of the players.
The method comprises the following steps of obtaining a single photo of each player participating in the competition, and calculating HSV color histogram characteristics of the players, wherein the method specifically comprises the following steps: converting the BGR color space of the single photo into an HSV color space, and quantizing each component of the HSV color space according to a quantization table;
and combining the quantized HSV color space into a single-channel image according to a ratio, and calculating the color histogram characteristics according to the single-channel image to obtain and store the standard color histogram characteristics of the area where the player is located.
Specifically, the quantization table (a figure in the original) gives, for each of the three HSV color components H, S and V, the value ranges of the component and the quantized value assigned to each range. For example, H ∈ [21, 40] → 1 means that whenever a value at any position of the H component lies between 21 and 40, it is quantized to 1.
The quantized HSV color space is combined into a single-channel image according to the ratio formula G = 9H + 3S + V, and the standard color histogram features are calculated from the single-channel image, so as to obtain and store the standard color histogram features of the area where the player is located.
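A sketch of the quantization step, consistent with the example H ∈ [21, 40] → 1 and the ratio G = 9H + 3S + V. Since the full quantization table is not reproduced in the text, the bin edges below follow the common 72-bin HSV scheme and are an assumption:

```python
def quantize_hsv(h, s, v):
    """Quantize one HSV pixel (H in degrees [0, 360), S and V in [0, 1]) to a
    single-channel value G = 9*H + 3*S + V.  Bin edges are assumed from the
    common 72-bin scheme (they match the example H in [21, 40] -> 1)."""
    if h > 315 or h <= 20:
        H = 0
    elif h <= 40:
        H = 1
    elif h <= 75:
        H = 2
    elif h <= 155:
        H = 3
    elif h <= 190:
        H = 4
    elif h <= 270:
        H = 5
    elif h <= 295:
        H = 6
    else:
        H = 7
    S = 0 if s <= 0.2 else (1 if s <= 0.7 else 2)
    V = 0 if v <= 0.2 else (1 if v <= 0.7 else 2)
    return 9 * H + 3 * S + V

print(quantize_hsv(30, 0.5, 0.9))  # H=1, S=1, V=2 -> 9 + 3 + 2 = 14
```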
Further comprising the steps of: and acquiring a single photo of each contestant player, detecting the face of the single photo and calculating a face feature vector through a face detection algorithm and a face recognition algorithm.
The method comprises the following steps of obtaining a single photo of each contestant player, detecting the face of the single photo and calculating a face feature vector through a face detection algorithm and a face recognition algorithm, and specifically comprises the following steps: converting the BGR color space of the single photo into an RGB color space, and normalizing the pixel values of the RGB color space;
detecting the single photo through a face detection algorithm, and if the face in the single photo is detected, intercepting the detected face; the face is corrected through a face correction algorithm, the corrected face is recognized through a face recognition algorithm, and standard face characteristic vectors are extracted.
Specifically, the calculation formula for normalizing the pixel values of the RGB color space is as follows:
p'_i = ((p_i / 255) − mean_i) / std_i
wherein p_i is the RGB color value at any position of the i-th channel, mean_i is the mean of the RGB color values of the i-th channel, and std_i is the standard deviation of the RGB color values of the i-th channel.
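A sketch of this normalization; the mean/std defaults below are the widely used ImageNet statistics, given only as an illustrative assumption since the text does not state the actual values:

```python
import numpy as np

def normalize_rgb(img, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
    """Per-channel normalization p'_i = ((p_i / 255) - mean_i) / std_i.
    `img` is an H x W x 3 RGB array with values in [0, 255]."""
    img = np.asarray(img, dtype=np.float64) / 255.0
    return (img - np.asarray(mean)) / np.asarray(std)

# a single pure-red pixel
out = normalize_rgb([[[255, 0, 0]]])
print(out.shape)  # -> (1, 1, 3)
```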
Calculating the similarity according to the recognition result and casting votes for players according to the similarity specifically comprises the following steps: if a human face is detected, respectively calculating the similarity between the face feature vector and each standard face feature vector, wherein the similarity formula is as follows:
Sim(X, Y) = (X · Y) / (||X|| · ||Y||)
wherein X represents the face feature vector, Y represents a standard face feature vector, X · Y is the dot product of the two vectors, and ||X|| · ||Y|| is the product of their moduli; the larger the calculated Sim(X, Y), the higher the similarity. A vote is cast for the standard face feature vector with the highest similarity, the number of votes obtained by the player represented by each standard face feature vector is counted, and the goal segment is judged to belong to the player represented by the standard face feature vector with the most votes.
Specifically, the face detection algorithm performs face detection on the first detection frame and on the second, third, and fourth detection frames with the smallest distance values. In this embodiment the number of players is four, and the four standard face feature vectors are recorded as F1, F2, F3, and F4.
If a face is detected in each of the four detection frames, the corresponding face feature vectors are denoted f1, f2, f3, and f4, and the similarity between each of these vectors and each standard face feature vector is calculated with the same formula:

Sim(X, Y) = (X · Y) / (||X|| · ||Y||)

where X is any one of the vectors f1, f2, f3, and f4, Y is any one of the vectors F1, F2, F3, and F4, X · Y is the dot product of the two vectors, and ||X|| · ||Y|| is the product of the moduli of the two vectors; the larger the calculated Sim(X, Y), the higher the similarity.
Specifically, the similarity between f1 and each of the four standard face feature vectors F1, F2, F3, and F4 is calculated; if f1 is most similar to F1, f1 votes for the player represented by F1. Likewise, if f2 is most similar to F4, f2 votes for the player represented by F4; if f3 is most similar to F1, f3 votes for the player represented by F1; and if f4 is most similar to F1, f4 votes for the player represented by F1.
In this embodiment, the player represented by the standard face feature vector F1 receives 3 votes and the player represented by F4 receives 1 vote; since F1 has the most votes, the goal segment is judged to belong to the player represented by F1.
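The similarity-and-voting procedure above can be sketched as follows; the cosine similarity and plurality vote mirror the embodiment, while the toy vectors are invented so that f1, f3, and f4 fall closest to F1 and f2 falls closest to F4, reproducing the 3-to-1 vote:

```python
import numpy as np

def cosine_similarity(x, y):
    """Sim(X, Y) = (X . Y) / (|X| * |Y|); larger means more similar."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def vote_for_player(face_vectors, standard_vectors):
    """Each detected face votes for the most similar standard vector;
    return the index of the player with the most votes and the tallies."""
    votes = [0] * len(standard_vectors)
    for f in face_vectors:
        best = max(range(len(standard_vectors)),
                   key=lambda j: cosine_similarity(f, standard_vectors[j]))
        votes[best] += 1
    return votes.index(max(votes)), votes

# Toy 2-D stand-ins for the four standard vectors F1..F4 and detections f1..f4
F = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
     np.array([-1.0, 0.0]), np.array([0.7, 0.7])]
f = [np.array([0.9, 0.1]), np.array([0.6, 0.65]),
     np.array([1.0, 0.05]), np.array([0.95, 0.2])]
winner, votes = vote_for_player(f, F)  # F1 gets 3 votes, F4 gets 1
```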
Calculating the similarity according to the recognition result and voting for the player according to the similarity specifically comprises the following steps: if no face is detected, calculating the similarity Sim(X, Y) between each color histogram feature and each standard color histogram feature, where X denotes a color histogram feature, Y denotes a standard color histogram feature, X_i is the i-th component of X, and Y_i is the i-th component of Y; the smaller the calculated Sim(X, Y), the higher the similarity;
each detection frame votes for the standard color histogram feature with the highest similarity; the number of votes obtained by the player represented by each standard color histogram feature is counted, and the goal segment is judged to belong to the player represented by the standard color histogram feature with the most votes.
Specifically, the face detection algorithm performs face detection on the first detection frame and on the second, third, and fourth detection frames with the smallest distance values. In the above embodiment the number of players is four; if the face detection algorithm detects no face, the color histogram features of the four detection frames are recorded as g1, g2, g3, and g4, and the four standard color histogram features as G1, G2, G3, and G4. The similarity between each of the four color histogram features and each standard color histogram feature is calculated, where X is any one of g1, g2, g3, and g4, Y is any one of G1, G2, G3, and G4, X_i is the i-th component of X, and Y_i is the i-th component of Y; the smaller the calculated Sim(X, Y), the higher the similarity.
The similarity between g1 and each of the four standard color histogram features G1, G2, G3, and G4 is calculated; assuming g1 is most similar to G1, g1 votes for the player represented by G1. Likewise, assuming g2 is most similar to G4, g2 votes for the player represented by G4; assuming g3 is most similar to G1, g3 votes for the player represented by G1; and assuming g4 is most similar to G1, g4 votes for the player represented by G1.
In this embodiment, the player represented by the standard color histogram feature G1 receives 3 votes and the player represented by G4 receives 1 vote; since G1 has the most votes, the goal segment is judged to belong to the player represented by G1.
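The histogram-voting fallback can be sketched the same way. The patent's histogram similarity formula is not reproduced in this text, so Euclidean distance is used here as a stand-in measure where smaller means more similar, matching the stated convention; the toy histograms are invented to reproduce the 3-to-1 vote:

```python
import numpy as np

def histogram_distance(x, y):
    """Stand-in distance between color histograms (Euclidean); the source
    only states that a smaller Sim(X, Y) means higher similarity."""
    return float(np.linalg.norm(np.asarray(x) - np.asarray(y)))

def vote_by_histogram(hist_features, standard_hists):
    """Each detection-frame histogram votes for the closest standard
    histogram; return the winning player index and the tallies."""
    votes = [0] * len(standard_hists)
    for g in hist_features:
        best = min(range(len(standard_hists)),
                   key=lambda j: histogram_distance(g, standard_hists[j]))
        votes[best] += 1
    return votes.index(max(votes)), votes

# Toy 3-bin stand-ins for the standard features G1..G4 and detections g1..g4
G = [np.array([0.8, 0.1, 0.1]), np.array([0.1, 0.8, 0.1]),
     np.array([0.1, 0.1, 0.8]), np.array([0.4, 0.4, 0.2])]
g = [np.array([0.75, 0.15, 0.1]), np.array([0.35, 0.45, 0.2]),
     np.array([0.7, 0.2, 0.1]), np.array([0.8, 0.05, 0.15])]
winner, votes = vote_by_histogram(g, G)  # G1 gets 3 votes, G4 gets 1
```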
The above steps are repeated for each goal segment so that the goal segments are classified by player; after the match ends, a highlight video is automatically generated from all the goal segments of a given player and made available for that player to download.
Example 2:
a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of embodiment 1.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that:
reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
In addition, it should be noted that the specific embodiments described in the present specification may differ in the shape of the components, the names of the components, and the like. All equivalent or simple changes of the structure, the characteristics and the principle of the invention which are described in the patent conception of the invention are included in the protection scope of the patent of the invention. Various modifications, additions and substitutions for the specific embodiments described may be made by those skilled in the art without departing from the scope of the invention as defined in the accompanying claims.
Claims (12)
1. A goal segment classification method based on a neural network is characterized by comprising the following steps:
acquiring a first shot video, inputting each video frame to be identified of the first shot video into a detection model so as to identify first detection frames of the player in all the video frames to be identified and the video frame at the moment the shot leaves the player's hand, and calculating a first sole coordinate of the player in that video frame according to the first detection frame;
calculating a second sole coordinate in at least one second shot video according to the first sole coordinate, acquiring second detection frames representing all players in the video frame at the moment the shot leaves the hand in the second shot video, calculating distance values between the second sole coordinate and all the second detection frames, and selecting the player in the second detection frame with the smallest distance value;
and carrying out face recognition and/or color recognition on the players in the first shot video and the at least one second shot video, calculating the similarity according to the recognition results, voting for the players according to the similarity, and classifying the goal segments of the first shot video and the at least one second shot video according to the voting results.
2. The neural network-based goal segment classification method according to claim 1, further comprising the steps of:
and acquiring basketball training data, preprocessing the basketball training data, and inputting the preprocessed basketball training data into a neural network model so as to train the neural network model to obtain a detection model.
3. The method for classifying goal segments based on neural network as claimed in claim 1, wherein said obtaining a first captured video and inputting each video frame to be identified of said first captured video into a detection model, specifically comprises the following steps:
the basketball shooting system comprises a plurality of image collectors, wherein the image collectors are used for shooting the whole basketball half court, the basketball restricted area, and the three-point area;
the image collector is used for collecting the first shooting video and at least one second shooting video;
and acquiring all goal segments of the first shot video, wherein each goal segment comprises the video frame of the basketball entering the basket together with several seconds before and after that frame, and inputting each video frame to be identified of the goal segments into the detection model.
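The windowing step of claim 3 amounts to computing a frame range around the basket-entry frame; a sketch, with illustrative window lengths since the patent leaves the number of seconds unspecified:

```python
def goal_segment_range(entry_frame, fps, seconds_before=5.0, seconds_after=3.0,
                       total_frames=None):
    """Return [start, end] frame indices of a goal segment: the basket-entry
    frame plus a window pushed several seconds forward and backward,
    clamped to the video bounds."""
    start = max(0, entry_frame - int(round(seconds_before * fps)))
    end = entry_frame + int(round(seconds_after * fps))
    if total_frames is not None:
        end = min(end, total_frames - 1)
    return start, end

# Basket entry at frame 300 of a 450-frame, 30 fps clip
start, end = goal_segment_range(entry_frame=300, fps=30, total_frames=450)
```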
4. The neural network-based goal segment classification method of claim 1, wherein second sole coordinates in at least one second shot video are calculated according to the first sole coordinates, and the method specifically comprises the following steps:
randomly acquiring a first video frame from the first shooting video and randomly acquiring a second video frame from the second shooting video;
calibrating four corner points of the basketball restricted area in the first video frame and in the second video frame respectively, and calculating an affine transformation matrix between the first video frame and the second video frame according to the respectively calibrated corner points;
and calculating the second sole coordinate in the second shot video by applying the affine transformation matrix to the first sole coordinate.
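The corner-point calibration and coordinate mapping of claim 4 can be sketched by fitting a 2x3 affine matrix to the four calibrated restricted-area corners by least squares (three non-collinear point pairs would determine an affine map exactly; the fourth adds redundancy) and applying it to the first sole coordinate. The corner coordinates below are invented for illustration:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Fit a 2x3 affine matrix A from point pairs by least squares,
    so that [x', y'] = A @ [x, y, 1]."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    X = np.hstack([src, np.ones((len(src), 1))])   # N x 3 homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solves X @ A = dst, A is 3 x 2
    return A.T                                      # 2 x 3

def transform_point(A, pt):
    x, y = pt
    return A @ np.array([x, y, 1.0])

# Four restricted-area corners as seen by camera 1 and camera 2 (illustrative)
corners_cam1 = [(100, 400), (300, 400), (300, 600), (100, 600)]
corners_cam2 = [(150, 380), (350, 390), (360, 590), (160, 580)]
A = fit_affine(corners_cam1, corners_cam2)
sole_cam2 = transform_point(A, (200, 500))  # first sole coordinate -> second view
```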
5. The neural network-based goal segment classification method as claimed in claim 1, wherein acquiring the second detection frames representing all players in the video frame at the moment the shot leaves the hand in the second shot video, calculating the distance values between the second sole coordinate and all the second detection frames, and selecting the player in the second detection frame with the smallest distance value specifically comprises the following steps:
acquiring all goal segments of the second shot video, wherein each goal segment comprises the video frame of the basketball entering the basket together with several seconds before and after that frame;
inputting the video frame at the moment the shot leaves the hand in the goal segment into the detection model so as to identify the second detection frames of all players in that video frame;
and calculating the distance values between the second sole coordinate and all the second detection frames, and selecting the second detection frame with the smallest distance value, wherein that second detection frame contains the player of the goal segment.
6. The neural network-based goal segment classification method according to claim 1, further comprising the steps of:
and acquiring a single photo of each participating player, and calculating the HSV color histogram features of the player.
7. The neural network-based goal segment classification method as claimed in claim 6, wherein acquiring a single photo of each participating player and calculating the HSV color histogram features of the player comprises the following steps:
converting the BGR color space of the single photo into an HSV color space, and quantizing each component of the HSV color space according to a quantization table;
and combining the quantized HSV components into a single-channel image according to a weighting ratio, and calculating the color histogram feature from the single-channel image so as to obtain and store the standard color histogram feature of the area where the player is located.
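A sketch of the quantization and histogram computation of claims 6 and 7; the bin counts (8 for H, 4 each for S and V) and the way the quantized components are packed into a single-channel index are assumptions, since the patent's quantization table and combination ratio are not reproduced here:

```python
import numpy as np

def hsv_histogram(hsv, h_bins=8, s_bins=4, v_bins=4):
    """Quantize each HSV component, pack the quantized components into one
    single-channel index image, and build a normalized color histogram
    over the combined bins."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    hq = (h.astype(np.int32) * h_bins) // 180  # OpenCV-style H in [0, 180)
    sq = (s.astype(np.int32) * s_bins) // 256
    vq = (v.astype(np.int32) * v_bins) // 256
    single = hq * (s_bins * v_bins) + sq * v_bins + vq  # single-channel image
    hist = np.bincount(single.ravel(), minlength=h_bins * s_bins * v_bins)
    return hist / hist.sum()

hsv = np.zeros((4, 4, 3), dtype=np.uint8)  # a uniform dark patch for illustration
feat = hsv_histogram(hsv)                  # 8 * 4 * 4 = 128-bin feature vector
```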
8. The neural network-based goal segment classification method according to claim 1, further comprising the steps of:
and acquiring a single photo of each participating player, and detecting the face in the single photo and calculating a face feature vector through a face detection algorithm and a face recognition algorithm.
9. The neural network-based goal segment classification method as claimed in claim 8, wherein acquiring a single photo of each participating player, detecting the face in the single photo, and calculating a face feature vector through a face detection algorithm and a face recognition algorithm specifically comprises the following steps:
converting the BGR color space of the single photo into an RGB color space, and normalizing the pixel values of the RGB color space;
detecting the single photo through a face detection algorithm, and if the face in the single photo is detected, intercepting the detected face;
and correcting the face through a face correction algorithm, identifying the corrected face through a face identification algorithm, extracting a standard face characteristic vector and storing the standard face characteristic vector.
10. The neural network-based goal segment classification method as claimed in claim 1, wherein calculating the similarity according to the recognition result and voting for the player according to the similarity comprises the following steps:
if a face is detected, calculating the similarity between each face feature vector and each standard face feature vector, wherein the similarity formula is:
Sim(X, Y) = (X · Y) / (||X|| · ||Y||)
wherein X denotes a face feature vector, Y denotes a standard face feature vector, X · Y is the dot product of the two vectors, and ||X|| · ||Y|| is the product of the moduli of the two vectors; the larger the calculated Sim(X, Y), the higher the similarity;
and voting for the standard face feature vector with the highest similarity, counting the number of votes obtained by the player represented by each standard face feature vector, and judging that the goal segment belongs to the player represented by the standard face feature vector with the most votes.
11. The neural network-based goal segment classification method as claimed in claim 1, wherein calculating the similarity according to the recognition result and voting for the player according to the similarity comprises the following steps:
if no face is detected, calculating the similarity Sim(X, Y) between each color histogram feature and each standard color histogram feature, wherein X denotes a color histogram feature, Y denotes a standard color histogram feature, X_i is the i-th component of X, and Y_i is the i-th component of Y; the smaller the calculated Sim(X, Y), the higher the similarity;
and voting for the standard color histogram feature with the highest similarity, counting the number of votes obtained by the player represented by each standard color histogram feature, and judging that the goal segment belongs to the player represented by the standard color histogram feature with the most votes.
12. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, carry out the steps of the neural network-based goal segment classification method of any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111321234.2A CN113989725B (en) | 2021-11-09 | 2021-11-09 | A goal segment classification method based on neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113989725A true CN113989725A (en) | 2022-01-28 |
CN113989725B CN113989725B (en) | 2024-11-08 |
Family
ID=79747436
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114973409A (en) * | 2022-05-18 | 2022-08-30 | 青岛根尖智能科技有限公司 | Goal scoring identification method and system based on court environment and personnel pose |
CN115966019A (en) * | 2022-12-27 | 2023-04-14 | 汕头市同行网络科技有限公司 | Method for acquiring position of basketball shooting assisting player |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102427507A (en) * | 2011-09-30 | 2012-04-25 | 北京航空航天大学 | Football video highlight automatic synthesis method based on event model |
WO2019071664A1 (en) * | 2017-10-09 | 2019-04-18 | 平安科技(深圳)有限公司 | Human face recognition method and apparatus combined with depth information, and storage medium |
CN109961039A (en) * | 2019-03-20 | 2019-07-02 | 上海者识信息科技有限公司 | A kind of individual's goal video method for catching and system |
CN110472561A (en) * | 2019-08-13 | 2019-11-19 | 新华智云科技有限公司 | Soccer goal kind identification method, device, system and storage medium |
CN110674767A (en) * | 2019-09-29 | 2020-01-10 | 新华智云科技有限公司 | Method for automatically distinguishing basketball goal segment AB team based on artificial intelligence |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |