CN108632666B - Video detection method and video detection equipment
- Publication number: CN108632666B
- Application number: CN201710154205.9A
- Authority: CN (China)
- Legal status: Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
Abstract
The application discloses a video detection method, which comprises the following steps: a video detection device generates a source video in which each video frame comprises a frame marker area and a frame marker, and the frame marker positions of the video frames in the source video are all different; after providing the source video to a first display device, the video detection device shoots a first video played by the first display device to generate a second video, selects a target video frame from the second video, obtains the frame marker position of the target video frame, and determines the correspondence between the target video frame and a video frame of the source video according to that frame marker position. The application also provides a video detection device capable of implementing the video detection method. With the method and the device, video frames can be accurately identified.
Description
Technical Field
The present application relates to the field of image technologies, and in particular, to a video detection method and a video detection device.
Background
In video applications, increasing emphasis is placed on video quality. Video quality includes sharpness, fluency, and real-time performance, among others. Video fluency is generally reflected by the frame rate displayed on an image display device: the higher the frame rate, the better the fluency of the video.
Referring to fig. 1, a conventional test system includes a video performance test system, a visitor call terminal, and an indoor user terminal. The visitor call terminal comprises a camera, the indoor user terminal comprises a display, and the visitor call terminal and the indoor user terminal can transmit video frames over a transmission network. The conventional image detection method is roughly as follows: a video test source is composed of images carrying frame markers; the camera of the visitor call terminal shoots the video test source to obtain the images with frame markers, which are encoded, transmitted to the indoor user terminal, and displayed on the display of the indoor user terminal. The images with frame markers displayed on the display of the indoor user terminal are shot by the camera unit of the video performance test system and then sent to its video analysis unit for analysis. When a video image is displayed normally, the frame markers in the video image are as shown in fig. 2a or fig. 3a.
The frame rate of the images acquired by the camera unit of the video performance test system is often inconsistent with the frame rate of the images displayed by the terminal. When the indoor user terminal switches video frames, the response delay of the display of the indoor user terminal can cause a single video frame acquired by the video performance test system to contain multiple frame markers. If multiple frame markers overlap, ghosting and blurring may result (as shown in fig. 2b, fig. 3b, or fig. 3c), which may prevent the video analysis unit from accurately identifying the video frame.
Disclosure of Invention
The application provides a video detection method and a video detection device that can accurately identify video frames.
A first aspect provides a video detection method, including: the video detection device generates a source video and provides the source video for the first display device; the method comprises the steps of shooting a first video played by a first display device, generating a second video according to the first video, selecting a target video frame from the second video, obtaining a frame mark position of the target video frame, and determining the corresponding relation between the target video frame and a video frame of a source video according to the frame mark position of the target video frame. Each video frame of the source video comprises a frame mark area and a frame mark, the frame mark positions of the video frames in the source video are different, the frame mark positions are positions of the frame marks in the frame mark area, and the first video is generated by the first display device according to the source video. The target video frame may be any one of the frames in the second video.
With this implementation, because the frame marker positions of the video frames in the source video are all different, when a video frame acquired by the video detection device includes a plurality of frame markers, those frame markers are not ghosted or blurred but are distributed at different positions of the frame marker area. The video detection device can therefore accurately identify the target video frame according to the frame marker position, which solves the problem in the prior art that video frames cannot be accurately identified.
In a possible implementation manner of the first aspect, the generating of the source video by the video detection device may specifically be: the video detection device acquires a video frame sequence, sets a frame mark in a frame mark area of each video frame according to the video frame sequence, and takes the video frame sequence comprising the frame mark area and the frame mark as a source video. It can be seen that the present embodiment provides a method for assigning frame markers to video frames of a source video in video frame order, which can be applied to video stream detection.
Further, in another possible implementation manner, for each video frame in the source video, the frame mark position of the video frame corresponds to the position of the video frame in the source video. Therefore, in the source video, the frame mark position of each video frame has uniqueness, and different video frames can be quickly and accurately identified according to the frame marks.
In another possible implementation manner of the first aspect, the video detection device acquiring the frame marker position of the target video frame may specifically be: when the target video frame includes a plurality of frame markers, the frame marker position of the last frame marker is taken as the frame marker position of the target video frame. This embodiment thus provides a feasible scheme for acquiring the frame marker position of the current frame.
In another possible implementation manner of the first aspect, the providing, by the video detection device, the source video to the first display device may specifically be: the video detection device sends a source video to the first display device; or the video detection device plays the source video, and the first display device shoots the source video. It can be seen that the present embodiment can provide the source video to the first display device in a variety of ways, providing flexibility in implementation of the scheme.
In another possible implementation manner of the first aspect, after generating the second video according to the first video, when the frame marker position of the first frame of the second video is acquired, the video detection device sets the video frame number of the first video to one; starting from a second frame of the second video, the video detection equipment acquires a frame mark position of a current frame and acquires a frame mark position of a previous frame; comparing the frame mark position of the current frame with the frame mark position of the previous frame, and counting according to the comparison result until the current frame is the last frame of the second video; and then taking the counting result as the video frame number of the first video, and determining the frame rate of the first video according to the video frame number of the first video and the duration of the second video. The previous frame is a video frame adjacent to and before the current frame. By this implementation, in this embodiment, different video frames in the second video can be identified according to the frame marker position, then the number of video frames with different display contents in the second video is counted, and then the frame rate of the first video is determined according to the counting result and the duration of the second video, thereby solving the problem that the frame rate cannot be accurately counted in the prior art.
In another possible implementation manner of the first aspect, the comparing, by the video detection device, the frame marker position of the current frame with the frame marker position of the previous frame, and the counting according to the comparison result may specifically be: if the frame mark position of the current frame is different from the frame mark position of the previous frame, adding one to the video frame number of the first video; and if the frame mark position of the current frame is the same as that of the previous frame, keeping the video frame number of the first video unchanged. Therefore, the embodiment provides a specific scheme for counting the frame number of the first video, and has good feasibility.
In another possible implementation manner of the first aspect, while the video detection device captures the first video, the video detection device captures a third video played by the second display device, and generates a fourth video according to the third video; acquiring a frame mark position of a first test frame from a second video, determining a frame number corresponding to the frame mark position of the first test frame according to a preset corresponding relation between the frame mark position and the frame number, and taking the frame number corresponding to the frame mark position of the first test frame as a first frame number; acquiring a frame mark position of a second test frame from the fourth video, determining a frame number corresponding to the frame mark position of the second test frame according to the corresponding relation between the preset frame mark position and the frame number, and taking the frame number corresponding to the frame mark position of the second test frame as the second frame number; calculating the frame number difference between the first frame number and the second frame number; and determining the transmission time delay between the first display equipment and the second display equipment according to the frame number difference and the frame rate of the source video. Wherein the third video is generated by the second display device from the first video. The first test frame corresponds to a video frame displayed by the first display device at the test time, and the second test frame corresponds to a video frame displayed by the second display device at the test time. The correspondence between the frame marker position and the frame number refers to the correspondence between the frame marker position and the frame number of the source video. According to the embodiment, the corresponding relation between the first test frame and the video frame in the source video and the corresponding relation between the second test frame and the video frame in the source video can be respectively determined according to the frame mark positions, and the transmission delay between the first display device and the second display device can be accurately measured according to the frame number difference because the frame number difference of the videos played by the two display devices corresponds to the transmission delay between the two display devices.
A second aspect provides a video detection device, which has the function of implementing the video detection device in the video detection method. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
A third aspect provides a computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of the above aspects.
A fourth aspect provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the above aspects.
In the embodiment of the application, the video detection device provides a source video for the first display device, shoots the first video played by the first display device to obtain a second video, selects a target video frame from the second video, obtains a frame mark position of the target video frame, and determines a corresponding relation between the target video frame and a video frame of the source video according to the frame mark position of the target video frame. The source video comprises a plurality of video frames, each video frame comprises a frame mark area and a frame mark, the frame mark positions of the video frames in the source video are different, and the frame mark positions are the positions of the frame marks in the frame mark areas. In the case that the frame rate of the image acquired by the video detection device is not consistent with the frame rate of the image displayed by the first display device, due to the fact that the positions of the frame markers of the video frames in the source video are different, the frame markers in the target video frame are not ghosted or blurred, and are distributed in different positions of the frame marker area. The video detection equipment can accurately identify the target video frame according to the frame mark position, and the problem that the video frame cannot be accurately identified in the prior art is solved.
Drawings
FIG. 1 is a schematic diagram of a prior art test system;
FIG. 2a is a diagram of a normal image in the prior art;
FIG. 2b is a schematic diagram of an anomaly image in the prior art;
FIG. 3a is a diagram of a normal image in the prior art;
FIG. 3b is a schematic diagram of an anomaly image in the prior art;
FIG. 3c is another schematic diagram of an anomaly image in the prior art;
FIG. 4a is a schematic diagram of a test system according to an embodiment of the present application;
FIG. 4b is another schematic diagram of a test system in an embodiment of the present application;
FIG. 4c is another schematic diagram of a test system in an embodiment of the present application;
FIG. 5 is a schematic diagram of a video detection apparatus in an embodiment of the present application;
FIG. 6 is another schematic diagram of a video detection device in an embodiment of the present application;
FIG. 7 is a flowchart illustrating a video detection method according to an embodiment of the present application;
FIG. 8a is a diagram illustrating a frame marker in the frame marker area according to an embodiment of the present application;
FIG. 8b is a diagram illustrating a plurality of frame markers in the frame marker region according to an embodiment of the present application;
FIG. 8c is another diagram of a plurality of frame markers in a frame marker region according to an embodiment of the present application;
FIG. 9 is a schematic flow chart of a video detection method according to an embodiment of the present application;
FIG. 10 is a schematic flow chart of a video detection method according to an embodiment of the present application;
FIG. 11 is a schematic flow chart illustrating a video detection method according to an embodiment of the present application;
FIG. 12 is a schematic flow chart of a video detection method according to an embodiment of the present application;
FIG. 13 is another schematic diagram of a video detection device in an embodiment of the present application.
Detailed Description
The video detection method provided by the application can be applied to various test systems. The following describes the test system in detail:
in the test system shown in fig. 4a, the test system comprises a video detection device 41 and a first display device 42. The video detection device 41 is provided with a camera, the first display device is provided with a display, and the camera of the video detection device 41 is disposed opposite to the display of the first display device 42.
In the test system shown in fig. 4b, the test system comprises a video detection device 41, a first display device 42 and a second display device 43. The video detection device 41 is provided with a first camera and a second camera, the first display device 42 and the second display device 43 are each provided with a display, the first camera of the video detection device 41 is disposed opposite to the display of the first display device 42, and the second camera is disposed opposite to the display of the second display device 43. The first display device 42 and the second display device 43 are connected through a wired network or a wireless network. After the first display device 42 obtains the video, the video content is compressed and encoded, and then transmitted to the second display device 43 via the network for display.
In the test system shown in fig. 4c, the test system includes a video detection device 41, a first display device 42, a second display device 43, and a material playback device 44. The first display device 42 and the second display device 43 are connected via a network. The material playing device 44 and the video detecting device 41 may be connected via a network or may be directly connected via a communication cable. The material playing apparatus 44 is configured with a display, the first display apparatus 42 is configured with a display and a camera, the second display apparatus 43 is configured with a display, and the video detection apparatus 41 is configured with a first camera and a second camera. The display of the material playing apparatus 44 is disposed opposite to the camera of the first display apparatus 42, the first camera of the video detection apparatus 41 is disposed opposite to the display of the first display apparatus 42, and the second camera is disposed opposite to the display of the second display apparatus 43. After the first display device 42 captures the video played by the material playing device 44, the video content is compressed and encoded, and then transmitted to the second display device 43 via the network for display.
In the above test systems, the first display device 42 and/or the second display device 43 are devices under test, and the video detection device 41 is used for performing video detection on the device under test, such as frame rate detection, delay detection, or sharpness detection. The network may be a wired network or a wireless network, such as Wireless Fidelity (WiFi), 3G, 4G, or 5G.
The video detection device is described in detail below:
in a possible implementation manner, the video detection device 41 includes a processor 411, a camera 412, a memory 413, a communication interface 414, and a communication bus 415, and the processor 411, the camera 412, the memory 413, and the communication interface 414 are connected for communication through the communication bus 415, as shown in fig. 5.
In another possible implementation, the video detection device 41 includes a processor 411, a camera 412, a memory 413, a display 416, and a communication bus 415, and the processor 411, the camera 412, the memory 413, and the display 416 are connected to communicate through the communication bus 415, as shown in fig. 6.
In the video detection apparatus shown in fig. 5 or fig. 6, the number of the processor 411, the camera 412, the memory 413, the communication interface 414, and the display 416 may be one or more.
The processor 411 may be a single-core processor or a multi-core processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions). The processor 411 may be a single processor or a collective term for a plurality of processing elements. For example, the processor 411 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits for controlling the execution of programs of the present application, such as one or more Digital Signal Processors (DSPs) or one or more Field Programmable Gate Arrays (FPGAs).
The camera 412, which is a kind of video input device, has basic functions of video shooting/distribution and still image capturing.
The memory 413 may be a Read-Only Memory (ROM) or another type of static storage device that can store static information and instructions, a Random Access Memory (RAM) or another type of dynamic storage device that can store information and instructions, an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may be self-contained and coupled to the processor via a bus, or may be integrated with the processor.
The communication interface 414 may be any transceiver-type apparatus and is used for communicating with other devices or communication networks, such as an Ethernet, a Radio Access Network (RAN), or a Wireless Local Area Network (WLAN).
The display 416 displays information in a variety of ways under the control of the processor 411. For example, the display 416 may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, or the like.
In one embodiment, the video detection device 41 may further include an input device, which is in communication with the processor 411 and may accept user input in a variety of ways. For example, the input device may be a mouse, a keyboard, a touch screen device, or a sensing device, among others.
The memory 413 is used for storing application program codes for implementing the schemes provided by the embodiments of the present application, and is controlled by the processor 411 to execute. The processor 411 is used for executing the application program codes stored in the memory 413 for implementing the scheme provided by the embodiment of the present application.
The video detection device may be a general purpose computer device or a special purpose computer device. Specifically, the device may be a desktop, a laptop, a mobile phone, a tablet computer, a smart television, a web server, an embedded device, or the like, or a device having a similar structure as in fig. 5 or fig. 6.
The application provides a video detection method, which can accurately identify a video frame and can be applied to a test system shown in fig. 4a or 4 b. Referring to fig. 7, an embodiment of a video detection method provided in the present application includes:
step 701, the video detection device generates a source video.
Each video frame of the source video comprises a frame mark area and a frame mark, the frame mark positions of the video frames in the source video are different, and the frame mark positions are the positions of the frame marks in the frame mark area.
The frame marker area may be formed by a plurality of blocks, the blocks corresponding one-to-one to frame marker positions, and each block may be a single pixel or a pixel region formed by a plurality of pixels. For example, a block may be a pixel region composed of 2 pixels or of 4 pixels. The frame marker area may have no background color, or its background may be set to a certain color, such as white, yellow, green, or another color. The frame marker may be a number, a square, a circular pattern, or a pattern of another shape whose color differs from the background color of the frame marker area. For each video frame, each frame marker occupies one block of the frame marker area.
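For illustration only, a minimal Python sketch of how such a frame marker region could be drawn into a video frame is given below. The block size, region position, and colors are assumptions chosen for the example and are not mandated by the description above.

```python
import numpy as np

def draw_frame_marker(frame, region_top_left, rows, cols, block_px,
                      marker_row, marker_col,
                      background=(255, 255, 255), marker_color=(0, 0, 0)):
    """Paint a frame marker region of rows x cols blocks onto `frame` (H x W x 3, uint8)
    and fill exactly one block -- the marker of this video frame -- with `marker_color`."""
    y0, x0 = region_top_left
    frame[y0:y0 + rows * block_px, x0:x0 + cols * block_px] = background  # region background
    my = y0 + (marker_row - 1) * block_px   # marker_row / marker_col are 1-based, as in the text
    mx = x0 + (marker_col - 1) * block_px
    frame[my:my + block_px, mx:mx + block_px] = marker_color              # marker occupies one block
    return frame

# Example: an 8 x 5 block region near the top-left corner, marker of the first frame at (1, 1).
blank = np.zeros((720, 1280, 3), dtype=np.uint8)
marked = draw_frame_marker(blank, (16, 16), rows=8, cols=5, block_px=20,
                           marker_row=1, marker_col=1)
```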
Step 702, the video detection device provides a source video to the first display device.
Specifically, in an alternative embodiment, the video detection device shown in FIG. 5 sends the source video to the first display device via the communication interface 414.
In another alternative embodiment, the video detection device shown in FIG. 6 plays the source video through the display 416.
Step 703, after the first display device obtains the source video from the video detection device, it generates a first video according to the source video, and plays the first video.
Specifically, when the video detection device 41 shown in fig. 5 sends the source video to the first display device through the communication interface 414, the first display device transcodes the source video into the first video after receiving the source video sent by the video detection device, and plays the first video.
When the video detection device 41 shown in fig. 6 plays the source video on the display 416, the first display device shoots the source video played by the video detection device, generates the first video, and plays the first video. Generally, the format of the first video is not exactly the same as that of the source video; for example, the frame rates differ. It can be understood that the first display device may generate the first video from all or only a part of the source video.
Step 704, the video detection device shoots a first video played by the first display device, and generates a second video according to the first video.
The video detection device may capture the first video played by the first display device at a different frame rate than the first video. The frame rate unit may be expressed in frames per second (fps). For example, the frame rate of the first video is 30fps, and the video detection device may capture the first video at 60fps to obtain the second video, so that the frame rate of the second video is 60 fps.
Step 705, the video detection device selects a target video frame from the second video, obtains a frame mark position of the target video frame, and determines a corresponding relationship between the target video frame and a video frame of the source video according to the frame mark position of the target video frame.
Determining the correspondence between the target video frame and a video frame of the source video according to the frame marker position of the target video frame means: when the frame marker position of the target video frame is the same as the frame marker position of the i-th frame of the source video, determining that the target video frame corresponds to the i-th frame of the source video.
The video detection device may select any frame from the second video as the target video frame and then search the frame marker area of the target video frame for frame markers. If the target video frame includes one frame marker, that frame marker is taken as the frame marker of the target video frame. If the target video frame includes a plurality of frame markers, one of them is selected as the frame marker of the target video frame. The frame marker position of the target video frame is then compared with the frame marker position of each video frame in the source video, and the video frame corresponding to the target video frame can be found in the source video. For example, if the frame marker position of the target video frame is (2,3), the target video frame is determined to correspond to the video frame whose frame marker position is (2,3) in the source video.
In an alternative embodiment, the frame marker positions are in one-to-one correspondence with the frame numbers of the source video. The video detection device determines the frame number of the source video corresponding to the frame marker position of the target video frame according to a preset correspondence between frame marker positions and frame numbers, and then looks up the video frame with that frame number in the source video. For example, if the frame marker position of the target video frame is (2,3) and the frame number corresponding to (2,3) is 11, the target video frame is determined to correspond to the video frame with frame number 11 in the source video.
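As an illustrative sketch only: assuming the row-first ordering described in the next embodiment, and assuming 8 frame marker positions per row (a value inferred from this example rather than stated explicitly), the preset correspondence between a frame marker position and a frame number could look like this.

```python
def frame_number_from_position(row, col, cols_per_row=8):
    """Map a 1-based frame marker position (row, col) to a source-video frame number,
    assuming markers are assigned row by row, left to right (cols_per_row positions per row)."""
    return (row - 1) * cols_per_row + col

# With 8 positions per row, position (2, 3) maps to frame number 11, as in the example above.
assert frame_number_from_position(2, 3) == 11
```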
In this embodiment, in the case that the frame rate of the image captured by the video detection device is not consistent with the frame rate of the image displayed by the first display device, the frame markers in the target video frame are not ghosted or blurred, but distributed in different positions of the frame marker area. The video detection equipment can accurately identify the target video frame according to the frame mark position, and the problem that the video frame cannot be accurately identified in the prior art is solved.
Based on the embodiment shown in fig. 7, in an optional embodiment, step 701 may specifically be: the video detection equipment acquires a video frame sequence, sets a frame mark in a frame mark area of each video frame according to the video frame sequence, and forms the video frame sequence comprising the frame mark area and the frame mark into a source video.
Specifically, for each video frame in the source video, the frame marker position of the video frame corresponds to the position of the video frame in the source video; that is, the order of the frame marker positions is consistent with the order of the video frames. Setting a frame marker in the frame marker area of each video frame according to the video frame order may specifically be: setting a frame marker at the i-th frame marker position of the frame marker area of the i-th video frame, where i is a positive integer and the i-th frame marker position precedes the (i+1)-th frame marker position.
For example, a frame marker area is set on each video frame; the frame marker area includes M × N blocks, each block corresponding to one frame marker position, where M is the total number of rows, N is the total number of columns, and M and N are positive integers. Taking M = 8 and N = 5 as an example, the frame marker position of the first frame is the block in row 1, column 1, and the frame marker position of the second frame is the block in row 1, column 2; the frame marker positions are ordered row-first (a row is filled column by column before moving to the next row), and the frame marker positions of the other frames can be deduced by analogy. The frame marker of the 11th frame is as shown in fig. 8a. It can be understood that the frame markers may also be set in column-first order.
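A minimal sketch of this assignment rule, using the example values M = 8 and N = 5; the function name and the assumption that the source has at most M × N frames (implied by every frame marker position being unique) are illustrative, not taken from the description.

```python
def marker_position_for_frame(i, rows=8, cols=5):
    """Return the 1-based (row, col) frame marker position of the i-th source video frame,
    filling the frame marker region row by row, left to right.  Assumes 1 <= i <= rows * cols,
    since every video frame of the source video must have a distinct marker position."""
    assert 1 <= i <= rows * cols
    return (i - 1) // cols + 1, (i - 1) % cols + 1

# Frame 1 -> (1, 1) and frame 2 -> (1, 2), matching the ordering described above.
print(marker_position_for_frame(1), marker_position_for_frame(2))
```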
Based on the embodiment or the alternative embodiment shown in fig. 7, in another alternative embodiment, the frame marker position of the target video frame obtained in step 705 may specifically be: when the target video frame includes a plurality of frame markers, the frame marker position of the last frame marker is taken as the frame marker position of the target frame.
Specifically, when the display image of the first display device is ghosted or blurred, a plurality of frame markers appear in a video frame captured by the video detection device. When the video detection device detects the video frames of the second video in sequence, it selects the last frame marker position as the frame marker position of the current frame.
When the frame marker positions in the source video are ordered row-first, if y of the x frame markers share the maximum row number, the frame marker with the maximum column number among those y frame markers is taken as the last frame marker, and its position is the last frame marker position. For example, when the frame marker position of frame marker 1 is (2,2) and the frame marker position of frame marker 2 is (2,3), frame marker 2 is determined to be the last frame marker and (2,3) is the last frame marker position, as shown in fig. 8b. When the frame marker positions in the source video are ordered column-first, if y of the x frame markers share the maximum column number, the frame marker with the maximum row number among those y frame markers is taken as the last frame marker.
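The selection of the last frame marker can be sketched as follows; the tuple-comparison approach is one possible realization and only assumes the orderings described above.

```python
def last_marker_position(positions, row_first=True):
    """Given the 1-based (row, col) positions of all frame markers found in one captured frame,
    return the position of the last (most recently set) marker.  With row-first ordering the
    last marker has the largest row and, among equal rows, the largest column; with
    column-first ordering the comparison is reversed."""
    if row_first:
        return max(positions, key=lambda p: (p[0], p[1]))
    return max(positions, key=lambda p: (p[1], p[0]))

# The ghosted frame of fig. 8b contains markers at (2, 2) and (2, 3); (2, 3) is the last one.
print(last_marker_position([(2, 2), (2, 3)]))   # -> (2, 3)
```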
It can be seen from the above embodiments that the present application provides a plurality of frame marker setting methods, and the present application may obtain the frame marker position of the target video frame by using a corresponding frame marker searching method, corresponding to different frame marker setting methods.
Based on the embodiment or the alternative embodiments shown in fig. 7, in another alternative embodiment, obtaining the frame marker position of the target video frame in step 705 may specifically be: performing binarization processing on the target video frame to obtain a binarized image, and, when the binarized image includes a plurality of frame markers, taking the frame marker position of the last frame marker as the frame marker position of the target video frame. The process of finding the last frame marker position is similar to that of the previous embodiment. For example, after binarization of the frame markers shown in fig. 8b, the frame markers shown in fig. 8c are obtained.
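The description does not prescribe a particular binarization method; a simple global-threshold sketch is shown below, assuming dark markers on a lighter background and an illustrative threshold value.

```python
import numpy as np

def binarize(gray_frame, threshold=128):
    """Global binarization of a grayscale frame (uint8 ndarray): pixels darker than
    `threshold` become 0 (marker), all other pixels become 255 (background)."""
    return np.where(gray_frame < threshold, 0, 255).astype(np.uint8)
```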
Because the second video is obtained by shooting the first video, when the frame rate of the images collected by the video detection device is higher than the frame rate of the images displayed by the first display device, the number of the video frames of the second video is more than that of the video frames of the first video in the same period, so that the second video comprises the video frames with the same display content and the video frames with different display contents. Display content refers to the area in the video frame other than the frame marker area. In the prior art, the frame rate of the first video cannot be accurately measured by the conventional video detection method because the video frame of the second video cannot be accurately identified. In order to solve the above problem, the present application further provides a video detection method, which can select video frames with different display contents from a second video as video frames of a first video, and then accurately measure a frame rate of the first video.
Based on the embodiment shown in fig. 7, in an optional embodiment, after step 704, the video detection method further includes:
step 901, when acquiring the frame marker position of the first frame of the second video, the video detection device sets the video frame number of the first video to one.
It should be noted that, in addition to performing sequential detection on the second video, after the second video is generated, reverse-order detection may be performed on the second video, and the video frame number of the first video may also be acquired according to the comparison result of the frame marker positions.
Specifically, after the duration of the second video and the video frame number of the first video are obtained, the frame rate of the first video is calculated according to a preset calculation formula, which may be: frame rate of the first video = video frame number of the first video / duration of the second video.
Based on the embodiment shown in fig. 9, in an alternative embodiment, in step 903, the frame marker position of the current frame is compared with the frame marker position of the previous frame, and counting is performed according to the comparison result, which may specifically be: if the frame mark position of the current frame is different from the frame mark position of the previous frame, adding one to the video frame number of the first video; and if the frame mark position of the current frame is the same as that of the previous frame, keeping the video frame number of the first video unchanged.
In this embodiment, if the frame marker position of the current frame is different from that of the previous frame, it is determined that the display content of the current frame differs from that of the previous frame, and the video frame number of the first video is increased by one. If the frame marker position of the current frame is the same as that of the previous frame, it is determined that the display content of the two frames is the same, and the video frame number of the first video is kept unchanged. In this way, all video frames with different display content in the second video can be detected.
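A compact sketch of this counting procedure; the frame marker positions of the second video are assumed to have already been extracted and listed in playback order.

```python
def first_video_frame_rate(marker_positions, second_video_duration_s):
    """Count the video frames of the first video from the frame marker positions of the
    second video (one (row, col) tuple per captured frame, in playback order) and derive
    the frame rate of the first video."""
    count = 0
    previous = None
    for position in marker_positions:
        if position != previous:      # marker position changed -> a new first-video frame
            count += 1
        previous = position           # same position -> same first-video frame, count unchanged
    return count / second_video_duration_s

# A 1-second capture in which every first-video frame was photographed twice:
positions = [(1, c) for c in range(1, 31) for _ in range(2)]
print(first_video_frame_rate(positions, 1.0))   # -> 30.0
```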
Based on the embodiment shown in fig. 9, in another alternative embodiment, in step 903, the frame marker position of the current frame is compared with the frame marker position of the previous frame, and counting is performed according to the comparison result, which may specifically be: determining the frame number of the current frame according to the frame mark position of the current frame and the preset corresponding relation; determining the frame number of the previous frame according to the frame mark position of the previous frame and a preset corresponding relation; if the frame number of the current frame is different from that of the previous frame, adding one to the video frame number of the first video; and if the frame number of the current frame is the same as that of the previous frame, keeping the video frame number of the first video unchanged.
Specifically, before comparing the frame marker positions of the second video, the video detection device establishes a correspondence between the frame marker positions and the frame numbers of the source video, where the frame marker positions and the frame numbers of the source video are in one-to-one correspondence. After the frame number of the source video corresponding to the frame mark position of each video frame is determined, whether the two frames are the same frame in the source video can be determined by comparing the frame numbers of the two frames, if not, the video frame number of the first video is added by one, and if the two frames are the same frame, the video frame number of the first video is kept unchanged.
The application also provides a video detection method, which can accurately measure the transmission delay between the first display device and the second display device and is suitable for the test system shown in fig. 4 b. Referring to fig. 10, another embodiment of a video detection method provided in the present application includes:
step 1001, the video detection device generates a source video.
Each video frame of the source video comprises a frame mark area and a frame mark, the frame mark positions of the video frames in the source video are different, and the frame mark positions are the positions of the frame marks in the frame mark area. It can be understood that the video detection device may obtain the frame rate of the source video, and the frame rate of the source video may be adjusted according to actual needs.
Step 1002, the video detection device provides a source video to the first display device.
Step 1003, the first display device obtains a source video from the video detection device, generates a first video according to the source video, and plays the first video.
Steps 1001 to 1003 are similar to steps 701 to 703 and are not described herein again.
Step 1004, the first display device sends the first video to the second display device.
After the first display device generates the first video, the first video may be transmitted to a second display device.
And 1005, generating a third video by the second display device according to the first video, and playing the third video.
Step 1006, the video detection device simultaneously shoots a first video played by the first display device and a third video played by the second display device, generates a second video according to the first video, and generates a fourth video according to the third video.
Step 1007, the video detection device obtains the frame mark position of the first test frame from the fourth video, determines the frame number corresponding to the frame mark position of the first test frame according to the corresponding relationship between the preset frame mark position and the frame number, and takes the frame number corresponding to the frame mark position of the first test frame as the first frame number. The first test frame corresponds to a video frame displayed by the second display device at the test time.
The correspondence between the frame mark position and the frame number refers to the correspondence between the frame mark position and the frame number of the video frame in the source video.
Step 1008, the video detection device obtains a frame mark position of the second test frame from the second video, determines a frame number corresponding to the frame mark position of the second test frame according to a preset correspondence between the frame mark position and the frame number, and takes the frame number corresponding to the frame mark position of the second test frame as the second frame number. The second test frame corresponds to a video frame displayed by the first display device at the test time.
Step 1009, the video detection device calculates a frame number difference between the first frame number and the second frame number, and determines the transmission delay between the first display device and the second display device according to the frame number difference and the frame rate of the source video.
In this embodiment, when the first display device transmits the first video to the second display device, the transmission of the video takes time. The video detection device can shoot the first display device and the second display device simultaneously; after obtaining the first test frame and the second test frame respectively, it can determine the transmission delay between the first display device and the second display device from the frame number difference between the first test frame and the second test frame, combined with the frame rate of the source video. Denoting the transmission delay between the first display device and the second display device as delay, the calculation formula may be: delay = frame number difference / frame rate of the source video.
For example, the video detection device may shoot the first video and the third video at time t1, obtaining the first test frame and the second test frame respectively. Assuming that the frame rate of the source video is 30 fps, the frame number corresponding to the first test frame is 1000, and the frame number corresponding to the second test frame is 997, then the frame number difference is 3 and delay = 3/30 = 0.1 seconds.
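A minimal sketch of this delay calculation; taking the absolute value of the frame number difference is an assumption of the sketch, made so that the order of the two test frames does not matter.

```python
def transmission_delay_s(first_frame_number, second_frame_number, source_frame_rate_fps):
    """Transmission delay between the two display devices, derived from the frame numbers
    they display at the same test instant and the frame rate of the source video."""
    return abs(first_frame_number - second_frame_number) / source_frame_rate_fps

# The example above: frame numbers 1000 and 997 with a 30 fps source give a 0.1 s delay.
print(transmission_delay_s(1000, 997, 30))   # -> 0.1
```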
The video detection method based on the test system shown in fig. 4a or fig. 4b is described above, and the video detection method based on the test system shown in fig. 4c is described below.
Referring to fig. 11, another embodiment of a video detection method provided in the present application includes:
step 1101, the material playing device synchronizes the frame mark region and the frame mark of the source video with the video detection device.
In this embodiment, the source video is generated by the material playing device, or generated by the video detection device. The specific process of generating the source video by the material playing device is similar to the process of generating the source video by the video detecting device in the embodiment or the alternative embodiment shown in fig. 7.
When the source video is generated by the material playing device, the material playing device synchronizes information such as a frame mark region and a frame mark of the source video to the video detection device, so that the video detection device can evaluate videos played by other devices (such as a second display device). The material playing device can also acquire the frame rate of the source video and synchronize the frame rate of the source video to the video detection device.
Step 1102, the material playing device plays the source video.
Step 1103, the first display device shoots a source video played by the material playing device, generates a first video according to the source video, and plays the first video.
And step 1104, the first display device sends the first video to the second display device.
And 1105, the second display device generates a third video according to the first video, and plays the third video.
Step 1106, the video detection device shoots a third video played by the second display device, and generates a fourth video according to the third video.
Step 1107, the video detection device selects a target video frame from the fourth video, obtains a frame mark position of the target video frame, and determines a corresponding relationship between the target video frame and a video frame of the source video according to the frame mark position of the target video frame.
The video detection device selects any one frame from the fourth video as a target video frame. Since the video detection device stores information such as the frame marker area and the frame marker before the second display device plays the third video, the video detection device can determine the corresponding relationship between the target video frame and the video frame of the source video according to the frame marker position of the target video frame.
The video detection device obtains the frame marker position of the target video frame, and determines the corresponding relationship between the target video frame and the video frame of the source video according to the frame marker position of the target video frame, which is similar to step 705 in the embodiment or the optional embodiment shown in fig. 7 and is not repeated here. It can be seen that in the test system shown in fig. 4c, the video detection device can accurately identify the video frame.
In the test system shown in fig. 4c, the frame rate of the third video can be accurately obtained. Based on the embodiment shown in fig. 11, in an optional embodiment, after step 1106, the video detection method further includes: when the frame mark position of the first frame of the fourth video is obtained, the video detection equipment sets the video frame number of the third video to be one; starting from a second frame of the fourth video, the video detection device acquires a frame mark position of a current frame and acquires a frame mark position of a previous frame; comparing the frame mark position of the current frame with the frame mark position of the previous frame, and counting according to the comparison result until the current frame is the last frame of the fourth video; and taking the counting result as the video frame number of the third video, and determining the frame rate of the third video according to the video frame number of the third video and the duration of the fourth video. Specifically, the specific process of determining the frame rate of the third video is similar to the embodiment shown in fig. 9, and is not repeated here.
In the test system shown in fig. 4c, the transmission delay between the first display device and the second display device can also be accurately obtained.
Referring to fig. 12, another embodiment of a video detection method provided in the present application includes:
step 1201, the material playing device synchronizes the frame marker area and the frame marker of the source video with the video detection device.
And step 1202, the material playing device plays the source video.
Step 1203, the first display device shoots a source video played by the material playing device, generates a first video according to the source video, and plays the first video.
And step 1204, the first display device sends the first video to the second display device.
And step 1205, the second display device generates a third video according to the first video and plays the third video.
Steps 1201 to 1205 are similar to steps 1101 to 1105.
Step 1206, the video detection device shoots the first video and the third video at the same time, generates the second video according to the first video, and generates the fourth video according to the third video.
Step 1207, the video detection device obtains the frame mark position of the first test frame from the fourth video, determines the frame number corresponding to the frame mark position of the first test frame according to the corresponding relation between the preset frame mark position and the frame number, and takes the frame number corresponding to the frame mark position of the first test frame as the first frame number, wherein the first test frame corresponds to the video frame displayed by the second display device at the test time.
Step 1208, the video detection device obtains a frame mark position of the second test frame from the second video, determines a frame number corresponding to the frame mark position of the second test frame according to a preset correspondence between the frame mark position and the frame number, and takes the frame number corresponding to the frame mark position of the second test frame as the second frame number, where the second test frame corresponds to the video frame displayed by the first display device at the test time.
Step 1209, the video detection device calculates a frame number difference between the first frame number and the second frame number, and determines a transmission delay between the first display device and the second display device according to the frame number difference and the frame rate of the source video.
Step 1206 to step 1209 are similar to step 1006 to step 1009, and are not described herein again.
In the test system shown in fig. 4c, in addition to the material playing device synchronizing the frame marker area and the frame markers of the source video to the video detection device, the video detection device may also obtain the frame marker area and the frame markers of the source video in another way, specifically as follows: the material playing device plays the source video; the first display device shoots the source video to generate the first video and sends the first video to the second display device; the second display device generates the third video according to the first video and plays the third video; the video detection device shoots the third video to generate the fourth video and obtains frame marker area information and frame marker information from the fourth video, where the frame marker area information includes the position, length, and width of the frame marker area in a video frame, and the frame marker information includes the frame marker position and the frame marker size; the obtained frame marker information and frame marker area information are then trained, and the training result is used as the frame marker area and the frame markers of the source video.
For the sake of understanding, the video detection method provided by the present application is described as follows in a specific application scenario:
The material playing device synchronizes the frame marker region and the frame marker of the video 1 to the video detecting device. The material playing device plays the video 1, the mobile phone A shoots the video 1 at a frame rate lower than that of the video 1, and the mobile phone A continuously generates a video stream of the video 2 along with the playing of the video 1. Because the shooting frame rate of the mobile phone a is lower than that of the video 1, frame loss exists in the acquisition process, and the mobile phone a can only acquire partial video frames of the video 1. The mobile phone A plays the video stream of the video 2 and sends the video stream of the video 2 to the mobile phone B, and the mobile phone B generates the video 3 according to the video stream of the video 2 and plays the video 3. The frame rate of the mobile phone B playing the video 3 is not higher than that of the mobile phone A playing the video 2.
The video detection device shoots the video stream of video 3 played by mobile phone B at 60 fps to generate video 4. Since the acquisition frame rate of the video detection device is higher than the frame rate of video 3, video 4 includes both video frames with the same display content and video frames with different display content; all the video frames with different display content are the video frames included in video 3.
The response delay of the display may cause picture blurring or ghosting, so that some video frames of video 4 may contain multiple frame markers. When video 4 is detected, assume that the current frame includes 2 frame markers at positions (2,2) and (2,3); the last frame marker, (2,3), is selected as the frame marker of the current frame and compared with the frame markers in video 1, identifying the current frame as the 11th frame of video 1. By analogy, the video detection device can accurately identify each video frame in video 4.
In addition, the video detection device may determine the frame rate of video 3 from the recognition result of video 4. When the 1st frame of video 4 is detected, the frame count of video 3 is set to 1. From the 2nd frame onward, the frame marker position of the current frame is compared with that of the previous frame; if they differ, the count is increased by one, and if they are the same, the count is kept unchanged. The frame rate of video 3 is then determined from the counting result and the duration of video 4.
In addition to the video test procedure described above, the material playing device can synchronize the frame rate of video 1 to the video detection device. The video detection device can also shoot the videos played by mobile phone A and mobile phone B at the same time, with a shooting frame rate of 30 fps as an example. The video frame obtained by shooting mobile phone A at the 10th second is recorded as the first test frame, and the video frame obtained by shooting mobile phone B at the 10th second is recorded as the second test frame. The frame marker positions of the first test frame and the second test frame are obtained respectively, taking (3,60) and (3,57) as examples; the frame number corresponding to (3,60) is determined to be 300 and the frame number corresponding to (3,57) to be 297, so the display pictures of mobile phone A and mobile phone B differ by 3 frames. With the frame rate of video 1 being 60 fps, the video detection device calculates the transmission delay between mobile phone A and mobile phone B as 3/60 = 0.05 seconds.
The present application further provides a video detection apparatus 1300, shown in fig. 13, which can implement the functions of the video detection device in any one of the embodiments shown in fig. 7 and fig. 9 to fig. 12. The video detection apparatus 1300 includes:
a generating module 1301, configured to generate a source video, where each video frame of the source video includes a frame marker region and a frame marker, and the frame marker positions of the video frames in the source video are different, where the frame marker position is a position of the frame marker in the frame marker region;
a source video providing module 1302, configured to provide the source video to a first display device;
a camera module 1303, configured to shoot a first video played by the first display device, where the first video is generated by the first display device according to the source video;
a processing module 1304 for generating a second video from the first video;
the processing module 1304 is further configured to select a target video frame from the second video, obtain a frame marker position of the target video frame, and determine the correspondence between the target video frame and the video frame of the source video according to the frame marker position of the target video frame.
In practical applications, the camera module 1303 is implemented by a camera, and the generating module 1301 and the processing module 1304 may be implemented by a CPU.
Based on the video detection apparatus shown in fig. 13, in an optional embodiment, the generating module 1301 is specifically configured to obtain a sequence of video frames, set a frame marker in the frame marker region of each video frame according to the video frame order, and use the sequence of video frames including the frame marker regions and frame markers as the source video.
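As a hedged illustration of such a generating step, the sketch below assigns each source frame a marker position in a rows x cols grid, filled in row-major order. The grid dimensions are assumptions chosen so that the 300th frame lands at position (3,60), matching the example above; the embodiment requires each source frame to have its own position, so the number of frames is assumed not to exceed rows * cols.

```python
def marker_position(frame_index, rows=5, cols=120):
    """1-based (row, col) marker position of the frame at `frame_index` (0-based).

    The rows x cols grid and the row-major order are illustrative assumptions;
    the wrap below is only a guard for indices beyond rows * cols."""
    slot = frame_index % (rows * cols)
    return (slot // cols + 1, slot % cols + 1)


def build_source_markers(num_frames, rows=5, cols=120):
    """Assign a frame marker position to every frame of the source video."""
    return [marker_position(i, rows, cols) for i in range(num_frames)]


# marker_position(299) -> (3, 60), i.e. the 300th source frame
```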
Further, in another alternative embodiment, for each video frame in the source video, the frame marker position of the video frame corresponds to the position of the video frame in the source video.
Based on the embodiment or the alternative embodiments shown in fig. 13, in another alternative embodiment, the processing module 1304 is specifically configured to use the frame marker position of the last frame marker as the frame marker position of the target video frame when the target video frame includes a plurality of frame markers.
Based on the embodiment or alternative embodiment shown in fig. 13, in another alternative embodiment,
the source video providing module 1302 may be a communication interface, specifically configured to send the source video to the first display device; alternatively, the source video providing module 1302 may be a display, specifically configured to play the source video, which is then captured by the first display device.
Based on the embodiment or alternative embodiment shown in fig. 13, in another alternative embodiment,
the processing module 1304 is further configured to set the video frame number of the first video to one when the frame marker position of the first frame of the second video is acquired; to acquire, starting from the second frame of the second video, the frame marker position of the current frame and the frame marker position of the previous frame, where the previous frame is the video frame immediately preceding the current frame; to compare the frame marker position of the current frame with that of the previous frame and count according to the comparison result until the current frame is the last frame of the second video; to take the counting result as the video frame number of the first video in the detection period, where the detection period is the period during which the second video is detected; and to determine the frame rate of the first video according to the duration of the detection period and the video frame number of the first video.
Further, in another optional embodiment, the processing module 1304 is specifically configured to add one to the video frame number of the first video if the frame marker position of the current frame is different from that of the previous frame, and to keep the video frame number of the first video unchanged if the frame marker position of the current frame is the same as that of the previous frame.
Based on the embodiment or alternative embodiment shown in fig. 13, in another alternative embodiment,
the camera module 1303 is further configured to shoot, while shooting the first video, a third video played by a second display device, where the third video is generated by the second display device according to the first video;
the processing module 1304 is further configured to generate a fourth video according to the third video;
the processing module 1304 is further configured to obtain a frame marker position of the first test frame from the second video, determine a frame number corresponding to the frame marker position of the first test frame according to a preset correspondence between the frame marker position and the frame number, and use the frame number corresponding to the frame marker position of the first test frame as a first frame number, where the first test frame corresponds to a video frame displayed by the first display device at the test time;
the processing module 1304 is further configured to obtain a frame marker position of a second test frame from the fourth video, determine a frame number corresponding to the frame marker position of the second test frame according to a preset correspondence between the frame marker position and the frame number, and use the frame number corresponding to the frame marker position of the second test frame as a second frame number, where the second test frame corresponds to a video frame displayed by the second display device at the test time;
the processing module 1304 is further configured to calculate a frame number difference between the first frame number and the second frame number, and determine a transmission delay between the first display device and the second display device according to the frame number difference and the frame rate of the source video.
In the above embodiments, the video detection device may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be equivalently replaced, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
Claims (17)
1. A video detection method, comprising:
the video detection equipment generates a source video, wherein each video frame of the source video comprises a frame mark area and a frame mark, the frame mark position of each video frame in the source video is different, and the frame mark position is the position of the frame mark in the frame mark area;
the video detection device provides the source video to a first display device;
the video detection device shoots a first video played by the first display device, and generates a second video according to the first video, wherein the first video is generated by the first display device according to the source video;
the video detection device selects a target video frame from the second video, acquires a frame mark position of the target video frame, wherein the frame mark of the target video frame is one selected from a plurality of frame marks included in the target video frame, and determines the corresponding relation between the target video frame and the video frame of the source video according to the frame mark position of the target video frame.
2. The method of claim 1, wherein the video detection device generating source video comprises:
the video detection equipment acquires a video frame sequence, sets a frame mark in a frame mark area of each video frame according to the video frame sequence, and takes the video frame sequence comprising the frame mark area and the frame mark as a source video.
3. The method of claim 2, wherein for each video frame in the source video, the frame marker position of the video frame corresponds to the position of the video frame in the source video.
4. The method of any of claims 1 to 3, wherein the video detection device obtaining the frame marker position of the target video frame comprises:
when the target video frame includes a plurality of frame markers, the frame marker position of the last frame marker is taken as the frame marker position of the target video frame.
5. The method of claim 4, wherein the video detection device providing the source video to the first display device comprises:
the video detection device sends the source video to the first display device;
or,
and the video detection equipment plays the source video, and the first display equipment shoots the source video.
6. The method of any of claims 1-3, wherein after the generating a second video from the first video, the method further comprises:
when the frame mark position of a first frame of the second video is obtained, the video detection equipment sets the video frame number of the first video to be one;
starting from a second frame of the second video, the video detection device acquires a frame marker position of a current frame and acquires a frame marker position of a previous frame, wherein the previous frame is a video frame adjacent to and before the current frame;
the video detection equipment compares the frame mark position of the current frame with the frame mark position of the previous frame, and counts according to the comparison result until the current frame is the last frame of the second video;
and the video detection equipment takes the counting result as the video frame number of the first video, and determines the frame rate of the first video according to the video frame number of the first video and the duration of the second video.
7. The method of claim 6, wherein a frame rate at which the video detection device captures images is higher than a frame rate at which the first display device displays images;
the video detection device compares the frame mark position of the current frame with the frame mark position of the previous frame, and counts according to the comparison result, and comprises:
if the frame mark position of the current frame is different from the frame mark position of the previous frame, adding one to the video frame number of the first video;
and if the frame mark position of the current frame is the same as that of the previous frame, keeping the video frame number of the first video unchanged.
8. The method according to any one of claims 1 to 3, further comprising:
the video detection device shoots a third video played by a second display device at the same time when the video detection device shoots the first video, and generates a fourth video according to the third video, wherein the third video is generated by the second display device according to the first video, and the first video is transmitted to the second display device by the first display device;
the video detection equipment acquires a frame mark position of a first test frame from the second video, determines a frame number corresponding to the frame mark position of the first test frame according to a preset corresponding relation between the frame mark position and the frame number, and takes the frame number corresponding to the frame mark position of the first test frame as a first frame number, wherein the first test frame corresponds to a video frame displayed by the first display equipment at the test time;
the video detection equipment acquires a frame mark position of a second test frame from the fourth video, determines a frame number corresponding to the frame mark position of the second test frame according to a preset corresponding relation between the frame mark position and the frame number, and takes the frame number corresponding to the frame mark position of the second test frame as a second frame number, wherein the second test frame corresponds to the video frame displayed by the second display equipment at the test moment;
the video detection equipment calculates the frame number difference between the first frame number and the second frame number;
and the video detection equipment determines the transmission time delay between the first display equipment and the second display equipment according to the frame number difference and the frame rate of the source video.
9. A video detection device, comprising:
the system comprises a generating module, a processing module and a display module, wherein the generating module is used for generating a source video, each video frame of the source video comprises a frame mark area and a frame mark, the frame mark positions of all video frames in the source video are different, and the frame mark positions are the positions of the frame marks in the frame mark area;
a source video providing module for providing the source video to a first display device;
the camera module is used for shooting a first video played by the first display device, and the first video is generated by the first display device according to the source video;
the processing module is used for generating a second video according to the first video;
the processing module is further configured to select a target video frame from the second video, acquire a frame marker position of the target video frame, where a frame marker of the target video frame is one selected from a plurality of frame markers included in the target video frame, and determine a correspondence between the target video frame and the video frame of the source video according to the frame marker position of the target video frame.
10. The video detection apparatus of claim 9,
the generating module is specifically configured to acquire a sequence of video frames, set a frame marker in a frame marker region of each video frame according to a sequence of the video frames, and use the sequence of video frames including the frame marker region and the frame marker as a source video.
11. The video detection device of claim 10, wherein for each video frame in the source video, the frame marker position of the video frame corresponds to the position of the video frame in the source video.
12. The video detection apparatus according to any one of claims 9 to 11,
the processing module is specifically configured to, when the target video frame includes a plurality of frame markers, use a frame marker position of a last frame marker as a frame marker position of the target video frame.
13. The video detection apparatus of claim 12,
the source video providing module is specifically configured to send the source video to the first display device;
or,
the source video providing module is specifically configured to play the source video, and the first display device captures the source video.
14. The video detection apparatus according to any one of claims 9 to 11,
the processing module is further configured to set, by the video detection device, a video frame number of a first video to one when a frame marker position of the first frame of the second video is acquired; starting from a second frame of the second video, acquiring a frame mark position of a current frame and acquiring a frame mark position of a previous frame, wherein the previous frame is a video frame adjacent to and before the current frame; comparing the frame mark position of the current frame with the frame mark position of the previous frame, and counting according to the comparison result until the current frame is the last frame of the second video; taking the counting result as the video frame number of the first video in a detection time period, wherein the detection time period is the time period for detecting the second video; and determining the frame rate of the first video according to the duration of the detection period and the video frame number of the first video.
15. The video detection device according to claim 14, wherein the processing module is specifically configured to, in a case that a frame rate at which the video detection device acquires images is higher than a frame rate at which the first display device displays images, add one to the video frame number of the first video if a frame marker position of the current frame is different from a frame marker position of the previous frame; and if the frame mark position of the current frame is the same as that of the previous frame, keeping the video frame number of the first video unchanged.
16. The video detection apparatus according to any one of claims 9 to 11,
the camera module is further configured to shoot a third video played by a second display device while shooting the first video, where the third video is generated by the second display device according to the first video, and the first video is transmitted to the second display device by the first display device;
the processing module is further configured to generate a fourth video according to the third video;
the processing module is further configured to obtain a frame marker position of a first test frame from the second video, determine a frame number corresponding to the frame marker position of the first test frame according to a preset correspondence between the frame marker position and the frame number, and use the frame number corresponding to the frame marker position of the first test frame as a first frame number, where the first test frame corresponds to a video frame displayed by the first display device at a test time;
the processing module is further configured to obtain a frame marker position of a second test frame from the fourth video, determine a frame number corresponding to the frame marker position of the second test frame according to a preset correspondence between the frame marker position and the frame number, and use the frame number corresponding to the frame marker position of the second test frame as a second frame number, where the second test frame corresponds to a video frame displayed by the second display device at the test time;
the processing module is further configured to calculate a frame number difference between the first frame number and the second frame number, and determine a transmission delay between the first display device and the second display device according to the frame number difference and the frame rate of the source video.
17. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710154205.9A CN108632666B (en) | 2017-03-15 | 2017-03-15 | Video detection method and video detection equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108632666A CN108632666A (en) | 2018-10-09 |
CN108632666B true CN108632666B (en) | 2021-03-05 |
Family
ID=63686615
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710154205.9A Active CN108632666B (en) | 2017-03-15 | 2017-03-15 | Video detection method and video detection equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108632666B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110475124B (en) * | 2019-09-06 | 2021-10-08 | 广州虎牙科技有限公司 | Video jamming detection method and device |
CN111225150B (en) * | 2020-01-20 | 2021-08-10 | Oppo广东移动通信有限公司 | Method for processing interpolation frame and related product |
CN111654690A (en) * | 2020-05-06 | 2020-09-11 | 北京百度网讯科技有限公司 | Live video delay time determination method and device and electronic equipment |
CN112019834B (en) * | 2020-07-22 | 2022-10-18 | 北京迈格威科技有限公司 | Video stream processing method, device, equipment and medium |
CN114827581A (en) * | 2021-01-28 | 2022-07-29 | 华为技术有限公司 | Synchronization delay measuring method, content synchronization method, terminal device, and storage medium |
CN112672146B (en) * | 2021-03-16 | 2021-07-16 | 统信软件技术有限公司 | Frame rate testing method and computing device for video player playing video |
CN114173194B (en) * | 2021-12-08 | 2024-04-12 | 广州品唯软件有限公司 | Page smoothness detection method and device, server and storage medium |
CN115361545B (en) * | 2022-07-11 | 2024-09-17 | 北京淳中科技股份有限公司 | Video delay detection method and detection system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1859584A (en) * | 2005-11-14 | 2006-11-08 | 华为技术有限公司 | Video frequency broadcast quality detecting method for medium broadcast terminal device |
CN101616331A (en) * | 2009-07-27 | 2009-12-30 | 北京汉邦高科数字技术有限公司 | A kind of method that video frame rate and audio-visual synchronization performance are tested |
WO2012012914A1 (en) * | 2010-07-30 | 2012-02-02 | Thomson Broadband R & D (Beijing) Co. Ltd. | Method and apparatus for measuring video quality |
CN102740111A (en) * | 2012-06-15 | 2012-10-17 | 福建升腾资讯有限公司 | Method and device for testing video fluency based on frame number watermarks under remote desktop |
JP2013541281A (en) * | 2010-09-16 | 2013-11-07 | ドイッチェ テレコム アーゲー | Method and system for measuring the quality of audio and video bitstream transmission over a transmission chain |
CN103974144A (en) * | 2014-05-23 | 2014-08-06 | 华中师范大学 | Video digital watermarking method based on characteristic scale variation invariant points and microscene detection |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108632666B (en) | Video detection method and video detection equipment | |
US11423942B2 (en) | Reference and non-reference video quality evaluation | |
US12154280B2 (en) | Determining multiple camera positions from multiple videos | |
US8179466B2 (en) | Capture of video with motion-speed determination and variable capture rate | |
CN110691259B (en) | Video playing method, system, device, electronic equipment and storage medium | |
EP3384495B1 (en) | Processing of multiple media streams | |
US20160366463A1 (en) | Information pushing method, terminal and server | |
US9979898B2 (en) | Imaging apparatus equipped with a flicker detection function, flicker detection method, and non-transitory computer-readable storage medium | |
KR20160045404A (en) | Video thumbnail extraction method and server and video provision system | |
CN110740290A (en) | Monitoring video previewing method and device | |
CN104243803A (en) | Information processing apparatus, information processing method and program | |
KR101249279B1 (en) | Method and apparatus for producing video | |
CN107734278B (en) | Video playback method and related device | |
CN115049612A (en) | Camera state monitoring method and device, computing equipment and medium | |
KR101982258B1 (en) | Method for detecting object and object detecting apparatus | |
US20210295535A1 (en) | Image processing method and apparatus | |
CN115361545B (en) | Video delay detection method and detection system | |
CN116893792A (en) | Screen projection control method and device and electronic equipment | |
CN104754367A (en) | Multimedia information processing method and device | |
CN115761567A (en) | Video processing method and device, electronic equipment and computer readable storage medium | |
CN112328145A (en) | Image display method, apparatus, device, and computer-readable storage medium | |
CN111179317A (en) | Interactive teaching system and method | |
CN113741842B (en) | Screen refresh delay determination method and device, storage medium and electronic equipment | |
JP4361818B2 (en) | Screen display area detection device, screen display area detection method, and screen display area detection program | |
CN113723242B (en) | Visual lie detection method based on video terminal, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||