CN114268807A - Automatic testing method for real-time intelligent station covering logo - Google Patents
- Publication number
- CN114268807A (application CN202111603928.5A)
- Authority
- CN
- China
- Prior art keywords
- frame
- source
- station
- stream
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The invention discloses an automated testing method for real-time intelligent station-logo masking. The method obtains the timestamp of each frame of a source stream and, after the program under test performs intelligent logo masking, injects the timestamp of the corresponding source frame into the output stream. The source stream and output stream are frame-synchronized to find frames with identical content, the local picture region at the station-logo position is cropped, and an objective quality comparison is performed between each logo-region screenshot of the source stream and the corresponding logo-region screenshot of the output stream. Whether a logo was missed, wrongly masked, or otherwise mishandled is judged by combining the detection result with the objective quality score. Finally, an alarm module raises a prompt, and the content around the alarm time is recorded for manual confirmation. The beneficial effects of the invention are that manual effort and missed detections can be greatly reduced, and missed logo masking can be discovered in real time.
Description
Technical Field
The invention relates to the technical field of video testing, and in particular to an automated testing method for real-time intelligent station-logo masking.
Background
The intelligent station-logo masking function means that when a video transcoder detects a specified station logo in real-time video, it automatically masks the logo, for example with a mosaic, to avoid logo infringement.
At present, testing this function can only ensure that no logo is missed by recording the transcoded video to an offline file or by continuously monitoring the transcoded video in a stream player.
This testing method is labor-intensive: each real-time video effectively requires one person watching for a full day; oversights during manual confirmation are very common; if confirmation relies on recorded files, problems are not discovered in real time; and once a missed-masking problem occurs, the source stream is hard to retrieve for troubleshooting, fixing, and follow-up verification.
Disclosure of Invention
The invention provides an automated testing method for real-time intelligent station-logo masking, which overcomes the above defects in the prior art and reduces manual effort.
In order to achieve the purpose, the invention adopts the following technical scheme:
an automated testing method for real-time intelligent station-logo masking specifically comprises the following steps:
(1) acquiring the timestamp of each frame of the source stream, and after the program under test performs intelligent logo masking, injecting into each frame of the output stream the timestamp of the source-stream frame showing the same picture;
(2) a frame synchronization module receives the source stream and the timestamp-injected output stream, performs frame synchronization on them, and outputs each source-stream frame paired with the corresponding output-stream frame to the image cropping module;
(3) the image cropping module crops the station-logo regions of the synchronized frames, i.e. it takes the frames with identical content and crops the local picture region at the logo position, and finally outputs the cropped left/right logo-region pictures, in pairs per frame, to the objective quality calculation module;
(4) the objective quality calculation module compares each logo-region screenshot of the source stream against the corresponding logo-region screenshot of the output stream, obtains objective quality scores of the output stream's left/right logo regions relative to the source stream's left/right logo regions, and outputs the results to the logic judgment module;
(5) the logic judgment module combines the objective quality of the left/right logo regions of a synchronized source/output frame pair with the logo detection result for that frame to judge whether a logo has been missed or wrongly masked;
(6) the alarm module raises an alarm prompt according to the notification from the logic judgment module and passes the timestamp to the recording module for recording;
(7) the recording module records the source stream and output stream corresponding to the alarm prompt into files for follow-up manual confirmation of the problem.
For brevity, the real-time signal source stream is called the source stream, and the real-time stream output after intelligent logo masking is called the output stream; the left station-logo region of the source stream is called the source left logo region, and its right station-logo region the source right logo region; likewise, the left station-logo region of the output stream is called the output left logo region, and its right station-logo region the output right logo region. The timestamp is the total number of seconds elapsed since 00:00:00 on 1 January 1970 GMT (08:00:00 on 1 January 1970 Beijing time). The real-time alarm module can notify the relevant personnel as soon as a problem occurs, so problems can be investigated more quickly. Recording the problematic source and output through the recording module makes it easier to locate problems in the real-time stream and facilitates subsequent fixing and verification. The disclosed automated testing method for the intelligent logo-masking function can greatly reduce manual effort and missed detections, and can discover missed logo masking in real time.
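As a small illustration of this timestamp definition, the epoch reference can be checked directly in Python (the helper name is ours, not part of the patent):

```python
import datetime

# The epoch reference: 1970-01-01 00:00:00 GMT equals
# 1970-01-01 08:00:00 Beijing time (UTC+8).
EPOCH = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)

def to_timestamp(dt: datetime.datetime) -> float:
    """Seconds elapsed since the epoch for a timezone-aware datetime."""
    return (dt - EPOCH).total_seconds()
```

For example, 08:00:00 Beijing time on 1 January 1970 maps to timestamp 0.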
Preferably, step (1) is specifically: timestamp injection is implemented inside the program under test; that is, during intelligent logo masking, the timestamp of each source-stream frame is recorded and written into the SEI structure of the output-stream frame showing the same picture, so every frame image of the source stream and output stream carries its own timestamp, and timestamps of different frames never repeat. SEI (Supplemental Enhancement Information) belongs to the bitstream domain and provides a way to add extra information to a video bitstream.
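A minimal sketch of such an injection, assuming an H.264 user_data_unregistered SEI message (payload type 5) and a made-up identifying UUID; start codes and emulation-prevention bytes are omitted for brevity, so this is an illustration of the idea rather than a spec-complete encoder:

```python
import struct
import uuid

# Hypothetical 16-byte UUID identifying our timestamp payload inside
# the user_data_unregistered SEI message (payload type 5).
TIMESTAMP_UUID = uuid.UUID("d3a1f0b2-0000-4000-8000-000000000001").bytes

def build_timestamp_sei(ts_seconds: float) -> bytes:
    """Build a simplified H.264 SEI NAL unit carrying a frame timestamp.

    Layout: NAL header (type 6) | payload type 5 | payload size |
            16-byte UUID | 8-byte big-endian double | rbsp trailing bits.
    """
    payload = TIMESTAMP_UUID + struct.pack(">d", ts_seconds)
    sei = bytes([0x06])           # NAL unit type 6 = SEI
    sei += bytes([0x05])          # payload type 5 = user_data_unregistered
    sei += bytes([len(payload)])  # payload size (24 bytes fits in one byte)
    sei += payload
    sei += bytes([0x80])          # rbsp_trailing_bits
    return sei

def parse_timestamp_sei(sei: bytes) -> float:
    """Recover the timestamp from a SEI built by build_timestamp_sei."""
    assert sei[0] == 0x06 and sei[1] == 0x05
    size = sei[2]
    payload = sei[3:3 + size]
    assert payload[:16] == TIMESTAMP_UUID
    return struct.unpack(">d", payload[16:24])[0]
```

The round trip is lossless, which matches the requirement that each frame's injected timestamp can be parsed back out unchanged by the frame synchronization module.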
Preferably, step (2) is specifically: the frame synchronization module pulls the source stream and the output stream using their protocols and addresses, extracts the timestamp of each source-stream frame, buffers 3 seconds of frame data together with the corresponding timestamps, meanwhile parses the source-frame timestamp from the SEI structure of each output-stream frame, finds in the buffer the first output frame whose timestamp matches a source frame, and then outputs the source-stream frame and output-stream frame with the same timestamp, as a pair, to the image cropping module.
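The buffering-and-matching step might be sketched as follows; class and method names are ours, and the 3-second buffer is expressed as a frame count (e.g. 75 frames at 25 fps):

```python
from collections import OrderedDict

class FrameSynchronizer:
    """Pair source frames with output frames carrying the same injected
    timestamp. Source frames are buffered for roughly 3 seconds (here a
    frame count) and evicted oldest-first when the buffer overflows."""

    def __init__(self, buffer_frames: int = 75):
        self.buffer_frames = buffer_frames
        self.source_buffer = OrderedDict()   # timestamp -> frame data

    def push_source(self, ts, frame):
        self.source_buffer[ts] = frame
        while len(self.source_buffer) > self.buffer_frames:
            self.source_buffer.popitem(last=False)   # drop the oldest frame

    def push_output(self, ts, frame):
        """Return a (source_frame, output_frame) pair if the output
        frame's timestamp is found in the buffer, else None."""
        src = self.source_buffer.pop(ts, None)
        if src is None:
            return None
        return (src, frame)
```

An output frame whose timestamp has already fallen out of the buffer simply produces no pair, mirroring the bounded 3-second window in the text.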
Preferably, step (3) is specifically: the image cropping module crops the upper-left and upper-right station-logo regions of each source-stream frame and each output-stream frame. The logo region is configurable; it is currently configured according to the standard resolutions and aspect ratios, which ensures that the whole logo of a standard source is cropped intact. Finally, the cropped left/right logo-region pictures of the source stream and output stream are output, in pairs per frame, to the objective quality calculation module.
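One way to express configurable corner regions is as fractions of the frame size; the fractions below are illustrative choices, not values from the patent:

```python
def corner_logo_rects(width, height, w_frac=0.25, h_frac=0.18):
    """Return (left, top, right, bottom) rectangles for the upper-left
    and upper-right logo regions, sized as fractions of the frame."""
    rw, rh = int(width * w_frac), int(height * h_frac)
    left_rect = (0, 0, rw, rh)
    right_rect = (width - rw, 0, width, rh)
    return left_rect, right_rect

def crop(frame, rect):
    """Crop a frame represented as a list of pixel rows."""
    l, t, r, b = rect
    return [row[l:r] for row in frame[t:b]]
```

Because both rectangles depend only on the frame dimensions, the same configuration crops matching regions from the source frame and its synchronized output frame.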
Preferably, step (4) is specifically: the objective quality calculation module uses SSIM to obtain the quality scores of the output left/right logo regions relative to the source left/right logo regions: the left-region SSIM value is the quality score of the output left logo region relative to the source left logo region, and the right-region SSIM value is that of the output right logo region relative to the source right logo region. The timestamp, left-region SSIM value, and right-region SSIM value are output per frame to the logic judgment module. SSIM (the structural similarity index) is a widely used objective image-quality metric; its value lies in [0,1], and a larger value indicates less image distortion.
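As a minimal sketch of the metric, here is a single-window SSIM that computes the statistics once over the whole region rather than with the usual sliding 11x11 Gaussian window, so its scores will differ from full library implementations such as skimage's:

```python
def ssim_global(img_a, img_b, data_range=255.0):
    """Single-window SSIM for two equal-length flat lists of grayscale
    pixel values: ((2*mu_a*mu_b + C1)(2*cov + C2)) /
                  ((mu_a^2 + mu_b^2 + C1)(var_a + var_b + C2))."""
    n = len(img_a)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a = sum(img_a) / n
    mu_b = sum(img_b) / n
    var_a = sum((p - mu_a) ** 2 for p in img_a) / n
    var_b = sum((p - mu_b) ** 2 for p in img_b) / n
    cov = sum((a - mu_a) * (b - mu_b) for a, b in zip(img_a, img_b)) / n
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Identical logo regions score 1.0, while a region replaced by a mosaic or solid fill scores much lower, which is what the judgment logic relies on.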
Preferably, step (5) is specifically: the logic judgment module obtains from the program under test the timestamp of a frame and the logo detection result for that frame, then compares it against the frame timestamps sent by the objective quality calculation module; when the timestamps match, it combines the frame's logo detection result with the left/right logo-region SSIM values of the matched frame to judge whether masking was missed, and sends a notification to the alarm module according to the judgment, containing the judgment result and the timestamp of the problem frame.
Preferably, step (6) is specifically: the alarm module raises the corresponding alarm according to the notification from the logic judgment module, converts the timestamp into an alarm time, passes it to the recording module, and deletes the recorded slices at non-alarm time points before the current time.
Preferably, step (7) is specifically: the recording module records the source stream and the output stream as real-time slices of 10 s each, and then, according to the alarm times from the alarm module, deletes the recorded source/output files in non-alarm periods. This protects disk space and allows the specific content around an alarm to be located fairly accurately and quickly.
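The slice-pruning policy can be sketched as a pure function over slice start times (names and signature are ours):

```python
def prune_slices(slice_starts, alarm_times, slice_len=10):
    """Given recorded slice start times (seconds) and alarm times,
    keep only slices whose [start, start + slice_len) interval covers
    an alarm; return (kept, deleted) lists of start times."""
    kept, deleted = [], []
    for start in slice_starts:
        if any(start <= t < start + slice_len for t in alarm_times):
            kept.append(start)
        else:
            deleted.append(start)
    return kept, deleted
```

In a real deployment the deleted list would map to files removed from disk; keeping only alarm-covering slices is what bounds disk usage while preserving the evidence for manual confirmation.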
Preferably, the logic judgment module, the alarm module, and the recording module cooperate as follows. The logic judgment module first obtains from the program under test a log of whether a logo was detected in the current frame:
(a) When no logo is detected: examine the left-region and right-region SSIM values of the current frame. If both are above 0.9, the source and output pictures are consistent; the alarm module raises a "logo not detected" alarm, and the recording module saves the source and output streams of the corresponding period under "logo not detected" for manual investigation and confirmation. If either left/right region SSIM value is below 0.5, masking was performed although no logo was detected; the alarm module raises a "logo not detected but region masked" alarm, and the recording module saves the source and output streams of the corresponding period under a "no logo detected, wrongly/extra masked" folder for manual investigation and confirmation.
(b) When a logo is detected in the source left logo region: examine the left-region and right-region SSIM values of the current frame. If both are above 0.9, the source and output pictures are consistent; a "missed logo masking" alarm is raised, and the recording module saves the source and output streams of the corresponding period for manual investigation and confirmation. If only the left-region SSIM value is below 0.5, masking was performed where the logo was detected, which is the normal case. If the right-region SSIM value is below 0.5, or both left/right SSIM values are below 0.5, the wrong region or an extra region was masked, and the alarm module raises a "wrongly/extra masked" alarm.
(c) When a logo is detected in the source right logo region: examine the left-region and right-region SSIM values of the current frame. If both are above 0.9, the source and output pictures are consistent; a "missed logo masking" alarm is raised, and the recording module saves the source and output streams of the corresponding period for manual investigation and confirmation. If only the right-region SSIM value is below 0.5, masking was performed where the logo was detected, which is the normal case. If the left-region SSIM value is below 0.5, or both left/right SSIM values are below 0.5, the wrong region or an extra region was masked, and the alarm module raises a "wrongly/extra masked" alarm.
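The three cases (a)-(c) reduce to a small decision function over the detection result and the two SSIM scores. The 0.9 and 0.5 thresholds come from the text; the verdict strings and the symmetric left/right handling are our phrasing:

```python
HIGH, LOW = 0.9, 0.5   # SSIM thresholds from the judgment logic

def judge(detected, left_ssim, right_ssim):
    """Map a frame's logo detection result ('none', 'left' or 'right')
    plus its left/right logo-region SSIM scores to a verdict string.
    Above HIGH means the pictures match; below LOW means the region
    was visibly altered (masked)."""
    if detected == "none":
        if left_ssim > HIGH and right_ssim > HIGH:
            return "alarm: logo not detected"
        if left_ssim < LOW or right_ssim < LOW:
            return "alarm: not detected but masked"
        return "inconclusive"
    # SSIM of the region where masking is expected vs. the other region.
    expected, other = ((left_ssim, right_ssim) if detected == "left"
                       else (right_ssim, left_ssim))
    if left_ssim > HIGH and right_ssim > HIGH:
        return "alarm: missed masking"
    if expected < LOW and other >= LOW:
        return "ok: masked correctly"
    if other < LOW:
        return "alarm: wrong or extra region masked"
    return "inconclusive"
```

SSIM values falling between the two thresholds are left inconclusive here; the patent text does not spell out that band, so a real system would need its own policy for it.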
The beneficial effects of the invention are: manual effort and missed detections can be greatly reduced, and missed logo masking can be discovered in real time.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The invention is further described below with reference to the figures and a detailed embodiment.
In the embodiment shown in fig. 1, an automated testing method for real-time intelligent station-logo masking specifically comprises the following steps:
(1) Acquire the timestamp of each frame of the source stream and, after the program under test performs intelligent logo masking, inject into each frame of the output stream the timestamp of the source-stream frame showing the same picture. This step mainly lays the groundwork for the frame synchronization module; if frame-level synchronization can be achieved directly by the frame synchronization module, the timestamp injection need not be implemented.
Specifically: timestamp injection is implemented inside the program under test; that is, during intelligent logo masking, the timestamp of each source-stream frame is recorded and written into the SEI structure of the output-stream frame showing the same picture, so every frame image of the source stream and output stream carries its own timestamp, and timestamps of different frames never repeat and generally increase monotonically.
(2) The frame synchronization module receives the source stream and the timestamp-injected output stream, performs frame synchronization on them, and outputs each source-stream frame paired with the corresponding output-stream frame to the image cropping module.
Specifically: the frame synchronization module pulls the source stream and the output stream using their protocols and addresses, extracts the timestamp of each source-stream frame, buffers 3 seconds of frame data together with the corresponding timestamps, meanwhile parses the source-frame timestamp from the SEI structure of each output-stream frame, finds in the buffer the first output frame whose timestamp matches a source frame, and then outputs the source-stream frame and output-stream frame with the same timestamp, as a pair, to the image cropping module.
(3) The image cropping module crops the station-logo regions of the synchronized frames, i.e. it takes the frames with identical content and crops the local picture region at the logo position, and finally outputs the cropped left/right logo-region pictures, in pairs per frame, to the objective quality calculation module.
Specifically: the image cropping module crops the upper-left and upper-right station-logo regions of each source-stream frame and each output-stream frame. The logo region is configurable; it is currently configured according to the standard resolutions and aspect ratios, which ensures that the whole logo of a standard source is cropped intact. Finally, the cropped left/right logo-region pictures of the source stream and output stream are output, in pairs per frame, to the objective quality calculation module.
Because intelligent logo-masking products are mainly applied in the broadcasting industry, and industry rules dictate that a station logo may appear only at a fixed position in the upper-left or upper-right corner of the video content, the specific position can be determined according to the different standard resolutions. Each source-stream frame and each output-stream frame therefore yields two cropped logo regions.
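A per-resolution configuration of the kind described might look like the following; all rectangle values are invented placeholders for illustration, not figures from the patent:

```python
# Hypothetical per-resolution logo-region configuration: each standard
# resolution maps to (left, top, right, bottom) rectangles for the
# upper-left and upper-right corners. Values are illustrative only.
LOGO_REGIONS = {
    (1920, 1080): {"left": (0, 0, 480, 200), "right": (1440, 0, 1920, 200)},
    (1280, 720):  {"left": (0, 0, 320, 132), "right": (960, 0, 1280, 132)},
    (720, 576):   {"left": (0, 0, 180, 104), "right": (540, 0, 720, 104)},
}

def regions_for(width, height):
    """Look up the two crop rectangles for a standard resolution."""
    try:
        return LOGO_REGIONS[(width, height)]
    except KeyError:
        raise ValueError("no logo-region configuration for %dx%d"
                         % (width, height))
```

Failing loudly on an unknown resolution is a deliberate choice: a nonstandard source should be flagged for configuration rather than cropped with guessed coordinates.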
(4) The objective quality calculation module compares each logo-region screenshot of the source stream against the corresponding logo-region screenshot of the output stream, obtains objective quality scores of the output stream's left/right logo regions relative to the source stream's left/right logo regions, and outputs the results to the logic judgment module.
Specifically: the objective quality calculation module uses SSIM to obtain the quality scores of the output left/right logo regions relative to the source left/right logo regions: the left-region SSIM value is the quality score of the output left logo region relative to the source left logo region, and the right-region SSIM value is that of the output right logo region relative to the source right logo region. The timestamp, left-region SSIM value, and right-region SSIM value are output per frame to the logic judgment module.
(5) The logic judgment module combines the objective quality of the left/right logo regions of a synchronized source/output frame pair with the logo detection result for that frame to judge whether a logo has been missed or wrongly masked.
Specifically: the logic judgment module obtains from the program under test the timestamp of a frame and the logo detection result for that frame, then compares it against the frame timestamps sent by the objective quality calculation module; when the timestamps match, it combines the frame's logo detection result with the left/right logo-region SSIM values of the matched frame to judge whether masking was missed, and sends a notification to the alarm module according to the judgment, containing the judgment result and the timestamp of the problem frame. The detection result of the program under test is used temporarily in this embodiment because no relatively authoritative third-party logo detection tool currently exists; if an industry-benchmark logo detection tool appears later, it can replace the detection result used here.
(6) The alarm module raises an alarm prompt according to the notification from the logic judgment module and passes the timestamp to the recording module for recording.
Specifically: the alarm module raises the corresponding alarm according to the notification from the logic judgment module, converts the timestamp into an alarm time, passes it to the recording module, and deletes the recorded slices at non-alarm time points before the current time.
(7) The recording module records the source stream and output stream corresponding to the alarm prompt into files for follow-up manual confirmation of the problem.
Specifically: the recording module records the source stream and the output stream as real-time slices of 10 s each, and then, according to the alarm times from the alarm module, deletes the recorded source/output files in non-alarm periods. This protects disk space and allows the specific content around an alarm to be located fairly accurately and quickly.
The logic judgment module, the alarm module, and the recording module cooperate as follows. The logic judgment module first obtains from the program under test a log of whether a logo was detected in the current frame:
(a) When no logo is detected: examine the left-region and right-region SSIM values of the current frame. If both are above 0.9, the source and output pictures are consistent; the alarm module raises a "logo not detected" alarm, and the recording module saves the source and output streams of the corresponding period under "logo not detected" for manual investigation and confirmation. If either left/right region SSIM value is below 0.5, masking was performed although no logo was detected; the alarm module raises a "logo not detected but region masked" alarm, and the recording module saves the source and output streams of the corresponding period under a "no logo detected, wrongly/extra masked" folder for manual investigation and confirmation.
(b) When a logo is detected in the source left logo region: examine the left-region and right-region SSIM values of the current frame. If both are above 0.9, the source and output pictures are consistent; a "missed logo masking" alarm is raised, and the recording module saves the source and output streams of the corresponding period for manual investigation and confirmation. If only the left-region SSIM value is below 0.5, masking was performed where the logo was detected, which is the normal case. If the right-region SSIM value is below 0.5, or both left/right SSIM values are below 0.5, the wrong region or an extra region was masked, and the alarm module raises a "wrongly/extra masked" alarm.
(c) When a logo is detected in the source right logo region: examine the left-region and right-region SSIM values of the current frame. If both are above 0.9, the source and output pictures are consistent; a "missed logo masking" alarm is raised, and the recording module saves the source and output streams of the corresponding period for manual investigation and confirmation. If only the right-region SSIM value is below 0.5, masking was performed where the logo was detected, which is the normal case. If the left-region SSIM value is below 0.5, or both left/right SSIM values are below 0.5, the wrong region or an extra region was masked, and the alarm module raises a "wrongly/extra masked" alarm.
According to industry regulations, a station logo cannot appear in the left and right logo regions at the same time; if that happens, it is the intelligent logo-masking program under test that should report an error, and the case is temporarily outside the scope of the invention. If logos could legitimately appear in the left and right logo regions simultaneously, the whole judgment logic would need to be modified.
For brevity, the real-time signal source stream is called the source stream, and the real-time stream output after intelligent logo masking is called the output stream; the left station-logo region of the source stream is called the source left logo region, and its right station-logo region the source right logo region; likewise, the left station-logo region of the output stream is called the output left logo region, and its right station-logo region the output right logo region. The timestamp is the total number of seconds elapsed since 00:00:00 on 1 January 1970 GMT (08:00:00 on 1 January 1970 Beijing time). SEI (Supplemental Enhancement Information) belongs to the bitstream domain and provides a way to add extra information to a video bitstream. SSIM (the structural similarity index) is a widely used objective image-quality metric; its value lies in [0,1], and a larger value indicates less image distortion.
Although this embodiment still involves some manual work, the periods without alarms need no attention, which already improves on purely manual inspection. As the product matures, the time spent on alarms will gradually decrease, and the savings become more significant for large batches of real-time streams. The real-time alarm module notifies the relevant personnel as soon as a problem occurs, so problems can be investigated more quickly. Recording the problematic source and output through the recording module makes it easier to locate problems in the real-time stream and facilitates subsequent fixing and verification. The disclosed automated testing method for the intelligent logo-masking function can greatly reduce manual effort and missed detections, and can discover missed logo masking in real time.
Claims (9)
1. An automated testing method for real-time intelligent station-logo masking, characterized by comprising the following steps:
(1) acquiring the timestamp of each frame of the source stream, and after the program under test performs intelligent logo masking, injecting into each frame of the output stream the timestamp of the source-stream frame showing the same picture;
(2) a frame synchronization module receiving the source stream and the timestamp-injected output stream, performing frame synchronization on them, and outputting each source-stream frame paired with the corresponding output-stream frame to an image cropping module;
(3) the image cropping module cropping the station-logo regions of the synchronized frames, i.e. taking the frames with identical content, cropping the local picture region at the logo position, and finally outputting the cropped left/right logo-region pictures, in pairs per frame, to an objective quality calculation module;
(4) the objective quality calculation module comparing each logo-region screenshot of the source stream against the corresponding logo-region screenshot of the output stream, obtaining objective quality scores of the output stream's left/right logo regions relative to the source stream's left/right logo regions, and outputting the results to a logic judgment module;
(5) the logic judgment module combining the objective quality of the left/right logo regions of a synchronized source/output frame pair with the logo detection result for that frame to judge whether a logo has been missed or wrongly masked;
(6) an alarm module raising an alarm prompt according to the notification from the logic judgment module and passing the timestamp to a recording module for recording;
(7) the recording module recording the source stream and output stream corresponding to the alarm prompt into files for follow-up manual confirmation of the problem.
2. The method for automatically testing real-time intelligent station-logo masking according to claim 1, wherein step (1) specifically comprises: the time-stamp injection is implemented in the program under test, i.e. during intelligent logo masking the time stamp of each source-stream frame is recorded and written into the SEI structure of the output-stream frame showing the same picture, so that every pair of source and output frames carries the source time stamp, and the time stamps of different frames never repeat.
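As a supplemental illustration of the SEI carriage described above, the following is a minimal sketch of packing a source-frame time stamp into an H.264 SEI NAL unit of type user_data_unregistered (payload type 5). The 16-byte UUID and the 90 kHz integer time-stamp format are assumptions for illustration, and emulation-prevention bytes are omitted for brevity; they are not specified by the patent.

```python
import struct

# Hypothetical 16-byte UUID identifying our private timestamp payload.
TIMESTAMP_UUID = bytes.fromhex("a8f1c2d34e5b6a798c0d1e2f30415263")

def build_timestamp_sei(pts_90khz: int) -> bytes:
    """Build a simplified H.264 SEI NAL unit (user_data_unregistered,
    payload type 5) carrying one source-frame time stamp."""
    payload = TIMESTAMP_UUID + struct.pack(">Q", pts_90khz)
    sei = bytearray([0x06])          # NAL header: type 6 = SEI
    sei.append(5)                    # payload type: user_data_unregistered
    size = len(payload)
    while size >= 255:               # payload size coded in 0xFF chunks
        sei.append(0xFF)
        size -= 255
    sei.append(size)
    sei += payload
    sei.append(0x80)                 # rbsp_trailing_bits
    return bytes(sei)

def parse_timestamp_sei(sei: bytes) -> int:
    """Recover the time stamp from a SEI unit built above."""
    # Skip NAL header, payload type and one size byte (payload < 255 here).
    payload = sei[3:3 + 24]
    assert payload[:16] == TIMESTAMP_UUID
    return struct.unpack(">Q", payload[16:])[0]
```

Because the UUID is private, a downstream analyzer can ignore unknown SEI messages while the frame synchronization module recognizes and parses this one.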
3. The method for automatically testing real-time intelligent station-logo masking according to claim 1, wherein step (2) specifically comprises: the frame synchronization module pulls the source stream and the output stream via their protocols and addresses, extracts the time stamp of each source-stream frame, buffers 3 seconds of frame data together with the corresponding time stamps, parses from the SEI structure of each output-stream frame the time stamp of its source frame, finds in the buffer the first source frame whose time stamp matches, and then outputs the source-stream frame and the output-stream frame with the same time stamp in pairs to the image cropping module.
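The buffering and pairing logic above can be sketched as follows. The 3-second buffer is expressed in frames assuming a 25 fps stream (75 frames); the frame-rate assumption and the in-memory representation are illustrative, not part of the claim.

```python
from collections import OrderedDict

class FrameSynchronizer:
    """Pair source-stream frames with output-stream frames by time stamp."""

    def __init__(self, buffer_frames: int = 75):   # ~3 s at an assumed 25 fps
        self.buffer_frames = buffer_frames
        self.source_buffer = OrderedDict()          # time stamp -> frame data

    def push_source(self, ts: int, frame) -> None:
        """Buffer one decoded source frame under its time stamp."""
        self.source_buffer[ts] = frame
        while len(self.source_buffer) > self.buffer_frames:
            self.source_buffer.popitem(last=False)  # drop the oldest frame

    def push_output(self, sei_ts: int, out_frame):
        """sei_ts is the source time stamp parsed from the output frame's
        SEI structure; returns a (source_frame, output_frame) pair, or
        None when no buffered source frame matches."""
        src = self.source_buffer.pop(sei_ts, None)
        return (src, out_frame) if src is not None else None
```

Since time stamps never repeat across frames (claim 2), a plain dictionary lookup is sufficient to find the matching buffered source frame.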
4. The method for automatically testing real-time intelligent station-logo masking according to claim 1, wherein step (3) specifically comprises: the image cropping module crops the top-left and top-right logo regions of each frame of the source stream and the output stream; the logo region is freely configurable and is currently configured according to the standard resolution and aspect ratio, which guarantees that the full logo of a standard source is cropped intact; finally, the cropped left/right logo-region pictures of the source stream and the output stream are output in pairs, frame by frame, to the objective quality calculation module.
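A minimal sketch of the configurable crop geometry follows. The region fractions (25% of the width, 20% of the height) are assumptions chosen so that a standard source's full logo fits inside the crop; the patent only states that the regions are configurable per resolution and aspect ratio.

```python
def logo_region_boxes(width: int, height: int,
                      w_frac: float = 0.25, h_frac: float = 0.20):
    """Return the top-left and top-right logo-region crop boxes for a
    frame of the given size, as (x0, y0, x1, y1) pixel rectangles."""
    rw, rh = int(width * w_frac), int(height * h_frac)
    return {
        "left": (0, 0, rw, rh),                     # top-left corner region
        "right": (width - rw, 0, width, rh),        # top-right corner region
    }
```

The same boxes are applied to the source frame and the output frame of each synchronized pair, so the two crops cover identical picture areas.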
5. The method for automatically testing real-time intelligent station-logo masking according to claim 1, wherein step (4) specifically comprises: the objective quality calculation module uses SSIM to score the output left/right logo regions against the source left/right logo regions: the SSIM value of the left logo region is the quality score of the output left region relative to the source left region, and likewise for the right region; the module then outputs, frame by frame, the time stamp, the left-region SSIM value and the right-region SSIM value to the logic judgment module.
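For illustration, a single-window SSIM over two equal-size grayscale crops can be computed as below. This global variant (one window covering the whole crop) is a simplification; a production implementation would typically use a windowed library routine such as scikit-image's structural_similarity. The constants follow the standard SSIM formulation with k1 = 0.01, k2 = 0.03.

```python
def global_ssim(a, b, data_range: float = 255.0) -> float:
    """Simplified single-window SSIM of two equal-size grayscale images
    given as flat pixel lists; returns a value in [-1, 1]."""
    n = len(a)
    mu_a = sum(a) / n
    mu_b = sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((x - mu_b) ** 2 for x in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    c1 = (0.01 * data_range) ** 2    # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2    # stabilizer for the contrast term
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Identical crops score 1.0, while a masked logo region scores much lower, which is what lets the thresholds in claim 9 separate "unchanged" from "masked".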
6. The method for automatically testing real-time intelligent station-logo masking according to claim 5, wherein step (5) comprises: the logic judgment module obtains from the program under test the time stamp of each frame and that frame's logo detection result, then compares it against the time stamps of the frames sent by the objective quality calculation module; when the time stamps match, it combines the frame's logo detection result with the left/right logo-region SSIM values of the matched frame to judge whether missed masking has occurred, and sends the alarm module a notification containing the judgment result and the time stamp of the problem frame.
7. The method for automatically testing real-time intelligent station-logo masking according to claim 1, wherein step (6) specifically comprises: the alarm module raises the corresponding alarm according to the notification of the logic judgment module, converts the time stamp into an alarm time, passes it to the recording module, and deletes the recording slices at non-alarm time points before the current time.
8. The method for automatically testing real-time intelligent station-logo masking according to claim 1, wherein step (7) comprises: the recording module records the source stream and the output stream as real-time slices of 10 s each, then deletes the recorded source-stream and output-stream files in non-alarm periods according to the alarm times from the alarm module, which saves disk space and allows the content of an alarm to be located fairly accurately and quickly.
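The slice-retention rule can be sketched as a pure selection function. The guard interval of 10 s around each alarm is an assumption for illustration; the patent only states that slices in non-alarm periods are deleted.

```python
def slices_to_keep(slice_starts, alarm_times, slice_len: int = 10,
                   guard: int = 10):
    """Given recording-slice start times and alarm times (both in
    seconds), return the slice starts to retain: any slice whose
    [start, start + slice_len) window lies within `guard` seconds of
    an alarm. All other slices may be deleted to save disk space."""
    keep = set()
    for s in slice_starts:
        for t in alarm_times:
            if s - guard <= t < s + slice_len + guard:
                keep.add(s)
    return sorted(keep)
```

Keeping a small window around each alarm preserves enough source and output footage for manual confirmation while bounding disk usage on long-running real-time streams.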
9. The method according to claim 5 or 6, wherein the logic judgment module, the alarm module and the recording module cooperate as follows: the logic judgment module first obtains from the program under test a log of whether a station logo was detected in the current frame, then
(a) when no logo is detected: judge the SSIM values of the left and right logo regions of the current frame; if both are above 0.9, the source and output pictures are consistent, and a "logo not detected" alarm is raised, with the source stream and output stream of the corresponding period saved by the recording module under "logo not detected" for manual checking and confirmation; if either of the left/right SSIM values is below 0.5, masking was performed although no logo was detected, the alarm module raises a "logo not detected but wrongly or extra masked" alarm, and the source stream and output stream of the corresponding period are saved by the recording module under the "source logo not detected but masked" folder for manual review and confirmation;
(b) when a logo is detected in the source's left logo region: judge the SSIM values of the left and right logo regions of the current frame; if both are above 0.9, the source and output pictures are consistent, a "missed masking" alarm is raised, and the source stream and output stream of the corresponding period are saved by the recording module for manual review and confirmation; if only the left logo region's SSIM value is below 0.5, masking was performed where the logo was detected, which is the normal case; if the right region's SSIM value is below 0.5, or both left and right SSIM values are below 0.5, the wrong region was masked or extra regions were masked, and the alarm module raises a "wrong or extra masking" alarm;
(c) when a logo is detected in the source's right logo region: judge the SSIM values of the left and right logo regions of the current frame; if both are above 0.9, the source and output pictures are consistent, a "missed masking" alarm is raised, and the source stream and output stream of the corresponding period are saved by the recording module for manual review and confirmation; if only the right logo region's SSIM value is below 0.5, masking was performed where the logo was detected, which is the normal case; if the left region's SSIM value is below 0.5, or both left and right SSIM values are below 0.5, the wrong region was masked or extra regions were masked, and the alarm module raises a "wrong or extra masking" alarm.
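The three-case decision logic above can be sketched as a single classification function. The alarm labels and the `detected` encoding (None, 'left' or 'right') are illustrative names; the 0.9 and 0.5 thresholds are those given in the claim.

```python
SAME = 0.9      # SSIM above this: source and output pictures are consistent
MASKED = 0.5    # SSIM below this: the region has been masked

def judge(detected, ssim_left: float, ssim_right: float) -> str:
    """Classify one synchronized frame following the three cases of
    claim 9. `detected` is None, 'left' or 'right' per the program
    under test's logo-detection log; returns an alarm label or 'ok'."""
    if detected is None:                               # case (a)
        if ssim_left > SAME and ssim_right > SAME:
            return "no logo detected"                  # save for manual review
        if ssim_left < MASKED or ssim_right < MASKED:
            return "not detected but masked"
    else:                                              # cases (b) and (c)
        if ssim_left > SAME and ssim_right > SAME:
            return "missed masking"
        expected = ssim_left if detected == "left" else ssim_right
        other = ssim_right if detected == "left" else ssim_left
        if expected < MASKED and other >= MASKED:
            return "ok"                                # masked where detected
        if other < MASKED:
            return "wrong or extra masking"
    return "ok"
```

Frames in the intermediate SSIM band (between 0.5 and 0.9) fall through to 'ok' here; the claim does not prescribe their handling.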
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111603928.5A CN114268807B (en) | 2021-12-24 | 2021-12-24 | Automatic testing method for real-time intelligent station-covering mark |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114268807A true CN114268807A (en) | 2022-04-01 |
CN114268807B CN114268807B (en) | 2023-08-01 |
Family
ID=80830241
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111603928.5A Active CN114268807B (en) | 2021-12-24 | 2021-12-24 | Automatic testing method for real-time intelligent station-covering mark |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114268807B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010036293A1 (en) * | 1998-10-23 | 2001-11-01 | Facet Technology Corporation | System for automatically generating database of objects of interest by analysis of images recorded by moving vehicle |
CN103336954A (en) * | 2013-07-08 | 2013-10-02 | 北京捷成世纪科技股份有限公司 | Identification method and device of station caption in video |
CN103942539A (en) * | 2014-04-09 | 2014-07-23 | 上海交通大学 | Method for accurately and efficiently extracting human head ellipse and detecting shielded human face |
CN105469380A (en) * | 2014-09-05 | 2016-04-06 | 株式会社理光 | Method and device for detecting shielding against object |
CN105760842A (en) * | 2016-02-26 | 2016-07-13 | 北京大学 | Station caption identification method based on combination of edge and texture features |
US9456170B1 (en) * | 2013-10-08 | 2016-09-27 | 3Play Media, Inc. | Automated caption positioning systems and methods |
CN109413417A (en) * | 2018-12-17 | 2019-03-01 | 南京普物科技有限公司 | A kind of System and method for of interactive television service quality testing |
CN110147702A (en) * | 2018-07-13 | 2019-08-20 | 腾讯科技(深圳)有限公司 | A kind of object detection and recognition method and system of real-time video |
WO2020023799A1 (en) * | 2018-07-26 | 2020-01-30 | Standard Cognition, Corp. | Product correlation analysis using deep learning |
CN113453012A (en) * | 2021-06-25 | 2021-09-28 | 杭州海康威视数字技术股份有限公司 | Encoding and decoding method and device and electronic equipment |
- 2021-12-24 CN CN202111603928.5A patent/CN114268807B/en active Active
Non-Patent Citations (1)
Title |
---|
陆峰; 徐友春; 李永乐; 王任栋; 王东敏: "Obstacle Detection and Tracking Based on Multi-sensor Data Fusion" (基于多传感器数据融合的障碍物检测与跟踪), Journal of Military Transportation University (军事交通学院学报), no. 02 *
Also Published As
Publication number | Publication date |
---|---|
CN114268807B (en) | 2023-08-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102006499B (en) | Method for detecting video and audio quality of digital television program file | |
CN104093017B (en) | A kind of method being realized three road signal monitorings by contrast | |
CN102176758A (en) | Video quality diagnosis system and realization method thereof | |
CN103873852A (en) | Multi-mode parallel video quality fault detection method and device | |
EP3240294A1 (en) | Monitoring audio-visual content with captions | |
CN105049915A (en) | Locomotive video data integrity automatic detection method | |
CN117478843A (en) | Intelligent patrol control system and method | |
CN113507617B (en) | SEI frame playback data synchronization method, system, device and medium based on live video stream | |
CN113992984A (en) | Elevator advertisement monitoring and broadcasting method | |
CN114268807A (en) | Automatic testing method for real-time intelligent station covering logo | |
CN116248940A (en) | Method and system for detecting audio-video dyssynchrony of main and standby channel programs | |
CN112702595A (en) | SVAC2.0 video comparison method and system thereof | |
CN110866017A (en) | Tax handling operation mark leaving system for visual retrieval and implementation method thereof | |
CN105578265A (en) | Timestamp compensation or correction method based on H264/H265 video analysis | |
CN114500950B (en) | Box abnormal state detection system and method based on smart city | |
CN113992943B (en) | Method and system for monitoring signal anomaly or degradation of broadcasting server | |
US7605843B2 (en) | Monitoring apparatus | |
EP0921695B1 (en) | Video alignement using a selected partial picture | |
CN102098450B (en) | Method for automatically detecting real-time signals or streams to realize full-automatic recording | |
JPH07160891A (en) | Picture monitoring device | |
CN115914676B (en) | A real-time monitoring and comparison method and system for ultra-high-definition video signals | |
CN113034430B (en) | Video authenticity verification and identification method and system based on time watermark change analysis | |
CN105657512A (en) | Method for monitoring sound-picture asynchronization of digital video | |
CN103260047A (en) | Method and device for detecting dual-path video circuit signal synchronization | |
CN115723819B (en) | Train monitoring method and device and rail transit system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||