CN101317455A - Video code stream filtering method and filtering node - Google Patents
- Publication number
- CN101317455A (application numbers CNA2007800003987A, CN200780000398A)
- Authority
- CN
- China
- Prior art keywords
- module
- harmful
- content
- video code
- code flow
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/16—Analogue secrecy systems; Analogue subscription systems
- H04N7/162—Authorising the user terminal, e.g. by paying; Registering the use of a subscription channel, e.g. billing
- H04N7/163—Authorising the user terminal, e.g. by paying; Registering the use of a subscription channel, e.g. billing by receiver means only
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44209—Monitoring of downstream path of the transmission network originating from a server, e.g. bandwidth variations of a wireless network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/454—Content or additional data filtering, e.g. blocking advertisements
- H04N21/4542—Blocking scenes or portions of the received content, e.g. censoring scenes
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computer Security & Cryptography (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
A method and node for filtering video streams in multimedia communications. The method comprises obtaining an intra-coded (I) frame from a video stream and decoding it, then identifying harmful content in the decoded I-frame to determine whether to stop playing the video stream. The node comprises a video playback delay module, a switch module, an I-frame detection/decoding module, a content identification module and a judgment module.
Description
Video code stream filtering method and filtering node. Technical field
The present invention relates to multimedia communication technology, and more particularly to a video code stream filtering method and filtering node for multimedia communications. Background technology
Streaming media, as a basic form of multimedia communication, has given rise to numerous multimedia communication services: video conferencing/videophone, IPTV, VOD, instant messaging, and so on. Streaming media is therefore set to become the basic communication form on the Next Generation Network. In particular, with the rapid rise of IPTV (Internet Protocol Television) services at home and abroad in recent years, streaming media applications on the network are developing rapidly.
Services built on streaming media, such as IPTV and VOD (Video on Demand), all provide video/audio content as their function. The scope of such content is vast, including film and television programs, news, sports events, concerts and so on. Every country, and China in particular, has always attached great importance to the safety and supervision of content and has relevant laws; various countries also have regulations aimed at protecting minors. Operators/ISPs (Internet Service Providers) and content providers have the same demand. With IPTV about to be operated on a large scale in China, the first problem is how to ensure effective content supervision and filtering so that harmful content is blocked. If this problem is not solved, domestic IPTV operation cannot get started, and the relevant national authorities cannot issue operating licenses. Solving this problem is therefore of great significance for promoting the development of the IPTV industry. Content safety is commonly understood to include two aspects:
1. Protection of the content itself, preventing content from being received by unauthorized users, for example preventing the pirate viewing of TV programs. Many mature technologies exist for this kind of attack, such as encryption, scrambling, authentication, and digital rights management (DRM).
2. Protection against the intrusion of harmful and illegal content, where the object being protected is the target of the content attack, typically the audience.
So-called information filtering processes and judges certain attributes of content. These attributes may include: the name of the content provider, the URL of the content (Universal Resource Locator; a network address is an important class of URL), the IP address of the content-providing server, and, when the media stream is packetized, the packet header information and the information inside the packets. As can be seen, such processing and filtering proceeds level by level, from the superficial to the deep.
Prior art one performs information filtering mainly according to surface, or so-called shallow, features of the content. The most typical example is URL-based filtering, whose principle is shown in Fig. 1: the content filtering equipment is located between the core network and the edge access network, so it is the mandatory gateway between the content source and the receiving terminal. In practice it can be placed at the same network location as an enterprise network's proxy, NAT (Network Address Translator) or FW (firewall); for broadband home users it can be co-located with the BAS (Broadband Administration System)/BRAS (Broadband Registration and Admission System) or DSLAM, or placed at an ISP's POP (Point of Presence).
The filtering equipment has its own internal database containing information on many content-source URLs. According to this database it can judge whether a given content source is harmful, shield harmful content sources, and pass harmless ones. At the same time, many content-rating service providers offer third-party services with richer and more specialized databases; the content filtering equipment can also connect to such third-party providers and use their services to perform URL-based filtering.
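As a rough illustration of the URL-based (shallow) filtering just described, the following sketch checks an incoming content URL against a local database of harmful content sources. The host names, the subdomain-matching rule and the class name are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of URL-based content filtering against a local database.
from urllib.parse import urlparse

class UrlFilter:
    def __init__(self, harmful_hosts):
        self.harmful_hosts = set(harmful_hosts)  # local database of harmful content sources

    def is_harmful(self, url):
        host = urlparse(url).hostname or ""
        # match the host itself or any parent domain listed in the database
        parts = host.split(".")
        return any(".".join(parts[i:]) in self.harmful_hosts for i in range(len(parts)))

f = UrlFilter(["bad.example.com", "harmful.example.org"])
print(f.is_harmful("http://video.bad.example.com/movie.ts"))  # True: subdomain of a listed host
print(f.is_harmful("http://clean.example.net/news.ts"))       # False: not in the database
```

A real deployment would, as the text notes, also query a third-party rating provider rather than rely solely on a local list.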
Prior art one has the following problems:
1. Over-blocking: URL-based filtering may filter out harmless information. For example, a website may provide video on demand where some programs are harmful but others are wholesome films; these cannot be distinguished by URL alone;
2. Mis-rating: some URLs considered reputable in the rating system may still go wrong (their address may be impersonated through a hacking attack, or the site itself may have illegal intent);
3. URL-based filtering generally also requires a third-party rating system; such systems exist, and some paid rating service providers specialize in them. But their results cannot be entirely accurate or cover all network content, and the content on the network changes frequently, so no rating system can keep up with these changes in time.
For highly demanding application scenarios, such as IPTV facing the national public, the harm caused by even one successful intrusion of harmful content, especially politically sensitive content, is enormous. Shallow-level filtering alone is therefore unreliable if complete safety is to be achieved. Filtering at the deepest level, i.e. of the video/audio data itself, must be used: for example, image recognition to identify harmful scenes (violence, pornography, etc.), harmful textual information (captions), particular persons' faces, and so on.
Reaching very high filtering accuracy requires going down to the deepest level, the content data itself. This belongs to a current research hotspot, deep packet filtering (DPF).
Prior art two bases deep content filtering on manual inspection. In this case the content filtering equipment decodes the media stream and plays back the content (assuming encryption is not an obstacle, since the problem of encryption can be solved through the lawful-interception requirements imposed on communication equipment) for a human supervisor to examine. If a problem is found, the supervisor takes immediate measures, cutting off the harmful content and switching to a segment of harmless content such as a public service advertisement. Naturally, a delay apparatus of suitably large capacity must follow the content filtering equipment to delay the content, giving the monitoring personnel a certain judgment and reaction time (for example 5 seconds).
Prior art two has the following problems:
1. Lack of universality and scalability: a manual method obviously cannot meet the demands of future networks; its universality and scalability are poor. Moreover, manual recognition depends on subjective factors such as the supervisor's training, judgment standards, education level and ideology, so consistent standards cannot be achieved;
2. Inapplicability to IPTV: the manual method above may be suitable for monitoring TV programs, but it is ill-suited to IPTV, where the quantity of content is huge and the content sources on the network are numerous, so manual monitoring is simply not up to the task;
3. Large delay, unsuitable for two-way real-time communication: in two-way streaming, the delay must not exceed 400 ms, and manual recognition cannot possibly achieve such a low delay. Yet two-way communication, such as video chat, is precisely where harmful content easily appears.
Summary of the invention
The present invention provides a video code stream filtering method and filtering node for multimedia communication, to solve the problems that the existing deep content filtering method based on manual recognition is inefficient and lacks universality.
A video code stream filtering method in multimedia communication according to the present invention comprises the following steps, and a corresponding video code stream filtering node is also provided. With the provided method, only the I-frame images in the video code stream, or the I-frames together with a certain number of adjacent frames before or after them, need to be partially decoded; most other frames need not be decoded. This reduces processing complexity, shortens the playback delay of the video code stream, and improves the efficiency of deep video content filtering;
The present invention is further based on scene segmentation technology, partially decoding the first frame image in each scene, or the first frame together with a certain number of adjacent frames before or after it, and performing recognition on the decoded images. While ensuring recognition accuracy, this reduces the number of data frames that need decoding to a certain degree, further lowering the processing complexity;
Based on existing automatic recognition technology for harmful content, the method of the invention can efficiently realize automatic recognition and filtering, ensuring fast and effective recognition of common harmful content;
The method of the invention can also be used together with a manual recognition mechanism, preventing missed detection of newly appearing harmful content;
The present invention also provides a harmful-content learning mechanism: when newly appearing harmful content is recognized manually, it can be learned and added to the harmful content database;
The method of the invention can simultaneously use the existing URL-based filtering technique, forbidding sources of harmful content at the signaling level. Furthermore, the invention provides a rating scheme for harmful-content URL information, so that new harmful URL sources can be discovered progressively and added to the harmful URL information database in time;
The method of the invention also provides a log reporting mechanism that can record the various events in the video code stream filtering process;
The video code stream filtering node of the present invention can conveniently implement the method of the invention and has good universality;
Obviously, the technical solution of the present invention can solve the content safety problem in multimedia services such as present-day IPTV and digital TV, ensuring that these services are provided safely and reliably.
Brief description of the drawings
Fig. 1 is a schematic diagram of the principle of existing URL-based content filtering;
Fig. 2 is a schematic diagram of the relation between frames and scenes in a video sequence according to the present invention;
Fig. 3 is a schematic diagram of the correspondence between scenes, frames and packets in a video code stream according to the present invention;
Fig. 4 is an example of the feature network model of the present invention;
Fig. 5 is a schematic flowchart of a video code stream content filtering method of the present invention;
Fig. 6-Fig. 9 are schematic diagrams of the main structure of a video code stream filtering node implementing the video code stream filtering method of the present invention. Embodiments
The present invention provides a video code stream filtering node arranged at an appropriate position in the network. The filtering node can carry out automatic filtering and manual filtering of the content in streaming video, and can simultaneously perform URL-based filtering or similar shallow filtering methods.
The automatic filtering method for video content of the present invention is presented first. It takes the I-frames in the video code stream as the objects to be detected: the I-frame image is restored by decoding the I-frame, and harmful-content recognition is then carried out on it. There are two specific variants: one decodes and restores all I-frames; the other decodes and restores only the first I-frame in each scene. In the video code stream, an I-frame identifier is carried in the header of each packet containing an I-frame, so I-frames can be identified.
The second variant is described in detail below. Referring to Fig. 2, which shows the relation between frames and scenes in a video sequence: the video code stream passing through the filtering node is first divided into different scenes, i.e. the video sequence, originally composed of many frames, is divided into a sequence of scenes. Each scene contains an unequal number of frames; within one scene the frames are essentially identical in background and foreground, differing only by some motion. A scene can be understood as a camera shot: when the shot changes, a new scene begins.
Regarding scene segmentation, it should be explained that scenes are generated when the video content is originally shot (shot changes) and produced (special effects such as 3D transitions added between two shots). Performing scene segmentation at the filtering node means dividing the code stream into segments, each corresponding to an original scene. Of course, since current scene segmentation technology cannot achieve 100% recognition accuracy, the scenes finally segmented at the filtering node may not coincide exactly with the intrinsic scenes in the video code stream, but this does not affect the application of the present invention.
Referring to Fig. 3, which shows the correspondence between scenes, frames and packets in a video code stream: since the video code stream is issued by equipment such as a streaming media server, it is packetized after compression (independently of the specific packetization protocol). Packets are sent in time order, and each packet carries a corresponding sequence number or timestamp, from which the filtering node can correctly reconstruct the original order of the packets and associate them with scenes. In the end, each scene corresponds to a series of video data packets.
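The reordering step can be sketched as follows. This is a simplified illustration assuming RTP-style 16-bit sequence numbers that wrap around; the field layout and function name are assumptions, and a real reassembler would also handle packet loss and late arrivals.

```python
# Restore the original packet order from wrapped 16-bit sequence numbers,
# so packets can be grouped back into frames and scenes.
def restore_order(packets, first_seq):
    """packets: list of (seq, payload), possibly out of order."""
    def unwrapped(seq):
        # distance from the first sequence number, modulo the 16-bit wrap
        return (seq - first_seq) & 0xFFFF
    return [p for _, p in sorted((unwrapped(s), p) for s, p in packets)]

# Out-of-order packets whose sequence numbers wrap from 65535 to 0:
pkts = [(65535, b"B"), (1, b"D"), (65534, b"A"), (0, b"C")]
print(restore_order(pkts, first_seq=65534))  # [b'A', b'B', b'C', b'D']
```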
In fact, as long as the filtering node recognizes the first frame of each scene, all scenes can be segmented: all frames between the first frame of one scene and the first frame of the next belong to that scene. In general there is at least one I-frame (intra-coded frame) in a scene. The term I-frame is used in contrast to P-frames (predictive-coded frames) and B-frames (bidirectionally predictive-coded frames). The coding of an I-frame is determined entirely by itself, without relying on other frames; a P-frame can only be decoded after the reference frames before it, and a B-frame relies on reference frames both before and after it. Decoding an I-frame is therefore the simplest. In compression and coding standards based on the DCT-transform-plus-entropy-coding approach, such as the ITU H.26x series and the MPEG series, decoding an I-frame only requires inverse entropy coding, dequantization and inverse DCT, with no motion compensation, so the computational cost of decoding is minimal. To decode another kind of frame, such as a P-frame, from the video code stream, several P-frames before it must first be decoded, back to the nearest preceding I-frame; for an I-frame, only the I-frame itself needs decoding. Comparing the two, the difference in decoding complexity is enormous. In practice, in the encoder, although standards generally do not mandate it, an I-frame is usually inserted when the scene changes, so the first frame of a scene is often an I-frame. In newer standards such as H.264, a complete I-frame may be absent, and only some part of a frame may be intra-coded, e.g. one slice (intra-coded slices can be decoded independently). For the case where no complete I-frame exists, some amended selection criteria can be defined, for example choosing the frame with the most intra-coded slices or macroblocks (MB). General coding protocols have identifier mechanisms marking I-frames or intra-coded slices; for example, in the ITU H.264 standard they are marked by the IDR (Instantaneous Decoding Refresh) flag. The filtering node can therefore correctly extract I-frames or intra-coded slices/macroblocks according to these specific marks.
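The IDR mark the text refers to can be located directly in the bitstream. In an H.264 Annex B byte stream, the low 5 bits of the byte after a start code give the NAL unit type, and type 5 is a coded slice of an IDR picture. The sketch below finds those offsets; it is a minimal illustration (a 4-byte start code ends in the 3-byte one, so both are found), and the synthetic stream at the end is an invented example.

```python
def find_idr_offsets(bitstream: bytes):
    """Return byte offsets of start codes that begin IDR NAL units
    (nal_unit_type == 5) in an H.264 Annex B byte stream."""
    offsets = []
    i = 0
    while True:
        i = bitstream.find(b"\x00\x00\x01", i)
        if i < 0 or i + 3 >= len(bitstream):
            break
        nal_type = bitstream[i + 3] & 0x1F  # low 5 bits of the NAL header byte
        if nal_type == 5:                   # coded slice of an IDR picture
            offsets.append(i)
        i += 3
    return offsets

# Tiny synthetic stream: an SPS (type 7), a non-IDR slice (type 1), an IDR slice (type 5)
stream = (b"\x00\x00\x01\x67" + b"\x00" * 4 +
          b"\x00\x00\x01\x41" + b"\x00" * 4 +
          b"\x00\x00\x01\x65")
print(find_idr_offsets(stream))  # [16]
```

Only the header byte is inspected, so emulation-prevention bytes inside the NAL payload do not affect this check.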
For accurate recognition, a few adjacent frames before and after the I-frame may be partially decoded as well, to assist in recognizing the I-frame image; empirically, taking about 5 frames is usually enough to achieve accurate recognition in most cases. Alternatively, the adjacent frames before and after the I-frame may be partially decoded only when the decoded I-frame image alone cannot be recognized with sufficient accuracy.
For ease of exposition, the following description takes I-frames as the example. A scene may contain multiple I-frames (a long shot), in which case it is stipulated that the first I-frame in the scene is chosen.
After the first I-frame in a scene is obtained, the filtering node decodes it and restores the I-frame image, and then the image is recognized, using either of the following two recognition methods:
1. Manual recognition: the I-frame image is displayed for a human monitor to watch, realizing the manual filtering function;
2. Automatic recognition: the I-frame image is input to an automatic recognition module, which performs automatic matching against a harmful content database; if harmful content is found, playback of the video code stream is cut off at once and the event is reported to the human monitor. Automatic recognition of harmful content in the prior art covers the following aspects:
1) Automatic recognition of harmful picture content, such as pornographic or violent scenes; this image recognition technology is mature prior art;
2) Recognition of harmful superimposed text or symbols. After preprocessing, the text or symbol region is located, its orientation (vertical or horizontal) is identified, the text is segmented from the background, and the result is finally fed into an existing optical character recognition (OCR) module. The recognition result is matched against the database; if it matches a harmful judgment condition in the database, it is determined to be harmful superimposed text or symbols. This superimposed text/symbol recognition technology is mature prior art;
3) For particular faces that may be present in the image, the frame image is sent directly to a face recognition module for identification. The data in the face recognition module's database are established by the content supervision department itself, which can store all kinds of faces as needed: suspects, VIPs, terrorists, etc. This face recognition technology is mature prior art.
When manual recognition and automatic recognition are used simultaneously, judgment rules can be defined:
1. The result is determined entirely by the automatic recognition module.
2. The result is determined entirely by the human monitor.
3. Something in between: a comprehensive judgment combining both recognition results. One embodiment is a weighted average of scores. The automatic recognition module and the human monitor not only decide whether content is harmful but also give a harmfulness score, for example from 0 to 100, where a higher score means a higher degree of harm and 0 means harmless. The scores of the automatic recognition module and the human monitor are combined by weighted summation as follows:

S = (W_M × S_M + W_H × S_H) / (W_M + W_H)

where W_M and W_H are the weights of the automatic recognition module and the human monitor, whose relative size expresses whether the automatic module or the human is trusted more, and S_M and S_H are the scores given by the automatic recognition module and the human respectively. If the resulting composite score exceeds a specified value, such as 50, the comprehensive judgment is harmful; otherwise it is harmless. If only one side identifies harmful content and gives a score, the other side's score for that content can default to 0.
The above judgment rules can be applied flexibly according to circumstances; in practice, more detailed comprehensive judgment rules can of course be formulated.
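The weighted-average rule above can be written out directly. This is a minimal sketch; the threshold of 50 is the example value from the text, and the default weights are assumed to be equal.

```python
def fused_score(s_m, s_h, w_m=1.0, w_h=1.0):
    """Weighted average S = (W_M*S_M + W_H*S_H) / (W_M + W_H) of the automatic
    module's score s_m and the human monitor's score s_h (0 = harmless .. 100).
    A side that gave no score passes 0, as the text describes."""
    return (w_m * s_m + w_h * s_h) / (w_m + w_h)

THRESHOLD = 50  # example threshold from the text
print(fused_score(80, 40))                 # 60.0
print(fused_score(80, 40) > THRESHOLD)     # True -> judged harmful
print(fused_score(30, 0, w_m=2, w_h=1))    # 20.0 -> harmless (human gave no score)
```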
Once harmful content is found, the measures taken can be:
1. Immediately cut off the harmful video code stream and the corresponding audio code stream, together with other associated media streams;
2. Insert harmless content (such as a public service advertisement or a "system under maintenance" notice).
The filtering node should also have a learning function. If harmful content was found not by the automatic recognition module but by the human monitor, or through other channels, the learning module in the filtering node learns the harmful video code stream. For learning, each monitored code stream needs to be stored for a certain duration, e.g. 10 minutes (considering the required capacity, this duration should be tuned appropriately). To further reduce the required storage, only the I-frame used for recognition may be stored for each scene. Once the human monitor finds that harmful content occurred around some time t, the learning module reads the I-frames of the scenes in the interval t − T_W/2 to t + T_W/2 (where T_W is the length of the learning time window, e.g. 30 seconds) from the database and learns from them. Through learning, the automatic recognition module can later recognize such related scenes. There are many learning methods, including artificial intelligence, fuzzy logic, and artificial neural networks.
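The window-selection step of this learning mechanism can be sketched as follows. The storage layout (timestamped I-frames per stream) and the function name are assumptions; the downstream learner is out of scope here.

```python
# When a monitor flags harmful content at time t, pull the stored I-frames of
# the scenes falling inside the window [t - Tw/2, t + Tw/2] for the learner.
def frames_in_window(stored, t, tw=30.0):
    """stored: list of (timestamp_seconds, i_frame) kept for a monitored stream."""
    lo, hi = t - tw / 2, t + tw / 2
    return [f for ts, f in stored if lo <= ts <= hi]

store = [(10.0, "I1"), (25.0, "I2"), (40.0, "I3"), (70.0, "I4")]
print(frames_in_window(store, t=30.0))  # ['I2', 'I3'] -> handed to the learning module
```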
When URL-based filtering is used at the same time, the filtering node should also "remember" the content sources of harmful content and other relevant information, store them in a corresponding "suspicion" database, and grade the URLs and other information according to their history. URLs stored in this "suspicion" database require somewhat finer handling. If a legitimate URL played harmful content only because of some mistake, or because its URL was impersonated by others, then it is put into the "suspicion" database, but as long as no further incidents occur, its "suspicion" can be removed after a period of time. Conversely, if bad behavior is found repeatedly for some URL, it can be judged a "blacklist" entry and shielded completely. This information can also be shared with the databases of third-party URL rating service providers: the recognition results of the filtering node are sent to the third party's database, enabling mutually beneficial cooperation.
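The suspicion-grading logic just described can be sketched as below. The incident threshold, class and method names are invented for illustration; a real system would also time-stamp incidents so that suspicion decays automatically.

```python
# 'Suspicion' database sketch: incidents raise a URL's grade; a clean URL can be
# cleared, and a repeat offender is promoted to the blacklist.
class SuspicionDB:
    BLACKLIST_AT = 3  # incidents before a URL is blacklisted (assumed value)

    def __init__(self):
        self.incidents = {}     # url -> incident count
        self.blacklist = set()

    def report_incident(self, url):
        self.incidents[url] = self.incidents.get(url, 0) + 1
        if self.incidents[url] >= self.BLACKLIST_AT:
            self.blacklist.add(url)

    def clear_suspicion(self, url):
        """Called when a URL has stayed clean long enough."""
        if url not in self.blacklist:
            self.incidents.pop(url, None)

db = SuspicionDB()
for _ in range(3):
    db.report_incident("http://spoofed.example.com")
print("http://spoofed.example.com" in db.blacklist)  # True after repeated incidents
```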
The scene segmentation techniques used by the present invention generally comprise the following two:
1. Using structural information in the video data packets (such as motion vectors), the moving regions in an image can be estimated: how many regions are in motion, the motion direction, the motion pattern (one-way motion, back-and-forth motion, etc.), the motion amplitude, and so on. Frames that are similar in motion pattern can thus be identified, and such frames typically belong to the same scene;
2. Analysing statistical information of the video code stream: the bit rate of the video code stream is treated as a random process over time and statistically modelled (Statistical Modelling), so that the positions of scene beginnings and endings can be estimated from the statistical model.
Neither technique requires decoding, so both are very efficient. Compared with scene segmentation performed after decoding (e.g. histogram differencing), however, one shortcoming is lower segmentation precision. This can be addressed by adjusting the parameters of the scene segmentation module (such as certain thresholds). Setting the parameters too sensitively is likely to split what is really one scene into multiple scenes (over-segmentation), while setting them too insensitively may merge what are really multiple scenes into one scene (under-segmentation).
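As a toy illustration of the second (statistical) technique, a crude change-point detector on per-frame bit counts can flag likely scene boundaries. This is only a stand-in for the statistical modelling the text describes; the window size and ratio threshold are assumptions, and they directly exhibit the over-/under-segmentation trade-off mentioned above.

```python
def scene_boundaries(bits_per_frame, window=5, threshold=2.0):
    """Flag indices where the mean bitrate of the next `window` frames differs
    from that of the previous `window` frames by more than `threshold` times.
    Boundaries closer than `window` frames to the previous one are suppressed."""
    bounds = []
    last = -window
    for i in range(window, len(bits_per_frame) - window + 1):
        prev = sum(bits_per_frame[i - window:i]) / window
        nxt = sum(bits_per_frame[i:i + window]) / window
        if (i - last >= window and prev > 0 and nxt > 0
                and (nxt / prev > threshold or prev / nxt > threshold)):
            bounds.append(i)
            last = i
    return bounds
```

Lowering `threshold` makes the detector more sensitive (risking over-segmentation); raising it risks under-segmentation, exactly the parameter tuning the scene segmentation module must perform.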
A two-level video harmful-content filtering technique can solve such problems. Its basic idea is a layered definition of image features, broadly divided into two levels: semantic (Semantic, also called conceptual, Conceptual) features and event (Event) features. As shown in Fig. 4, if the highest-level semantic feature to be detected is "outdoor scene", the corresponding lower-level semantic features include "beach", "mountain forest", "open country", etc.; below these are still lower-level semantic features, and finally event features, such as a mountain or a patch of trees. Each event feature has a specific recognition method, for example recognising a road, the motion of a person, and so on. The benefit of combining these two levels of recognition is that high-level features understandable by humans can be linked with low-level features amenable to automatic recognition; the correspondences form a feature network model.
Similarly, feature network models for concepts such as "pornography" and "violence" can be established. Building a feature network model requires understanding the mechanisms of human cognition and domain-specific expertise; this belongs to the prior art and is not further described here. The present invention provides an input interface through which a human expert can define the representation of the feature network model, and the filter node performs automatic recognition according to this model.
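A feature network model of the kind described could be represented, for instance, as a nested mapping from concepts to sub-concepts to event features. The node names and the combination rule (a concept holds if all event features of at least one sub-concept are detected) are purely illustrative assumptions, not the patent's actual model.

```python
# Hypothetical two-level feature network: concept -> sub-concept -> event features.
feature_network = {
    "outdoor scene": {
        "beach": ["sand region", "water region"],
        "mountain forest": ["tree texture", "slope contour"],
    }
}

def recognize(network, detected_events):
    """A concept is recognised if every event feature of at least one of its
    sub-concepts was detected by the low-level event recognisers
    (assumed combination rule for this sketch)."""
    results = {}
    for concept, subs in network.items():
        results[concept] = any(
            all(ev in detected_events for ev in events)
            for events in subs.values()
        )
    return results
```

An expert would enter such a structure through the input interface the text mentions; real models would carry per-edge weights or probabilities rather than a hard all/any rule.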
For the filtering of harmful superimposed text and graphical symbols, the present invention can locate the superimposed text and graphical symbol regions in an image without decoding, extract them, perform a certain background/foreground segmentation, and feed them into an OCR (Optical Character Recognition) module for identification. By processing the Discrete Cosine Transform (DCT) coefficients carried in the packets of the video code stream, rectangular regions containing superimposed text or graphical symbols can be located. Horizontal and vertical projections of such a region (Projection: along every horizontal or vertical line through the region, the pixel intensities on the line are summed/integrated, yielding a one-dimensional brightness distribution curve) then reveal the orientation of the text or symbols, after which similar projection methods perform line and character segmentation.
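The horizontal and vertical projections just described can be computed directly from a luminance block, as in this minimal sketch (the block here is a plain list of pixel rows; in practice it would come from the partially decoded DCT data):

```python
def projections(block):
    """Horizontal and vertical brightness projections of a rectangular
    luminance block (a list of equal-length pixel rows).

    Summing along each row gives one value per row (horizontal projection);
    summing along each column gives one value per column (vertical
    projection). Text lines and character columns appear as peaks."""
    h_proj = [sum(row) for row in block]                       # one value per row
    width = len(block[0])
    v_proj = [sum(row[c] for row in block) for c in range(width)]  # per column
    return h_proj, v_proj
```

Valleys in the horizontal projection separate text lines; valleys in the vertical projection separate characters, which is the line and character segmentation step before OCR.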
The filter node of the present invention can also implement a logging function, and can connect to external control devices to exchange data and signalling with them.
In summary, the present invention first provides a depth content filtering method based on I frames. As shown in Fig. 5, the identification processing of each I frame to be detected comprises the following steps:
S1: obtain, from the video code stream to be played during multimedia communication, an I frame to be detected and several adjacent frames before and after it;
The I frames to be detected may include every I frame in the video code stream, identified by the I-frame identification information set correspondingly in the packet headers of the packets containing the I frames;
Alternatively, only the first I frame of each scene contained in the video code stream may be taken as the I frame to be detected. The first frame of a scene is usually an I frame of that scene; when the video code stream is coded with the H.264 protocol, an I frame is a frame consisting mostly of intra-coded slices or macroblocks (MBs), and such a frame carries an Instantaneous Decoding Refresh (IDR) indication.
S2: partially decode the I frame to be detected and the several frame images before and after it;
Alternatively, only when the precision of the decoded I-frame image is insufficient for accurate recognition need the adjacent frames before and after the I frame be partially decoded, to assist in recognising the I-frame image.
S3: recognise whether the I-frame image contains harmful content; if so, perform step S4, otherwise perform step S5;
S4: immediately cut off the playing of the video code stream;
While the playing of the video code stream is cut off, playback of a substitute video source may also be started.
S5: continue playing the video code stream.
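The steps S1–S5 above can be sketched as a minimal control loop. The callables passed in stand for the decoding, recognition, switch and playback modules the text describes; partial decoding is elided, so this is an assumption-laden skeleton rather than the patent's implementation.

```python
def filter_stream(i_frames, contains_harmful_content, switch_off, play):
    """Run the S1-S5 loop over the I frames to be detected.

    Returns False if playback was cut off, True if the whole stream played."""
    for frame in i_frames:                    # S1: each I frame to be detected
        image = frame                         # S2: partial decoding (elided here)
        if contains_harmful_content(image):   # S3: recognise harmful content
            switch_off()                      # S4: cut off playback at once
            return False
        play(frame)                           # S5: otherwise keep playing
    return True
```

In the real node the delay module buffers the stream so that S3's verdict arrives before the frame would have been rendered.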
If the frames to be detected are obtained via scene segmentation, then before obtaining them the video code stream is first segmented into scenes; the first frame of each scene is taken as the frame to be detected, and that first frame, or it together with an adjacent number of frames before or after it, is partially decoded.
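When the H.264 IDR marking mentioned above is used to pick the I frames to be detected, IDR slices can be located by their NAL unit type (5) in an Annex-B byte stream. This simplified sketch reports the offset of each three-byte start code and ignores emulation-prevention bytes; a real parser needs full NAL handling.

```python
def find_idr_offsets(stream: bytes):
    """Scan an H.264 Annex-B byte stream and return the offsets of start codes
    that begin IDR slices (nal_unit_type == 5), i.e. frames usable as the
    I frames to be detected."""
    offsets = []
    i = 0
    while i < len(stream) - 3:
        if stream[i:i + 3] == b"\x00\x00\x01":
            nal_type = stream[i + 3] & 0x1F   # low 5 bits of the NAL header
            if nal_type == 5:                 # 5 = coded slice of an IDR picture
                offsets.append(i)
            i += 3
        else:
            i += 1
    return offsets
```

Four-byte start codes (00 00 00 01) are also caught, since they contain a three-byte start code one position later.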
The method of the invention can be used together with existing URL-based filtering. URL-based filtering filters the related signalling during multimedia communication: if the related signalling contains harmful URL information, execution of the signalling is refused, preventing reception of video code streams from harmful URL sources.
While performing URL-based filtering, the present invention also provides a harmful-URL rating scheme, which prevents mistakenly blocking URLs that were harmful only accidentally, and which discovers new harmful URLs and adds them to the harmful URL information base in a timely manner.
In the method for the invention, specific recognition methods can use manual identified and automatic identification, typically
In the case of, higher efficiency and the identification more insured can be obtained using two kinds of recognition methods simultaneously, at this moment, the preferential court verdict for performing manual identified or automatic identification can be set, it is of course also possible to consider the court verdict of both sides to obtain more responsible control model.
The method of the invention also provides a content recording mechanism, including recording of the identified harmful content and recording of the video code stream played within a set period. The purpose of recording harmful content is this: if harmful content not stored in the automatic-recognition database is identified manually, the invention's learning mechanism ensures that this newly appearing harmful content is added in time to the harmful content database of the automatic recognition mechanism. The purpose of recording the video code stream played within a set period is this: video code streams from specific URL sources can be further examined, and when harmful content in a video code stream has been missed, data is available for later learning.
The method of the invention also provides a logging and reporting mechanism: the filtering process of the video code stream is recorded and a log report can be generated.
As shown in Fig. 6, to realise the video code stream filtering method of the present invention, the video code stream filter node provided by the present invention mainly comprises:
a video code stream delay module, for receiving the video code stream to be played during multimedia communication and outputting it after a delay, the specific delay being determined by the time required to identify harmful content; a switch module, connected to the video code stream delay module, for cutting off the video code stream output by the delay module;
an I-frame detection/decoding module, for obtaining from the video code stream to be played the I frame to be detected and the several adjacent frames before and after it, and for partially decoding the I-frame image to be detected and the adjacent frame images before and after it;
Of course, only when the precision of the decoded I-frame image is insufficient for accurate recognition need the several adjacent frames before and after the I frame be obtained from the video code stream delay module and partially decoded, to assist in recognising the I-frame image; in that case the I-frame detection/decoding module is also connected to the video code stream delay module.
a harmful content identification module, connected to the I-frame detection/decoding module, for recognising whether the I-frame image contains harmful content and, if so, outputting a corresponding control signal;
a judging module, connected between the harmful content identification module and the switch module, for outputting to the switch module, upon receiving the control signal, a trigger signal that disconnects the video code stream;
If the I frames to be detected are obtained via scene segmentation, the filter node further comprises: a scene segmentation module, connected to the I-frame detection/decoding module, for receiving the video code stream to be played in parallel with the video code stream delay module and performing scene segmentation on it.
Fig. 7 shows one structural scheme of the harmful content identification module and the judging module, wherein the harmful content identification module includes:
an automatic identification submodule realising the automatic recognition function, connected between the I-frame detection/decoding module and the judging module, for comparing the harmful content in the harmful content database one by one against the related content contained in the I-frame image to perform automatic recognition of harmful content;
According to the types of harmful content, the automatic identification submodule further comprises: a harmful image recognition unit with a connected harmful image database, a harmful superimposed text/symbol recognition unit with a connected harmful superimposed text/symbol database, and a face recognition unit with a connected face database, operating in parallel to identify whether the I-frame image contains the corresponding harmful content. The harmful image database also stores the various existing feature networks for harmful image content (the feature networks entered by human experts are stored here) and the various templates used to recognise low-level event features, such as statistical histogram templates. The harmful text/symbol database stores templates of various harmful words and symbols, such as subversive and pornographic vocabulary and slang, as well as known harmful graphical symbols, such as Nazi symbols. The face database provides the necessary data and various templates for the face recognition unit, such as face templates of suspects, prosecuted persons, VIPs, etc. The harmful content identification module also includes a manual identification submodule realising the manual recognition function, which specifically comprises: an I-frame image display unit and a monitoring instruction input unit, wherein the I-frame image display unit is connected to the I-frame detection/decoding module and displays the I-frame image to a supervisor for manual recognition of harmful content; the monitoring instruction input unit is connected to the judging module and, upon receiving the cut-off instruction entered by the supervisor when harmful content is identified, outputs the control signal to the judging module.
The judging module correspondingly includes:
a first decision unit, receiving the control signal output by the automatic identification submodule;
a second decision unit, receiving the control signal output by the manual identification (operating interface) submodule;
a cascading judgement unit, connected to the first decision unit and the second decision unit respectively, for executing the control signal of the first or the second decision unit preferentially according to a set rule. Alternatively, the automatic identification submodule and the supervisor each give, according to preset rules, a harmfulness score for the harmful content they identify, and the cascading judgement unit weights the two scores to reach the verdict finally executed; when only one side's score for identified harmful content is received, the other side's score for that content defaults to zero.
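The weighted cascading judgement just described corresponds to the formula given later in claim 13, S = (WM×SM + WH×SH)/(WM + WH). A minimal sketch, with the default weights and the harmfulness threshold chosen here purely for illustration:

```python
def cascaded_verdict(s_m, s_h, w_m=0.5, w_h=0.5, harmful_above=0.5):
    """Combine the automatic module's score s_m and the human monitor's score
    s_h as S = (WM*SM + WH*SH) / (WM + WH); a side that gave no score (None)
    defaults to zero, per the text. Returns True when the verdict is harmful."""
    s_m = 0.0 if s_m is None else s_m   # absent party defaults to zero
    s_h = 0.0 if s_h is None else s_h
    s = (w_m * s_m + w_h * s_h) / (w_m + w_h)
    return s > harmful_above            # True => cut off playback
```

The relative sizes of `w_m` and `w_h` express how much each recogniser is trusted; per the text, the weights and the threshold are set empirically.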
In many cases, video alone may be insufficient to judge whether content is harmful; the audio verdict must also be combined. The judging module can therefore take as an input an audio content filtering result produced outside this node, so the judging module also includes:
a third decision unit which, upon receiving a harmful-audio verdict for the audio code stream corresponding to the video code stream, either outputs the trigger signal that disconnects the video code stream directly to the switch module, or outputs it to the switch module via the cascading judgement unit; the structure shown in Fig. 7 is the latter implementation.
Referring still to Fig. 6, the video code stream filter node may further include:
a URL-based filtering module, for receiving the related signalling of the multimedia communication and filtering it on the basis of URLs using the pre-stored harmful Uniform Resource Locator (URL) information base; if a URL is judged harmful, the corresponding signalling setup procedure is forbidden, so that the request for and transmission of the content cannot proceed;
The filter node may further include: a URL recording and grading module and a URL ratings database, wherein the URL recording and grading module records the URL information of harmful content and the URL ratings database records URL rating data. The URL recording and grading module adjusts a URL's grade according to the frequency and seriousness of its past bad behaviour, and adds the URL information to the harmful URL information base when it reaches a set level. This ensures that a URL is not permanently shut down because of an occasional problem; the records and rating results can also be output to third-party rating services.
Besides recording and grading, the URL recording and grading module also serves as the external interface of the URL ratings database module. Apart from the main control module, no other module or database has a direct connection to the database; they all access it through the URL recording and grading module. The database is thus connected only to the URL recording and grading module and the main control module. The URL recording and grading module is connected to the following modules: the main control module; the URL ratings database module; the judging module, whose verdicts are used to score and grade the associated URLs; the learning module, whose learning process may reference data in the database; and the harmful content identification module, whose recognition process may need the URL data in the database. One example: if a caption superimposed on the video tells viewers to visit some URL, such as an illegal website, this too should be identified and controlled.
Referring still to Fig. 6, the filter node includes:
a harmful content recording module, connected to the I-frame detection/decoding module and the judging module respectively; while the judging module triggers disconnection of the video code stream, the harmful content recording module is started to record the identified harmful content; the recording time-window length TW can be specified by the human monitor;
a video content recording module, for recording the video stream of the monitored programme over the set period and storing it in the recorded-content storage module; the recording time-window length TW can be specified by the human monitor;
Generally, the harmful content recording module and the video content recording module are merged into a single recording module;
a recorded-content storage module, connected to the harmful content recording module and the video content recording module (recording module), for preserving the recorded harmful content.
Referring still to Fig. 6, the filter node also includes:
a harmful content learning module, connected to the recorded-content storage module; when the recognition results of the automatic identification submodule and the supervisor are inconsistent and the supervisor's harmful verdict is finally executed, it learns the harmful content and adds the learning result to the harmful content database.
As shown in Fig. 8, when the automatic identification submodule is divided according to the types of harmful content, the harmful content learning module correspondingly includes:
an image learning unit, connected to the harmful image database, for learning harmful images and adding the learning results to the harmful image database;
a superimposed text/symbol learning unit, connected to the harmful superimposed text/symbol database, for learning harmful superimposed text/symbols and adding the learning results to the harmful superimposed text/symbol database;
a face learning unit, connected to the face database, for learning facial images and adding the learning results to the face database.
As shown in Fig. 9, to realise control of the filter node and the setting of parameters, the filter node also includes the following:
an operating interface module, for inputting relevant parameters or operating instructions; the operating interface provided to the human monitor may take forms such as a graphical user interface or a command line.
a feature network module, connected between the operating interface module and the harmful image database, for inputting/adjusting the feature network model and/or event feature templates for the harmful image recognition unit;
a parameter setting module, connected between the operating interface module and the scene segmentation module, for inputting/adjusting in the scene segmentation module the relevant parameters required for scene segmentation;
a decision rule setting module, connected between the operating interface module and the judging module, for inputting/adjusting the decision rules for control signals in the judging module;
a grading rule setting module, connected between the operating interface module and the URL ratings database, for inputting/adjusting the grading rules in the URL ratings database;
a main control module, connected to every other module, submodule and unit in the filter node; this module is the centre of the filter node and controls all other modules, submodules and units;
a log/report module, connected to every other module, submodule and unit in the filter node, for logging the running state of the node and the events and information-filtering results that occur, and for generating reports.
an external control module, connected to the main control module, for completing data/signalling exchange with external control devices. Because this node occupies the same network position as other network devices such as media gateways, it may even be realised in the same physical device as a media gateway. It is therefore likely to be controlled by external control devices such as a gateway controller and to report information to external devices; the communication protocols used for control commands and data reporting may be H.248, MGCP (Media Gateway Control Protocol), etc. This module completes the data interaction with such external control devices.
a control instruction module, connected between the operating interface module and the main control module, for receiving the human monitor's instructions, such as cutting off a harmful video code stream, substituting a harmless code stream, enabling or disabling a filtering function, restarting the node, etc.; the aforementioned monitoring instruction input unit may be arranged in the control instruction module.
Finally, it should be appreciated that the filter node of the present invention can be deployed anywhere on the network; no network position is strictly specified. It can in fact be deployed at any network position between the content source and the user terminal, as long as the media stream to be filtered passes through that position. In the extreme case it can be deployed on the user terminal itself, which is equivalent to building an information filtering subsystem into the terminal.
It should be noted that encryption of the video code stream does not affect the implementation of the technical solution of the present invention; there are two possibilities:
1. Content from a legal content source: if it is encrypted by means such as DRM, the key can be obtained by the governmental content supervision department;
2. Content from an illegal content source: since its purpose is to spread harmful content, it must be receivable by the general public, so it is necessarily unencrypted or uses low-grade encryption, and can therefore be decrypted easily.
In the above method of the present invention, the specific classification standards for harmful content and the corresponding identification criteria are determined according to the practical application scenario; specific standards or recognition methods do not limit the protection scope of the present invention.
Of course, to improve recognition precision, the obtained intra-coded frame to be detected may also be fully decoded before recognising whether the intra-coded frame image contains harmful content; if it does, the playing of the video code stream is cut off, otherwise the video code stream is played.
The technical solution provided by the embodiments of the present invention can also be used to recognise other video content, for example: goal shots in sports highlights, such as football goals, long-range baskets, dunks and other spectacular plays, with the aim of storing and recording the identified clips; recognising and archiving shots of particular persons in news programmes; identifying the automatically recorded footage of the electronic eyes used in traffic systems (cameras installed at major crossroads) to find violations and recognise the plate numbers of offending vehicles; or recognising particular programmes on television, such as a Harry Potter film, and notifying IPTV users to watch once it is recognised.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims
1. A video code stream filtering method during multimedia communication, characterised by comprising the following steps: A. obtaining an intra-coded frame to be detected from the video code stream transmitted during multimedia communication, and partially decoding the intra-coded frame image; B. recognising whether the intra-coded frame image contains harmful content; if so, cutting off the playing of the video code stream; otherwise playing the video code stream.
2. The filtering method of claim 1, characterised in that step A further includes: simultaneously obtaining and partially decoding the images of a certain number of frames adjacent to the intra-coded frame, before or after it; and step B further includes using the images of the adjacent frames to assist in recognising the intra-coded frame image.
3. The filtering method of claim 1, characterised in that, in step A, the intra-coded frames to be detected include every intra-coded frame in the video code stream, identified by the intra-coded frame identification information set correspondingly in the packet headers of the packets containing the intra-coded frames.
4. The filtering method of claim 1, characterised in that, in step A, the intra-coded frame to be detected is the first intra-coded frame of each scene contained in the video code stream.
5. The filtering method of claim 4, characterised in that the method further comprises: performing scene segmentation on the video code stream according to structural information of the video data packets; and/or performing scene segmentation on the video code stream according to statistical information of the video stream.
6. The filtering method of any one of claims 1-4, characterised in that, in the method, when the video code stream is coded with the H.264 protocol, the intra-coded frame is a frame consisting mostly of intra-coded slices or macroblocks, and the frame carries an instantaneous decoding refresh flag.
7. The filtering method of claim 1, characterised in that the method further includes: performing URL-based filtering on the related signalling during multimedia communication using the pre-stored harmful Uniform Resource Locator (URL) information base.
8. The filtering method of claim 7, characterised in that step B further includes: recording the URL information related to the identified harmful content and grading the URL information according to its history, the URL information being added to the harmful URL information base when it reaches a set level.
9. The filtering method of claim 1, characterised in that the method further includes: recognising whether the audio code stream corresponding to the video code stream contains harmful audio; if so, cutting off the playing of the video code stream; otherwise continuing to play the video code stream.
10. The filtering method of claim 1, characterised in that, in step B, the intra-coded frame image is input to an automatic identification module, which compares the harmful content in the pre-stored harmful content database one by one against the related content contained in the intra-coded frame image to perform automatic recognition of harmful content; and/or the intra-coded frame image is displayed to a supervisor for manual recognition of harmful content.
11. The filtering method of claim 10, characterised in that, when manual and automatic recognition are performed simultaneously and their results are inconsistent, the verdict of the automatic identification module or of the supervisor is executed preferentially.
12. The filtering method of claim 10, characterised in that, when manual and automatic recognition are performed simultaneously, the automatic identification module and the supervisor each give, according to preset rules, a harmfulness score for the harmful content they identify, and the two scores are weighted to reach the verdict finally executed; when only one side's score for identified harmful content is received, the other side's score for that content defaults to zero.
13. The filtering method of claim 12, characterised in that the weighting is: S = (WM × SM + WH × SH) / (WM + WH), where WM and WH are the weights of the automatic identification module and of the supervisor respectively, their relative size indicating the degree of trust in each recognition result, and SM and SH are the scores given by the automatic identification module and the supervisor respectively; if S is greater than a set value the verdict is harmful, otherwise harmless; WM, WH and the set value are set empirically.
14. The filtering method of claim 11 or 12, characterised in that the method further includes: recording the identified harmful content; when manual and automatic recognition are performed simultaneously, if their results are inconsistent and the manual harmful verdict is finally executed, learning the identified harmful content and adding the learning result to the harmful content database.
15. The filtering method of claim 1, characterised in that the harmful content includes at least one of: harmful images, harmful superimposed text or symbols, specific facial images.
16. The filtering method of claim 1 or 9, characterised in that, while the playing of the video code stream is cut off, a standby harmless video code stream is started and played.
17. The filtering method of claim 1, characterised in that the method further includes: recording and saving the video code stream played within a set period.
18. The filtering method of claim 1, characterised in that the method further includes: recording the identification of harmful content in a log and generating a log report.
19. The filtering method of claim 1, characterised in that the method further includes: delaying the playing of the video code stream according to the time required to identify harmful content.
20. A video code stream filter node during multimedia communication, including: a video code stream delay module, for receiving the video code stream to be played during multimedia communication and outputting it after a delay; a switch module, connected to the video code stream delay module, for cutting off the video code stream output by the delay module; characterised in that the filter node further includes: an intra-coded frame detection/decoding module, for obtaining from the video code stream to be played the intra-coded frame to be detected, or that frame together with a certain number of adjacent frames before or after it, and partially decoding them to obtain the frame images; a harmful content identification module, connected to the intra-coded frame detection/decoding module, for recognising whether the images contain harmful content and, if so, outputting a corresponding control signal; a judging module, connected between the harmful content identification module and the switch module, for outputting to the switch module, upon receiving the control signal, a trigger signal that disconnects the video code stream.
21. The filter node of claim 20, characterised in that the filter node further includes: a scene segmentation module, connected to the intra-coded frame detection/decoding module, for receiving the video code stream to be played and performing scene segmentation on it.
22. The filter node of claim 20, characterised in that the filter node further includes: a URL-based filtering module, for receiving the related signalling of the multimedia communication and performing URL-based filtering on it using the pre-stored harmful Uniform Resource Locator (URL) information base.
23. The filter node of claim 22, characterised in that the filter node further includes: a URL recording and grading module, for recording the URL information related to harmful content and grading the URL information according to its history, the URL information being added to the harmful URL information base when it reaches a set level; and a URL ratings database, for preserving the rules and history of URL grading.
24. The filter node of claim 23, characterised in that the harmful content identification module includes an automatic identification submodule, connected between the intra-coded frame detection/decoding module and the judging module, for comparing the harmful content in the harmful content database one by one against the related content contained in the intra-coded frame image to perform automatic recognition of harmful content; and/or a manual identification submodule, which specifically includes: an intra-coded frame image display unit and a monitoring instruction input unit, wherein the intra-coded frame image display unit is connected to the intra-coded frame detection/decoding module, for displaying the intra-coded frame image to a supervisor for manual recognition of harmful content; and the monitoring instruction input unit is connected to the judging module, for outputting the control signal to the judging module upon receiving the cut-off instruction entered by the supervisor when harmful content is identified.
25. The filter node of claim 24, characterised in that, according to the types of harmful content, the automatic identification submodule includes at least one of: a harmful image recognition unit with a connected harmful image database, a harmful superimposed text/symbol recognition unit with a connected harmful superimposed text/symbol database, and a face recognition unit with a connected face database; wherein the harmful image recognition unit, the harmful superimposed text/symbol recognition unit and the face recognition unit are connected in parallel between the intra-coded frame detection/decoding module and the judging module, each recognising whether the intra-coded frame image contains the corresponding harmful content.
26. The filter node of claim 25, characterised in that, when the harmful content identification module includes both the automatic identification submodule and the instruction input submodule, the judging module includes: a first decision unit, receiving the control signal output by the automatic identification submodule; a second decision unit, receiving the control signal output by the operating interface submodule; and a cascading judgement unit, connected to the first decision unit and the second decision unit respectively, for executing the control signal of the first or the second decision unit preferentially according to a set rule; or the automatic identification submodule and the supervisor each give, according to preset rules, a harmfulness score for the harmful content they identify, and the cascading judgement unit weights the two scores to reach the verdict finally executed; when only one side's score for identified harmful content is received, the other side's score for that content defaults to zero.
that the score value that the content is provided is zero.27th, filter node as claimed in claim 26, it is characterised in that also include in the judging module:When the 3rd decision unit, injurious sound court verdict for receiving the corresponding audio code stream of the video code flow, the control instruction for disconnecting the video code flow is exported to the switch module directly or by cascading judgement unit.28th, the filter node as described in claim 20 or 25, it is characterised in that the filter node also includes:Harmful content records module, and the intracoded frame detection/decoder module and judging module are connected respectively, while the judging module triggering disconnects the video code flow, starts the harmful content and records the harmful content that module recording is identified;Recorded content memory module, connects the harmful content and records module, for preserving the harmful content recorded.29th, filter node as claimed in claim 28, it is characterised in that the filter node also includes:Harmful content study module, connect the recorded content memory module, for when the recognition result to the content of automatic identification submodule and supervisor is inconsistent and finally performs harmful court verdict of supervisor, learning the harmful content and learning outcome being added in harmful content database. 
30th, filter node as claimed in claim 28, it is characterised in that when automatic identification submodule is set respectively according to the type of harmful content, the harmful content study module correspondence includes one of following:Image study unit, the harmful image data base of connection, for learning harmful image and learning outcome being added in harmful image data base;It is superimposed word/sign learning unit, the harmful superposition word/symbol database of connection, for learning harmful superposition word/symbol and learning outcome being added in harmful superposition word/symbol database;Face unit, connects face database, for learning facial image and learning outcome being added in face database.31st, filter node as claimed in claim 28, it is characterised in that the filter node also includes:Operating Interface Module, for inputting relevant parameter or operational order;Video content records module, connects between the Operating Interface Module and recorded content memory module, and the video flowing of monitoring of a recorded programme person's set period is simultaneously stored to recorded content memory module.32nd, the filter node as described in wooden fork profit requires 31, it is characterised in that when in automatic identification submodule comprising harmful image identification unit, the filter node also includes:Character network module, is connected between the Operating Interface Module and harmful image data base, for input/adjustment character network model and/or affair character template into harmful image data base.33rd, filter node as claimed in claim 32, it is characterised in that the filter node also includes:Parameter setting module, is connected between the Operating Interface Module and scene cut module, for the relevant parameter needed for input/adjustment progress scene cut into the scene cut module.34th, filter node as claimed in claim 32, it is characterised in that the filter node also 
includes:Decision rule setup module, is connected between the Operating Interface Module and judging module, the decision rule for inputting/adjusting control signal to the judging module;And/orDetailed level rule setting module, is connected between the Operating Interface Module and U L ratings databases, regular for commenting input in Grade databases/adjustment to grade to the URL. '35th, filter node as claimed in claim 20, it is characterised in that the filter node also includes:Film source storehouse is replaced, it is change-over switch to connect the switch module, the change-over switch connects the replacement film source storehouse while video code flow is disconnected. ' 36th, the filter node as described in claim 22-35 is any, it is characterised in that the filter node also includes:Main control module, connects other any one module, submodule or units in the filter node, for carrying out operation control respectively;Log Report module, connects other any one module, submodule or units in the filter node respectively, and will is said in the operation for generating and exporting the filter node.37th, filter node as claimed in claim 36, it is characterised in that the filter node also includes:External control module, connects the main control module, for completing data/Signalling exchange with external control devices.38th, filter node as claimed in claim 36, it is characterised in that the filter node also includes:Control instruction module, is connected between Operating Interface Module and main control module, the instruction for receiving human monitor.39th, filter node as claimed in claim 38, it is characterised in that when the filter node includes the monitoring instruction input unit simultaneously, the monitoring instruction input unit is arranged in the control instruction module.40th, the video code flow filter method during a kind of multimedia communication, it is characterised in that comprise the following steps:Intracoded frame 
to be detected is obtained from the video code flow transmitted during multimedia communication, the intraframe coding two field picture is decoded;Recognize in the intraframe coding two field picture and whether include harmful content, if it is cut off the broadcasting of the video code flow;Otherwise the video code flow is played.
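The weighted score fusion of claim 13 can be sketched in a few lines of Python. This is a minimal illustration, not code from the patent: the function name, default weights and threshold below are assumptions standing in for the "empirical values" the claim leaves open, and a missing score defaults to zero as the claims specify.

```python
def fused_verdict(s_m=None, s_h=None, w_m=0.5, w_h=0.5, threshold=0.6):
    """Fuse machine (s_m) and human (s_h) harmfulness scores.

    Implements S_i = (W_M*S_M + W_H*S_H) / (W_M + W_H); a side that
    supplied no score contributes zero, per claims 12 and 26.
    """
    s_m = 0.0 if s_m is None else s_m
    s_h = 0.0 if s_h is None else s_h
    s_i = (w_m * s_m + w_h * s_h) / (w_m + w_h)
    return "harmful" if s_i > threshold else "harmless"
```

Weighting W_M above W_H expresses higher trust in the automatic module; for example, with w_m=0.9 a confident machine score alone can cross the threshold even when the human monitor has not yet scored the frame.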
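The claim-40 method (inspect only intra-coded frames, cut playback on a hit) can be sketched as follows. All names here are illustrative assumptions: frames are modeled as dicts with `type` and `image` keys, and `is_harmful`, `play` and `cut_off` are caller-supplied callbacks standing in for the identification module and switch module of claim 20.

```python
def filter_stream(frames, is_harmful, play, cut_off):
    """Sketch of the claim-40 pipeline.

    Only intra-coded (I) frames are checked: they decode without
    reference frames, which is why the patent keys detection to them.
    Returns False if playback was cut off, True if the stream played out.
    """
    for frame in frames:
        if frame["type"] == "I" and is_harmful(frame["image"]):
            cut_off()   # trigger the switch module: stop playback
            return False
        play(frame)     # harmless so far: forward the frame for playback
    return True
```

The delay module of claim 20 (and the delayed playback of claim 19) corresponds to buffering `frames` long enough for `is_harmful` to finish before `play` releases each frame downstream.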
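The URL recording-and-grading behaviour of claims 22-23 amounts to keeping a per-URL history and promoting a URL into the harmful-URL base once its grade reaches a set level. The patent leaves the grading rule configurable (claim 34); the sketch below assumes the simplest possible rule, a hit count with a threshold, and every name in it is illustrative.

```python
def grade_url(history, url, harmful, promote_at=3):
    """Record one observation of `url` and report whether it should be
    promoted to the harmful-URL information base (claim 23 sketch).

    `history` maps url -> count of harmful hits; `promote_at` is the
    assumed count-based grade threshold.
    """
    if harmful:
        history[url] = history.get(url, 0) + 1
    return history.get(url, 0) >= promote_at
```

Once promoted, the URL-based filtering module of claim 22 can reject signaling referencing the URL before any video is decoded at all, which is cheaper than per-frame identification.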
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2007800003987A CN101317455A (en) | 2006-04-30 | 2007-04-29 | Video code stream filtering method and filtering node |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200610079023.1 | 2006-04-30 | ||
CNB2006100790231A CN100490532C (en) | 2006-04-30 | 2006-04-30 | Video code stream filtering method and filtering node |
CNA2007800003987A CN101317455A (en) | 2006-04-30 | 2007-04-29 | Video code stream filtering method and filtering node |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101317455A true CN101317455A (en) | 2008-12-03 |
Family
ID=38076911
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2006100790231A Expired - Fee Related CN100490532C (en) | 2006-04-30 | 2006-04-30 | Video code stream filtering method and filtering node |
CNA2007800003987A Pending CN101317455A (en) | 2006-04-30 | 2007-04-29 | Video code stream filtering method and filtering node |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2006100790231A Expired - Fee Related CN100490532C (en) | 2006-04-30 | 2006-04-30 | Video code stream filtering method and filtering node |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN100490532C (en) |
WO (1) | WO2007128234A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023030402A1 (en) * | 2021-08-31 | 2023-03-09 | 北京字跳网络技术有限公司 | Video processing method, apparatus and system |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101557510A (en) * | 2008-04-09 | 2009-10-14 | 华为技术有限公司 | Method, system and device for processing video coding |
CN101982981B (en) * | 2010-11-12 | 2012-02-01 | 福州大学 | Classification detection device for digital TV transport stream |
CN102073676A (en) * | 2010-11-30 | 2011-05-25 | 中国科学院计算技术研究所 | Method and system for detecting network pornography videos in real time |
CN102801956B (en) * | 2012-04-28 | 2014-12-17 | 武汉兴图新科电子股份有限公司 | Network video monitoring device and method |
CN103106251B (en) * | 2013-01-14 | 2016-08-03 | 冠捷显示科技(厦门)有限公司 | A kind of system filtering the media file that display device can not be play and filter method |
CN104254002B (en) * | 2013-06-25 | 2018-01-12 | 上海尚恩华科网络科技股份有限公司 | A kind of Instant Ads for more ground multichannel supervise broadcast system and method |
CN103596016B (en) * | 2013-11-20 | 2018-04-13 | 韩巍 | A kind of multimedia video data treating method and apparatus |
US9742827B2 (en) * | 2014-01-02 | 2017-08-22 | Alcatel Lucent | Rendering rated media content on client devices using packet-level ratings |
CN104834689B (en) * | 2015-04-22 | 2019-02-01 | 上海微小卫星工程中心 | A kind of code stream type method for quickly identifying |
CN106550247B (en) * | 2016-10-31 | 2019-06-07 | 杭州天时亿科技有限公司 | The monitoring method of radio and television |
CN106708949A (en) * | 2016-11-25 | 2017-05-24 | 成都三零凯天通信实业有限公司 | Identification method of harmful content of video |
KR102384878B1 (en) * | 2016-12-19 | 2022-04-11 | 삼성전자주식회사 | Method and apparatus for filtering video |
US10349126B2 (en) | 2016-12-19 | 2019-07-09 | Samsung Electronics Co., Ltd. | Method and apparatus for filtering video |
CN110109952A (en) * | 2017-12-30 | 2019-08-09 | 惠州学院 | A kind of method and its system identifying harmful picture |
CN110019946A (en) * | 2017-12-30 | 2019-07-16 | 惠州学院 | A kind of method and its system identifying harmful video |
CN109993036A (en) * | 2017-12-30 | 2019-07-09 | 惠州学院 | A method and system for identifying harmful videos based on user ID |
CN110020258A (en) * | 2017-12-30 | 2019-07-16 | 惠州学院 | A kind of method and system of the URL Path Recognition nocuousness picture based on approximate diagram |
CN110020252B (en) * | 2017-12-30 | 2022-04-22 | 惠州学院 | A method and system for identifying harmful videos based on credit content |
CN109089126B (en) * | 2018-07-09 | 2021-04-27 | 武汉斗鱼网络科技有限公司 | Video analysis method, device, equipment and medium |
US11412303B2 (en) | 2018-08-28 | 2022-08-09 | International Business Machines Corporation | Filtering images of live stream content |
CN112672103A (en) * | 2019-10-16 | 2021-04-16 | 北京航天长峰科技工业集团有限公司 | Video playing control method under intelligent monitoring scene |
CN113891120A (en) * | 2021-09-29 | 2022-01-04 | 广东省高峰科技有限公司 | IPTV service terminal access method and system thereof |
CN114143614B (en) * | 2021-10-25 | 2023-11-24 | 深蓝感知(杭州)物联科技有限公司 | Network self-adaptive transmission method and device based on video frame delay detection |
CN116109990B (en) * | 2023-04-14 | 2023-06-27 | 南京锦云智开软件有限公司 | Sensitive illegal content detection system for video |
CN118317128A (en) * | 2024-04-16 | 2024-07-09 | 联通视频科技有限公司 | Set top box terminal safety monitoring system |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030172377A1 (en) * | 2002-03-05 | 2003-09-11 | Johnson Carolynn Rae | Method and apparatus for selectively accessing programs in a parental control system |
US8397269B2 (en) * | 2002-08-13 | 2013-03-12 | Microsoft Corporation | Fast digital channel changing |
JP2004364234A (en) * | 2003-05-15 | 2004-12-24 | Pioneer Electronic Corp | Broadcast program content menu creation apparatus and method |
- 2006-04-30: CN CNB2006100790231A (CN100490532C), not active, Expired - Fee Related
- 2007-04-29: WO PCT/CN2007/001463 (WO2007128234A1), active Application Filing
- 2007-04-29: CN CNA2007800003987A (CN101317455A), active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2007128234A1 (en) | 2007-11-15 |
CN1968408A (en) | 2007-05-23 |
CN100490532C (en) | 2009-05-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101317455A (en) | Video code stream filtering method and filtering node | |
CN100456830C (en) | User terminal equipment for stream media content checking and checking method | |
JP4601868B2 (en) | Networked monitoring / control system and method | |
Chang et al. | Real-time content-based adaptive streaming of sports videos | |
CN113489990B (en) | Video encoding method, video encoding device, electronic equipment and storage medium | |
CN101389029B (en) | Method and apparatus for video image encoding and retrieval | |
CN106411927A (en) | Monitoring video recording method and device | |
CN113489991B (en) | Video encoding method, video encoding device, video encoding apparatus, and storage medium | |
CN106850515A (en) | A kind of data processing method and video acquisition device, decoding apparatus | |
CN1719909A (en) | A method for measuring changes in audio and video content | |
CN109391846A (en) | A kind of video scrambling method and device of adaptive model selection | |
CN106982355A (en) | The video monitoring system and anti-leak server of a kind of anti-image leakage | |
CN108900910A (en) | Monitor the method and system of IPTV service content legality | |
WO2007128185A1 (en) | A system and method of media stream censorship and a node apparatus for generating censorship code stream | |
CN107135043A (en) | Public emergency broadcast system | |
Conway et al. | Future trends: Live-Streaming terrorist attacks | |
CN103686094B (en) | Video monitoring log generating method and video monitoring log generating system | |
CN114640655B (en) | HLS video playing-based safe video retrieval system and method | |
CN107147943A (en) | Method for sending broadcasted content in time | |
CN115243112A (en) | Device that surveillance video traced to source | |
CN110336959A (en) | A kind of original video automatic processing method | |
Saikia | Perceptual hashing in the 3D-DWT domain | |
Frolov et al. | Deepfakes and information security issues | |
WO2007131445A1 (en) | A method, a system and a apparatus for censoring video code stream | |
CN108632635B (en) | Data processing method and device based on video network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Open date: 20081203 |