CN107729486A - Video searching method and device - Google Patents
Video searching method and device
- Publication number
- CN107729486A (application number CN201710964471.8A; granted publication CN107729486B)
- Authority
- CN
- China
- Prior art keywords
- video
- index
- subfield
- field
- label
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 61
- 230000010354 integration Effects 0.000 claims abstract description 18
- 238000010586 diagram Methods 0.000 description 8
- 230000004927 fusion Effects 0.000 description 5
- 230000015572 biosynthetic process Effects 0.000 description 4
- 238000012986 modification Methods 0.000 description 4
- 230000004048 modification Effects 0.000 description 4
- 230000011218 segmentation Effects 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000012163 sequencing technique Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/735—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/738—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/7867—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Library & Information Science (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present application provides a video searching method and device. The analysis result obtained by parsing a search request is matched against an index field that is generated by integrating the information of all labels. On the basis of improving the utilization rate of all labels, this expands the matching range of the analysis result and thereby increases the number of video search results presented to the user. Matching the analysis result against an index field generated from all contained labels effectively solves the problem that, because multiple labels are independent of one another and each has its own index field, not all labels corresponding to the analysis result can be hit at the same time. All labels are therefore fully utilized and the recall rate of videos is improved.
Description
Technical field
The present invention relates to the technical field of Internet information, and in particular to a video searching method and device.
Background art
With the popularization and development of online video applications, many video websites have emerged, which allow users to conveniently search for and watch videos and greatly enrich users' lives.
At present, the video searching method used by video websites to let users search for videos works as follows: after a user's search request is received, the request is parsed, and when a term in the analysis result hits a label in a preset configuration file, all videos that satisfy the user's search request are determined from the index field corresponding to the hit label and are then displayed to the user. However, because the multiple labels contained in the configuration file are independent of one another and each label has its own index field, an existing video search cannot hit all labels corresponding to the analysis result at the same time. The videos finally determined therefore belong only to the index field of the single hit label, which reduces the utilization rate of the labels in the configuration file and, in turn, the recall rate of the video results.
Summary of the invention
In view of this, the present invention provides a video searching method and device to improve the utilization rate of the labels in the configuration file and thereby improve the recall rate of video results.
To achieve the above object, the present invention provides the following technical solutions:
A video searching method, including:
when a video search request of a user is received, parsing the video search request to obtain an analysis result;
matching, from an index field according to the analysis result, a field corresponding to the analysis result, the index field being generated after information integration of all contained labels;
matching, from a video library according to the field corresponding to the analysis result, the videos corresponding to the field as a video search result.
Preferably, the process of generating the index field includes:
obtaining multiple labels, each label including a tag type and tag-related information;
classifying all labels into corresponding entity types by using the tag type of each label, the number of entity types being at least one;
deduplicating, within each entity type, the labels having identical tag-related information, and using each tag-related information obtained after deduplication as an index subfield;
generating, according to the video information of each video in the video library and the entity types, an association relation between each video in the video library and the corresponding index subfield, and storing it in the corresponding index subfield;
performing permutation and combination on all index subfields that store the association relation to generate the index field.
Preferably, generating, according to the video information of each video in the video library and the entity types, the association relation between each video in the video library and the corresponding index subfield and storing it in the corresponding index subfield includes:
when a target entity type is a first entity type, comparing the label field in the video information of each video in the video library with each index subfield contained in the target entity type, and judging whether the label field is identical to the index subfield;
if the label field is identical to the index subfield, generating the association relation between the video and the corresponding index subfield and storing it in the corresponding index subfield.
Preferably, generating, according to the video information of each video in the video library and the entity types, the association relation between each video in the video library and the corresponding index subfield and storing it in the corresponding index subfield includes:
when the target entity type is a second entity type, segmenting a specific field in the video information of each video in the video library to generate at least one specific subfield;
comparing all specific subfields with each index subfield contained in the target entity type, and judging whether the specific subfield is identical to the index subfield;
if the specific subfield is identical to the index subfield, generating the association relation between the video and the corresponding index subfield and storing it in the corresponding index subfield.
Preferably, matching, from the index field according to the analysis result, the field corresponding to the analysis result includes:
matching, from the index field according to the analysis result, the index subfield corresponding to the analysis result.
Preferably, matching, from the video library according to the field corresponding to the analysis result, the videos corresponding to the field as the video search result includes:
matching, from the video library according to the index subfield corresponding to the analysis result, the videos having an association relation with the index subfield as the video search result.
Preferably, after obtaining the multiple labels, the method further includes:
receiving a tag processing instruction and performing a processing operation on the label corresponding to the tag processing instruction, the processing operation including any one or a combination of adding, deleting, modifying and querying.
A video searching device, including:
a parsing module, configured to parse a video search request of a user when the request is received, to obtain an analysis result;
a field matching module, configured to match, from an index field according to the analysis result, a field corresponding to the analysis result, the index field being generated after information integration of all contained labels;
a video matching module, configured to match, from a video library according to the field corresponding to the analysis result, the videos corresponding to the field as a video search result.
Preferably, the device further includes:
an acquisition module, configured to obtain multiple labels, each label including a tag type and tag-related information;
a classifying module, configured to classify all labels into corresponding entity types by using the tag type of each label, the number of entity types being at least one;
a deduplication module, configured to deduplicate, within each entity type, the labels having identical tag-related information, and to use each tag-related information obtained after deduplication as an index subfield;
a generation module, configured to generate, according to the video information of each video in the video library and the entity types, the association relation between each video in the video library and the corresponding index subfield, and to store it in the corresponding index subfield;
a permutation and combination module, configured to perform permutation and combination on all index subfields that store the association relation to generate the index field.
Preferably, the generation module includes:
a first judging unit, configured to, when a target entity type is a first entity type, compare the label field in the video information of each video in the video library with each index subfield contained in the target entity type, and judge whether the label field is identical to the index subfield;
a first generation unit, configured to, after the first judging unit judges that the label field is identical to the index subfield, generate the association relation between the video and the corresponding index subfield and store it in the corresponding index subfield.
Preferably, the generation module includes:
a segmentation unit, configured to, when the target entity type is a second entity type, segment a specific field in the video information of each video in the video library to generate at least one specific subfield;
a second judging unit, configured to compare all specific subfields with each index subfield contained in the target entity type, and judge whether the specific subfield is identical to the index subfield;
a second generation unit, configured to, after the second judging unit judges that the specific subfield is identical to the index subfield, generate the association relation between the video and the corresponding index subfield and store it in the corresponding index subfield.
Preferably, the field matching module includes:
a field matching submodule, configured to match, from the index field according to the analysis result, the index subfield corresponding to the analysis result.
Preferably, the video matching module includes:
a video matching submodule, configured to, after the field matching submodule matches the index subfield corresponding to the analysis result from the index field according to the analysis result, match, from the video library according to the index subfield corresponding to the analysis result, the videos having an association relation with the index subfield as the video search result.
It can be seen from the above technical solutions that, compared with the prior art, the present invention provides a video searching method and device. By matching the obtained analysis result against an index field generated by integrating the information of all labels, the matching range of the analysis result is expanded on the basis of improving the utilization rate of all labels, which increases the number of video search results presented to the user. Matching the analysis result against an index field generated from all contained labels effectively solves the problem that, because multiple labels are independent of one another and each has its own index field, not all labels corresponding to the analysis result can be hit at the same time. All labels are therefore fully utilized and the recall rate of videos is improved.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of a video searching method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a method for generating an index field provided by an embodiment of the present invention;
Fig. 3 is a flowchart of another method for generating an index field provided by an embodiment of the present invention;
Fig. 4 is a flowchart of another video searching method provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a video searching device provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a device for generating an index field provided by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of another device for generating an index field provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of another video searching device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
An embodiment of the present invention discloses a video searching method; referring to Fig. 1, the method specifically includes the following steps:
S101: when a video search request of a user is received, parsing the video search request to obtain an analysis result;
Specifically, the video search request of a user may be a word or phrase that the user enters according to the video to be searched for, that is related to the video content, and that satisfies a preset input rule, for example any one or a combination of a keyword appearing in the video content, the video name, the country of origin of the video, and the like. Different video websites may set different preset input rules, which is not limited here.
Parsing the received video search request makes it possible to quickly recognize the user's search intention and thereby speed up the video search. The video search request may be parsed by performing word segmentation on it to obtain the analysis result. For example, word segmentation of the search request "disaster film" yields the two terms "disaster" and "film", which serve as the analysis result of this video search and are used for the subsequent matching operations on the video website.
S102: matching, from an index field according to the analysis result, the field corresponding to the analysis result, the index field being generated after information integration of all contained labels;
Specifically, the index field may be established in advance and is mainly used to store all fields obtained after integrating the information of all labels. After the analysis result is obtained, matching can be performed directly against the index field, which indirectly achieves the purpose of matching against all contained labels one by one. This effectively solves the problem that not all labels corresponding to the analysis result can be hit at the same time because each search can only query a single label, and improves the video recall rate. Meanwhile, because the video website establishes only one index field to be matched against the analysis result, the time needed for matching the index field with the analysis result is reduced and the video search efficiency is improved.
The field corresponding to the analysis result obtained in S101 may be matched by finding, in the index field, a field identical to the analysis result. Taking the two analysis results "disaster" and "film" as an example, if the pre-established index field contains the five fields "comedy", "romance", "racing", "disaster" and "sports", then "disaster" and "film" are each matched one by one against all fields contained in the index field, yielding the field "disaster" that is identical to the analysis result "disaster". The field "disaster" is therefore used as the field corresponding to the analysis result in this video search for the subsequent video matching operation.
The order in which the analysis result is matched against the index field may follow the order of the fields contained in the index field, either from front to back or from back to front.
If no field corresponding to the analysis result can be matched in the index field, a search failure message may be generated to promptly inform the user that the video website has no video results that satisfy the entered video search request. The generated search failure message may be a message indicating that there is no search result, such as "0 search results", or a message indicating that this video search failed, such as "video search failed".
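The matching of S102 can be pictured with the short sketch below. The flat list index_fields and the helper match_index_fields are illustrative assumptions; the disclosure only requires that the analysis result be compared against the pre-built index field and that a failure message be produced when nothing matches.

```python
# Sketch of step S102: match parsed terms against the pre-built index field.
def match_index_fields(terms, index_fields):
    """Return the index fields that exactly match any parsed term, or None."""
    field_set = set(index_fields)
    hits = [t for t in terms if t in field_set]
    return hits or None  # None -> the method generates a search-failure message

index_fields = ["comedy", "romance", "racing", "disaster", "sports"]
print(match_index_fields(["disaster", "film"], index_fields))  # ['disaster']
print(match_index_fields(["cartoon"], index_fields))           # None
```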
S103: matching, from a video library according to the field corresponding to the analysis result, the videos corresponding to the field as a video search result;
Specifically, if the index field contains a field corresponding to the analysis result, one or more videos corresponding to the analysis result can be matched from the video library of the video website. The video library may be established in advance and is mainly used to store videos; the videos in the video library have a correspondence with the fields in the index field, so that, according to the field corresponding to the analysis result, all videos having a correspondence with that field can be quickly matched from the video library as the final video search result, which is subsequently presented to the user.
For example, the analysis result "automobile" obtained after parsing is matched one by one against a pre-established index field containing the five fields "comedy", "romance", "automobile", "disaster" and "sports", so that the field "automobile" in the index field is matched as the field corresponding to the analysis result in this video search. According to the field "automobile", "video A", "video B" and "video C", which have a correspondence with it, are then matched from the video library as the video search result finally presented to the user. The proportion of video search results retrieved from the video library in a single search is thereby increased, that is, the recall rate of videos is improved.
It should be noted that what is stored in the video library for each video may be a combination of one or more of the playback link address of the video, related poster pictures of the video, a synopsis of the video, and the like. Accordingly, the search result finally presented to the user is a combination of one or more of the playback link address, related poster pictures, synopsis, and the like.
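As an illustration of S103, the sketch below assumes a simple mapping video_index from index fields to stored video entries; the disclosure does not mandate this data structure, only that the videos having a correspondence with the matched field be returned.

```python
# Sketch of step S103: look up all videos associated with the matched field.
# `video_index` is an assumed inverted index from field to video entries
# (e.g. playback link, poster, synopsis), built in advance.
video_index = {
    "automobile": ["video A", "video B", "video C"],
    "disaster":   ["video D"],
}

def search_videos(matched_fields, video_index):
    """Collect every video that has a correspondence with a matched field."""
    results = []
    for field in matched_fields or []:
        results.extend(video_index.get(field, []))
    return results

print(search_videos(["automobile"], video_index))  # ['video A', 'video B', 'video C']
```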
In the video searching method disclosed in this embodiment of the present invention, the obtained analysis result is matched against an index field generated by integrating the information of all labels, which expands the matching range of the analysis result on the basis of improving the utilization rate of all labels and thereby increases the number of video search results presented to the user. Matching the analysis result against an index field generated from all contained labels effectively solves the problem that, because multiple labels are independent of one another and each has its own index field, not all labels corresponding to the analysis result can be hit at the same time, so that all labels are fully utilized and the recall rate of videos is improved.
Matching the field corresponding to the analysis result from the index field is an important step for quickly matching the corresponding videos from the video library, and the pre-generated index field is an important factor affecting this step. How to generate the index field quickly and accurately is therefore also a focus of this solution.
Accordingly, for S102 in the embodiment corresponding to Fig. 1, as shown in Fig. 2, an embodiment of the present invention discloses a method for generating an index field, which specifically includes the following steps:
S201: obtaining multiple labels, each label including a tag type and tag-related information;
Specifically, a label is mainly used to reflect the characteristics of a video itself. The tag type contained in a label mainly indicates, from multiple dimensions, the category to which the video reflected by the label belongs, such as "comedy", "romance", "American" or "Japanese". The tag-related information contained in a label may be keywords set from multiple dimensions according to the video content, such as the keyword "The Fast and the Furious" set from the dimension of the video title, the keyword "2016" set from the dimension of the release year, or the keyword "Zhang Yimou" set from the dimension of the director. Each label attached to a video contains both a tag type and tag-related information, which helps to quickly narrow the search scope and speed up the search.
Multiple different labels may be attached to the same video in order to reflect its characteristics from multiple dimensions. Accordingly, the labels attached to a video may also come from multiple sources, such as Douban labels or labels from other collections. This solution does not limit the way in which the multiple labels are obtained; for example, they may be obtained from each label source by a web crawler.
S202: classifying all labels into corresponding entity types by using the tag type of each label, the number of entity types being at least one;
Specifically, multiple different entity types may be pre-established on the video website and are mainly used to store the obtained labels. Each entity type may be any one of a channel type, a release type, a regional attribute, a language type, a media type, a common label type, and the like. There is an association between the stored entity types and the tag types carried by the obtained labels, so that each label can be quickly classified, according to its tag type, into the entity type associated with it, providing the basic data for subsequently establishing the index subfields.
For example, the obtained labels are "label A", "label B", "label C", "label D" and "label E", where the tag type of "label A" is "comedy", the tag type of "label B" is "USA", the tag type of "label C" is "disaster", the tag type of "label D" is "Japanese", and the tag type of "label E" is "Hunan TV", and the pre-established entity types include "channel type", "release type", "regional attribute", "language type" and "media type". Then, according to the association between the tag types "comedy" and "disaster" and the entity type "channel type", "label A" and "label C" are quickly classified into the entity type "channel type"; according to the association between the tag type "USA" and the entity type "regional attribute", "label B" is quickly classified into the entity type "regional attribute"; according to the association between the tag type "Japanese" and the entity type "language type", "label D" is quickly classified into the entity type "language type"; and according to the association between the tag type "Hunan TV" and the entity type "media type", "label E" is quickly classified into the entity type "media type".
S203: deduplicating, within each entity type, the labels having identical tag-related information, and using each tag-related information obtained after deduplication as an index subfield;
Specifically, because multiple labels belonging to the same entity type may carry identical tag-related information, the labels with the same tag-related information obtained from multiple sources need to be deduplicated. This achieves a classified fusion of the multiple labels, and only one instance of each tag-related information is retained as an index subfield.
For example, the entity type "language type" contains "label A", "label B" and "label C", where the tag-related information of "label A" is "adventure", the tag-related information of "label B" is "friendship", and the tag-related information of "label C" is "adventure". Deduplicating the tag-related information "adventure" yields the two label-related fields "adventure" and "friendship", each of which is used as an index subfield for the subsequent establishment of the index field.
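S203 amounts to order-preserving deduplication within one entity type, as the sketch below illustrates; the function name and data layout are assumptions made for this sketch.

```python
# Sketch of step S203: within one entity type, deduplicate labels by their
# tag-related information; each surviving value becomes an index subfield.
def build_index_subfields(tag_infos):
    """Deduplicate while preserving first-seen order."""
    seen, subfields = set(), []
    for info in tag_infos:
        if info not in seen:
            seen.add(info)
            subfields.append(info)
    return subfields

# "label A" and "label C" both carry "adventure"; only one subfield survives.
print(build_index_subfields(["adventure", "friendship", "adventure"]))
# ['adventure', 'friendship']
```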
S204: when a target entity type is a first entity type, comparing the label field in the video information of each video in the video library with each index subfield contained in the target entity type, and judging whether the label field is identical to the index subfield; if so, performing S205; if not, performing S206;
Specifically, each video in the video library has video information that describes its own characteristics, and the video information includes the label-related fields, i.e. the label fields, attached to the video by one or more label sources. The first entity type refers to an entity type in which each contained index subfield, i.e. the tag-related information obtained after deduplication, is consistent with the label fields contained in the video information of the videos in the video library. By judging whether a label field is identical to an index subfield, the index subfield corresponding to each video in the video library can be determined quickly and in turn within all entity types belonging to the first entity type. The target entity type is any one of the entity types.
S205: generating the association relation between the video and the corresponding index subfield, storing it in the corresponding index subfield, and performing S207;
Specifically, if the label field is judged to be identical to the index subfield, the association relation between the index subfield and the video is established, so that all videos corresponding to the user's video search request can subsequently be quickly matched from the video library.
S206: generating a video search failure message.
S207: performing permutation and combination on all index subfields that store the association relation to generate the index field;
Specifically, performing permutation and combination on all index subfields that store the association relation yields a complete index field that serves as the link between the subsequent analysis result and the videos in the video library, so that, on the basis of automatically integrating all labels, the number of videos presented to the user and the video search efficiency are improved. This solution does not limit the way in which the index subfields storing the association relation are arranged and combined; for example, they may be combined in the order in which the index subfields were obtained.
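Steps S204 to S207 for a first-type entity can be pictured as follows; the structures videos, index_subfields and associations are illustrative assumptions, not data layouts required by the disclosure.

```python
# Sketch of steps S204-S207 (first entity type): compare each video's label
# field with every index subfield, record the association when they match,
# then combine all associated subfields into the index field.
videos = [
    {"id": "video A", "label_field": "disaster"},
    {"id": "video B", "label_field": "comedy"},
    {"id": "video C", "label_field": "disaster"},
]
index_subfields = ["comedy", "disaster", "romance"]

associations = {}                      # index subfield -> associated video ids
for sub in index_subfields:
    for v in videos:
        if v["label_field"] == sub:    # label field identical to the subfield
            associations.setdefault(sub, []).append(v["id"])

# Only subfields that actually store an association enter the index field.
index_field = [sub for sub in index_subfields if sub in associations]
print(index_field)   # ['comedy', 'disaster']
print(associations)  # {'comedy': ['video B'], 'disaster': ['video A', 'video C']}
```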
The above steps S204 to S205 are only one preferred implementation of the process of "generating, according to the video information of each video in the video library and the entity types, the association relation between each video in the video library and the corresponding index subfield, and storing it in the corresponding index subfield" disclosed in this embodiment of the present invention; the specific implementation of this process may be set arbitrarily according to actual requirements and is not limited here.
In this embodiment of the present invention, classifying the obtained labels into their corresponding entity types according to the tag types makes it convenient to subsequently establish, in the same way for each entity type, the association relations between videos and their corresponding index subfields, which speeds up the generation of the index field. Deduplicating all labels within each entity type to obtain the distinct tag-related information as index subfields achieves a classified fusion of multiple labels and ensures the comprehensiveness of the generated index field, which indirectly improves the video recall rate. When the target entity type is a first entity type and a label field is judged to be identical to an index subfield, the association relation between the video and the index subfield corresponding to the label field is established and stored, which effectively speeds up the video search and thereby improves the video search efficiency.
After S201 in the embodiment corresponding to Fig. 2, the method may further include:
receiving a tag processing instruction and performing a processing operation on the label corresponding to the tag processing instruction, the processing operation including any one or a combination of adding, deleting, modifying and querying;
Specifically, the tag processing instruction may be an instruction set by a developer for a processing operation to be performed on the currently obtained labels, such as a "tag adding instruction", a "tag deleting instruction", a "tag modifying instruction" or a "tag querying instruction", which helps to improve the accuracy of the index field.
In this embodiment of the present invention, receiving a tag processing instruction and performing the processing operation on the corresponding label improves the accuracy of the index field and, in turn, the precision of the video search, effectively reducing the probability of video search failure.
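One deliberately simplified form of the tag-maintenance operation is sketched below; the instruction names and function signature are assumptions made for illustration only.

```python
# Sketch of the tag-maintenance step: apply add/delete/modify/query
# instructions to the collected labels.
def process_tag_instruction(labels, instruction, payload=None, new_value=None):
    if instruction == "add":
        labels.append(payload)
    elif instruction == "delete":
        labels = [l for l in labels if l != payload]
    elif instruction == "modify":
        labels = [new_value if l == payload else l for l in labels]
    elif instruction == "query":
        return [l for l in labels if l == payload]
    return labels

labels = ["comedy", "disaster"]
labels = process_tag_instruction(labels, "add", "romance")
print(labels)  # ['comedy', 'disaster', 'romance']
```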
For S102 in the embodiment corresponding to Fig. 1, as shown in Fig. 3, an embodiment of the present invention discloses another method for generating an index field, which specifically includes the following steps:
S301: obtaining multiple labels, each label including a tag type and tag-related information.
S302: classifying all labels into corresponding entity types by using the tag type of each label, the number of entity types being at least one.
S303: deduplicating, within each entity type, the labels having identical tag-related information, and using each tag-related information obtained after deduplication as an index subfield.
S304: when the target entity type is a second entity type, segmenting a specific field in the video information of each video in the video library to generate at least one specific subfield, and performing S305;
Specifically, the second entity type may refer to an entity type in which each contained index subfield, i.e. the tag-related information obtained after deduplication, is inconsistent with the label fields contained in the video information of the videos in the video library, so that the corresponding index subfields cannot be matched by using the label fields in the video information. In this case, a specific field in the video information of each video in the video library is segmented, and the generated specific subfields are matched one by one against all index subfields in each entity type belonging to the second entity type. This increases the matching probability, increases the number of videos in the video library corresponding to each index field, and indirectly improves the video recall rate.
The specific field in the video information may be the video title, the language field of the video, the broadcasting media field, or the like.
S305: comparing all specific subfields with each index subfield contained in the target entity type, and judging whether the specific subfield is identical to the index subfield; if so, performing S306; if not, performing S307;
Specifically, each specific subfield generated after segmentation is compared in turn with each index subfield contained in the target entity type, so that, by judging whether a specific subfield is identical to an index subfield, the index subfield corresponding to each video in the video library can be determined quickly and in turn within all entity types belonging to the second entity type.
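Steps S304 and S305 for a second-type entity can be pictured with the sketch below; the whitespace split stands in for a real word-segmentation step, and all names are assumptions.

```python
# Sketch of steps S304-S305 (second entity type): segment a specific field of
# the video information (e.g. the title) and compare each fragment with the
# entity's index subfields.
def segment(specific_field: str) -> list[str]:
    return specific_field.lower().split()

def match_second_type(video_title, index_subfields):
    fragments = segment(video_title)
    return [sub for sub in index_subfields if sub in fragments]

index_subfields = ["furious", "2016", "racing"]
print(match_second_type("The Fast and the Furious 2016", index_subfields))
# ['furious', '2016'] -> associations are then generated for these subfields
```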
S306: generating the association relation between the video and the corresponding index subfield, storing it in the corresponding index subfield, and performing S308.
S307: generating a video search failure message.
S308: performing permutation and combination on all index subfields that store the association relation to generate the index field.
The above steps S304 to S306 are only one preferred implementation of the process of "generating, according to the video information of each video in the video library and the entity types, the association relation between each video in the video library and the corresponding index subfield, and storing it in the corresponding index subfield" disclosed in this embodiment of the present invention; the specific implementation of this process may be set arbitrarily according to actual requirements and is not limited here.
In this embodiment of the present invention, classifying the obtained labels into their corresponding entity types according to the tag types makes it convenient to subsequently establish, in the same way for each entity type, the association relations between videos and their corresponding index subfields, which speeds up the generation of the index field. Deduplicating all labels within each entity type to obtain the distinct tag-related information as index subfields achieves a classified fusion of multiple labels and ensures the comprehensiveness of the generated index field, which indirectly improves the video recall rate. When the target entity type is a second entity type and a specific subfield obtained after segmentation is judged to be identical to an index subfield, the association relation between the video corresponding to the specific subfield and the index subfield is established and stored, which effectively speeds up the video search and thereby improves the video search efficiency.
On the basis of the embodiment corresponding to Fig. 2, an embodiment of the present invention discloses another video searching method; referring to Fig. 4, the method specifically includes the following steps:
S401: when a video search request of a user is received, parsing the video search request to obtain an analysis result.
S402: matching, from an index field according to the analysis result, the index subfield corresponding to the analysis result, the index field being generated after information integration of all contained labels.
S403: matching, from the video library according to the index subfield corresponding to the analysis result, the videos having an association relation with the index subfield as the video search result;
Specifically, the index field is composed of multiple index subfields, and each index subfield has an established association with videos in the video library. Matching can therefore be performed against the index field according to the obtained analysis result to determine the identical index subfield, and then, according to the association relation stored by that index subfield, all videos having an association relation with it are quickly matched from the video library as the video search result that finally needs to be presented to the user.
In the video searching method disclosed in this embodiment of the present invention, matching the obtained analysis result against an index field generated by integrating the information of all labels determines the corresponding index subfield, so that, by using the pre-stored association relation of the matched index subfield, all videos corresponding to that index subfield are quickly matched from the video library, which speeds up the video search and effectively improves the user's search experience.
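The Fig. 4 flow can be pictured end to end as in the sketch below, which assumes the association table built in the earlier sketches; it is an illustration under those assumptions, not the prescribed implementation.

```python
# End-to-end sketch of the Fig. 4 flow: parse the request, match an index
# subfield, then return the videos whose association that subfield stores.
associations = {"disaster": ["video A", "video C"], "comedy": ["video B"]}

def video_search(request: str):
    terms = request.lower().split()                             # S401: parse
    hit = next((t for t in terms if t in associations), None)   # S402: match subfield
    if hit is None:
        return "video search failed"                            # failure message
    return associations[hit]                                    # S403: associated videos

print(video_search("disaster film"))  # ['video A', 'video C']
```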
An embodiment of the present invention discloses a video searching device; referring to Fig. 5, the device includes:
a parsing module 501, configured to parse a video search request of a user when the request is received, to obtain an analysis result;
a field matching module 502, configured to match, from an index field according to the analysis result, the field corresponding to the analysis result, the index field being generated after information integration of all contained labels;
a video matching module 503, configured to match, from a video library according to the field corresponding to the analysis result, the videos corresponding to the field as a video search result.
In the video searching device disclosed in this embodiment of the present invention, the field matching module 502 matches the analysis result obtained by the parsing module 501 against an index field generated by integrating the information of all labels, which expands the matching range of the analysis result on the basis of improving the utilization rate of all labels and thereby increases the number of video search results that the video matching module 503 presents to the user. Matching the analysis result against an index field generated from all contained labels effectively solves the problem that, because multiple labels are independent of one another and each has its own index field, not all labels corresponding to the analysis result can be hit at the same time, so that all labels are fully utilized and the recall rate of videos is improved.
For the working process of the modules provided in this embodiment of the present invention, reference is made to the method flowchart corresponding to Fig. 1; the specific working process is not repeated here.
On the basis of the embodiment corresponding to Fig. 5, an embodiment of the present invention discloses a device for generating an index field; referring to Fig. 6, the device includes:
an acquisition module 601, configured to obtain multiple labels, each label including a tag type and tag-related information;
a classifying module 602, configured to classify all labels into corresponding entity types by using the tag type of each label, the number of entity types being at least one;
a deduplication module 603, configured to deduplicate, within each entity type, the labels having identical tag-related information, and to use each tag-related information obtained after deduplication as an index subfield;
a generation module 604, configured to generate, according to the video information of each video in the video library and the entity types, the association relation between each video in the video library and the corresponding index subfield, and to store it in the corresponding index subfield;
a permutation and combination module 605, configured to perform permutation and combination on all index subfields that store the association relation to generate the index field.
The generation module 604 specifically includes:
a first judging unit 6041, configured to, when a target entity type is a first entity type, compare the label field in the video information of each video in the video library with each index subfield contained in the target entity type, and judge whether the label field is identical to the index subfield;
a first generation unit 6042, configured to, after the first judging unit 6041 judges that the label field is identical to the index subfield, generate the association relation between the video and the corresponding index subfield and store it in the corresponding index subfield.
In this embodiment of the present invention, the classifying module 602 classifies the multiple labels obtained by the acquisition module 601 into their corresponding entity types according to the tag types, which makes it convenient to subsequently establish, in the same way for each entity type, the association relations between videos and their corresponding index subfields and speeds up the generation of the index field; the deduplication module 603 deduplicates all labels within each entity type to obtain the distinct tag-related information as index subfields, achieving a classified fusion of multiple labels and ensuring the comprehensiveness of the combined index field, which indirectly improves the video recall rate. When the target entity type is a first entity type and the first judging unit 6041 judges that a label field is identical to an index subfield, the first generation unit 6042 establishes and stores the association relation between the video and the index subfield corresponding to the label field, which effectively speeds up the video search and thereby improves the video search efficiency.
For the working process of the modules provided in this embodiment of the present invention, reference is made to the method flowchart corresponding to Fig. 2; the specific working process is not repeated here.
On the basis of the embodiment corresponding to Fig. 5, an embodiment of the present invention discloses another device for generating an index field; referring to Fig. 7, the device includes:
the acquisition module 601, the classifying module 602, the deduplication module 603, the generation module 604 and the permutation and combination module 605;
the generation module 604 specifically includes:
a segmentation unit 6043, configured to, when the target entity type is a second entity type, segment a specific field in the video information of each video in the video library to generate at least one specific subfield;
a second judging unit 6044, configured to compare all specific subfields with each index subfield contained in the target entity type, and judge whether the specific subfield is identical to the index subfield;
a second generation unit 6045, configured to, after the second judging unit 6044 judges that the specific subfield is identical to the index subfield, generate the association relation between the video and the corresponding index subfield and store it in the corresponding index subfield.
In this embodiment of the present invention, the classifying module 602 classifies the multiple labels obtained by the acquisition module 601 into their corresponding entity types according to the tag types, which makes it convenient to subsequently establish, in the same way for each entity type, the association relations between videos and their corresponding index subfields and speeds up the generation of the index field; the deduplication module 603 deduplicates all labels within each entity type to obtain the distinct tag-related information as index subfields, achieving a classified fusion of multiple labels and ensuring the comprehensiveness of the combined index field, which indirectly improves the video recall rate. When the target entity type is a second entity type and the second judging unit 6044 judges that a specific subfield obtained after segmentation is identical to an index subfield, the second generation unit 6045 establishes and stores the association relation between the video corresponding to the specific subfield and the index subfield, which effectively speeds up the video search and thereby improves the video search efficiency.
For the working process of the modules provided in this embodiment of the present invention, reference is made to the method flowchart corresponding to Fig. 3; the specific working process is not repeated here.
On the basis of the embodiment corresponding to Fig. 4, an embodiment of the present invention discloses another video searching device; referring to Fig. 8, the device includes:
the parsing module 501, the field matching module 502 and the video matching module 503;
the field matching module 502 includes a field matching submodule 5021, configured to match, from the index field according to the analysis result, the index subfield corresponding to the analysis result;
the video matching module 503 includes a video matching submodule 5031, configured to, after the field matching submodule 5021 matches the index subfield corresponding to the analysis result from the index field according to the analysis result, match, from the video library according to the index subfield corresponding to the analysis result, the videos having an association relation with the index subfield as the video search result presented to the user.
In the video searching device disclosed in this embodiment of the present invention, the field matching submodule 5021 matches the obtained analysis result against an index field generated by integrating the information of all labels to determine the corresponding index subfield, so that the video matching submodule 5031, by using the pre-stored association relation of the matched index subfield, quickly matches all videos corresponding to that index subfield from the video library, which speeds up the video search and effectively improves the user's search experience.
For the working process of the modules provided in this embodiment of the present invention, reference is made to the method flowchart corresponding to Fig. 4; the specific working process is not repeated here.
The above description of the disclosed embodiments enables a person skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (13)
- A kind of 1. video searching method, it is characterised in that including:When the video search for receiving user is asked, video search request is parsed, obtains analysis result;According to the analysis result, the field for corresponding to the analysis result is matched from index field, the index field is To comprising whole labels carry out information integration after generate;According to the field of the corresponding analysis result, the video of the corresponding field is matched from video library, is searched as video Hitch fruit.
- 2. according to the method for claim 1, it is characterised in that the process of the index field generation includes:Multiple labels are obtained, each label includes tag types and label relevant information;Using the tag types of each label, all labels are referred to corresponding entity type, the entity type Number be at least one;The label in each described entity type with the identical label relevant information is subjected to duplicate removal, and will be obtained after duplicate removal The label relevant information obtained is as an Index subfield;Video information and the entity type according to each video in the video library, generate each in the video library Incidence relation between video and the corresponding Index subfield, and it is stored in the corresponding Index subfield;The whole Index subfields for storing the incidence relation are subjected to permutation and combination, generate the index field.
- 3. according to the method for claim 2, it is characterised in that the video according to each video in the video library Information and the entity type, generate the association in the video library between each video and the corresponding Index subfield and close System, and the corresponding Index subfield is stored in, including:When target entity type is first instance type, by the label in the video library in the video information of each video Each described Index subfield contained by field and the target entity type is contrasted, judge the label field with it is described Whether Index subfield is identical;If the label field is identical with the Index subfield, generate between the video and the corresponding Index subfield Incidence relation, and it is stored in the corresponding Index subfield.
- 4. according to the method for claim 2, it is characterised in that the video according to each video in the video library Information and the entity type, generate the association in the video library between each video and the corresponding Index subfield and close System, and the corresponding Index subfield is stored in, including:When the target entity type is second instance type, in the video information of each video in the video library Specific fields are segmented, and generate at least one specific subfield;All specific subfields are contrasted with each described Index subfield contained by the target entity type, sentenced Whether the specific subfield of breaking and the Index subfield are identical;If the specific subfield is identical with the Index subfield, generate between the video and the corresponding Index subfield Incidence relation, and be stored in the corresponding Index subfield.
- 5. according to the method for claim 2, it is characterised in that it is described according to the analysis result, from index field The field of the corresponding analysis result is allotted, including:According to the analysis result, the Index subfield for corresponding to the analysis result is matched from index field.
- 6. according to the method for claim 5, it is characterised in that the field according to the corresponding analysis result, from regarding The video of the corresponding field is matched in frequency storehouse, as video search result, including:According to the Index subfield of the corresponding analysis result, matched from the video library has with the Index subfield The video of incidence relation, as the video search result.
- 7. according to the method for claim 2, it is characterised in that after the multiple labels of acquisition, in addition to:Tag processes instruction is received, pair carries out processing operation with the corresponding label of tag processes instruction, described handle operates Including combination any one or more in increasing, delete, change and inquiring about.
- A kind of 8. video searching apparatus, it is characterised in that including:Parsing module, for when receiving the video search request of user, parsing, obtaining to video search request Analysis result;Fields match module, for according to the analysis result, the word for corresponding to the analysis result to be matched from index field Section, the index field be to comprising whole labels carry out information integration after generate;Video matching module, for the field according to the corresponding analysis result, the corresponding field is matched from video library Video, as video search result.
- 9. device according to claim 8, it is characterised in that also include:Acquisition module, for obtaining multiple labels, each label includes tag types and label relevant information;Classifying module, for the tag types using each label, all labels are referred to corresponding entity type, The number of the entity type is at least one;Deduplication module, for the label for having the identical label relevant information in each described entity type to be gone Weight, and using the label relevant information obtained after duplicate removal as an Index subfield;Generation module, for the video information according to each video in the video library and the entity type, described in generation Incidence relation in video library between each video and the corresponding Index subfield, and it is stored in the corresponding sub- word of index Section;Permutation and combination module, for the whole Index subfields for storing the incidence relation to be carried out into permutation and combination, generate institute State index field.
- 10. device according to claim 9, it is characterised in that the generation module includes:First judging unit, for when target entity type is first instance type, by each video in the video library Video information in label field and the target entity type contained by each described Index subfield contrasted, judge Whether the label field is identical with the Index subfield;First generation unit, for judging that the label field is identical with the Index subfield in first judging unit Afterwards, the incidence relation between the video and the corresponding Index subfield is generated, and is stored in the corresponding sub- word of index Section.
- 11. The apparatus according to claim 9, wherein the generation module comprises: a word segmentation unit, configured to, when the target entity type is a second entity type, segment the specific field in the video information of each video in the video library to generate at least one specific subfield; a second judging unit, configured to compare all of the specific subfields with each index subfield contained in the target entity type, and judge whether the specific subfield is identical to the index subfield; and a second generation unit, configured to, after the second judging unit judges that the specific subfield is identical to the index subfield, generate the association relationship between the video and the corresponding index subfield and store it in the corresponding index subfield.
- 12. The apparatus according to claim 9, wherein the field matching module comprises: a field matching submodule, configured to match, according to the analysis result, the index subfield corresponding to the analysis result from the index field.
- 13. The apparatus according to claim 12, wherein the video matching module comprises: a video matching submodule, configured to, after the field matching submodule matches the index subfield corresponding to the analysis result from the index field according to the analysis result, match, according to the index subfield corresponding to the analysis result, the video that has an association relationship with the index subfield from the video library, as the video search result.
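To make the index-construction steps of claims 4 and 9 through 11 concrete, the following is a minimal Python sketch of one possible reading of them. Every name in it (Label, Video, segment, build_index, entity_type_of, segmented_types, the "label" and "specific_field" keys) is hypothetical; the claims prescribe neither these data structures nor any particular word-segmentation method.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Label:
    tag_type: str   # e.g. "actor" or "genre"; decides the entity type
    info: str       # label-related information, e.g. a person's name


@dataclass
class Video:
    video_id: str
    info: dict      # video information, e.g. {"label": "...", "specific_field": "..."}


def segment(text: str) -> list[str]:
    # Stand-in for the word-segmentation step of claims 4 and 11; a real
    # system might use jieba or an in-house tokenizer.
    return text.split()


def build_index(labels: list[Label],
                videos: list[Video],
                entity_type_of: dict[str, str],
                segmented_types: set[str]) -> dict[str, dict[str, set[str]]]:
    """Return {entity_type: {index_subfield: {video_id, ...}}}."""
    # Classify labels into entity types by tag type, de-duplicating the
    # label-related information into index subfields as we go.
    index: dict[str, dict[str, set[str]]] = defaultdict(dict)
    for label in labels:
        entity_type = entity_type_of[label.tag_type]
        index[entity_type].setdefault(label.info, set())  # duplicates collapse here

    # Associate videos with index subfields: exact match on the label field
    # for a "first" entity type, segment-level match on a specific field for
    # a "second" entity type.
    for entity_type, subfields in index.items():
        for video in videos:
            if entity_type in segmented_types:
                candidates = segment(video.info.get("specific_field", ""))
            else:
                candidates = [video.info.get("label", "")]
            for candidate in candidates:
                if candidate in subfields:
                    subfields[candidate].add(video.video_id)

    # The nested dict is used here as the combined index field; the claims'
    # permutation-and-combination step is read as gathering all subfields
    # that carry an association relationship.
    return dict(index)
```

Under these assumptions, the nested dictionary returned by build_index plays the role of the index field: each entity type holds its de-duplicated index subfields, and each subfield stores the identifiers of the videos associated with it.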
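A companion sketch, under the same assumptions, for the query path of claims 5, 6, 8, 12 and 13: the parsing, field-matching and video-matching modules are modeled as plain functions over the index built above. parse_request is a placeholder, since the claims leave the parsing method open.

```python
def parse_request(query: str) -> list[str]:
    # Placeholder for the parsing module of claim 8; whitespace tokenisation
    # is assumed here purely for illustration.
    return query.split()


def match_subfields(analysis_result: list[str],
                    index: dict[str, dict[str, set[str]]]) -> list[str]:
    # Field-matching (sub)module of claims 5 and 12: keep only the parsed
    # terms that exist as index subfields.
    return [term for term in analysis_result
            if any(term in subfields for subfields in index.values())]


def search_videos(query: str,
                  index: dict[str, dict[str, set[str]]]) -> set[str]:
    # Video-matching (sub)module of claims 6 and 13: collect every video that
    # has an association relationship with a matched index subfield.
    matched = match_subfields(parse_request(query), index)
    result: set[str] = set()
    for subfields in index.values():
        for term in matched:
            result |= subfields.get(term, set())
    return result
```

In this reading, the tag-processing instruction of claim 7 would simply add, delete, modify or query entries in the label list and then rebuild the index with build_index.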
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710964471.8A CN107729486B (en) | 2017-10-17 | 2017-10-17 | Video searching method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710964471.8A CN107729486B (en) | 2017-10-17 | 2017-10-17 | Video searching method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107729486A true CN107729486A (en) | 2018-02-23 |
CN107729486B CN107729486B (en) | 2021-02-09 |
Family
ID=61211480
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710964471.8A Active CN107729486B (en) | 2017-10-17 | 2017-10-17 | Video searching method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107729486B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108763363A (en) * | 2018-05-17 | 2018-11-06 | 阿里巴巴集团控股有限公司 | Method and device for checking record to be written |
CN109635157A (en) * | 2018-10-30 | 2019-04-16 | 北京奇艺世纪科技有限公司 | Model generating method, video searching method, device, terminal and storage medium |
CN109977318A (en) * | 2019-04-04 | 2019-07-05 | 掌阅科技股份有限公司 | Book search method, electronic equipment and computer storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4539552B2 (en) * | 2005-12-21 | 2010-09-08 | 日本ビクター株式会社 | Content search apparatus and content search program |
CN103310014A (en) * | 2013-07-02 | 2013-09-18 | 北京航空航天大学 | Method for improving accuracy of search result |
CN104123366A (en) * | 2014-07-23 | 2014-10-29 | 谢建平 | Search method and server |
CN104219575A (en) * | 2013-05-29 | 2014-12-17 | 酷盛(天津)科技有限公司 | Related video recommending method and system |
CN105187795A (en) * | 2015-09-14 | 2015-12-23 | 博康云信科技有限公司 | Video label positioning method and device based on view library |
- 2017-10-17: application CN201710964471.8A filed in CN; published as CN107729486B (status: Active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4539552B2 (en) * | 2005-12-21 | 2010-09-08 | 日本ビクター株式会社 | Content search apparatus and content search program |
CN104219575A (en) * | 2013-05-29 | 2014-12-17 | 酷盛(天津)科技有限公司 | Related video recommending method and system |
CN103310014A (en) * | 2013-07-02 | 2013-09-18 | 北京航空航天大学 | Method for improving accuracy of search result |
CN104123366A (en) * | 2014-07-23 | 2014-10-29 | 谢建平 | Search method and server |
CN105187795A (en) * | 2015-09-14 | 2015-12-23 | 博康云信科技有限公司 | Video label positioning method and device based on view library |
Non-Patent Citations (1)
Title |
---|
熊回香 et al.: "Research on the construction and implementation of tag topic maps" (标签主题图的构建与实现研究), 《图书情报工作》 (Library and Information Service) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108763363A (en) * | 2018-05-17 | 2018-11-06 | 阿里巴巴集团控股有限公司 | Method and device for checking record to be written |
CN108763363B (en) * | 2018-05-17 | 2022-02-18 | 创新先进技术有限公司 | Method and device for checking record to be written |
CN109635157A (en) * | 2018-10-30 | 2019-04-16 | 北京奇艺世纪科技有限公司 | Model generating method, video searching method, device, terminal and storage medium |
CN109977318A (en) * | 2019-04-04 | 2019-07-05 | 掌阅科技股份有限公司 | Book search method, electronic equipment and computer storage medium |
CN109977318B (en) * | 2019-04-04 | 2021-06-29 | 掌阅科技股份有限公司 | Book searching method, electronic device and computer storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107729486B (en) | 2021-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112533051B (en) | Barrage information display method, barrage information display device, computer equipment and storage medium | |
CN110020437B (en) | Emotion analysis and visualization method combining video and barrage | |
CN110781668B (en) | Text information type identification method and device | |
JP2019212290A (en) | Method and device for processing video | |
CN110134931B (en) | Medium title generation method, medium title generation device, electronic equipment and readable medium | |
CN110597962B (en) | Search result display method and device, medium and electronic equipment | |
CN108319723A | Picture sharing method and device, terminal, and storage medium | |
KR20160055930A (en) | Systems and methods for actively composing content for use in continuous social communication | |
CN112699645B (en) | Corpus labeling method, apparatus and device | |
CN110489649B (en) | Method and device for associating content with tag | |
CN102737022B (en) | Method and device for acquiring and searching relevant knowledge information | |
CN112749328A (en) | Searching method and device and computer equipment | |
CN109299277A (en) | Public opinion analysis method, server and computer-readable storage medium | |
CN111177462B (en) | Video distribution timeliness determination method and device | |
CN108363748B (en) | Topic portrait system and topic portrait method based on Zhihu | |
CN107729486A (en) | A kind of video searching method and device | |
CN111683294A (en) | Bullet screen comment recommendation method for information extraction | |
CN112995690B (en) | Live content category identification method, device, electronic equipment and readable storage medium | |
CN116955598A (en) | Method, device, equipment, medium and program product for generating event summary text | |
CN113705563A (en) | Data processing method, device, equipment and storage medium | |
CN113901263B (en) | Label generation method and device for video material | |
CN113626624B (en) | Resource identification method and related device | |
CN117933260A (en) | Text quality analysis method, device, equipment and storage medium | |
CN118446338B (en) | Model training method, data processing method, electronic device and storage medium | |
CN118568297B (en) | Construction method and application of cognitive warfare system based on Wensheng video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||