
CN111400544B - Video data storage method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111400544B
Authority
CN
China
Prior art keywords
video data
video
association
path
time range
Prior art date
Legal status (an assumption, not a legal conclusion)
Active
Application number
CN201911240631.XA
Other languages
Chinese (zh)
Other versions
CN111400544A (en)
Inventor
胡威
Current Assignee (the listed assignees may be inaccurate)
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd
Priority to CN201911240631.XA
Publication of CN111400544A
Application granted
Publication of CN111400544B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06F ELECTRIC DIGITAL DATA PROCESSING → G06F16/00 Information retrieval; Database structures therefor; File system structures therefor → G06F16/70 Information retrieval of video data → G06F16/71 Indexing; Data structures therefor; Storage structures
    • G06F16/70 Information retrieval of video data → G06F16/73 Querying → G06F16/732 Query formulation
    • G06F16/70 Information retrieval of video data → G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • H ELECTRICITY → H04 ELECTRIC COMMUNICATION TECHNIQUE → H04N PICTORIAL COMMUNICATION, e.g. TELEVISION → H04N5/00 Details of television systems → H04N5/76 Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a video data storage method, apparatus, device, and storage medium, belonging to the technical field of data storage. The method comprises the following steps: acquiring I-frame information and/or video retrieval information of multiple paths of video data, where the I-frame information of each path of video data indicates the positions of the I frames included in that path, and the video retrieval information of each path indicates at least the times at which that path was retrieved; determining, according to the I-frame information and/or video retrieval information of the multiple paths of video data, the video data among them that have an association relationship; and storing the multiple paths of video data while storing, in association, the video identifiers of the video data that have an association relationship. In this way, the association relationships between the multiple paths of video data can be determined dynamically from the I-frame information and/or video retrieval information, and the video data stored according to those relationships, which adds a storage mode for video data and improves the flexibility of video data storage.

Description

Video data storage method, device, equipment and storage medium
Technical Field
The present application relates to the field of data storage technologies, and in particular, to a method, an apparatus, a device, and a storage medium for storing video data.
Background
Currently, multiple paths of video data can be acquired from different directions by multiple cameras and then stored. Because the multiple paths of video data generally bear some relationship to one another, and to facilitate subsequent operations on them, such as querying, an association relationship between the cameras can be preset, so that when the video data are stored, the video data recorded by associated cameras are stored in association.
However, in the above implementation the association relationships between the multiple paths of video data are preset, which results in a single, inflexible video storage mode.
Disclosure of Invention
The embodiments of the present application provide a video data storage method, apparatus, device, and storage medium, which can solve the problem in the related art that the video data storage mode is single. The technical solution is as follows:
in one aspect, there is provided a video data storage method, the method comprising:
acquiring I-frame information and/or video retrieval information of multiple paths of video data, where the I-frame information of each path of video data indicates the positions of the I frames included in that path, and the video retrieval information of each path of video data indicates at least the times at which that path was retrieved;
determining, according to the I-frame information and/or video retrieval information of the multiple paths of video data, the video data among the multiple paths that have an association relationship;
and storing the multiple paths of video data, and storing, in association, the video identifiers of the video data that have an association relationship.
In one possible implementation manner of the present application, the determining, according to the I-frame information and/or the video search information of the multiple paths of video data, video data in which an association relationship exists in the multiple paths of video data includes:
for first video data in the multiple paths of video data, when it is determined, according to the I-frame information of the first video data, that the number of I frames within a first time range of the first video data is greater than or equal to a first reference threshold, and it is determined, according to the I-frame information of the other video data in the multiple paths, that second video data exists, determining that the first video data and the second video data have an association relationship, where the first time range and a second time range have a time intersection, the second video data is video data whose number of I frames within the second time range is greater than or equal to a second reference threshold, and the first video data is any one path of the multiple paths of video data;
and/or,
when it is determined, according to the video retrieval information of the first video data, that the first video data was retrieved within a third time range, and it is determined, according to the video retrieval information of the other video data in the multiple paths, that third video data exists, determining that the first video data and the third video data have an association relationship, where the third time range and a fourth time range have a time intersection, and the third video data is the video data retrieved within the fourth time range.
In one possible implementation manner of the present application, after the video identifiers of the video data having the association relationship are correspondingly stored, the method further includes:
determining a first scoring value of the first video data according to the total number of retrievals of the first video data and/or its lock state, where the lock state is either locked or unlocked;
querying whether video data stored in association with the first video data exists among the multiple paths of video data;
and when video data stored in association with the first video data exists, using the first scoring value as the association scoring value of the video data stored in association with the first video data.
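A minimal sketch of this propagation step, assuming video data are keyed by identifier (the function name and data shapes are illustrative, not taken from the patent):

```python
def propagate_association_scores(first_scores, associations):
    """Record each video's first scoring value as the association
    scoring value of every video stored in association with it.

    first_scores: dict mapping video id -> first scoring value
    associations: dict mapping video id -> list of associated video ids
    """
    association_scores = {}
    for vid, partners in associations.items():
        for partner in partners:
            # The partner's association score is the first score of
            # the video it is stored in association with.
            association_scores[partner] = first_scores[vid]
    return association_scores
```

For two mutually associated paths, each path's association score ends up being the other path's first score.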
In one possible implementation manner of the present application, the method further includes:
determining a degree of association between the first video data and the second video data according to the number of I frames in the first time range in the first video data and the number of I frames in the second time range in the second video data;
and updating the first grading value of the first video data and the association grading value of the video data stored in association with the first video data according to the association degree.
In one possible implementation of the present application, the determining the first scoring value of the first video data according to the total number of retrievals and the lock state of the first video data includes:
multiplying the total number of retrievals of the first video data by a first scoring threshold corresponding to retrievals, to obtain a retrieval score of the first video data;
and taking the sum of the retrieval score of the first video data and a second scoring threshold corresponding to the lock state, to obtain the first scoring value of the first video data.
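This computation can be sketched as follows; the two threshold values are illustrative defaults, since the text leaves their concrete values unspecified:

```python
def first_scoring_value(total_retrievals, locked,
                        retrieval_score_threshold=1.0,
                        lock_score_threshold=10.0):
    """First scoring value = retrieval score + lock-state score.

    retrieval_score_threshold and lock_score_threshold are assumed
    placeholder values, not figures from the patent.
    """
    # Retrieval score: total retrievals times the per-retrieval threshold.
    retrieval_score = total_retrievals * retrieval_score_threshold
    # A locked path receives the lock-state score; an unlocked path does not.
    lock_score = lock_score_threshold if locked else 0.0
    return retrieval_score + lock_score
```

With the defaults above, a path retrieved three times and locked scores 13.0, while an unlocked path retrieved five times scores 5.0.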
In one possible implementation manner of the present application, the method further includes:
detecting a current video storage state, where the video storage state includes storage duration and/or storage quantity;
when the video storage state meets the cyclic-coverage condition of video storage, determining a second scoring value for each of the multiple paths of video data;
and deleting those paths of video data whose second scoring values rank beyond a first quantity threshold.
In one possible implementation manner of the present application, the determining the second scoring value of the multiple paths of video data includes:
for first video data in the multi-path video data, determining a first grading value of the first video data based on the total searching times and/or the locking state of the first video data;
detecting whether second video data stored in association with the first video data exists or not, wherein the first video data is any one path of video data in the multiple paths of video data;
when second video data stored in association with the first video data exists, acquiring an association score value, wherein the association score value is at least determined based on the total number of times of searching and/or the lock state of the second video data;
and determining the sum of the first grading value of the first video data and the associated grading value to obtain a second grading value of the first video data.
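A minimal sketch of the second scoring value and the rank-based deletion it feeds (the names, and breaking ties by dictionary order, are assumptions):

```python
def second_scoring_value(first_score, association_score=None):
    # The second scoring value is the first scoring value plus the
    # association scoring value; when no associated video data exists,
    # only the first scoring value counts.
    return first_score + (association_score or 0.0)

def paths_to_delete(second_scores, keep_count):
    """Return the ids of the paths ranked beyond `keep_count` by second
    scoring value (lowest-scoring paths are deleted first).

    second_scores: dict mapping video id -> second scoring value
    """
    ranked = sorted(second_scores, key=second_scores.get, reverse=True)
    return ranked[keep_count:]
```

Here `keep_count` plays the role of the first quantity threshold: everything ranked behind it is a candidate for deletion during cyclic coverage.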
In another aspect, there is provided a video data storage apparatus, the apparatus comprising:
an acquisition module, configured to acquire I-frame information and/or video retrieval information of multiple paths of video data, where the I-frame information of each path of video data indicates the positions of the I frames included in that path, and the video retrieval information of each path indicates at least the times at which that path was retrieved;
the determining module is used for determining video data with association relation in the multiple paths of video data according to the I-frame information and/or the video retrieval information of the multiple paths of video data;
and the storage module is used for storing the multipath video data and correspondingly storing video identifications of the video data with the association relation.
In one possible implementation manner of the present application, the determining module is configured to:
for first video data in the multiple paths of video data, when it is determined, according to the I-frame information of the first video data, that the number of I frames within a first time range of the first video data is greater than or equal to a first reference threshold, and it is determined, according to the I-frame information of the other video data in the multiple paths, that second video data exists, determining that the first video data and the second video data have an association relationship, where the first time range and a second time range have a time intersection, the second video data is video data whose number of I frames within the second time range is greater than or equal to a second reference threshold, and the first video data is any one path of the multiple paths of video data;
and/or,
when it is determined, according to the video retrieval information of the first video data, that the first video data was retrieved within a third time range, and it is determined, according to the video retrieval information of the other video data in the multiple paths, that third video data exists, determining that the first video data and the third video data have an association relationship, where the third time range and a fourth time range have a time intersection, and the third video data is the video data retrieved within the fourth time range.
In a possible implementation manner of the present application, the storage module is further configured to:
determining a first scoring value of the first video data according to the total number of retrievals of the first video data and/or its lock state, where the lock state is either locked or unlocked;
inquiring whether video data stored in association with the first video data exists in the multiple paths of video data;
when the video data stored in association with the first video data exists, the first scoring value is used as the associated scoring value of the video data stored in association with the first video data.
In a possible implementation manner of the present application, the storage module is further configured to:
determining a degree of association between the first video data and the second video data according to the number of I frames in the first time range in the first video data and the number of I frames in the second time range in the second video data;
and updating the first grading value of the first video data and the association grading value of the video data stored in association with the first video data according to the association degree.
In a possible implementation manner of the present application, the storage module is further configured to:
multiplying the total search times of the first video data by a first scoring threshold value corresponding to the search times to obtain a search scoring value of the first video data;
and determining the sum of the search scoring value of the first video data and a second scoring threshold value corresponding to the lock state to obtain the first scoring value of the first video data.
In a possible implementation manner of the present application, the storage module is further configured to:
detecting a current video storage state, wherein the video storage state comprises storage time and/or storage quantity;
when the video storage state meets the cyclic coverage condition of video storage, determining a second grading value of the multipath video data;
and deleting those paths of video data whose second scoring values rank beyond the first quantity threshold.
In a possible implementation manner of the present application, the storage module is further configured to:
for first video data in the multi-path video data, determining a first grading value of the first video data based on the total searching times and/or the locking state of the first video data;
detecting whether second video data stored in association with the first video data exists or not, wherein the first video data is any one path of video data in the multiple paths of video data;
when second video data stored in association with the first video data exists, acquiring an association score value, wherein the association score value is at least determined based on the total number of times of searching and/or the lock state of the second video data;
and determining the sum of the first grading value of the first video data and the associated grading value to obtain a second grading value of the first video data.
In another aspect, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the steps of any of the methods of the above aspects.
In another aspect, a computer readable storage medium is provided, having stored thereon instructions which, when executed by a processor, implement the steps of any of the methods of the above aspects.
In another aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of any of the methods of the above aspects.
The technical scheme provided by the embodiment of the application has the beneficial effects that:
I-frame information and/or video retrieval information of multiple paths of video data is acquired, where the I-frame information of each path indicates the positions of the I frames included in that path, and the video retrieval information of each path indicates at least the times at which that path was retrieved. The video data among the multiple paths that have an association relationship are determined according to this information, and the video identifiers of the associated video data are stored in association while the multiple paths of video data are stored. In this way, the association relationships between the multiple paths of video data can be determined dynamically from the I-frame information and/or video retrieval information, and the video data stored according to those relationships, which adds a storage mode for video data and improves the flexibility of video data storage.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart illustrating a video storage method according to an exemplary embodiment;
FIG. 2 is a schematic diagram of a framework of a video data store, shown in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating an association between video data according to an exemplary embodiment;
fig. 4 is a schematic diagram showing an association relationship between video data according to another exemplary embodiment;
FIG. 5 is a schematic diagram of a frame of a video data store, shown in accordance with an exemplary embodiment;
FIG. 6 is a schematic diagram of a video storage device according to an exemplary embodiment;
fig. 7 is a schematic structural view of an electronic device according to another exemplary embodiment.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
It should be understood that references to "at least one" in the embodiments of the present application mean one or more; "comprising" means that the inclusion is not exclusive, i.e., elements other than those recited may also be present; and "A and/or B" denotes A, B, or both.
Before describing the video storage method provided by the embodiment of the present application in detail, the terms and implementation environments related to the embodiment of the present application will be briefly described.
First, terms related to the embodiments of the present application will be briefly described.
I frame (intra-coded frame): can be decoded independently into a complete image without reference to other frames. When the picture changes greatly, relatively more I frames (and relatively fewer P and B frames) are generated; when the picture changes little or is essentially static, relatively fewer I frames (and relatively more P and B frames) are generated.
P frame (predictive-coded frame): contains the difference from the most recent decodable frame (such as an I frame), so it can be decoded into a complete image only with the data of that frame. Because only the difference data is kept, its data volume is small, which greatly saves transmission bandwidth and storage capacity.
B frame (bi-directionally predictive-coded frame): contains the differences from the preceding and following decodable frames, so it can be decoded into a complete image only with the data of those frames. Like the P frame, the B frame has a small data volume, which greatly saves transmission bandwidth and storage capacity.
Cyclic coverage: because the hardware capacity for storing video data is limited, continuously writing new video data will eventually exhaust it. To guarantee that new video data can be written, some of the stored video data must be overwritten, that is, deleted. Exhausted capacity is not the only trigger: cyclic coverage may also depend on how long the video data has been stored. For example, once video data has been stored for a certain period (say, 30 days), newly written video data overwrites earlier data so that new data can continue to be stored; typically, the video data whose storage time is furthest from the current time is overwritten.
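As an illustrative sketch only (the 30-day retention matches the example above, but the 5% free-space headroom and the function name are assumptions), the cyclic-coverage trigger could be expressed as:

```python
def needs_cyclic_overwrite(free_bytes, total_bytes,
                           oldest_record_age_days,
                           max_age_days=30, min_free_ratio=0.05):
    # Overwriting is triggered either when capacity is nearly exhausted
    # (free space below an assumed headroom ratio) or when the oldest
    # recording exceeds the retention window.
    return (free_bytes < min_free_ratio * total_bytes
            or oldest_record_age_days >= max_age_days)
```

Either condition alone is enough to start overwriting, mirroring the two triggers described above.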
Data locking: when a user considers certain video data important and does not wish it to be deleted during cyclic coverage, the user can lock that video data; locked video data is generally not deleted when cyclic coverage is performed.
Dynamic association: by dynamically analyzing each path of video data in the multiple paths of video data, the association relationship between the multiple paths of video data can be continuously confirmed and corrected along with the change of time.
Next, an implementation environment according to an embodiment of the present application will be briefly described.
The video data storage method provided by the embodiments of the present application can be executed by an electronic device. As an example, the electronic device may itself be configured with multiple cameras, through which multiple paths of video data are collected. As another example, the electronic device may be connected to multiple cameras and collect multiple paths of video data through them. Further, the electronic device may be provided with a video acquisition module that controls the cameras to acquire the multiple paths of video data.
In some embodiments, the electronic device may further include a storage module, which may include at least one hard disk, and a cache module for caching temporary video data. Further, the electronic device may provide a user interaction interface through which a user can lock, retrieve, and otherwise operate on the stored video data. The electronic device may be an in-vehicle device, a computer device, or the like, which is not limited by the embodiments of the present application.
Having described the terms and implementation environments related to the embodiments of the present application, a detailed description of the video storage method provided by the embodiments of the present application will be provided below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart illustrating a video storage method according to an exemplary embodiment, where the video storage method may be applied to the electronic device, and the method may include the following implementation steps:
step 101: and acquiring I frame information and/or video retrieval information of multiple paths of video data, wherein the I frame information of each path of video data is used for indicating the position of an I frame included in each path of video data, and the video retrieval information of each path of video data is at least used for indicating the retrieval time of each path of video data.
As an example, the I-frame information may include generation timestamp information for each I-frame in the multiplexed video data, which may in fact be used to determine the location of the corresponding I-frame in the video data.
As an example, the video retrieval information may include time information corresponding to each path of video data among the plurality of paths of video data when retrieved.
That is, the electronic device determines the generation time of the I-frame in each path of video data; or determining the time when each path of video data is retrieved; alternatively, the generation time of the I-frame in each path of video data, and the time at which each path of video data is retrieved, are determined.
Step 102: and determining the video data with the association relation in the multiple paths of video data according to the I frame information and/or the video retrieval information of the multiple paths of video data.
That is, the electronic device may detect whether the multiple video data have an association relationship and which video data have an association relationship according to the I-frame information of the multiple video data. Alternatively, the electronic device may also detect whether the multiple video data have an association relationship and which video data have an association relationship according to the video retrieval information of the multiple video data. Still alternatively, the electronic device may further detect whether the multiple paths of video data have an association relationship and which video data have an association relationship according to the I-frame information and the video retrieval information of the multiple paths of video data.
As an example, according to the I-frame information and/or the video retrieval information of the multiple paths of video data, the specific implementation of determining the video data having the association relationship in the multiple paths of video data may include any one of the following ways:
The first implementation: when the video data having an association relationship are determined according to the I-frame information of the multiple paths of video data, then for first video data among them, when it is determined from the I-frame information of the first video data that the number of I frames within a first time range is greater than or equal to a first reference threshold, and it is determined from the I-frame information of the other paths that second video data exists, the first video data and the second video data are determined to have an association relationship. Here, the first time range and a second time range have a time intersection, the second video data is video data whose number of I frames within the second time range is greater than or equal to a second reference threshold, and the first video data is any one path of the multiple paths of video data.
The first reference threshold may be set by a user according to an actual requirement, or may be set by default by the electronic device, which is not limited in the embodiment of the present application.
The second reference threshold may be set by the user according to the actual requirement, or may be set by default by the electronic device, which is not limited in the embodiment of the present application.
Since I frames carry a large amount of picture information, an increase in the number of I frames indicates that the monitored picture is changing greatly. Therefore, when the first video data has many I frames within the first time range, the picture captured by the first camera recording the first video data changed greatly within that range. If second video data exists among the multiple paths with many I frames within a second time range near the first time range, the picture captured by the second camera recording the second video data also changed greatly within that range. The two paths can then be judged to bear some relationship; for example, they may come from two cameras monitoring the same scene from different directions. It can therefore be determined that the first video data and the second video data have an association relationship, and the two can be treated as an association group. One or more association groups can be determined according to this implementation; as shown in FIG. 3, video data a and video data b form one association group, and video data x and video data y form another.
It will be appreciated that the greater the intersection of the first time range and the second time range, the closer the first time range and the second time range are, indicating a higher degree of association between the first video data and the second video data, and further, the greater the number of times such association between the first video data and the second video data occurs, indicating a higher degree of association between the first video data and the second video data.
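The first implementation above can be sketched as follows; the function and parameter names are hypothetical, the threshold values are placeholders, and I-frame positions are represented as timestamps, as suggested by the I-frame information described earlier:

```python
def iframe_association(iframe_times_a, iframe_times_b,
                       range_a, range_b,
                       first_threshold=3, second_threshold=3):
    """Decide whether two paths of video data have an association
    relationship based on I-frame density in overlapping time ranges.

    iframe_times_a/b: lists of I-frame timestamps for each path
    range_a/b: (start, end) time ranges that must intersect
    """
    def count_in_range(times, time_range):
        lo, hi = time_range
        return sum(lo <= t <= hi for t in times)

    # The two time ranges must have a time intersection.
    ranges_intersect = (range_a[0] <= range_b[1]
                        and range_b[0] <= range_a[1])
    # Both paths must have "many" I-frames (>= their reference
    # thresholds) within their respective ranges.
    return (ranges_intersect
            and count_in_range(iframe_times_a, range_a) >= first_threshold
            and count_in_range(iframe_times_b, range_b) >= second_threshold)
```

A higher I-frame count in the overlap would, per the text, also suggest a higher degree of association; the sketch only makes the binary decision.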
The second implementation mode: when it is determined, according to the video retrieval information of the first video data, that the first video data was retrieved within a third time range, if it is determined, according to the video retrieval information of the video data other than the first video data among the multiple paths of video data, that third video data exists, it is determined that the first video data and the third video data have an association relationship, where the third time range and a fourth time range have a time intersection, and the third video data refers to video data retrieved within the fourth time range.
That is, if the first video data was retrieved within a third time range, and there exists third video data retrieved within a fourth time range near the third time range, for example the third time range is [t1, t1+x] and the fourth time range is [t1-d1, t1+d1], it may be determined that a certain association exists between the first video data and the third video data, so the two may be associated and taken as an association group. One or more association groups may be determined according to this implementation; as shown in FIG. 4, video data a and video data b form one association group, and video data x and video data y form another.
It should be noted that the above description takes a third time range as an example. In another embodiment, the association relationship may be determined based on a time point: when it is determined, according to the video retrieval information of the multiple paths of video data, that the first video data was retrieved at a target time and that second video data was retrieved within a fourth time range containing the target time, it is determined that the first video data and the second video data have an association relationship. For example, if the target time is t2, the fourth time range may be [t2-d2, t2+d2], where d2 can be set according to actual requirements, for example 5 minutes. It will be understood that the smaller d2 is, the higher the degree of association between the first video data and the second video data; and the more often such an association occurs, the higher the degree of association.
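The time-point variant just described can be sketched as follows (names and the data layout are illustrative assumptions):

```python
def retrieval_associated(t_target, other_retrieval_times, d2):
    """Second implementation, time-point variant: the first path was
    retrieved at t_target; another path is associated with it if that
    path was retrieved within the fourth time range
    [t_target - d2, t_target + d2]. Returns the associated path ids."""
    lo, hi = t_target - d2, t_target + d2
    return [vid for vid, times in other_retrieval_times.items()
            if any(lo <= t <= hi for t in times)]
```

For example, with a target time of 100 and d2 = 5, a path retrieved at time 97 would be associated, while one retrieved only at time 400 would not.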
Third implementation: when determining, according to both the I-frame information and the video retrieval information of the multiple paths of video data, which video data have an association relationship, the specific implementation may include: for first video data in the multiple paths of video data, determining a first value corresponding to the first video data according to the I-frame information of the multiple paths of video data, and determining a second value corresponding to the first video data according to the video retrieval information of the multiple paths of video data, where the first value carries a first weight value and the second value carries a second weight value; performing a preset operation based on the first value, the second value, the first weight value, and the second weight value to obtain an operation value; and then determining, according to the operation value, the video data among the multiple paths that have an association relationship with the first video data.
The first weight value and the second weight value can be set according to actual requirements.
For example, when it is determined according to the I-frame information of the first video data that the number of I-frames within the first time range is greater than or equal to the first reference threshold, and it is determined according to the I-frame information of the other paths that second video data exists (that is, video data whose number of I-frames within the second time range is greater than or equal to the second reference threshold), a first value corresponding to the first video data is determined according to the number of I-frames within the first time range (for example, the I-frame count itself is taken as the first value). Similarly, when it is determined according to the video retrieval information of the first video data that the first video data was retrieved within the third time range, and it is determined according to the video retrieval information of the other paths that third video data exists (that is, video data retrieved within the fourth time range), a second value corresponding to the first video data is determined according to the number of times the first video data was retrieved within the third time range (for example, the retrieval count itself is taken as the second value). The first value is multiplied by the first weight value, the second value is multiplied by the second weight value, and the two products are added to obtain the operation value. If the obtained operation value is greater than a set threshold, it is determined that an association relationship exists between the first video data and the second video data.
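The weighted combination described above can be sketched directly (the weights and threshold here are illustrative assumptions):

```python
def association_operation_value(iframe_count, retrieval_count, w1, w2):
    """Third implementation: combine the first value (I-frame count in
    the first time range) and the second value (retrieval count in the
    third time range) using the first and second weight values."""
    return iframe_count * w1 + retrieval_count * w2

def has_association(iframe_count, retrieval_count, w1, w2, threshold):
    """An association exists when the operation value exceeds the set
    threshold."""
    return association_operation_value(iframe_count, retrieval_count, w1, w2) > threshold
```

For example, with an I-frame count of 30, a retrieval count of 10, weights 0.5 and 2.0, the operation value is 35.0, which would exceed a threshold of 20.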
Step 103: storing the multipath video data, and correspondingly storing video identifications of the video data with association relation.
Wherein the video identification may be used to uniquely identify a path of video data.
When it is determined that some paths of video data in the multiple paths of video data have association relations, video identifiers of the video data having the association relations can be correspondingly stored in the process of storing the multiple paths of video data, for example, correspondence relations between the video identifiers, that is, a group of correspondence relations corresponds to an associated group, can be established.
Further, after the video identifications of the video data having the association relationship are correspondingly stored, the following operations may be further included.
Step 104: a first score value of the first video data is determined based on the total number of times the first video data has been retrieved and/or its lock state, where the lock state includes a locked state or an unlocked state.
As described above, the first video data is any one of the multiple paths of video data. The first score value may be used to indicate the importance of the first video data, i.e. the importance of each of the multiple paths of video data may be determined based on the total number of searches and/or the lock status of the first video data.
Specifically, the first score value of each path of video data may be determined according to the total number of times each path of video data is retrieved, or the first score value of each path of video data may be determined according to the lock state of each path of video data, or the first score value of each path of video data may be determined according to the total number of times each path of video data is retrieved and the lock state.
As an example, determining the specific implementation of the first scoring value of the first video data according to the total number of searches and the lock state of the first video data may include: multiplying the total search times of the first video data by a first scoring threshold corresponding to the search times to obtain a search scoring value of the first video data, and determining the sum of the search scoring value of the first video data and a second scoring threshold corresponding to the lock state to obtain the first scoring value of the first video data.
The first scoring threshold may be set by the user according to the actual requirement, or may be set by default by the electronic device, which is not limited in the embodiment of the present application.
The second scoring threshold may be set by the user according to the actual requirement, or may be set by default by the electronic device, which is not limited in the embodiment of the present application.
In addition, different lock states generally correspond to different second scoring thresholds. As an example, when the lock state is the locked state, the corresponding video data is important and the user generally does not want it to be deleted during cyclic coverage, so the second scoring threshold corresponding to the locked state may be set relatively large. Conversely, when the lock state is the unlocked state, the video data is less important, so the second scoring threshold corresponding to the unlocked state may be set relatively small; illustratively, it may be set to 0.
Here, determining the first score value of each path of video data according to both its total number of retrievals and its lock state is taken as an example. Each time retrieval of the stored first video data is detected, the electronic device may count it toward the total number of retrievals of the first video data. When the first score value of the first video data needs to be determined, the electronic device obtains this total and multiplies it by the first scoring threshold to obtain the retrieval score value in the retrieval dimension. It will be appreciated that the larger the total number of retrievals of the first video data, the more frequently the user retrieves it, the greater the user's demand for it, and the more important it can be considered; the retrieval score value can therefore serve as one index for determining the first score value. In addition, whether the first video data is locked may be considered: if it is locked, the user does not wish to delete it, indicating that it is important, so the lock state can serve as another index for determining the first score value. In an implementation, the sum of the retrieval score value and the second scoring threshold corresponding to the lock state is determined as the first score value of the first video data.
For example, assuming the first scoring threshold is 1, the second scoring threshold corresponding to the locked state is 70, and the second scoring threshold corresponding to the unlocked state is 0, then if the total number of retrievals of the first video data is 10 and its lock state is the locked state, the first score value of the first video data is 10×1+70, that is, 80 points.
The above description is given by taking the first scoring value of each path of video data determined according to the total number of times of searching each path of video data and the lock state as an example. In another embodiment, when the first score value of each path of video data is determined according to the total number of times of searching each path of video data, the total number of times of searching each path of video data may be multiplied by the first score threshold corresponding to the number of times of searching, and the result obtained by the multiplication may be determined as the first score value of each path of video data. Alternatively, in another embodiment, when the first score value of each path of video data is determined according to the lock state of each path of video data, the lock state of each path of video data may be detected, and the second score threshold corresponding to the lock state of each path of video data may be determined as the first score value of each path of video data.
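The scoring rule above, using the thresholds from the worked example (first scoring threshold 1, locked threshold 70, unlocked threshold 0) as assumed defaults, can be sketched as:

```python
LOCKED, UNLOCKED = "locked", "unlocked"

def first_score_value(total_retrievals, lock_state,
                      first_scoring_threshold=1,
                      locked_threshold=70, unlocked_threshold=0):
    """First score value = retrieval score (total retrievals multiplied
    by the first scoring threshold) plus the second scoring threshold
    corresponding to the lock state."""
    retrieval_score = total_retrievals * first_scoring_threshold
    lock_score = locked_threshold if lock_state == LOCKED else unlocked_threshold
    return retrieval_score + lock_score
```

With these defaults, a path retrieved 10 times and locked scores 10×1+70 = 80, matching the worked example; the same path unlocked scores 10.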
Step 105: querying whether video data stored in association with the first video data exists among the multiple paths of video data.
As described above, during storage of the video data, the video identifiers of video data having an association relationship are stored correspondingly. Therefore, according to the video identifier of the first video data, the stored association relationships can be queried for other video identifiers associated with it. If such identifiers exist, it is determined that video data associated with the first video data exists among the multiple paths of video data; otherwise, it is determined that no such video data exists.
Step 106: when there is video data stored in association with the first video data, the first score value is taken as an associated score value of the video data stored in association with the first video data.
Among the stored multiple paths of video data, there may be second video data having an association relationship with the first video data; the second video data may be one or more paths. It will be understood that if the first video data is important, the second video data associated with it is also important. Therefore, to keep the importance of associated paths consistent, after the first score value of the first video data is determined, it may be taken as the associated score value of the second video data, so that the second video data is treated as equally important as the first video data, as shown in FIG. 2.
Further, a degree of association between the first video data and the second video data is determined according to the number of I frames in the first time range in the first video data and the number of I frames in the second time range in the second video data, and a first score value of the first video data and an associated score value of video data stored in association with the first video data are updated according to the degree of association.
In an implementation, if the number of I-frames within the first time range in the first video data is greater than the first reference threshold and the number of I-frames within the second time range in the second video data is greater than the second reference threshold, the degree of association is determined to be high. In this case, the first score value and the associated score value of the video data stored in association with the first video data may each be increased by a specified threshold multiple, to raise the importance of both the first video data and the second video data.
The specified threshold may be set by a user in a user-defined manner according to actual needs, or may be set by default by the electronic device, which is not limited in the embodiment of the present application.
Of course, in another embodiment, the degree of association between the first video data and the second video data may also be determined according to the size of the intersection of the first time range and the second time range, see in particular above.
Furthermore, cyclic coverage of the multiple paths of video data can be performed according to the association relationships between them, which specifically includes: detecting the current video storage state, where the video storage state includes storage time and/or storage quantity; when the video storage state satisfies the cyclic coverage condition of video storage, determining second score values of the multiple paths of video data; and deleting the video data whose second score values rank after the first number threshold among the multiple paths of video data.
The first number of thresholds may be set by a user according to actual needs, or may be set by default by the electronic device, which is not limited in the embodiment of the present application.
As an example, during video storage, the electronic device may periodically detect the current video storage state, for example once every time-duration threshold.
Of course, the above merely takes periodic detection as an example. In some embodiments, the current video storage state may also be detected after a video data storage request is received; the request may carry the video data to be written and may, for example, be triggered by a user through a reference operation such as a click operation or a slide operation. That is, the electronic device may also detect the current video storage state upon detecting that new video data needs to be written.
The time duration threshold may be set by a user according to an actual requirement, or may be set by default by the electronic device, which is not limited in the embodiment of the present application.
It should be appreciated that detecting the current video storage state may include detecting the storage-capacity occupancy of the current storage module, that is, how much storage space is already occupied by video data and how much remains, and/or detecting the earliest storage time and/or latest storage time of the currently stored video data.
Then, the electronic device may determine, according to the detected video storage state, whether a cyclic coverage condition of video storage is satisfied, which may include at least one of the following ways as an example:
the first implementation mode: and when the cyclic coverage condition comprises a reference capacity threshold, determining the current video storage capacity, and if the current video storage capacity is greater than or equal to the reference capacity threshold, determining that the current video storage state meets the cyclic coverage condition of video storage.
The reference capacity threshold may be set by a user according to an actual requirement, or may be set by default by the electronic device, which is not limited in the embodiment of the present application.
That is, if the current video storage capacity is greater than or equal to the reference capacity threshold, the storage space is nearly or completely full, and it may be determined that the video storage state satisfies the cyclic coverage condition of video storage, that is, cyclic coverage processing is required.
The second implementation mode: when the cyclic coverage condition comprises a reference time threshold, determining the earliest storage time in the currently stored video data, and when the difference value of the earliest storage time and the current time is greater than or equal to the reference time threshold, determining that the current video storage state meets the cyclic coverage condition of video storage.
The reference time threshold may be set by a user according to an actual requirement, or may be set by default by the electronic device, which is not limited in the embodiment of the present application.
That is, if the earliest-stored video data among the stored multiple paths is already far from the current time, it may be determined that the stored video data needs to be processed, and thus that the current video storage state satisfies the cyclic coverage condition of video storage.
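The two trigger conditions above can be combined in a small sketch (names are assumptions; capacity and time are treated as plain numbers in consistent units):

```python
def needs_cyclic_coverage(current_capacity, reference_capacity,
                          earliest_storage_time, current_time,
                          reference_time_delta):
    """Cyclic-coverage trigger combining both implementations: fire
    when the occupied capacity reaches the reference capacity threshold,
    or when the earliest stored data is older than the reference time
    threshold."""
    capacity_full = current_capacity >= reference_capacity
    too_old = (current_time - earliest_storage_time) >= reference_time_delta
    return capacity_full or too_old
```

Either condition alone is sufficient, matching the "at least one of the following ways" phrasing above.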
When it is determined that the video storage state satisfies the cyclic coverage condition of video storage, cyclic coverage processing of the stored video data is required. To this end, the second score values of the multiple paths of video data may be determined. Since the second score value indicates the importance of the corresponding video data, the lower the second score value, the less important the data; some of the less important video data can therefore be deleted so that new video data can continue to be written. For example, when the first number threshold is 1, the video data with the lowest second score value among the multiple paths is deleted, and the other video data is retained.
As an example, a specific implementation of determining the second scoring value for the multi-path video data may include: for a first video data in the plurality of paths of video data, a first scoring value of the first video data is determined based on a total number of times the first video data is retrieved and/or a lock state. Detecting whether second video data stored in association with the first video data exists, wherein the first video data is any one of the multiple paths of video data, and when the second video data stored in association with the first video data exists, acquiring an association grading value, wherein the association grading value is determined at least based on the total search times and/or the lock state of the second video data. And determining the sum of the first grading value of the first video data and the associated grading value to obtain a second grading value of the first video data.
As can be understood from the above description, if second video data associated with the first video data exists among the stored multiple paths, the first video data may already have an associated score value. Therefore, before determining the second score value of the first video data, the electronic device may further detect whether an associated score value of the first video data exists; if it does, the sum of the first score value and the associated score value is determined as the second score value of the first video data.
That is, when determining the second score value, it may be determined whether there is an associated score value corresponding to second video data having an association relationship with the first video data; as shown in FIG. 2, this can be checked through the association relationships recorded by the association-relationship module. If an associated score value of the first video data exists, it is added to the first score value of the first video data, and the sum of the two is determined as the second score value of the first video data.
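A minimal sketch of the second-score computation and the deletion rule (the data layout and names are assumptions):

```python
def second_score_values(first_scores, associated_scores):
    """Second score value = first score value + associated score value
    (taken as 0 when no associated score exists for that path)."""
    return {vid: score + associated_scores.get(vid, 0)
            for vid, score in first_scores.items()}

def paths_to_delete(second_scores, first_number_threshold):
    """Return the ids of the first_number_threshold paths with the
    lowest second score values, i.e. those ranked last."""
    ranked = sorted(second_scores, key=second_scores.get)
    return ranked[:first_number_threshold]
```

For example, a path with first score 10 and associated score 80 ends up with second score 90, so a locked or frequently retrieved associate protects it from deletion.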
Further, since the multiple paths of video data are generally stored in a storage module such as a hard disk, the response speed when a user retrieves data may be slow because the data must be read from the hard disk. Therefore, when it is detected that the user retrieves the first video data among the stored multiple paths, the index information and part of the video data of the second video data associated with the first video data can be loaded into a buffer module, that is, the data related to the first video data is preloaded, as shown in FIG. 5. The user can then obtain data related to the first video data directly from the buffer module rather than reading it from the storage module, which improves the response speed of retrieving related video data.
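A toy sketch of this preloading idea (the storage and buffer interfaces here are hypothetical, for illustration only):

```python
class PreloadCache:
    """When a path is retrieved, preload a head segment of each
    associated path from the storage module into an in-memory buffer."""
    def __init__(self, storage, associations):
        self.storage = storage            # {vid: full video bytes}
        self.associations = associations  # {vid: [associated vids]}
        self.buffer = {}                  # {vid: preloaded head bytes}

    def on_retrieval(self, vid, head_bytes=1024):
        """Called when vid is retrieved; warms the buffer for its associates."""
        for assoc in self.associations.get(vid, []):
            if assoc not in self.buffer:
                self.buffer[assoc] = self.storage[assoc][:head_bytes]

    def read_head(self, vid, head_bytes=1024):
        """Serve the head segment from the buffer if preloaded,
        otherwise fall back to the (slower) storage module."""
        return self.buffer.get(vid) or self.storage[vid][:head_bytes]
```

In a real system the buffer would hold index information plus a leading stretch of the stream, so playback of an associated path can start before the disk read completes.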
In the embodiment of the application, the I-frame information and/or the video retrieval information of multiple paths of video data are obtained, wherein the I-frame information of each path of video data is used for indicating the position of an I-frame included in each path of video data, and the video retrieval information of each path of video data is at least used for indicating the retrieval time of each path of video data. And determining the video data with the association relation in the multiple paths of video data according to the I frame information and/or the video retrieval information of the multiple paths of video data, and correspondingly storing the video identifications of the video data with the association relation while storing the multiple paths of video data. Therefore, the association relation between the multiple paths of video data can be dynamically determined according to the I-frame information and/or the video retrieval information, and the video data is stored according to the association relation, so that the storage mode of the video data is increased, and the storage flexibility of the video data is improved.
Fig. 6 is a schematic diagram of a video storage device that may be implemented in software, hardware, or a combination of both, according to an example embodiment. The video storage device may include:
an obtaining module 610, configured to obtain I-frame information and/or video search information of multiple paths of video data, where the I-frame information of each path of video data is used to indicate a position of an I-frame included in each path of video data, and the video search information of each path of video data is at least used to indicate a time when each path of video data is searched;
A determining module 620, configured to determine, according to the I-frame information and/or the video search information of the multiple paths of video data, video data in which an association relationship exists in the multiple paths of video data;
the storage module 630 is configured to store the multiple paths of video data, and store video identifiers of the video data with an association relationship correspondingly.
In one possible implementation of the present application, the determining module 620 is configured to:
for first video data in the multi-path video data, when the number of I frames in a first time range in the first video data is determined to be greater than or equal to a first reference threshold according to I frame information of the first video data, if second video data is determined to exist in the multi-path video data according to I frame information of other video data except the first video data in the multi-path video data, determining that the first video data and the second video data have an association relationship, wherein a time intersection exists between the first time range and the second time range, the second video data refers to video data with the number of I frames in the second time range being greater than or equal to a second reference threshold, and the first video data is any one of the multi-path video data;
And/or the number of the groups of groups,
when it is determined, according to the video retrieval information of the first video data, that the first video data was retrieved within a third time range, if it is determined, according to the video retrieval information of the video data other than the first video data among the multiple paths of video data, that third video data exists, determining that the first video data and the third video data have an association relationship, where the third time range and a fourth time range have a time intersection, and the third video data refers to video data retrieved within the fourth time range.
In one possible implementation of the present application, the storage module 630 is further configured to:
determining a first grading value of the first video data according to the total searching times of the first video data and/or a locking state, wherein the locking state comprises a locking state or an unlocking state;
inquiring whether video data stored in association with the first video data exists in the multiple paths of video data;
when the video data stored in association with the first video data exists, the first scoring value is used as the associated scoring value of the video data stored in association with the first video data.
In one possible implementation of the present application, the storage module 630 is further configured to:
determining a degree of association between the first video data and the second video data according to the number of I frames in the first time range in the first video data and the number of I frames in the second time range in the second video data;
and updating the first grading value of the first video data and the association grading value of the video data stored in association with the first video data according to the association degree.
In one possible implementation of the present application, the storage module 630 is further configured to:
multiplying the total search times of the first video data by a first scoring threshold value corresponding to the search times to obtain a search scoring value of the first video data;
and determining the sum of the search scoring value of the first video data and a second scoring threshold value corresponding to the lock state to obtain the first scoring value of the first video data.
In one possible implementation of the present application, the storage module 630 is further configured to:
detecting a current video storage state, wherein the video storage state comprises storage time and/or storage quantity;
when the video storage state meets the cyclic coverage condition of video storage, determining a second grading value of the multipath video data;
and deleting the video data whose second score values rank after the first number threshold among the multiple paths of video data.
In one possible implementation of the present application, the storage module 630 is further configured to:
for first video data in the multi-path video data, determining a first grading value of the first video data based on the total searching times and/or the locking state of the first video data;
detecting whether second video data stored in association with the first video data exists or not, wherein the first video data is any one path of video data in the multiple paths of video data;
when second video data stored in association with the first video data exists, acquiring an association score value, wherein the association score value is at least determined based on the total number of times of searching and/or the lock state of the second video data;
and determining the sum of the first grading value of the first video data and the associated grading value to obtain a second grading value of the first video data.
In the embodiment of the application, the I-frame information and/or the video retrieval information of multiple paths of video data are obtained, wherein the I-frame information of each path of video data is used for indicating the position of an I-frame included in each path of video data, and the video retrieval information of each path of video data is at least used for indicating the retrieval time of each path of video data. And determining the video data with the association relation in the multiple paths of video data according to the I frame information and/or the video retrieval information of the multiple paths of video data, and correspondingly storing the video identifications of the video data with the association relation while storing the multiple paths of video data. Therefore, the association relation between the multiple paths of video data can be dynamically determined according to the I-frame information and/or the video retrieval information, and the video data is stored according to the association relation, so that the storage mode of the video data is increased, and the storage flexibility of the video data is improved.
It should be noted that the division into the above functional modules is used only to illustrate how the video data storage device provided in the above embodiment implements the video data storage method; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the video data storage device provided in the above embodiment and the embodiments of the video data storage method belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not repeated here.
Fig. 7 shows a block diagram of an electronic device 700 according to an exemplary embodiment of the application. The electronic device 700 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The electronic device 700 may also be referred to by other names, such as user device, portable electronic device, laptop electronic device, or desktop electronic device.
In general, the electronic device 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, for example a 4-core processor or an 8-core processor. The processor 701 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 701 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 701 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 702 may include one or more computer-readable storage media, which may be non-transitory. The memory 702 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 702 is used to store at least one instruction to be executed by the processor 701 to implement the video data storage method provided by the method embodiments of the present application.
In some embodiments, the electronic device 700 may optionally further include a peripheral interface 703 and at least one peripheral device. The processor 701, the memory 702, and the peripheral interface 703 may be connected by a bus or signal lines. Each peripheral device may be connected to the peripheral interface 703 via a bus, a signal line, or a circuit board. Specifically, the peripheral device includes at least one of a radio frequency circuit 704, a touch display screen 705, a camera assembly 706, an audio circuit 707, a positioning component 708, and a power supply 709.
The peripheral interface 703 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 701 and the memory 702. In some embodiments, the processor 701, the memory 702, and the peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 704 communicates with communication networks and other communication devices via electromagnetic signals; it converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 704 may communicate with other electronic devices via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication)-related circuitry, which is not limited in the present application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, it also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 701 as a control signal for processing. At this time, the display screen 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 705, disposed on the front panel of the electronic device 700; in other embodiments, there may be at least two display screens 705, disposed on different surfaces of the electronic device 700 or in a folded design; in still other embodiments, the display screen 705 may be a flexible display screen disposed on a curved or folded surface of the electronic device 700. The display screen 705 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly-shaped screen. The display screen 705 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. In general, the front camera is disposed on the front panel of the electronic device, and the rear camera is disposed on the back of the electronic device. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to implement a background blurring function by fusing the main camera with the depth-of-field camera, panoramic and VR (Virtual Reality) shooting functions by fusing the main camera with the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the electrical signals to the processor 701 for processing or to the radio frequency circuit 704 for voice communication. For stereo acquisition or noise reduction, there may be multiple microphones, disposed at different locations of the electronic device 700. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 707 may also include a headphone jack.
The positioning component 708 is used to determine the current geographic location of the electronic device 700 for navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 709 is used to supply power to the components in the electronic device 700. The power supply 709 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery: a wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charge technology.
In some embodiments, the electronic device 700 further includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyroscope sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with respect to the electronic device 700. For example, the acceleration sensor 711 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 701 may control the touch display screen 705 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 711. The acceleration sensor 711 may also be used to acquire motion data of a game or of the user.
The gyroscope sensor 712 may detect the body orientation and rotation angle of the electronic device 700, and may cooperate with the acceleration sensor 711 to collect the user's 3D motion on the electronic device 700. Based on the data collected by the gyroscope sensor 712, the processor 701 may implement functions such as motion sensing (for example, changing the UI according to a tilt operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 713 may be disposed on a side frame of the electronic device 700 and/or in a lower layer of the touch display screen 705. When the pressure sensor 713 is disposed on a side frame of the electronic device 700, it can detect the user's grip on the electronic device 700, and the processor 701 performs left/right-hand recognition or quick operations according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed in the lower layer of the touch display screen 705, the processor 701 controls operable controls on the UI according to the user's pressure operations on the touch display screen 705. The operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 714 is used to collect the user's fingerprint, and the processor 701 identifies the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the user according to the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the electronic device 700. When a physical key or a vendor logo is provided on the electronic device 700, the fingerprint sensor 714 may be integrated with the physical key or the vendor logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display screen 705 based on the ambient light intensity collected by the optical sensor 715: when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 705 is decreased. In another embodiment, the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 based on the ambient light intensity collected by the optical sensor 715.
The proximity sensor 716, also referred to as a distance sensor, is typically disposed on the front panel of the electronic device 700. The proximity sensor 716 is used to capture the distance between the user and the front of the electronic device 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front of the electronic device 700 gradually decreases, the processor 701 controls the touch display screen 705 to switch from a bright-screen state to an off-screen state; when the proximity sensor 716 detects that the distance gradually increases, the processor 701 controls the touch display screen 705 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 7 is not limiting of the electronic device 700 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
The embodiment of the application also provides a non-transitory computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to execute the video data storage method provided by the above embodiments.
The embodiment of the application also provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the video data storage method provided by the above embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The foregoing description covers preferred embodiments of the present application and is not intended to limit the application; the scope of the application is defined by the appended claims.

Claims (9)

1. A method of storing video data, the method comprising:
acquiring I frame information and/or video retrieval information of multiple paths of video data, wherein the I frame information of each path of video data is used for indicating the position of an I frame included in each path of video data, and the video retrieval information of each path of video data is at least used for indicating the retrieval time of each path of video data;
for first video data in the multiple paths of video data, when it is determined, according to the I frame information of the first video data, that the number of I frames within a first time range in the first video data is greater than or equal to a first reference threshold, if it is determined, according to the I frame information of the video data other than the first video data in the multiple paths of video data, that second video data exists in the multiple paths of video data, determining that the first video data and the second video data have an association relationship, wherein the second video data refers to video data in which the number of I frames within a second time range is greater than or equal to a second reference threshold, the first video data is any one path of the multiple paths of video data, and the first time range and the second time range have a time intersection;
and/or,
when it is determined, according to the video retrieval information of the first video data, that the first video data has been retrieved within a third time range, if it is determined, according to the video retrieval information of the video data other than the first video data, that third video data exists in the multiple paths of video data, determining that the first video data and the third video data have an association relationship, wherein the third video data refers to video data retrieved within a fourth time range, and the third time range and the fourth time range have a time intersection;
storing the multiple paths of video data, and correspondingly storing the video identifications of the video data having the association relationship.
2. The method of claim 1, wherein after storing the video identifications of the video data having the association relationship, the method further comprises:
determining a first scoring value of the first video data according to the total number of searches and/or the lock state of the first video data, wherein the lock state comprises a locked state or an unlocked state;
inquiring whether video data stored in association with the first video data exists in the multiple paths of video data;
and when the video data stored in association with the first video data exists, using the first scoring value as the association scoring value of the video data stored in association with the first video data.
3. The method of claim 2, wherein the method further comprises:
determining a degree of association between the first video data and the second video data according to the number of I frames in the first time range in the first video data and the number of I frames in the second time range in the second video data;
and updating the first scoring value of the first video data and the association scoring value of the video data stored in association with the first video data according to the degree of association.
4. The method of claim 2, wherein determining the first scoring value of the first video data based on the total number of searches and the lock state of the first video data comprises:
multiplying the total number of searches of the first video data by a first scoring threshold corresponding to the number of searches to obtain a search scoring value of the first video data;
and determining the sum of the search scoring value of the first video data and a second scoring threshold corresponding to the lock state to obtain the first scoring value of the first video data.
5. The method of claim 1, wherein the method further comprises:
detecting a current video storage state, wherein the video storage state comprises storage time and/or storage quantity;
when the video storage state meets a cyclic coverage condition of video storage, determining second scoring values of the multiple paths of video data;
and deleting the video data, among the multiple paths of video data, whose second scoring values rank after a first quantity threshold.
6. The method of claim 5, wherein determining the second scoring values of the multiple paths of video data comprises:
for first video data in the multiple paths of video data, determining a first scoring value of the first video data based on the total number of searches and/or the lock state of the first video data;
detecting whether second video data stored in association with the first video data exists, wherein the first video data is any one path of the multiple paths of video data;
when second video data stored in association with the first video data exists, acquiring an association scoring value, wherein the association scoring value is determined based at least on the total number of searches and/or the lock state of the second video data;
and determining the sum of the first scoring value of the first video data and the association scoring value to obtain the second scoring value of the first video data.
7. A storage device for video data, the device comprising:
an acquisition module, configured to acquire I frame information and/or video retrieval information of multiple paths of video data, wherein the I frame information of each path of video data is used to indicate the positions of the I frames included in that path of video data, and the video retrieval information of each path of video data is used to indicate at least the times at which that path of video data is retrieved;
a determining module, configured to: for first video data in the multiple paths of video data, when it is determined, according to the I frame information of the first video data, that the number of I frames within a first time range in the first video data is greater than or equal to a first reference threshold, if it is determined, according to the I frame information of the video data other than the first video data in the multiple paths of video data, that second video data exists in the multiple paths of video data, determine that the first video data and the second video data have an association relationship, wherein the second video data refers to video data in which the number of I frames within a second time range is greater than or equal to a second reference threshold, the first video data is any one path of the multiple paths of video data, and the first time range and the second time range have a time intersection;
and/or,
when it is determined, according to the video retrieval information of the first video data, that the first video data has been retrieved within a third time range, if it is determined, according to the video retrieval information of the video data other than the first video data, that third video data exists in the multiple paths of video data, determine that the first video data and the third video data have an association relationship, wherein the third video data refers to video data retrieved within a fourth time range, and the third time range and the fourth time range have a time intersection;
and a storage module, configured to store the multiple paths of video data, and correspondingly store the video identifications of the video data having the association relationship.
8. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the steps of any of the methods of claims 1-6.
9. A computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the steps of the method of any of claims 1-6.
CN201911240631.XA 2019-12-06 2019-12-06 Video data storage method, device, equipment and storage medium Active CN111400544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911240631.XA CN111400544B (en) 2019-12-06 2019-12-06 Video data storage method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911240631.XA CN111400544B (en) 2019-12-06 2019-12-06 Video data storage method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111400544A CN111400544A (en) 2020-07-10
CN111400544B true CN111400544B (en) 2023-09-19

Family

ID=71433992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911240631.XA Active CN111400544B (en) 2019-12-06 2019-12-06 Video data storage method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111400544B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005084701A (en) * 2003-09-04 2005-03-31 Ntt Comware Corp Multimedia data multiplex association method, apparatus and program in search index for multimedia data
US7024100B1 (en) * 1999-03-26 2006-04-04 Matsushita Electric Industrial Co., Ltd. Video storage and retrieval apparatus
CN103984710A (en) * 2014-05-05 2014-08-13 深圳先进技术研究院 Video interaction inquiry method and system based on mass data
CN104281651A (en) * 2014-09-16 2015-01-14 福建星网锐捷安防科技有限公司 Method and system for searching large volume of video data
CN106326439A (en) * 2016-08-26 2017-01-11 东方网力科技股份有限公司 Real-time video storage and retrieval method and device
CN107493458A (en) * 2017-09-13 2017-12-19 郑州云海信息技术有限公司 A kind of method and system for accessing monitor video
CN109327680A (en) * 2018-09-21 2019-02-12 北京世纪东方国铁科技股份有限公司 Multi-path video data concurrently stores and read method
CN110198432A (en) * 2018-10-30 2019-09-03 腾讯科技(深圳)有限公司 Processing method, device, computer-readable medium and the electronic equipment of video data

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8693842B2 (en) * 2011-07-29 2014-04-08 Xerox Corporation Systems and methods for enriching audio/video recordings
CN104639860B (en) * 2014-12-31 2017-11-03 浙江华和万润信息科技有限公司 The storage method and device of a kind of monitoring video
US9554160B2 (en) * 2015-05-18 2017-01-24 Zepp Labs, Inc. Multi-angle video editing based on cloud video sharing


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
M. Smith, A. Khotanzad. Efficient Indexing and Storage Algorithm for Visual Databases. Fifth International Conference on Information Technology. 2008, 162-166. *
Liu Xiaojun. Compression and Storage Control Technology for Multi-channel Video Streams. Xidian University. 2012, full text. *

Also Published As

Publication number Publication date
CN111400544A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN109874312B (en) Method and device for playing audio data
CN112084811B (en) Identity information determining method, device and storage medium
CN113127130B (en) Page jump method, device and storage medium
CN111782950B (en) Sample data set acquisition method, device, equipment and storage medium
CN111127509B (en) Target tracking method, apparatus and computer readable storage medium
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN111754386B (en) Image area shielding method, device, equipment and storage medium
CN111177137B (en) Method, device, equipment and storage medium for data deduplication
CN111857793B (en) Training method, device, equipment and storage medium of network model
CN113127181A (en) Memory management method, device and storage medium
WO2020088542A1 (en) Data management method, apparatus and system, and server, terminal and storage medium
CN111383251B (en) Method, device, monitoring equipment and storage medium for tracking target object
CN111813322B (en) Storage pool creation method, device, equipment and storage medium
CN111275607A (en) Interface display method and device, computer equipment and storage medium
CN110471614B (en) Method for storing data, method and device for detecting terminal
CN113099378B (en) Positioning method, device, equipment and storage medium
CN111459410B (en) Memory space allocation method and device, electronic equipment and storage medium
CN116842047A (en) Cache updating method, device, equipment and computer readable storage medium
CN111797017B (en) Method, device, test equipment and storage medium for storing log
CN113469322B (en) Method, device, equipment and storage medium for determining executable program of model
CN111428080B (en) Video file storage method, video file search method and video file storage device
CN110032421B (en) Method and device for managing atlas in memory, terminal and storage medium
CN112052153B (en) Product version testing method and device
CN113495770A (en) Method, device, terminal and storage medium for displaying application page
CN113408989B (en) Automobile data comparison method and device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant