WO2023124841A1 - Method for storing feature in database, electronic device and computer readable storage medium - Google Patents
- Publication number: WO2023124841A1
- Application number: PCT/CN2022/137033
- Authority
- WO
- WIPO (PCT)
Classifications
- G06F16/24552—Database cache management
- G06F16/22—Indexing; Data structures therefor; Storage structures
- G06F16/2282—Tablespace storage structures; Management thereof
- G06F16/23—Updating
- G06F16/2455—Query execution
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- The embodiments of the present application relate to the field of image recognition, and in particular to a feature storage method, an electronic device, and a computer-readable storage medium.
- With the application of AI (Artificial Intelligence), face features are the focus of video surveillance systems, and the storage of stranger face features is an important requirement in the security field.
- A pedestrian trajectory can be generated through the processing of an AI analysis system; the target object can be found through the pedestrian trajectory and a corresponding alarm can be generated to remind the relevant personnel to deal with it in time, which greatly improves the efficiency of using video data to process government affairs.
- Therefore, storing features in the database efficiently is particularly important.
- However, current implementations aimed at avoiding repeated storage of features still suffer from low feature-storage efficiency.
- the main purpose of the embodiments of the present application is to provide a feature storage method, an electronic device, and a computer-readable storage medium, so that the efficiency of feature storage can be improved while avoiding repeated feature storage.
- An embodiment of the present application provides a feature storage method, including: obtaining a feature to be stored; searching a pre-established cache queue for the feature to be stored, and discarding the feature if it is found in the cache queue; if the feature is not found in the cache queue, searching a data table for the feature to be stored, and discarding the feature if it is found in the data table; if the feature to be stored does not exist in the data table, storing the feature to be stored in the cache queue; and, whenever there are cached features in the cache queue, storing the currently cached features into the database, so that the currently cached features are stored in the data table.
- An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the above-mentioned feature storage method.
- the embodiment of the present application also provides a computer-readable storage medium storing a computer program, and when the computer program is executed by a processor, the above-mentioned feature storage method is implemented.
- The feature storage method provided by the embodiment of the present application obtains the feature to be stored and searches the pre-established cache queue for it; if the feature already exists in the cache queue, it is discarded.
- If the feature to be stored is not in the cache queue, the data table is then checked, so that not every feature to be stored has to be retrieved from the data table.
- Since retrieval in the cache queue is retrieval in memory, it is much faster than searching the database directly (the speed of memory retrieval is greater than the speed of database retrieval), which greatly improves retrieval efficiency. At the same time, according to the pattern in which data appears, the same feature will generally recur within a short period of time, so the probability of a hit when searching only the cache queue is very high; retrieval efficiency, and thus storage efficiency, is further improved. In addition, using the cache queue also reduces the time spent locking and unlocking the database for each stored feature, thereby further improving the efficiency of feature storage.
- Fig. 1 is a schematic flow chart of a feature storage method mentioned in the embodiment of the present application.
- Fig. 2 is a flow chart of increasing the length of the cache queue mentioned in the embodiment of the present application.
- Fig. 3 is a flow chart of shortening the length of the cache queue mentioned in the embodiment of the present application.
- Fig. 4 is a schematic flow chart of another feature storage method mentioned in the embodiment of the present application.
- Fig. 5 is a schematic structural diagram of the electronic device mentioned in the embodiment of the present application.
- Method 1: configure a storage time interval for a single face feature.
- The specific implementation is: after system deployment is completed, collect the times taken to store face features within a period (such as 1 week) as basic data, and take the maximum value of the basic data plus 20% of that maximum as the unit storage time for storing a single face feature in the entire system.
- That is to say, only one feature is allowed to be stored per unit time; even if a storage operation completes early, the next feature storage operation cannot be performed until the interval elapses.
- Method 2: enter the database by locking the table.
- The specific implementation is: the video source generates a face picture, and the algorithm model extracts features from the face picture.
- Each thread that obtains face features judges whether it can perform the storage operation (that is, whether the gallery is locked). If the storage operation can be performed (that is, the gallery is not locked), the thread first locks the gallery and then stores the acquired face features into it (before this thread performs the unlock operation, other threads cannot perform any operation on the gallery other than read operations). After storage is completed, the thread unlocks the gallery to release the right to operate on it.
- Here, the gallery is the data table in the database.
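For illustration, the table-locking approach of Method 2 can be sketched as follows (a minimal Python sketch with hypothetical names such as `gallery` and `store_feature_locked`; the duplicate check under the lock is added here for illustration and is not spelled out above):

```python
import threading

gallery = []                      # the gallery, i.e. the data table in the database
gallery_lock = threading.Lock()   # the table lock of Method 2

def store_feature_locked(feature):
    """Method 2: lock the gallery, store the feature, then unlock.

    While the lock is held, other threads can only read the gallery.
    """
    with gallery_lock:            # lock the library before storing
        if feature not in gallery:
            gallery.append(feature)
    # leaving the `with` block unlocks the gallery, releasing the operation right

# Several threads racing to store the same face feature store it only once,
# but every storage pays the lock/unlock cost.
threads = [threading.Thread(target=store_feature_locked, args=("face_A",))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert gallery == ["face_A"]
```

The per-feature lock/unlock in this sketch is exactly the overhead that the cache-queue method below is designed to avoid.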
- The problems of the above Method 1 include: low time utilization. The configured time interval is no longer applicable when the gallery data changes too much; there is then either too much idle waiting time or repeated storage still occurs, making feature storage inefficient.
- The problems of the above Method 2 include: the gallery is locked and unlocked frequently, which wastes a certain amount of time and makes feature storage inefficient.
- a feature warehousing method is provided, which is applied to an electronic device, and the electronic device may be a server.
- The application scenarios of the embodiments of the present application may include, but are not limited to, storing facial (or human-shape) features collected from data streams, for example in suspect pursuit and tracking, generation of pedestrian trajectories, and one-person-one-profile scenarios.
- the schematic flow diagram of the feature storage method in the embodiment of the present application can refer to Figure 1, including:
- Step 101: Obtain the feature to be stored;
- Step 102: Search the pre-established cache queue for the feature to be stored; if it is found, execute step 103, otherwise execute step 104;
- Step 103: Discard the feature to be stored;
- Step 104: Search the pre-established data table for the feature to be stored; if it is found, execute step 103, otherwise execute step 105;
- Step 105: Store the feature to be stored in the cache queue;
- Step 106: Judge whether the cache queue is empty; if yes, continue to execute step 106, otherwise execute step 107;
- Step 107: Store the currently cached features in the cache queue into the database, so that the currently cached features are stored in the data table.
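Steps 101 to 107 above can be sketched as follows (a minimal single-threaded Python sketch with hypothetical names such as `FeatureStore`; the data table is simulated with an in-process set, not a real database):

```python
from collections import deque

class FeatureStore:
    """Sketch of steps 101-107: dedup against a cache queue, then a data table."""

    def __init__(self):
        self.cache_queue = deque()   # pre-established cache queue (step 102)
        self.data_table = set()      # simulates the data table in the database

    def submit(self, feature):
        """Steps 102-105: returns True if the feature was accepted for caching."""
        if feature in self.cache_queue:   # step 102: hit in cache -> step 103
            return False                  # step 103: discard
        if feature in self.data_table:    # step 104: hit in table -> step 103
            return False
        self.cache_queue.append(feature)  # step 105: enqueue for storage
        return True

    def flush(self):
        """Steps 106-107: dequeue cached features FIFO and store them."""
        while self.cache_queue:
            self.data_table.add(self.cache_queue.popleft())

store = FeatureStore()
assert store.submit("face_A")       # new feature is enqueued
assert not store.submit("face_A")   # duplicate hit in the cache queue, discarded
store.flush()                       # cached feature enters the data table
assert not store.submit("face_A")   # duplicate hit in the data table, discarded
```

The in-memory membership checks stand in for the "retrieval in memory" that the text contrasts with direct database retrieval.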
- the cache queue is pre-established to solve the problem of the speed mismatch between the production data and the consumption data.
- The cache queue is searched first; if the feature to be stored is not in the cache queue, the data table is checked, so that not every feature to be stored has to be retrieved from the data table.
- Since retrieval in the cache queue is retrieval in memory, it is much faster than searching the database directly (the speed of memory retrieval is greater than the speed of database retrieval), which greatly improves retrieval efficiency.
- The same feature will generally recur within a short period of time, so the probability of a hit when searching only the cache queue is very high; retrieval efficiency, and thus storage efficiency, is further improved.
- the use of the cache queue can also reduce the time required to lock and unlock the library when each feature is stored, thereby further improving the efficiency of feature storage.
- the features to be stored are reference features that are expected to be stored in the data table for subsequent feature matching, and may be: picture features, sound features, and the like.
- the feature to be stored is the unique representation of the collected data calculated by the algorithm.
- the features to be stored can be face features (ie features of face pictures), and the face features stored in the data table are used for subsequent face feature matching.
- the features to be stored may be fingerprint features (ie features of fingerprint pictures), and the fingerprint features stored in the data table are used for subsequent fingerprint feature matching.
- the features to be stored may be voiceprint features, and the voiceprint features stored in the data table are used for subsequent voiceprint feature matching. It can be understood that, when applied to different scenarios, the features to be loaded into the library may be set according to actual needs, which is not specifically limited in this embodiment.
- The features to be stored are extracted from the video stream or picture stream provided by a camera; that is, the server can obtain the video stream or picture stream captured by the camera and extract picture features from it. The camera may be a security camera.
- The picture features of the same person are generally generated within one time period, which greatly increases the possibility that the picture features of the same person are hit in the cache queue; that is, within a period of time it is easy to quickly detect whether the feature to be stored is already in the cache queue, so a judgment on whether to discard the feature can be made quickly, further improving retrieval and storage efficiency.
- The pre-established cache queue is used to cache the features to be stored, and is equivalent to a buffer channel between the data producer (the features to be stored) and the data consumer (the storage operation).
- The storage operation can be understood as transferring the features cached in the cache queue to the data table in the database.
- the feature enqueue and feature dequeue in the cache queue may be performed at the same time, which can also improve the efficiency of feature warehousing to a certain extent.
- The server searches the features currently cached in the cache queue for a feature identical to the feature to be stored. If such a feature exists, the cache queue already holds the feature to be stored, and to avoid repeated storage the server performs step 103, that is, discards the feature to be stored. If no identical feature exists in the cache queue, the cache queue does not yet hold the feature to be stored, and step 104 can be executed to continue searching the data table for the feature.
- the data table is a data table in a database where the features to be stored may be stored.
- For example, the database includes a gallery, a sound library, etc., where the gallery and the sound library can be two data tables stored in the database.
- The server searches the features stored in the data table for a feature identical to the feature to be stored. If such a feature exists, the feature to be stored has already been stored in the data table, and to avoid repeated storage the server executes step 103, that is, discards the feature to be stored. If no identical feature exists in the data table, the feature has not yet been stored, and step 105 can be performed to store the feature to be stored in the cache queue.
- In step 106, the server judges whether the cache queue is empty, that is, whether there are cached features in the cache queue.
- If the cache queue is empty, that is, there are no cached features in it, step 106 continues to be executed.
- If the cache queue is not empty, that is, there are cached features in it, step 107 is executed.
- In step 107, the server performs the storage operation on the features currently cached in the cache queue, so that they are stored in the data table; the storage operation can also be understood as a dequeue operation. If the cache queue is not empty, the cached features are dequeued in first-in-first-out order and stored in the data table in turn. That is to say, as long as the cache queue is not empty, features can be continuously fetched from it and stored. Since the data table is stored in the database, storing into the data table can be understood as storing into the database.
- In this embodiment, a cache queue is established in memory, and when features need to be stored in the database they are first retrieved from the cache queue. Since retrieval from the cache queue is in-memory retrieval, it is much faster than searching the database directly, which greatly improves the retrieval efficiency of features. If an identical feature is retrieved, the storage process terminates; otherwise the data table is searched. If an identical feature is found there, the storage process also terminates; otherwise the feature is stored in the data table. This avoids storing identical features in the database.
- Compared with the method of locking the table, this embodiment does not spend time locking and unlocking the table for each feature, which greatly improves the efficiency of feature storage.
- The same feature will generally recur within a short period of time, so the probability of a hit when searching only the cache queue is very high, which further improves retrieval and storage efficiency.
- The retrieval process (retrieving whether the feature to be stored exists in the cache queue and the data table) and the storage process (entering the features cached in the cache queue into the data table) are two parallel processes, implemented by two independent threads; that is, steps 106-107 and steps 101-105 run in parallel, so the efficiency of feature storage can be greatly improved.
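The two parallel processes can be sketched with two independent threads sharing the cache queue (a minimal Python sketch with hypothetical names; for determinism the retrieval loop here runs before the storage thread, whereas in the embodiment the two run concurrently):

```python
import queue
import threading

feature_queue = queue.Queue()   # cache queue between the two processes
data_table = set()              # the data table in the database (simulated)
STOP = object()                 # sentinel telling the storage thread to finish

def storage_process():
    """Steps 106-107: continuously dequeue cached features and store them."""
    while True:
        feature = feature_queue.get()
        if feature is STOP:
            break
        data_table.add(feature)

def retrieval_process(feature):
    """Steps 101-105: dedup against the queue and the table, then enqueue."""
    in_queue = feature in list(feature_queue.queue)  # in-memory retrieval
    if not in_queue and feature not in data_table:
        feature_queue.put(feature)

for f in ["face_A", "face_B", "face_A"]:
    retrieval_process(f)            # the duplicate "face_A" is discarded here
feature_queue.put(STOP)

consumer = threading.Thread(target=storage_process)
consumer.start()
consumer.join()
assert data_table == {"face_A", "face_B"}
```

`queue.Queue` already synchronizes enqueue and dequeue, so features can enter and leave the cache queue at the same time, matching the observation above that enqueue and dequeue may be performed simultaneously.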
- The cache queue has a preset initial length.
- The feature storage method further includes: when the cache length occupied by the currently cached features in the cache queue is greater than a first preset length, increasing the queue length of the cache queue to a second preset length; wherein the first preset length is smaller than the initial length.
- The first preset length and the second preset length can be set according to actual needs. It can be understood that, since the queue length of the cache queue is increased to the second preset length, the second preset length is greater than the initial length.
- the cache length occupied by the currently cached features in the cache queue is greater than the first preset length, since the first preset length is smaller than the initial length of the cache queue, it means that the cache queue is about to be full at this time but not yet full.
- the speed at which features are stored in the cache queue (which can be understood as the production speed of feature producers) is greater than the speed at which features are stored in storage (which can be understood as the consumption speed of feature consumers), so that there is a large amount of data in the cache queue that has not been stored in the warehouse.
- In this case, the queue length of the cache queue is increased to the second preset length; that is, the cache queue is expanded so that it can adapt to the scenario where the production speed of the feature producer is greater than the consumption speed of the feature consumer, avoiding the feature loss caused by the mismatch between the production speed and the consumption speed.
- For example, the first preset length is greater than 0.5n and less than n, and the second preset length is greater than n and less than or equal to 1.5n, where n is the initial length.
- If the first preset length and the second preset length are denoted by T1 and T2 respectively, T1 and T2 satisfy: 0.5n < T1 < n; n < T2 ≤ 1.5n.
- Setting 0.5n < T1 < n measures well the state in which the cache queue is nearly, but not yet, full.
- Setting n < T2 ≤ 1.5n ensures that the expanded cache queue can hold more features to be stored, adapting well to the scenario where the production speed of the feature producer is greater than the consumption speed of the feature consumer.
- T2 ≤ 1.5n also avoids excessive expansion of the cache queue and waste of memory resources.
- For example, the first preset length is 0.8n and the second preset length is 1.2n. That is, when the cache length occupied by the currently cached features in the cache queue is greater than 0.8n, the queue length of the cache queue is increased to 1.2n. A length of 0.8n represents well the state of being nearly but not yet full; increasing the queue length to 1.2n at this point is equivalent to providing 0.4n of storage space for the data to be cached next. By properly expanding the length of the cache queue at the right time, resources are used reasonably while avoiding repeated storage of the same feature caused by the mismatch between the production speed of the feature producer and the consumption speed of the feature consumer.
- The flow of increasing the length of the cache queue in the feature storage method is shown in FIG. 2, including:
- Step 201: Determine whether the cache length occupied by the currently cached features in the cache queue is greater than 0.8n; if yes, execute step 202, otherwise the process ends;
- Step 202: Increase the queue length of the cache queue to 1.2n.
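The expansion check of steps 201-202 can be sketched as a small function (hypothetical name `maybe_expand`; the thresholds 0.8n and 1.2n follow the example values in the text):

```python
def maybe_expand(current_len, capacity, initial_len):
    """Steps 201-202: if cached features occupy more than 0.8n, grow the
    queue length to 1.2n; otherwise keep the current capacity."""
    if current_len > 0.8 * initial_len:          # step 201: nearly full
        return int(round(1.2 * initial_len))     # step 202: expand to 1.2n
    return capacity                              # otherwise leave unchanged

# With an initial length n = 100:
assert maybe_expand(current_len=90, capacity=100, initial_len=100) == 120
assert maybe_expand(current_len=50, capacity=100, initial_len=100) == 100
```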
- The feature storage method further includes: when it is detected k consecutive times within a preset time period that the length occupied by the currently cached features in the cache queue is less than a third preset length, reducing the queue length of the cache queue to a fourth preset length; wherein the third preset length is smaller than the first preset length, and k is a natural number greater than 1.
- the third preset length and the fourth preset length can be set according to actual needs. It can be understood that, since the queue length of the cache queue is reduced to the fourth preset length, the fourth preset length can be smaller than the initial length.
- the preset time period can be set according to actual needs, which is not specifically limited in this embodiment.
- When this condition holds, the amount of data cached in the queue has remained small for a period of time.
- Reducing the queue length of the cache queue to the fourth preset length helps avoid wasting resources while meeting the current cache requirements of the cache queue. Especially in a multi-process scenario, this effectively saves resources and maximizes resource utilization.
- For example, the third preset length is greater than 0 and less than or equal to 0.5n, and the fourth preset length is greater than 0.5n and less than n.
- If the third preset length and the fourth preset length are denoted by T3 and T4 respectively, T3 and T4 satisfy: 0 < T3 ≤ 0.5n; 0.5n < T4 < n.
- Setting 0 < T3 ≤ 0.5n measures well the state in which little data is stored in the cache queue, and setting 0.5n < T4 < n ensures that the reduced cache queue can still meet its current cache requirements while avoiding waste of resources.
- For example, the third preset length is 0.5n and the fourth preset length is 0.6n. That is, when the length occupied by the currently cached features is detected k consecutive times within a preset period to be less than 0.5n, the queue length of the cache queue is reduced to 0.6n. Occupancy below 0.5n, less than half the original length, represents well a cache queue that has held little data for a long time; reducing the queue length to 0.6n at this point still provides at least 0.1n of storage space for the data to be cached next. By properly shortening the length of the cache queue at the right time, resources are used reasonably while avoiding repeated storage of the same feature caused by the mismatch between the production speed of the feature producer and the consumption speed of the feature consumer.
- The flow of shortening the length of the cache queue in the feature storage method is shown in FIG. 3, including:
- Step 301: Determine whether the length occupied by the currently cached features in the cache queue is less than 0.5n; if yes, execute step 302, otherwise execute step 305;
- Step 302: Accumulate the number of consecutive detections within the preset time period in which the length occupied by the currently cached features is less than 0.5n;
- Step 303: Judge whether the accumulated count is greater than k; if yes, execute step 304, otherwise the process ends;
- Step 304: Reduce the queue length of the cache queue to 0.6n;
- Step 305: Set the accumulated count to 0.
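The shrink check can be sketched as follows (a minimal Python sketch with the hypothetical name `ShrinkMonitor`; the counter is reset here whenever occupancy is not low, so that only *consecutive* low-occupancy detections are counted, consistent with the "k consecutive times" condition; 0.5n and 0.6n follow the example values above):

```python
class ShrinkMonitor:
    """Steps 301-305: shrink the queue to 0.6n after more than k consecutive
    periodic checks find occupancy below 0.5n."""

    def __init__(self, initial_len, k):
        self.n = initial_len
        self.k = k
        self.count = 0                   # consecutive low-occupancy detections

    def observe(self, current_len):
        """One periodic check; returns the (possibly reduced) queue length."""
        if current_len < 0.5 * self.n:           # step 301: occupancy is low
            self.count += 1                      # step 302: accumulate
            if self.count > self.k:              # step 303: more than k in a row
                return int(round(0.6 * self.n))  # step 304: shrink to 0.6n
        else:
            self.count = 0                       # step 305: reset the counter
        return self.n

mon = ShrinkMonitor(initial_len=100, k=3)
for _ in range(3):
    assert mon.observe(current_len=20) == 100  # not yet more than k lows in a row
assert mon.observe(current_len=20) == 60       # 4th consecutive low triggers shrink
```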
- The feature storage method may include only the process of increasing the length of the cache queue, such as the process in FIG. 2, or only the process of shortening it, such as the process in FIG. 3; it may also include both increasing and shortening the length of the cache queue.
- That is, the embodiment of the present application uses a dynamically adjustable cache queue whose length adapts as the production speed of the feature data changes, effectively balancing space complexity and storage efficiency.
- the schematic flow chart of the feature storage method can refer to Figure 4, including:
- Step 401: Initialize a cache queue q with a length of n;
- Step 402: Obtain a picture and extract its feature f (for example, a picture collected by a security camera); the feature f is the feature to be stored;
- Step 403: Determine the relationship between n and the cache length m occupied by the features in the current cache queue;
- Step 404: If m > 0.8n, modify the length of the cache queue to 1.2n;
- Step 405: If m < 0.5n is detected k consecutive times within a period of time, modify the length of the cache queue to 0.6n;
- Step 406: If m ≤ 0.8n, or if m < 0.5n is not detected k consecutive times within a period of time, do not change the cache queue length n;
- Step 407: Search the cache queue q for the feature f; if it exists, execute step 408, otherwise execute step 409;
- Step 408: Discard the feature f;
- Step 409: Search the data table for the feature f; if it exists, execute step 408, otherwise execute step 410;
- Step 410: Insert the feature f into the cache queue q;
- The above steps 401 to 410 are repeated;
- Step 411: If the cache queue q is not empty, continuously extract features from q and store them in the database.
- Although step 411 is written as the last step in FIG. 4, this does not mean that step 411 is executed last; step 411 can be executed as long as there are cached features in the cache queue q.
- Since retrieval from the cache queue is retrieval in memory, the efficiency of retrieving an identical feature is greatly improved.
- The length of the cache queue can change dynamically (it can be shortened or expanded), which fundamentally solves the mismatch between the production speed of the feature producer and the consumption speed of the feature consumer.
- the picture features of the same person are generally generated in one time period, which greatly increases the possibility of the picture features of the same person being hit in the cache queue.
- the retrieval efficiency is further improved.
- the two processes of retrieval and storage can be performed concurrently, which is beneficial to further improve storage efficiency.
- The step division of the above methods is only for clarity of description. In implementation, steps may be combined into one step, or a step may be split into multiple steps; as long as the same logical relationship is included, they are all within the protection scope of this patent. Adding insignificant modifications to, or introducing insignificant designs into, the algorithm or process without changing its core design is also within the protection scope of this patent.
- The embodiment of the present application also provides an electronic device, as shown in FIG. 5, including: at least one processor 501; and a memory 502 communicatively connected to the at least one processor 501. The memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501, so that the at least one processor 501 can execute the feature storage method in the foregoing embodiments.
- The memory 502 and the processor 501 are connected by a bus. The bus may include any number of interconnected buses and bridges, and connects one or more processors 501 and the various circuits of the memory 502 together.
- The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, all of which are well known in the art and therefore are not further described herein.
- The bus interface provides an interface between the bus and the transceiver.
- The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a means for communicating with various other devices over a transmission medium.
- Data processed by the processor 501 is transmitted over the wireless medium through the antenna; the antenna also receives data and forwards it to the processor 501.
- The processor 501 is responsible for managing the bus and general processing, and may also provide various control functions including timing, peripheral interfaces, voltage regulation, and power management. The memory 502 may be used to store data used by the processor 501 when performing operations.
- An embodiment of the present application also provides a computer-readable storage medium storing a computer program.
- The above method embodiments are implemented when the computer program is executed by a processor.
- The storage medium includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
- The aforementioned storage media include a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium that can store program code.
Abstract
Description
Related Application
This application claims priority to Chinese patent application No. 202111681412.2, filed on December 28, 2021, the entire contents of which are incorporated herein by reference.
The embodiments of the present application relate to the field of image recognition, and in particular to a feature storage method, an electronic device, and a computer-readable storage medium.
In recent years, with the country's strong support for artificial intelligence (AI), AI technology in the security field has developed rapidly; video image data collected by security cameras, after AI analysis and processing, has greatly promoted technology-empowered policing and other government and enterprise services. The number of police and civilian camera installations has grown dramatically in recent years, producing massive amounts of video data. However, in raw form this data is useless to its owner. Using an AI analysis system to intelligently analyze the video data can greatly reduce the consumption of human resources and improve the response speed to public security incidents.
Face features are a key focus of video surveillance systems, and storing the features of unfamiliar faces is an important requirement in the security field. An AI analysis system can generate pedestrian trajectories, locate target objects from those trajectories, and raise corresponding alarms, promptly notifying relevant personnel and greatly improving the efficiency of using video data for government affairs. In this process, feature storage is particularly important. However, the implementations currently used to prevent duplicate feature storage still suffer from low storage efficiency.
Summary of the Invention
The main purpose of the embodiments of the present application is to provide a feature storage method, an electronic device, and a computer-readable storage medium, so that the efficiency of feature storage can be improved while duplicate storage is avoided.
To at least achieve the above purpose, an embodiment of the present application provides a feature storage method, including: obtaining a feature to be stored; searching a pre-established cache queue for the feature to be stored, and discarding the feature to be stored if it is found in the cache queue; if the feature to be stored is not found in the cache queue, searching a pre-established data table for the feature to be stored; if the feature to be stored is not found in the data table, storing it in the cache queue; if the feature to be stored is found in the data table, discarding it; and, when cached features exist in the cache queue, storing the features currently cached in the cache queue into the database, so that the currently cached features are stored in the data table.
To at least achieve the above purpose, an embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the above feature storage method.
To at least achieve the above purpose, an embodiment of the present application further provides a computer-readable storage medium storing a computer program, and the above feature storage method is implemented when the computer program is executed by a processor.
The feature storage method provided by the embodiments of the present application obtains a feature to be stored, searches the pre-established cache queue for it, and discards it if it is found in the cache queue; if it is not found in the cache queue, the pre-established data table is searched; if it is not found in the data table, the feature is stored in the cache queue; the features currently cached in the cache queue are then stored into the database, so that they enter the data table. That is, the embodiments pre-establish a cache queue, solving the mismatch between the speed of producing data and the speed of consuming data. When a feature needs to be stored, the cache queue is searched first; only if the feature is not found there is the data table checked, avoiding a data-table lookup for every incoming feature. Because lookup in the cache queue is an in-memory lookup, it is much faster than searching the database directly, so retrieval efficiency is greatly improved. Moreover, by the pattern in which data appears, the same feature generally appears in bursts over a short period of time, so the probability of a hit when searching only the cache queue is very high, further improving retrieval efficiency and thus storage efficiency. In addition, using a cache queue also reduces the time spent locking and unlocking the database table for each feature, further improving the efficiency of feature storage.
Fig. 1 is a schematic flowchart of a feature storage method mentioned in an embodiment of the present application;
Fig. 2 is a flowchart of increasing the length of the cache queue mentioned in an embodiment of the present application;
Fig. 3 is a flowchart of shortening the length of the cache queue mentioned in an embodiment of the present application;
Fig. 4 is a schematic flowchart of another feature storage method mentioned in an embodiment of the present application;
Fig. 5 is a schematic structural diagram of the electronic device mentioned in an embodiment of the present application.
To make the purpose, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments are described in detail below with reference to the accompanying drawings. However, those of ordinary skill in the art will understand that many technical details are provided in each embodiment so that readers can better understand the application; the technical solutions claimed in this application can still be realized even without these technical details and with various changes and modifications based on the following embodiments. The division into the following embodiments is for convenience of description and should not limit the specific implementation of the present application; the embodiments can be combined with and refer to each other, provided there is no contradiction.
The inventors of the present application found that two main methods are currently used to store face features while avoiding duplicate storage; they are introduced below:
Method 1: configuring a storage time interval for a single face feature. The specific implementation is: after system deployment is completed, collect the storage times of face features over a period (for example, one week) as baseline data, and take the maximum of the baseline data plus 20% of that maximum as the unit storage time for a single face feature across the entire system. That is, only one feature is allowed to be stored within each unit time; even if the storage finishes early, the next feature cannot be stored until the interval elapses.
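The interval computation of Method 1 can be sketched as follows (a minimal sketch; the function name and the sample baseline values are hypothetical, while the max-plus-20% rule comes from the description above):

```python
def unit_storage_time(baseline_times):
    """Unit storage time for a single face feature: the maximum of the
    baseline storage times plus 20% of that maximum (Method 1)."""
    m = max(baseline_times)
    return m + 0.2 * m

# Hypothetical per-feature storage times (seconds) collected over a week.
baseline = [0.8, 1.1, 0.9, 1.4, 1.0]
print(round(unit_storage_time(baseline), 2))  # 1.68
```

A fixed interval derived this way is simple, but as the description notes it cannot adapt once the gallery grows and real storage times drift away from the baseline.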
Method 2: storing features by locking the table. The specific implementation is: a video source produces a face picture, and the algorithm model extracts features from it. Each thread that obtains face features judges whether it can perform the storage operation (i.e., whether the gallery is locked). If the storage operation can be performed (the gallery is not locked), the thread first locks the gallery and then stores the acquired face features; before this thread performs the unlock operation, other threads cannot perform any operation on the gallery other than reads. After storage completes, the thread unlocks the gallery, releasing the right to operate on it. Here, the gallery is a data table in the database.
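The lock-table approach of Method 2 can be sketched as follows (a minimal sketch using a Python lock; a set stands in for the gallery data table, and all names are assumptions for illustration only):

```python
import threading

gallery_lock = threading.Lock()   # guards the gallery (the data table)
gallery = set()                   # stands in for the face-feature gallery

def store_feature_locked(feature):
    """Method 2: lock the gallery, store the feature, then unlock.

    While one thread holds the lock, other threads cannot write to the
    gallery; these repeated lock/unlock cycles are the overhead that
    makes this method slow."""
    with gallery_lock:            # lock the gallery
        if feature not in gallery:
            gallery.add(feature)  # store the extracted face feature
        # leaving the block performs the unlock, releasing the gallery

threads = [threading.Thread(target=store_feature_locked, args=(f,))
           for f in ["face-A", "face-B", "face-A"]]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(gallery))  # ['face-A', 'face-B']
```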
However, Method 1 has the following problems: time utilization is low, and when the gallery data changes significantly the configured interval no longer applies, leading to excessive idle waiting or continued duplicate storage, so feature storage efficiency is low. Method 2 has the following problem: the gallery is operated on frequently, wasting time, so feature storage efficiency is low.
To at least solve the technical problem of low feature storage efficiency while avoiding duplicate storage, an embodiment of the present application provides a feature storage method applied to an electronic device, which may be a server. Application scenarios of the embodiments may include, but are not limited to, storing face (or human-shape) features collected from stream data, for example in suspect tracking, pedestrian-trajectory generation, and one-person-one-file scenarios. A schematic flowchart of the feature storage method may be seen in Fig. 1, including:
Step 101: obtain a feature to be stored;
Step 102: search the pre-established cache queue for the feature to be stored; if it is found, execute step 103; otherwise execute step 104;
Step 103: discard the feature to be stored;
Step 104: search the pre-established data table for the feature to be stored; if it is found, execute step 103; otherwise execute step 105;
Step 105: store the feature to be stored in the cache queue;
Step 106: judge whether the cache queue is empty; if it is, continue to execute step 106; otherwise execute step 107;
Step 107: store the features currently cached in the cache queue into the database, so that they are stored in the data table.
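Steps 101 to 107 can be sketched as follows (a minimal single-threaded Python illustration; the deque standing in for the cache queue and the set standing in for the database data table are assumptions for illustration only):

```python
from collections import deque

cache_queue = deque()   # pre-established cache queue (held in memory)
data_table = set()      # stands in for the data table in the database

def submit(feature):
    """Steps 101-105: dedup against the cache queue, then the data table."""
    if feature in cache_queue:   # step 102: in-memory lookup comes first
        return                   # step 103: discard the duplicate
    if feature in data_table:    # step 104: only now search the data table
        return                   # step 103: discard the duplicate
    cache_queue.append(feature)  # step 105: enqueue for later storage

def flush():
    """Steps 106-107: drain cached features into the data table (FIFO)."""
    while cache_queue:
        data_table.add(cache_queue.popleft())

for f in ["face-A", "face-B", "face-A"]:  # the second "face-A" is discarded
    submit(f)
flush()
print(sorted(data_table))  # ['face-A', 'face-B']
```

The in-memory membership test on the queue replaces a database round trip for every incoming feature, which is the source of the efficiency gain described above.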
In the embodiment of the present application, a cache queue is pre-established, solving the mismatch between the speed of producing data and the speed of consuming data. When a feature needs to be stored, the cache queue is searched first; only if the feature to be stored is not found in the cache queue is the data table checked, avoiding a data-table lookup for every incoming feature. Because lookup in the cache queue is an in-memory lookup, it is much faster than searching the database directly, so retrieval efficiency is greatly improved. Moreover, by the pattern in which data appears, the same feature generally appears in bursts over a short period of time, so the probability of a hit when searching only the cache queue is very high, further improving retrieval and storage efficiency. In addition, using a cache queue also reduces the time spent locking and unlocking the database table for each feature, further improving the efficiency of feature storage.
In step 101, the feature to be stored is a reference feature intended to be stored in the data table for subsequent feature matching; it may be a picture feature, a sound feature, etc. The feature to be stored is a unique representation of the collected data computed by an algorithm.
For example, in a face recognition scenario, the feature to be stored may be a face feature (i.e., a feature of a face picture), and the face features stored in the data table are used for subsequent face feature matching. In a fingerprint recognition scenario, the feature to be stored may be a fingerprint feature (i.e., a feature of a fingerprint picture), and the fingerprint features stored in the data table are used for subsequent fingerprint feature matching. In a voiceprint recognition scenario, the feature to be stored may be a voiceprint feature, and the voiceprint features stored in the data table are used for subsequent voiceprint feature matching. It can be understood that in different scenarios the features to be stored can be set according to actual needs, which is not specifically limited in this embodiment.
In one embodiment, the feature to be stored is extracted from a video stream or picture stream provided by a camera; that is, the server can obtain the video stream or picture stream captured by the camera and extract the picture features in it, where the camera may be a security camera. In a video stream or picture stream provided by a camera, the picture features of the same person are generally produced within one time period, which greatly increases the probability that a picture feature of the same person is hit in the cache queue. In other words, within such a period it is easy to quickly detect whether a feature to be stored is already in the cache queue, so the decision to discard it can be made quickly, further improving retrieval and storage efficiency.
In step 102, the pre-established cache queue is used to cache features to be stored; it acts as a transmission buffer channel between the data producer (producing features to be stored) and the data consumer (consuming features to be stored, i.e., performing feature storage). When the producer's production speed exceeds the consumer's consumption speed, feature production and feature consumption are difficult to synchronize; in this case, the features produced by the data producer (i.e., the obtained features to be stored) can first be cached in the cache queue, and the cached features are then gradually stored into the database, where the storage operation can be understood as transferring cached features from the cache queue to a data table in the database. Features may be enqueued and dequeued at the same time, which also improves the efficiency of feature storage to a certain extent.
The server can search the features currently cached in the cache queue for a feature identical to the feature to be stored. If such a feature exists in the cache queue, the feature to be stored has already been cached, and to avoid duplicate storage the server executes step 103, i.e., discards the feature to be stored. If no identical feature exists in the cache queue, the feature to be stored has not yet been cached, and step 104 can be executed to continue searching the data table.
In step 104, the data table is a data table in the database into which features to be stored may be inserted; for example, the database may include a gallery, a sound library, etc., where the gallery and the sound library may be two data tables stored in the database.
The server can search the features already stored in the data table for a feature identical to the feature to be stored. If such a feature exists in the data table, the feature has already been stored, and to avoid duplicate storage the server executes step 103, i.e., discards the feature to be stored. If no identical feature exists in the data table, step 105 can be executed to store the feature to be stored in the cache queue.
In step 106, the server judges whether the cache queue is empty, i.e., whether there are cached features in the cache queue. When the cache queue is empty, step 106 continues to be executed; when the cache queue is not empty, i.e., cached features exist, step 107 is entered.
In step 107, the server performs the storage operation on the features currently cached in the cache queue, so that they are stored in the data table. The storage operation can also be understood as a dequeue operation: when the cache queue is not empty, the cached features are dequeued in first-in-first-out order and stored in the data table in turn. In other words, as long as the cache queue is not empty, features can be continuously taken from it and stored. Since the data table is stored in the database, storing into the data table can be understood as storing into the database.
In this embodiment, when the service starts, a cache queue is established in memory. When a feature needs to be stored, the cache queue is searched first; since searching the cache queue is an in-memory search, it is much faster than searching the database directly, which greatly improves the retrieval efficiency of features. If the same feature is found, the storage process terminates; otherwise the data table is searched, and if the same feature is found there the process also terminates; otherwise the feature is stored in the data table, thereby preventing identical features from being stored in the database. Compared with the lock-table method, using a cache queue avoids spending time locking and unlocking the table, greatly improving the efficiency of feature storage. Moreover, by the pattern in which data appears, the same feature generally appears in bursts over a short period of time, so the probability of a hit when searching only the cache queue is very high, further improving retrieval and storage efficiency.
In one embodiment, the retrieval process (i.e., searching the cache queue and the data table for the feature to be stored) and the storage process (i.e., moving cached features from the cache queue into the data table) are two parallel processes implemented by two independent threads; that is, steps 106-107 and steps 101-105 run in parallel, which can greatly improve feature storage efficiency.
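The two parallel processes can be sketched with two Python threads (a minimal sketch; the busy-waiting consumer loop, the shared lock, and all names are illustrative assumptions rather than the application's actual implementation):

```python
import threading
from collections import deque

lock = threading.Lock()      # protects the shared cache queue and table
cache_queue = deque()
data_table = set()
producer_done = threading.Event()

def retrieval_thread(features):
    """Steps 101-105: dedup each incoming feature and enqueue it."""
    for f in features:
        with lock:
            if f in cache_queue or f in data_table:
                continue            # duplicate: discard it
            cache_queue.append(f)   # new feature: cache it
    producer_done.set()

def storage_thread():
    """Steps 106-107, running in parallel: dequeue and store (FIFO).

    This sketch busy-waits on an empty queue; a real implementation
    would block instead."""
    while not (producer_done.is_set() and not cache_queue):
        with lock:
            if cache_queue:
                data_table.add(cache_queue.popleft())

producer = threading.Thread(target=retrieval_thread,
                            args=(["face-A", "face-B", "face-A"],))
consumer = threading.Thread(target=storage_thread)
producer.start()
consumer.start()
producer.join()
consumer.join()
print(sorted(data_table))  # ['face-A', 'face-B']
```

Because both the queue membership test and the table insertion happen under the same lock, a duplicate feature is discarded no matter which of the two structures currently holds its first copy.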
In one embodiment, the cache queue has a preset initial length, and the feature storage method further includes: when the cache length occupied by the features currently cached in the cache queue is greater than a first preset length, increasing the queue length of the cache queue to a second preset length, where the first preset length is smaller than the initial length. The first and second preset lengths can be set according to actual needs; it can be understood that since the queue length is increased to the second preset length, the second preset length is greater than the initial length. When the cache length occupied by the currently cached features is greater than the first preset length, and since the first preset length is smaller than the initial length, the cache queue is about to be full but is not yet full: features are being stored into the cache queue (the feature producer's production speed) faster than they are being stored into the database (the feature consumer's consumption speed), so a large amount of data in the cache queue has not yet been stored. Increasing the queue length to the second preset length, i.e., expanding the cache queue, adapts it to the scenario where the producer's speed exceeds the consumer's speed, avoiding feature loss caused by the mismatch between the two speeds.
In one embodiment, the first preset length is greater than 0.5n and less than n, and the second preset length is greater than n and less than or equal to 1.5n, where n is the initial length. Denoting the first and second preset lengths by T1 and T2 respectively, they satisfy: 0.5n &lt; T1 &lt; n and n &lt; T2 ≤ 1.5n. The condition 0.5n &lt; T1 &lt; n well characterizes the state in which the cache queue is about to be full but not yet full, while n &lt; T2 ≤ 1.5n ensures the expanded cache queue can hold more features to be stored, adapting well to the scenario where the producer's speed exceeds the consumer's; meanwhile T2 ≤ 1.5n also avoids expanding the queue too much and wasting memory resources.
In one embodiment, the first preset length is 0.8n and the second preset length is 1.2n. That is, when the cache length occupied by the currently cached features is greater than 0.8n, the queue length is increased to 1.2n. The value 0.8n well represents the about-to-be-full state, and increasing the queue length to 1.2n at that point is equivalent to providing 0.4n of storage space for the data about to enter the cache queue. Expanding the queue length appropriately at the right moment helps use resources reasonably while avoiding the repeated storage of the same feature caused by the mismatch between the producer's and consumer's speeds.
In one embodiment, the flow of increasing the length of the cache queue involved in the feature storage method may be seen in Fig. 2, including:
Step 201: judge whether the cache length occupied by the currently cached features in the cache queue is greater than 0.8n; if yes, execute step 202; otherwise the flow ends;
Step 202: increase the queue length of the cache queue to 1.2n.
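The expansion check in steps 201-202 can be sketched as follows (the function and parameter names are hypothetical; the 0.8n and 1.2n thresholds come from the description above):

```python
def maybe_expand(used, capacity, initial_n):
    """Step 201: if more than 0.8n slots are occupied, the queue is
    about to fill up; step 202: grow its length to 1.2n."""
    if used > 0.8 * initial_n:
        return max(capacity, int(round(1.2 * initial_n)))
    return capacity

n = 100                                                # initial length n
print(maybe_expand(used=81, capacity=n, initial_n=n))  # 120
print(maybe_expand(used=50, capacity=n, initial_n=n))  # 100
```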
In one embodiment, the feature storage method further includes: when it is detected k consecutive times within a preset time period that the length occupied by the currently cached features in the cache queue is less than a third preset length, reducing the queue length of the cache queue to a fourth preset length, where the third preset length is smaller than the first preset length and k is a natural number greater than 1. The third and fourth preset lengths can be set according to actual needs; since the queue length is reduced to the fourth preset length, the fourth preset length may be smaller than the initial length. The preset time period can also be set according to actual needs and is not specifically limited in this embodiment. If the occupied length is detected to be less than the third preset length k consecutive times within the preset time period, and since the third preset length is smaller than the first preset length, the amount of data cached in the queue has remained small for a while; reducing the queue length to the fourth preset length at this point helps meet the current caching needs of the queue while avoiding waste of resources. Especially in multi-process scenarios, this effectively saves resources and promotes maximum resource utilization.
In one embodiment, the third preset length is greater than 0 and less than or equal to 0.5n, and the fourth preset length is greater than 0.5n and less than n. Denoting them by T3 and T4 respectively, they satisfy: 0 &lt; T3 ≤ 0.5n and 0.5n &lt; T4 &lt; n. A third preset length greater than 0 and at most 0.5n well characterizes the state in which the cache queue stores little data, and a fourth preset length greater than 0.5n and less than n ensures the shortened queue can still meet its current caching needs while avoiding waste of resources.
In one embodiment, the third preset length is 0.5n and the fourth preset length is 0.6n. That is, when the length occupied by the currently cached features is detected to be less than 0.5n k consecutive times within the preset time period, the queue length is reduced to 0.6n. A length below 0.5n, i.e., less than half the initial length, well represents a state in which the amount cached has remained small for a long time; reducing the queue length to 0.6n at that point still provides at least 0.1n of storage space for the data about to enter the cache queue. Shortening the queue length appropriately at the right moment helps use resources reasonably while avoiding the repeated storage of the same feature caused by the mismatch between the producer's and consumer's speeds.
在一个实施例中,特征入库方法中涉及的缓存队列的长度缩短的流程图可以参考图3,包括:In one embodiment, reference may be made to FIG. 3 for a flow chart of shortening the length of the cache queue involved in the feature storage method, which includes:
步骤301:判断缓存队列中当前缓存的特征所占用的长度是否小于0.5n;如果是,则执行步骤302,否则该流程结束;Step 301: Determine whether the length occupied by the currently cached features in the cache queue is less than 0.5n; if yes, execute step 302; otherwise, the process ends;
步骤302:累加在预设时间段内连续检测到的缓存队列中当前缓存的特征所占用的长度小于0.5n的次数;Step 302: Accumulate the number of consecutive times, within the preset time period, that the length occupied by the currently cached features in the cache queue is detected to be less than 0.5n;
步骤303:判断累加的次数是否大于k;如果是,则执行步骤304,否则执行步骤305;Step 303: Determine whether the accumulated count is greater than k; if yes, execute step 304; otherwise, execute step 305;
步骤304:将缓存队列的队列长度减少至0.6n;Step 304: Reduce the queue length of the cache queue to 0.6n;
步骤305:将累加的次数设置为0。Step 305: Reset the accumulated count to 0.
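The shrink flow of steps 301 to 305 above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the class and member names are assumptions, and the counter is reset once occupancy is no longer below 0.5n, so that only consecutive low-occupancy detections accumulate, matching the "k consecutive detections" condition in the text.

```python
from collections import deque

class AdaptiveQueue:
    """Illustrative sketch of the shrink flow (steps 301-305)."""

    def __init__(self, n, k):
        self.n = n              # initial queue length n
        self.capacity = n       # current queue length
        self.k = k              # consecutive-low-occupancy threshold
        self.low_count = 0      # accumulator of step 302
        self.items = deque()    # currently cached features

    def check_shrink(self):
        # Step 301: is the occupied length below 0.5n?
        if len(self.items) < 0.5 * self.n:
            self.low_count += 1                    # step 302: accumulate
            if self.low_count > self.k:            # step 303: compare with k
                self.capacity = int(0.6 * self.n)  # step 304: shrink to 0.6n
                self.low_count = 0
        else:
            # Occupancy recovered, so the low detections are no longer
            # consecutive: step 305, reset the accumulated count to 0.
            self.low_count = 0
```

For example, with n=10 and k=3, four consecutive low-occupancy checks trigger the shrink to capacity 6, while a single check followed by high occupancy resets the counter instead.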
在一个实施例中,在类似图1所示的特征入库的流程图中,可以还包含缓存队列的长度增加流程,比如类似图2中的流程,也可以还包含缓存队列的长度缩短流程,比如类似图3中的流程,还可以同时包含缓存队列的长度增加和缩短流程。即本申请实施例中使用了可动态调整的缓存队列,队列长度可以自适应地随特征数据生产速度的改变而发生改变,这就有效的平衡了空间复杂度和入库效率。In one embodiment, the feature-storage flow similar to that shown in FIG. 1 may further include a flow for increasing the length of the cache queue (e.g., similar to the flow in FIG. 2), may further include a flow for shortening the length of the cache queue (e.g., similar to the flow in FIG. 3), or may include both the increasing and the shortening flows at the same time. That is, the embodiments of the present application use a dynamically adjustable cache queue whose length adapts to changes in the production speed of feature data, which effectively balances space complexity and storage efficiency.
在一个实施例中,特征入库方法的流程示意图可以参考图4,包括:In one embodiment, reference may be made to FIG. 4 for a schematic flow chart of the feature storage method, which includes:
步骤401:初始化长度为n的缓存队列q;Step 401: Initialize a cache queue q with a length of n;
步骤402:获取图片并提取图片的特征f;比如,获取安防摄像机采集的图片,特征f即为待入库的特征;Step 402: Obtain a picture and extract its feature f; for example, a picture collected by a security camera is obtained, and the feature f is the feature to be stored;
步骤403:确定当前缓存队列内的特征所占的缓存长度m与n的关系;Step 403: Determine the relationship between n and the cache length m occupied by the features currently in the cache queue;
步骤404:若m>0.8n,则将缓存队列的长度修改为1.2n;Step 404: If m>0.8n, modify the length of the cache queue to 1.2n;
步骤405:若在一段时间内连续k次检测到m<0.5n,则将缓存队列的长度修改为0.6n;Step 405: If m<0.5n is detected for k consecutive times within a period of time, modify the length of the cache queue to 0.6n;
步骤406:若m≤0.8n或者未在一段时间内连续k次检测到m<0.5n,则不改变缓存队列长度n;Step 406: If m≤0.8n, or if m<0.5n has not been detected for k consecutive times within a period of time, do not change the cache queue length n;
步骤407:在缓存队列q中检索特征f是否存在,若存在则执行步骤408,否则执行步骤409;Step 407: Retrieve whether the feature f exists in the cache queue q; if it exists, execute step 408; otherwise, execute step 409;
步骤408:舍弃该特征f;Step 408: Discard the feature f;
步骤409:在数据表中检索特征f是否存在,若存在则执行步骤408,否则执行步骤410;Step 409: Retrieve whether the feature f exists in the data table; if it exists, execute step 408; otherwise, execute step 410;
步骤410:将特征f插入缓存队列q;Step 410: Insert feature f into cache queue q;
重复执行上述步骤401到步骤410;Repeat the above steps 401 to 410;
步骤411:若缓存队列q不空,则不断从q中取特征入库。Step 411: If the cache queue q is not empty, continuously take features from q and store them into the database.
需要说明的是,虽然步骤411写在图4中的最后一个步骤,但并不代表411是最后一个执行的步骤,即只要缓存队列q中存在缓存的特征,就可以执行步骤411。It should be noted that although step 411 is written as the last step in FIG. 4, this does not mean that step 411 is the last step to be executed; that is, step 411 can be executed as long as there are cached features in the cache queue q.
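The flow of steps 401 to 411 can be sketched as follows. This is a minimal single-threaded sketch under assumptions: the names `store_features` and `db_table` are illustrative and not from the source, `db_table` is a set standing in for the database data table, and the concurrent consumer of step 411 and the dynamic length adjustment of steps 404-406 are simplified away.

```python
from collections import deque

def store_features(features, db_table, n=2):
    """Illustrative sketch: dedup incoming features against an in-memory
    cache queue and the data table before storing them (steps 401-411)."""
    q = deque()                         # step 401: initialize cache queue q
    for f in features:                  # step 402: each extracted feature f
        if f in q:                      # step 407: in-memory retrieval in q
            continue                    # step 408: discard the duplicate
        if f in db_table:               # step 409: retrieval in the data table
            continue                    # step 408: discard the duplicate
        q.append(f)                     # step 410: insert f into q
        if len(q) >= n:                 # step 411, drained synchronously here
            db_table.add(q.popleft())
    while q:                            # step 411: flush remaining features
        db_table.add(q.popleft())
    return db_table
```

Feeding the stream `["a", "b", "a", "c", "b"]` through this sketch stores each feature exactly once, since the repeats are caught either in the queue or in the table.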
本实施例中,由于缓存队列的检索是内存中的检索,所以会大大提高相同特征的检索效率。而且可以动态改变缓存队列的长度(既可以缩短也可以扩展),从根本上解决了特征生产者的生产速度与特征消费者的消费速度不一致即不匹配的问题。而且对于安防摄像机提供的视频流或图片流来说,同一个人的图片特征一般会集中到一个时间段内产生的,这就使得同一个人的图片特征在缓存队列中被命中的可能性大大提高,从而进一步提高了检索效率。另外,本实施例中检索和入库两个过程可以并发进行,有利于进一步提高入库效率。In this embodiment, since retrieval in the cache queue is an in-memory retrieval, the retrieval efficiency for identical features is greatly improved. Moreover, the length of the cache queue can be changed dynamically (it can be either shortened or extended), which fundamentally solves the mismatch between the production speed of the feature producer and the consumption speed of the feature consumer. Furthermore, for the video or picture stream provided by a security camera, the picture features of the same person are generally produced within one time period, which greatly increases the probability that the features of the same person are hit in the cache queue, further improving retrieval efficiency. In addition, in this embodiment the retrieval and storage processes can run concurrently, which helps further improve storage efficiency.
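The concurrent retrieval and storage mentioned above can be sketched with a producer thread and a consumer thread sharing the queue under a lock. This is an illustrative sketch only, not the patented implementation; all class, method, and field names are assumptions.

```python
import threading
from collections import deque

class FeatureStore:
    """Illustrative sketch: producer deduplicates and enqueues features
    while a consumer thread drains the queue into the data table."""

    def __init__(self):
        self.q = deque()              # in-memory cache queue
        self.table = set()            # stands in for the database data table
        self.lock = threading.Lock()  # guards q and table
        self.done = False

    def produce(self, features):
        for f in features:
            with self.lock:
                # Duplicate in queue or table: discard (steps 407-409)
                if f in self.q or f in self.table:
                    continue
                self.q.append(f)      # step 410: enqueue
        self.done = True

    def consume(self):
        while True:
            with self.lock:
                if self.q:
                    # Step 411: take a feature from q and store it
                    self.table.add(self.q.popleft())
                elif self.done:
                    return
```

Because the membership checks and the dequeue both happen under the same lock, a feature is stored at most once even while the two threads run concurrently.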
需要说明的是,本申请实施例中的上述各示例均为方便理解进行的举例说明,并不对本发明的技术方案构成限定。It should be noted that the above examples in the embodiments of the present application are all illustrations given for ease of understanding, and do not limit the technical solution of the present invention.
上面各种方法的步骤划分,只是为了描述清楚,实现时可以合并为一个步骤或者对某些步骤进行拆分,分解为多个步骤,只要包括相同的逻辑关系,都在本专利的保护范围内;对算法中或者流程中添加无关紧要的修改或者引入无关紧要的设计,但不改变其算法和流程的核心设计都在该专利的保护范围内。The division of the steps of the above methods is only for clarity of description. In implementation, steps may be combined into one step, or a step may be split into multiple steps; as long as the same logical relationship is included, they all fall within the protection scope of this patent. Adding insignificant modifications to, or introducing insignificant designs into, an algorithm or flow without changing its core design also falls within the protection scope of this patent.
本申请实施例还提供了一种电子设备,如图5所示,包括:至少一个处理器501;以及,与至少一个处理器501通信连接的存储器502;其中,存储器502存储有可被至少一个处理器501执行的指令,指令被至少一个处理器501执行,以使至少一个处理器501能够执行上述实施例中的特征入库方法。An embodiment of the present application further provides an electronic device, as shown in FIG. 5, including: at least one processor 501; and a memory 502 communicatively connected to the at least one processor 501, where the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501 to enable the at least one processor 501 to perform the feature storage method in the above embodiments.
其中,存储器502和处理器501采用总线方式连接,总线可以包括任意数量的互联的总线和桥,总线将一个或多个处理器501和存储器502的各种电路连接在一起。总线还可以将诸如外围设备、稳压器和功率管理电路等之类的各种其他电路连接在一起,这些都是本领域所公知的,因此,本文不再对其进行进一步描述。总线接口在总线和收发机之间提供接口。收发机可以是一个元件,也可以是多个元件,比如多个接收器和发送器,提供用于在传输介质上与各种其他装置通信的单元。经处理器501处理的数据通过天线在无线介质上进行传输,进一步,天线还接收数据并将数据传送给处理器501。The memory 502 and the processor 501 are connected by a bus. The bus may include any number of interconnected buses and bridges, and connects various circuits of one or more processors 501 and the memory 502. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not further described herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be one element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other apparatuses over a transmission medium. Data processed by the processor 501 is transmitted over a wireless medium through an antenna; further, the antenna also receives data and transfers the data to the processor 501.
处理器501负责管理总线和通常的处理,还可以提供各种功能,包括定时,外围接口,电压调节、电源管理以及其他控制功能。而存储器502可以被用于存储处理器501在执行操作时所使用的数据。The processor 501 is responsible for managing the bus and general processing, and may also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 502 may be used to store data used by the processor 501 when performing operations.
本申请实施例还提供了一种计算机可读存储介质,存储有计算机程序。计算机程序被处理器执行时实现上述方法实施例。An embodiment of the present application further provides a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the above method embodiments.
即,本领域技术人员可以理解,实现上述实施例方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。That is, those skilled in the art can understand that all or part of the steps in the methods of the above embodiments can be completed by a program instructing related hardware; the program is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
本领域的普通技术人员可以理解,上述各实施方式是实现本发明的具体实施例,而在实际应用中,可以在形式上和细节上对其作各种改变,而不偏离本发明的精神和范围。Those of ordinary skill in the art can understand that the above embodiments are specific examples for implementing the present invention, and in practical applications, various changes in form and detail can be made to them without departing from the spirit and scope of the present invention.
Claims (10)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202111681412.2A (CN116401272A) | 2021-12-28 | 2021-12-28 | Feature warehousing method, electronic equipment and computer readable storage medium
CN202111681412.2 | 2021-12-28 | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023124841A1 true WO2023124841A1 (en) | 2023-07-06 |
Family
ID=86997656
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/137033 WO2023124841A1 (en) | 2021-12-28 | 2022-12-06 | Method for storing feature in database, electronic device and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116401272A (en) |
WO (1) | WO2023124841A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102088395A (en) * | 2009-12-02 | 2011-06-08 | 杭州华三通信技术有限公司 | Method and device for adjusting media data cache |
US20120265743A1 (en) * | 2011-04-13 | 2012-10-18 | International Business Machines Corporation | Persisting of a low latency in-memory database |
CN106156278A (en) * | 2016-06-24 | 2016-11-23 | 努比亚技术有限公司 | A kind of database data reading/writing method and device |
CN109144992A (en) * | 2017-06-15 | 2019-01-04 | 北京京东尚科信息技术有限公司 | A kind of method and apparatus of data storage |
- 2021-12-28: CN application CN202111681412.2A filed; published as CN116401272A (status: active, Pending)
- 2022-12-06: PCT application PCT/CN2022/137033 filed; published as WO2023124841A1 (status: active, Application Filing)
Also Published As
Publication number | Publication date |
---|---|
CN116401272A (en) | 2023-07-07 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22914096; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22914096; Country of ref document: EP; Kind code of ref document: A1