
CN110647825A - Method, device and equipment for determining unmanned supermarket articles and storage medium - Google Patents


Info

Publication number
CN110647825A
CN110647825A
Authority
CN
China
Prior art keywords
article, user, item, shopping, identity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910836187.1A
Other languages
Chinese (zh)
Inventor
陈志明
叶灵
文介华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Weaving Point Intelligent Technology Co Ltd
Original Assignee
Guangzhou Weaving Point Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Weaving Point Intelligent Technology Co Ltd filed Critical Guangzhou Weaving Point Intelligent Technology Co Ltd
Priority to CN201910836187.1A
Publication of CN110647825A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/04 Payment circuits
    • G06Q20/06 Private payment circuits, e.g. involving electronic currency used among participants of a common payment scheme
    • G06Q20/065 Private payment circuits, e.g. involving electronic currency used among participants of a common payment scheme using e-cash
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0633 Lists, e.g. purchase orders, compilation or processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07G REGISTERING THE RECEIPT OF CASH, VALUABLES, OR TOKENS
    • G07G1/00 Cash registers
    • G07G1/12 Cash registers electronically operated
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861 Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Finance (AREA)
  • General Business, Economics & Management (AREA)
  • Signal Processing (AREA)
  • Strategic Management (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Marketing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the present application disclose a method, a device, equipment and a storage medium for determining unmanned supermarket articles, relating to the technical field of intelligent control. The method comprises the following steps: shooting video data containing a shopping picture of a user through an image acquisition device, wherein the image acquisition device is arranged in a shopping area; if an article-taking action is detected in the video data, acquiring the identity of the user, the item identification of the taken item and the item quantity; and adding the item identification and the item quantity into a shopping list corresponding to the identity identification. With this method, the articles taken by a user can be determined simply, quickly and accurately.

Description

Method, device and equipment for determining unmanned supermarket articles and storage medium
Technical Field
The embodiment of the application relates to the technical field of intelligent control, in particular to a method, a device, equipment and a storage medium for determining unmanned supermarket articles.
Background
With the development of intelligent technology, unmanned supermarkets are gradually playing an important role in people's lives. Compared with a traditional supermarket, an unmanned supermarket needs no supermarket staff such as shopping guides, security guards, cashiers or shelf attendants, which saves labor costs for merchants.
Accurately identifying whether a user has taken articles is a crucial link in the normal operation of an unmanned supermarket. In the prior art, when a user leaves an unmanned supermarket, the user confirms the taken articles by self-service scanning. However, when there are too many users, the wait for self-service scanning becomes too long. Moreover, self-service scanning depends on the user's ability to operate the equipment, which disadvantages groups with limited operating ability, such as children or the elderly.
To reduce the unmanned supermarket's dependence on self-service scanning, the cameras of the security system can be used to track users through the pictures they capture, and the articles a user selects are confirmed during tracking. Tracking a user requires collecting the user's appearance characteristics, such as body shape, clothing style and/or clothing color. If it is detected that goods on a shelf have been taken, the user who took them can then be tracked through the cameras. However, when there are many users in front of a shelf, the user who took the item cannot be determined accurately and quickly by tracking, and the shopping lists become disordered.
In conclusion, how to determine the articles taken by a user simply, quickly and accurately has become a technical problem that urgently needs to be solved.
Disclosure of Invention
The application provides a method, a device, equipment and a storage medium for determining articles in an unmanned supermarket, so that the articles taken by a user can be determined simply, quickly and accurately.
In a first aspect, an embodiment of the present application provides a method for determining an unmanned supermarket item, including:
shooting video data containing a shopping picture of a user through an image acquisition device, wherein the image acquisition device is arranged in a shopping area;
if an article-taking action is detected in the video data, acquiring the identity of the user, the item identification of the taken item and the item quantity;
and adding the item identification and the item quantity into a shopping list corresponding to the identity identification.
Further, the detecting of the article pickup action in the video data includes:
identifying a pixel area of a user hand in the video data;
and determining an article taking action according to the pixel area.
Further, the acquiring the identity of the user, the article identifier of the article to be taken, and the article quantity includes:
identifying first facial feature data of a user in the video data;
determining the identity of the user according to the first facial feature data;
determining the article identification of the article to be taken according to the article taking action;
and acquiring weight change data acquired by a corresponding sensor according to the article identification, and acquiring the number of articles according to the weight change data.
Further, the determining the identity of the user according to the first facial feature data includes:
matching the first facial feature data with each second facial feature data stored in a facial feature data set to obtain second facial feature data most similar to the first facial feature data;
and acquiring an identity corresponding to the most similar second face feature data, and taking the acquired identity as the identity of the user.
Further, the determining the item identifier of the taken item according to the item taking action includes:
acquiring a video picture containing an article taking action in the video data;
analyzing the video picture to confirm the image of the taken article in the video picture;
and determining the article identification of the taken article according to the image of the taken article.
Further, the acquiring weight change data collected by a corresponding sensor according to the article identifier and obtaining the article quantity according to the weight change data includes:
determining an item placing area of the taken item on the shelf according to the item identification;
acquiring weight change data acquired by a sensor arranged in the article placement area;
acquiring the weight of the single article corresponding to the article identifier;
and obtaining article data according to the weight change data and the weight of the single article.
Further, after the adding the item identifier and the item quantity to the shopping list corresponding to the identity identifier, the method further includes:
determining shopping expenses according to the shopping list when the user is detected to leave the shopping area;
and sending the shopping expense to a consumption account of the user so that the consumption account carries out fee deduction according to the shopping expense.
In a second aspect, an embodiment of the present application further provides an unmanned supermarket item determination device, including:
the image acquisition module is used for shooting video data containing a shopping picture of a user through an image acquisition device, and the image acquisition device is arranged in a shopping area;
the data identification module is used for acquiring the identity of the user, the article identification of the article to be taken and the article quantity if the article taking action is detected in the video data;
and the list adding module is used for adding the item identification and the item quantity into the shopping list corresponding to the identity identification.
In a third aspect, an embodiment of the present application further provides an unmanned supermarket item determination device, including:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the unmanned supermarket item determination method of the first aspect.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the unmanned supermarket item determination method according to the first aspect.
According to the method, device, equipment and storage medium for determining unmanned supermarket articles provided by the application, video data containing a shopping picture of a user is shot through an image acquisition device in the shopping area; when an article-taking action is detected in the video data, the identity of the user, the item identification of the taken item and the item quantity are obtained; and the item identification and the item quantity are then added to the shopping list corresponding to the identity identification. By capturing the user's article-taking action through the image acquisition device, it can be accurately judged whether the user has taken an article, the identity of the user who took it can be accurately determined, and the item identification and quantity of the taken item can be accurately determined, so the user's shopping list is determined quickly and accurately. The process requires no additional operation from the user, which improves the user's shopping experience.
Drawings
Fig. 1 is a flowchart of an unmanned supermarket item determination method according to an embodiment of the present application;
fig. 2 is a flowchart of an unmanned supermarket item determination method provided in the second embodiment of the present application;
fig. 3 is a schematic structural diagram of an unmanned supermarket item determination device provided in the third embodiment of the present application;
fig. 4 is a schematic structural diagram of an unmanned supermarket item determination device provided in the fourth embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are for purposes of illustration and not limitation. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action or object from another entity or action or object without necessarily requiring or implying any actual such relationship or order between such entities or actions or objects. For example, "first" and "second" of the first facial feature data and the second facial feature data are used to distinguish two different facial feature data.
Example one
Fig. 1 is a flowchart of an unmanned supermarket item determination method according to an embodiment of the present application. The method for determining the goods in the unmanned supermarket is applied to the scene of the unmanned supermarket, wherein the unmanned supermarket comprises at least one goods shelf, and goods for sale are placed on the goods shelf. Further, the unmanned supermarket item determination method can be executed by an unmanned supermarket item determination device, and the unmanned supermarket item determination device can be realized in a software and/or hardware mode and integrated in the unmanned supermarket item determination device. The unmanned supermarket article determination device can be an intelligent device with data processing and analyzing capabilities, such as a computer, a mobile phone and a tablet personal computer.
Each unmanned supermarket is provided with corresponding unmanned supermarket article determining equipment, wherein one unmanned supermarket article determining equipment corresponds to one unmanned supermarket, or a plurality of unmanned supermarkets share one unmanned supermarket article determining equipment. Further, the unmanned supermarket item determination device can be in data communication with other devices configured in the unmanned supermarket. The specific types and installation positions of other devices can be set according to actual conditions, for example, the other devices can be cameras, alarms, various sensors (such as gravity sensors, smoke sensors and the like), access switches and the like.
Specifically, referring to fig. 1, the method for determining an unmanned supermarket item provided in this embodiment includes:
and step 110, shooting video data containing a shopping picture of the user through an image acquisition device, wherein the image acquisition device is arranged in the shopping area.
By way of example, a shopping area may be understood as the interior area of an unmanned supermarket in which a user may select goods, where the goods are items placed on shelves. Further, an image acquisition device is installed in the shopping area; the image acquisition device may comprise at least one camera. The installation position of the image acquisition device can be set according to actual conditions: for example, an image acquisition device can be installed on each shelf in the shopping area, and/or above each shelf at a set distance, to capture video of the environment around the shelf. Typically, the image acquisition device is in data communication with the unmanned supermarket item determination device; for example, the image acquisition device sends captured video data to the determination device, or the determination device controls the capture range of the image acquisition device. In this embodiment, the image acquisition device is configured to acquire video data in real time and send it to the unmanned supermarket item determination device.
Further, the image acquisition device shoots video data in real time, and the unmanned supermarket item determination device analyzes that video data as it arrives. If a person is recognized among the objects present in the video data, it is determined that a user is in the shopping area; that is, video data containing a shopping picture of the user has been acquired.
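For illustration only, a minimal sketch of such a capture loop, assuming an OpenCV-readable camera; `contains_person` is a hypothetical stand-in for whatever person-detection model a deployment actually uses (the patent does not prescribe one):

```python
# Illustrative sketch, not the patented implementation.
import cv2  # assumes the opencv-python package is available

def contains_person(frame) -> bool:
    """Placeholder for a deployment-specific person detector."""
    return False  # a real system would run a detection model here

def watch_shopping_area(camera_index: int = 0):
    """Yield frames that contain a shopper, read from the camera in real time."""
    capture = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break  # stream ended or camera disconnected
            if contains_person(frame):
                yield frame  # video data containing a user's shopping picture
    finally:
        capture.release()
```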
And step 120, if the article taking action is detected in the video data, acquiring the identity of the user, the article identification of the article to be taken and the article quantity.
Illustratively, the article-taking action is the action of a user taking an article from a shelf. Specifically, human behavior recognition is performed on the video data containing the user's shopping picture to determine whether an article-taking action is detected. This embodiment does not limit the method of human behavior recognition. For example, human key points are identified in the video data, the key points here being the user's hands. The motion path of the key points is then identified in the video data; if it matches the preset motion path of an article-taking action, the video picture around the key points is analyzed for an article, and if an article is present, the article-taking action is considered detected. As another example, an article-taking recognition model is trained in advance by machine learning; the video data containing the user's shopping picture is fed to the model, and whether an article-taking action occurred is determined from the model's output. In yet another example, a pixel area of the user's hand is identified in the video data, and whether an article-taking action is detected is determined based on that pixel area.
Further, after the article-taking action is recognized, the identity of the user taking the article is confirmed. The identity identification may be understood as a user ID, which is unique. This embodiment does not limit the rule for creating the identity identification. Typically, the way the identification is determined can be set according to actual conditions; for example, the identity of the user is determined by means of facial feature data. In that case, the method for determining the user identity may be: obtain facial feature data of the user from the video data and match it against pre-stored facial feature data, where each piece of pre-stored facial feature data has a corresponding identity identification. If a match is found among the pre-stored facial feature data, the identity identification corresponding to the matched facial features is taken as the identity identification of the user.
Further, in addition to the identity identification, the items taken by the user and the number of items taken must be determined. Items are distinguished by item identifications, which may be understood as item IDs and are unique. This embodiment does not limit the rule for generating item identifications. Specifically, the way the item identification is determined can be set according to actual conditions. For example, an image of the taken item appears in the video picture containing the article-taking action; the features of the taken item (such as shape, color and label style) are extracted from the image, and the item identification is determined from those features. As another example, the shelf corresponding to the image acquisition device that detected the article-taking action is determined, and the item identification is then determined from the weight change data collected by the gravity sensors on that shelf. Here, the placement area for each type of item is provided with a group of gravity sensors, each group including at least one gravity sensor. When the user takes an item, the weight data collected by the corresponding sensors changes, so the placement area of the taken item can be determined, and the item identification is then determined from that placement area.
The number of items taken may be determined by a gravity sensor on the shelf. After the article identification is determined, weight change data collected by the gravity sensor corresponding to the article placement area is determined, and then the article quantity of the taken article is determined according to the weight change data and the single article weight of the taken article.
And step 130, adding the item identification and the item quantity into a shopping list corresponding to the identity identification.
Specifically, each identity identification corresponds to a shopping list, which records the items the user takes in the unmanned supermarket during shopping. Generally, after a user enters the shopping area and the unmanned supermarket item determination device obtains the user's identity identification, a shopping list corresponding to that identification is created. The shopping list records at least the item identification and the item quantity. It will be appreciated that the shopping list is continually updated while the user shops. The corresponding shopping list is deleted after it is confirmed that the user has finished shopping, or at set time intervals (such as every 24 hours). The way the end of shopping is confirmed is not limited by this embodiment; for example, the current shopping trip is confirmed to have ended when the user is confirmed to have left the shopping area.
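As a concrete illustration of step 130, here is a minimal sketch of one way the per-identity shopping lists could be stored and updated; the data structure and names are assumptions for illustration, not part of the patent:

```python
from collections import defaultdict

# identity identification -> {item identification -> item quantity}
shopping_lists: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))

def add_to_shopping_list(identity_id: str, item_id: str, quantity: int) -> None:
    """Record that the user took `quantity` units of `item_id` (step 130)."""
    shopping_lists[identity_id][item_id] += quantity

def close_shopping_list(identity_id: str) -> dict[str, int]:
    """Remove and return the list once the shopping trip is confirmed finished."""
    return dict(shopping_lists.pop(identity_id, {}))
```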
If two or more users shop together, the identity identifications of the associated shoppers can be set to share a single shopping list.
As described above, video data containing the user's shopping picture is shot through the image acquisition device in the shopping area; when an article-taking action is detected in the video data, the identity identification of the user and the item identification and item quantity of the taken item are acquired, and the item identification and item quantity are then added to the shopping list corresponding to the identity identification. By capturing the user's article-taking action through the image acquisition device, it can be accurately judged whether the user has taken an item, and the identity of the user as well as the item identification and quantity of the taken item can be accurately determined, so the user's shopping list is determined quickly and accurately. The process requires no additional operation from the user, which improves the user's shopping experience.
Example two
Fig. 2 is a flowchart of an unmanned supermarket item determination method provided in the second embodiment of the present application. The present embodiment is embodied on the basis of the above-described embodiments.
Specifically, detecting the article-taking action in the video data is set to include: identifying a pixel area of the user's hand in the video data; and determining the article-taking action according to the pixel area. Further, acquiring the identity identification of the user, the item identification of the taken item and the item quantity is set to include: identifying first facial feature data of the user in the video data; determining the identity identification of the user according to the first facial feature data; determining the item identification of the taken item according to the article-taking action; and acquiring weight change data collected by the corresponding sensor according to the item identification, and obtaining the item quantity according to the weight change data.
Exemplarily, referring to fig. 2, the method for determining an unmanned supermarket item provided in this embodiment specifically includes:
step 201, shooting video data containing a shopping picture of a user through an image acquisition device, wherein the image acquisition device is arranged in a shopping area.
Step 202, identifying a pixel area of a user hand in the video data.
Specifically, a user hand pixel region in the video data is extracted. A pixel region may be understood as a collection of pixels that contain a hand. The hand of the user may only include the fingers of the user, or include the fingers and the palm of the user, or include the fingers, the palm and the arm of the user. It is understood that the embodiment of the manner for extracting the pixel region of the user's hand is not limited.
And step 203, determining an article taking action according to the pixel area.
Optionally, the motion trajectory of the user's hand is determined according to the pixel area of the hand in the video data. The motion trajectory includes the movement path of the hand and the shape trajectory of the hand; the shape trajectory mainly concerns the fingers and reflects how they curl. When an article is taken, the user's fingers assume a grasping shape and the hand produces a movement path away from the shelf, so the article-taking action can be determined from the trajectory derived from the pixel area.
Optionally, adjacent pixel regions around the pixel region are obtained, wherein the number of pixels in the adjacent pixel regions can be set according to actual conditions. Further, image recognition is carried out on the adjacent pixel regions to determine whether the adjacent pixel regions contain the images of the taken articles, namely, whether the adjacent pixel regions contain the characteristics of the articles is determined, and if yes, the article taking action is determined to be detected. The embodiment of the technical means for identifying the characteristics of the article is not limited, for example, the characteristics of the article to be taken are identified in a machine learning manner.
Optionally, each article placement area corresponds to at least one camera and one group of gravity sensors, and each placement area holds items of the same kind. Specifically, the distance between the user's hand and the shelf (or placement area) corresponding to the camera can be determined from the pixel area. When the distance is smaller than a preset distance, whether the data collected by the corresponding gravity sensors has changed is checked; if it has changed, it is determined that the user took an item from the shelf, and thus the article-taking action is detected. If the data has not changed, it is determined that the user did not take an item, i.e. no article-taking action is detected. The preset distance can be set according to actual conditions; a sketch of this variant is given below.
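For illustration only, a minimal sketch of the distance-plus-weight cross-check just described; the helper functions, the pixel threshold and the noise margin are assumptions, not values from the patent:

```python
PRESET_DISTANCE_PX = 40.0  # assumed pixel threshold; tuned per deployment
SENSOR_NOISE_G = 1.0       # assumed noise margin, in grams

def centroid(region: set[tuple[int, int]]) -> tuple[float, float]:
    """Centroid of a pixel region given as a set of (x, y) coordinates."""
    xs, ys = zip(*region)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def region_distance(region_a: set[tuple[int, int]],
                    region_b: set[tuple[int, int]]) -> float:
    (ax, ay), (bx, by) = centroid(region_a), centroid(region_b)
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def pickup_detected(hand_region: set[tuple[int, int]],
                    shelf_region: set[tuple[int, int]],
                    weights_before_g: list[float],
                    weights_after_g: list[float]) -> bool:
    """Cross-check hand proximity with a weight change in the placement area."""
    if region_distance(hand_region, shelf_region) >= PRESET_DISTANCE_PX:
        return False  # hand never came close enough to the placement area
    return any(abs(after - before) > SENSOR_NOISE_G
               for before, after in zip(weights_before_g, weights_after_g))
```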
It can be understood that after the video data containing the shopping picture of the user is processed, if the action of taking the article is detected, the subsequent operation is executed, otherwise, the video data containing the shopping picture of the user is continuously processed.
And step 204, identifying first facial feature data of the user in the video data.
Facial feature data refers to feature data that can distinguish a user within a face image, and it is unique to the user. In this embodiment, the facial feature data obtained through the image acquisition device in the shopping area is recorded as first facial feature data. Typically, when the image acquisition device captures video data containing an article-taking action, that video data also contains an image of the user's face. Specifically, a facial recognition technique is applied to the video picture containing the user's shopping scene to acquire the first facial feature data.
Step 205, the identity of the user is determined according to the first facial feature data.
Optionally, when the first facial feature data of a user is acquired for the first time, a corresponding identity identification is created; during subsequent tracking, if the same first facial feature data is recognized, tracking of that identity is confirmed. Alternatively, the facial feature data corresponding to each identity identification is stored in advance, and the acquired first facial feature data is matched against it. If corresponding facial features are matched, the identity identification corresponding to those features is used as the identity identification for the first facial feature data; otherwise the comparison is deemed to have failed, and an alarm is triggered to indicate that an unauthorized person has entered the shopping area.
In this embodiment, the description uses pre-stored facial feature data as an example; in that case, this step specifically includes steps 2051 and 2052:
step 2051, matching the first facial feature data with each second facial feature data stored in the facial feature data set to obtain second facial feature data most similar to the first facial feature data.
The second facial feature data is the facial feature data entered by the user when entering the shopping area. The set of second facial feature data is denoted the facial feature data set, and each piece of second facial feature data has a corresponding identity identification. Specifically, the entrance of the shopping area is also provided with an image acquisition device, recorded as the second image acquisition device to distinguish it from the image acquisition device inside the shopping area. The embodiment does not limit the installation position of the second image acquisition device. Generally, the second image acquisition device shoots video at the entrance, and when a user is at the entrance, the user's second facial feature data can be acquired through it. Further, when the user enters the shopping area for the first time, the second image acquisition device collects and stores the user's second facial feature data, and an identity identification corresponding to that data is created at the same time. When the user later enters the shopping area again, the second facial feature data can be collected directly and the corresponding identity identification determined.
Specifically, the first facial feature data is matched with each piece of second facial feature data to obtain the second facial feature data most similar to it. "Most similar" means the similarity between that second facial feature data and the first facial feature data is the highest and also exceeds a set threshold. The embodiment does not limit the similarity calculation method, and the threshold can be set according to the actual situation.
And step 2052, acquiring an identity corresponding to the most similar second face feature data, and taking the acquired identity as the identity of the user.
Specifically, each second face feature data corresponds to one identity, so that the identity corresponding to the most similar second face feature data can be obtained, and then the identity is used as the identity corresponding to the first face feature data, that is, the identity of the user who performs the action of taking the article is obtained.
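For illustration, a minimal sketch of steps 2051 and 2052, assuming the facial features are already encoded as fixed-length vectors and using cosine similarity as the similarity measure (the patent does not prescribe one):

```python
import math

SIMILARITY_THRESHOLD = 0.8  # the "set threshold"; an assumed value

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_identity(first_features: list[float],
                   facial_feature_set: dict[str, list[float]]) -> str | None:
    """Return the identity identification whose stored second facial feature
    data is most similar to the first facial feature data, provided the
    similarity clears the threshold; otherwise None (comparison failed)."""
    best_id, best_sim = None, -1.0
    for identity_id, second_features in facial_feature_set.items():
        sim = cosine_similarity(first_features, second_features)
        if sim > best_sim:
            best_id, best_sim = identity_id, sim
    return best_id if best_sim >= SIMILARITY_THRESHOLD else None
```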
And step 206, determining the article identification of the article to be taken according to the article taking action.
In the embodiment, the description will be given by taking an example of identifying an article identifier by identifying a video frame including an article pickup action.
Step 2061, obtaining a video image containing the article taking action in the video data.
Specifically, since the above step detects the article taking action in the video data, the step may directly acquire the video frame including the article taking action, and it is understood that the video frame may be regarded as a subset of the video data.
Step 2062, the video frame is analyzed to confirm the image of the article taken in the video frame.
Specifically, since the user performs the operation of picking up the object by the hand, at this time, the user hand and the picked-up object should be adjacent pixels in the video image. Therefore, pixels around the pixel area of the user's hand can be analyzed to obtain the pixel area of the picked-up item, i.e., the picked-up item image.
Step 2063, determining the article identification of the article to be taken according to the image of the article to be taken.
Illustratively, the characteristics of each article and the corresponding article identification are stored in advance, then, the characteristics of the image of the article to be taken are extracted and compared with the characteristics stored in advance to find the characteristics most similar to the article to be taken, and then, the article identification with the most similar characteristics is obtained as the identification of the article to be taken. Or, training the images of the articles and the corresponding article identifications in advance by adopting a machine learning mode to obtain an article identification model, then taking the images of the articles as the input of the article identification model, and obtaining the output result as the article identifications.
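As a sketch of the first approach above (nearest match against pre-stored per-item features); the feature extractor is left as a stub because the patent does not fix one, and Euclidean distance is an assumed choice of comparison:

```python
def extract_features(item_image) -> list[float]:
    """Placeholder: shape, color and label-style features would be computed here."""
    raise NotImplementedError("deployment-specific feature extractor")

def identify_item(item_image,
                  item_feature_db: dict[str, list[float]]) -> str | None:
    """Return the item identification whose stored features are closest
    to the features extracted from the taken-item image."""
    features = extract_features(item_image)
    best_item, best_dist = None, float("inf")
    for item_id, stored in item_feature_db.items():
        dist = sum((x - y) ** 2 for x, y in zip(features, stored)) ** 0.5
        if dist < best_dist:
            best_item, best_dist = item_id, dist
    return best_item
```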
It will be appreciated that the item identity may also be determined by a gravity sensor of the item placement area. Each article placing area is used for placing similar articles, when a user takes the articles, data collected by the gravity sensor in the area are changed, and then article identification corresponding to the article placing area is determined.
And step 207, acquiring weight change data acquired by the corresponding sensor according to the article identification, and acquiring the quantity of the articles according to the weight change data.
Specifically, the sensor is a gravity sensor, which collects weight data within its detection range in real time. In this embodiment, items of the same kind are placed together on the shelf, and each kind of item has its own placement area. Each placement area corresponds to one group of gravity sensors, and the item quantity can be obtained from the data those sensors collect; this data is recorded as weight change data. Accordingly, this step is set to include steps 2071 to 2074.
Step 2071, determine the item placement area of the item on the shelf according to the item identifier.
The position of the articles on the shelf is marked as an article placing area. Typically, when placing an item on a shelf, the unmanned supermarket item determination device records the item placement area of the item in the shelf and the item identification. After the article identifier is obtained, the article placement area of the article to be taken can be determined according to the article placement area and the article identifier recorded in advance.
Step 2072, obtain weight change data collected by the sensors disposed in the article placement area.
Typically, each article placement area is provided with a group of gravity sensors, and the unmanned supermarket item determination device records the correspondence between placement areas and gravity sensors. Once the placement area of the taken item is known, the weight change data collected by the corresponding gravity sensors can be obtained. It should be understood that the weight change data is relative: it is the change in the weight of the placement area from before to after the user takes the item.
Step 2073, obtain the weight of the single item corresponding to the item identifier.
Specifically, the weight of each article corresponding to each article identifier is recorded in advance. The individual item weight is the weight data collected by the sensor when the number of items is 1. And when the article identification of the taken article is determined, acquiring the corresponding unit weight.
And 2074, obtaining the quantity of the articles according to the weight change data and the weight of the single article.
Specifically, the item quantity is determined from the weight change data and the single-item weight, the item quantity being the number of items the user took. Optionally, the weight change data is divided by the single-item weight to obtain the item quantity. In practice, because of factors such as sensor acquisition error, this division may yield a non-integer; in that case the result is rounded to the nearest integer, which is taken as the number of items taken.
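A worked sketch of this arithmetic (step 2074); the gram values in the example are invented for illustration:

```python
def items_taken(weight_before_g: float, weight_after_g: float,
                single_item_weight_g: float) -> int:
    """Number of items removed: weight change divided by single-item weight,
    rounded to absorb sensor acquisition error.

    Example: a 612 g drop with 205 g cans gives 612 / 205 = 2.985..., i.e. 3.
    """
    weight_change = weight_before_g - weight_after_g  # positive when items leave
    return max(0, round(weight_change / single_item_weight_g))
```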
It is understood that steps 206-207 and steps 204-205 may be performed simultaneously.
And step 208, adding the item identification and the item quantity into a shopping list corresponding to the identity identification.
And step 209, determining the shopping fee according to the shopping list when the user is detected to leave the shopping area.
The embodiment of the method for detecting the user leaving the shopping area is not limited. For example, an image capture device is disposed at an exit of the shopping area and is referred to as a third image capture device, and facial feature data captured by the third image capture device is referred to as third facial feature data. Specifically, the third face feature data is confirmed through the video data collected by the third image collecting device, and the third face feature data is matched with the first face feature data to determine the corresponding identity, so that the user corresponding to the identity is determined to leave the shopping area.
Further, after the user is determined to leave the shopping area, the shopping list corresponding to the identity is obtained. And then, determining the shopping expense of the user at this time based on the shopping list. Optionally, the item identifier and the corresponding unit price of each item are pre-stored in the unmanned supermarket item determination device. And after the shopping list is obtained, acquiring the item identification in the shopping list, determining the corresponding unit price according to the item identification, and then calculating the shopping expense according to the item quantity of the shopping list and the corresponding unit price.
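A minimal sketch of the settlement in step 209, assuming unit prices are pre-stored keyed by item identification as described above:

```python
def shopping_fee(shopping_list: dict[str, int],
                 unit_prices: dict[str, float]) -> float:
    """Sum of item quantity times unit price over the whole shopping list."""
    return sum(quantity * unit_prices[item_id]
               for item_id, quantity in shopping_list.items())

# Example with invented items and prices: 2 * 2.5 + 1 * 6.0 = 11.0
fee = shopping_fee({"water-500ml": 2, "snack-bar": 1},
                   {"water-500ml": 2.5, "snack-bar": 6.0})
```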
And step 210, sending the shopping expense to a consumption account of the user so that the consumption account deducts the expense according to the shopping expense.
The type of the consumption account is not limited.
Typically, when the user is a registered user, the user may be instructed to bind a consumption account during registration, and a correspondence between the consumption account and the identity identification is established at the same time. After the shopping fee is settled, the deduction account corresponding to the identity identification is obtained. When the user is not registered, the user is prompted at the exit to enter a consumption account after the shopping fee is settled; third facial feature data is collected continuously while the user enters it, to determine whether the user has changed. If the user changes, recording of the consumption account is stopped. It should be noted that when several users shop together, only one of the identity identifications corresponding to the shared shopping list needs to be the identity of the user entering the deduction account.
Further, the shopping fee is sent to the consumption account, and the account deducts the fee according to the shopping fee, either automatically or after prompting the user to enter a password. The embodiment does not limit the communication method or flow used with the consumption account. When payment is confirmed to be complete, the user is allowed to leave the shopping area through the exit.
In summary, video data containing the user's shopping picture is shot through the image acquisition device in the shopping area; the pixel area of the user's hand is identified in the video data and the article-taking action is recognized from that pixel area; the first facial feature data of the user is determined and the user's identity identification is determined from it; at the same time, the item identification and item quantity of the taken item are determined and the user's shopping list is updated; and when the user leaves the shopping area, settlement is performed automatically according to the shopping list. The shopping list is thus obtained quickly and accurately: collecting the user's shopping picture and detecting the article-taking action quickly and accurately determines whether the user took an item, the facial feature data accurately identifies the user, the video picture of the taken item accurately yields the item identification, and the item quantity obtained from the weight change data collected by the sensors further guarantees the list's accuracy. Meanwhile, the shopping fee is calculated automatically from the shopping list, with no need to queue for self-service scanning, which simplifies the shopping process and improves the user experience.
EXAMPLE III
Fig. 3 is a schematic structural diagram of an unmanned supermarket item determination device provided in the third embodiment of the present application. Referring to fig. 3, the unmanned supermarket item determination apparatus includes: an image acquisition module 301, a data identification module 302, and a manifest addition module 303.
The image acquisition module 301 is configured to shoot video data including a shopping picture of a user through an image acquisition device, where the image acquisition device is arranged in a shopping area; a data identification module 302, configured to obtain an identity of a user, an item identifier of an item to be taken, and an item quantity if an item taking action is detected in the video data; a list adding module 303, configured to add the item identifier and the item quantity to a shopping list corresponding to the identity identifier.
As described above, video data containing the user's shopping picture is shot through the image acquisition device in the shopping area; when an article-taking action is detected in the video data, the identity identification of the user and the item identification and item quantity of the taken item are acquired, and the item identification and item quantity are then added to the shopping list corresponding to the identity identification. By capturing the user's article-taking action through the image acquisition device, it can be accurately judged whether the user has taken an item, and the identity of the user as well as the item identification and quantity of the taken item can be accurately determined, so the user's shopping list is determined quickly and accurately. The process requires no additional operation from the user, which improves the user's shopping experience.
On the basis of the above embodiment, the data identification module 302 includes: the pixel identification unit is used for identifying a pixel area of a user hand in the video data; the action determining unit is used for determining an article taking action according to the pixel area; and the shopping data acquisition unit is used for acquiring the identity of the user, the article identification of the taken article and the article quantity.
On the basis of the above embodiment, the data identification module 302 includes: the characteristic identification unit is used for identifying first surface characteristic data of a user in the video data if the action of taking an article is detected in the video data; a first identifier determining unit, configured to determine an identity identifier of the user according to the first facial feature data; the second identification determining unit is used for determining the article identification of the article to be taken according to the article taking action; and the quantity determining unit is used for acquiring weight change data acquired by the corresponding sensor according to the article identification and obtaining the quantity of the articles according to the weight change data.
On the basis of the above embodiment, the first identification determination unit includes: a feature matching subunit, configured to match the first facial feature data with each of second facial feature data stored in a set of facial feature data, so as to obtain second facial feature data that is most similar to the first facial feature data; and the identification obtaining subunit is configured to obtain an identity identifier corresponding to the most similar second facial feature data, and use the obtained identity identifier as the identity identifier of the user.
On the basis of the above embodiment, the second identification determination unit includes: the image acquisition subunit is used for acquiring a video image containing the action of taking an article in the video data; the picture analyzing subunit is used for analyzing the video picture to confirm the image of the taken article in the video picture; and the identification determining subunit is used for determining the article identification of the taken article according to the taken article image.
On the basis of the above embodiment, the number determination unit includes: the area determining subunit is used for determining an item placing area of the taken item on the shelf according to the item identification; the gravity data acquisition subunit is used for acquiring weight change data acquired by a sensor arranged in the article placement area; the single-item weight obtaining subunit is used for obtaining the single-item weight corresponding to the article identifier; and the article quantity determining subunit is used for obtaining the article quantity according to the weight change data and the single article weight.
On the basis of the above embodiment, the method further includes: the expense calculation module is used for determining shopping expense according to the shopping list after the item identification and the item quantity are added into the shopping list corresponding to the identity identification and when the user is detected to leave the shopping area; and the expense sending module is used for sending the shopping expense to a consumption account of the user so as to enable the consumption account to deduct the expense according to the shopping expense.
The unmanned supermarket article determining device provided by the embodiment can be used for executing the unmanned supermarket article determining method provided by any one of the embodiments, and has corresponding functions and beneficial effects.
Example four
Fig. 4 is a schematic structural diagram of an unmanned supermarket item determination device provided in the fourth embodiment of the present application. As shown in fig. 4, the unmanned supermarket item determination apparatus includes a processor 40, a memory 41, an input device 42, an output device 43, and a communication device 44; the number of the processors 40 in the unmanned supermarket item determination device can be one or more, and one processor 40 is taken as an example in fig. 4; the processor 40, the memory 41, the input device 42, the output device 43 and the communication device 44 in the unmanned supermarket item determination device can be connected through a bus or other means, and the connection through the bus is taken as an example in fig. 4.
The memory 41 is a computer-readable storage medium and may be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the unmanned supermarket item determination method of the embodiments of the present invention (for example, the image acquisition module 301, the data identification module 302 and the list adding module 303 in the unmanned supermarket item determination apparatus). The processor 40 runs the software programs, instructions and modules stored in the memory 41 to execute the various functional applications and data processing of the unmanned supermarket item determination device, i.e. to implement the unmanned supermarket item determination method.
The memory 41 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the unmanned supermarket item determination device, and the like. Further, the memory 41 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 41 may further include memory located remotely from processor 40, which may be connected to the unmanned supermarket item determination device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 42 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function controls of the unmanned supermarket item determination apparatus. The output device 43 may include a display device such as a display screen. The communication device 44 is used for data communication with an image acquisition device, a sensor, and the like.
The unmanned supermarket article determining device comprises the unmanned supermarket article determining device, can be used for executing any unmanned supermarket article determining method, and has corresponding functions and beneficial effects.
EXAMPLE five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a method for determining an unmanned supermarket item, the method including:
shooting video data containing a shopping picture of a user through an image acquisition device, wherein the image acquisition device is arranged in a shopping area;
if an article-taking action is detected in the video data, acquiring the identity of the user, the item identification of the taken item and the item quantity;
and adding the item identification and the item quantity into a shopping list corresponding to the identity identification.
Of course, the storage medium provided by the embodiment of the present invention includes computer-executable instructions, and the computer-executable instructions are not limited to the operations of the method described above, and may also perform related operations in the method for determining an unmanned supermarket item provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the apparatus for determining an unmanned supermarket item, each of the units and modules included in the apparatus is only divided according to the functional logic, but is not limited to the above division, as long as the corresponding function can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
It is to be noted that the foregoing is merely illustrative of the preferred embodiments of the present invention and of the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to those embodiments and may include other equivalent embodiments without departing from its spirit, its scope being determined by the appended claims.

Claims (10)

1. An unmanned supermarket item determination method, characterized by comprising:
capturing video data containing a shopping picture of a user through an image acquisition device, wherein the image acquisition device is arranged in a shopping area;
if an item-taking action is detected in the video data, acquiring the identity of the user, the item identifier of the taken item, and the item quantity;
and adding the item identifier and the item quantity to a shopping list corresponding to the identity.
2. The unmanned supermarket item determination method according to claim 1, wherein detecting the item-taking action in the video data comprises:
identifying a pixel region of the user's hand in the video data;
and determining the item-taking action according to the pixel region.
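By way of illustration only (not part of claim 2), one simple reading of the pixel-region test is to count how many hand pixels fall inside a shelf zone per frame and flag a take when the hand enters and then withdraws. The masks, zone, and thresholds below are invented for the example.

```python
# Toy realization of the pixel-region test: count hand pixels inside a shelf
# zone per frame and flag a "take" when the hand enters and then withdraws.
def hand_area_in_zone(mask, zone):
    """mask: 2-D list of 0/1 hand-segmentation values; zone: (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = zone
    return sum(mask[r][c] for r in range(r0, r1) for c in range(c0, c1))

def detect_take(areas, enter_thresh=3, exit_thresh=0):
    """True when the per-frame area series shows an enter-then-withdraw pattern."""
    entered = False
    for area in areas:
        if area >= enter_thresh:
            entered = True
        elif entered and area <= exit_thresh:
            return True
    return False

zone = (0, 2, 0, 2)                       # shelf region within the frame
masks = [                                 # hand enters the zone, then leaves
    [[0, 0], [0, 0]],
    [[1, 1], [1, 0]],
    [[0, 0], [0, 0]],
]
print(detect_take([hand_area_in_zone(m, zone) for m in masks]))  # True
```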
3. The unmanned supermarket item determination method according to claim 1, wherein acquiring the identity of the user, the item identifier of the taken item, and the item quantity comprises:
identifying first facial feature data of the user in the video data;
determining the identity of the user according to the first facial feature data;
determining the item identifier of the taken item according to the item-taking action;
and acquiring weight change data collected by a corresponding sensor according to the item identifier, and obtaining the item quantity according to the weight change data.
4. The unmanned supermarket item determination method according to claim 3, wherein determining the identity of the user according to the first facial feature data comprises:
matching the first facial feature data against each item of second facial feature data stored in a facial feature data set to obtain the second facial feature data most similar to the first facial feature data;
and acquiring the identity corresponding to the most similar second facial feature data, and taking the acquired identity as the identity of the user.
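Claim 4 amounts to a nearest-neighbour search over enrolled facial feature vectors. The sketch below uses cosine similarity as the metric; the claim does not fix one, so the metric and the toy vectors are assumptions.

```python
# Nearest-neighbour matching over enrolled ("second") facial feature vectors.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def match_identity(first, enrolled):
    """enrolled: dict mapping identity -> second facial feature vector."""
    return max(enrolled, key=lambda ident: cosine(first, enrolled[ident]))

enrolled = {"u001": [0.9, 0.1, 0.3], "u002": [0.2, 0.8, 0.5]}
print(match_identity([0.88, 0.15, 0.28], enrolled))  # u001
```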
5. The unmanned supermarket item determination method according to claim 3, wherein determining the item identifier of the taken item according to the item-taking action comprises:
acquiring a video frame containing the item-taking action from the video data;
analyzing the video frame to identify the image of the taken item in the video frame;
and determining the item identifier of the taken item according to the image of the taken item.
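One hedged reading of claim 5 is crop-then-classify: isolate the region of the frame around the taking action and map it to an item identifier. The value-overlap "classifier" below is a deliberately crude placeholder for the image-recognition model the embodiments presuppose, and the pixel data and templates are invented.

```python
# Crop-then-classify sketch: cut the region of the frame near the taking
# action, then map the crop to the item whose template it most resembles.
def crop(frame, box):
    r0, r1, c0, c1 = box
    return [row[c0:c1] for row in frame[r0:r1]]

def classify(patch, templates):
    """Pick the item whose template shares the most pixel values with patch."""
    flat = [px for row in patch for px in row]
    def overlap(tmpl):
        return sum(min(flat.count(v), tmpl.count(v)) for v in set(tmpl))
    return max(templates, key=lambda item_id: overlap(templates[item_id]))

frame = [[1, 1, 2], [1, 3, 2], [4, 4, 4]]                  # toy pixel values
templates = {"cola": [1, 1, 1, 3], "chips": [4, 4, 4, 2]}
print(classify(crop(frame, (0, 2, 0, 2)), templates))      # cola
```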
6. The unmanned supermarket item determination method according to claim 3, wherein acquiring the weight change data collected by the corresponding sensor according to the item identifier and obtaining the item quantity according to the weight change data comprises:
determining the item placement area of the taken item on a shelf according to the item identifier;
acquiring weight change data collected by a sensor arranged in the item placement area;
acquiring the single-item weight corresponding to the item identifier;
and obtaining the item quantity according to the weight change data and the single-item weight.
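Claim 6 reduces to simple arithmetic once the shelf sensor reports a weight drop: quantity ≈ weight change ÷ single-item weight, rounded to a whole item. The unit weights and the tolerance below are invented catalogue data.

```python
# quantity = weight change / single-item weight, rounded to a whole item,
# with a sanity check that the residual is small.
UNIT_WEIGHT_G = {"water": 550.0, "cola": 330.0}     # hypothetical catalogue data

def quantity_from_weight(item_id, before_g, after_g, tolerance=0.2):
    delta = before_g - after_g                      # weight removed from shelf
    unit = UNIT_WEIGHT_G[item_id]
    qty = round(delta / unit)
    if abs(delta - qty * unit) > tolerance * unit:  # reading inconsistent
        raise ValueError("weight change inconsistent with single-item weight")
    return qty

print(quantity_from_weight("water", 5500.0, 4400.0))  # 2
```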
7. The unmanned supermarket item determination method according to claim 1, wherein, after the item identifier and the item quantity are added to the shopping list corresponding to the identity, the method further comprises:
when it is detected that the user leaves the shopping area, determining the shopping expense according to the shopping list;
and sending the shopping expense to a consumption account of the user, so that the fee is deducted from the consumption account according to the shopping expense.
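Claim 7 describes a checkout-on-exit step. A minimal sketch, assuming a hypothetical price table and an account modeled as a balance dictionary, might look like this:

```python
# Checkout-on-exit: price the shopping list and deduct the total from the
# consumption account. The price table and the account model are hypothetical.
PRICES = {"water": 2.0, "cola": 3.5}

def settle(shopping_list, account):
    """shopping_list: {item_id: qty}; account: {'balance': float}."""
    expense = sum(PRICES[item] * qty for item, qty in shopping_list.items())
    account["balance"] -= expense                   # fee deduction
    return expense

account = {"balance": 20.0}
print(settle({"water": 2, "cola": 1}, account))     # 7.5
print(account["balance"])                           # 12.5
```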
8. An unmanned supermarket item determination device, comprising:
an image acquisition module, configured to capture video data containing a shopping picture of a user through an image acquisition device, the image acquisition device being arranged in a shopping area;
a data identification module, configured to acquire the identity of the user, the item identifier of the taken item, and the item quantity if an item-taking action is detected in the video data;
and a list adding module, configured to add the item identifier and the item quantity to a shopping list corresponding to the identity.
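Claim 8's three-module split can be pictured as a small class whose methods correspond one-to-one to the claimed modules; the camera and recognizer constructor arguments are hypothetical back-ends supplied by the integrator, not elements of the claim.

```python
# The three claimed modules as methods of one small class.
class ItemDeterminationDevice:
    def __init__(self, camera, recognizer):
        self.camera = camera            # image acquisition module's source
        self.recognizer = recognizer    # data identification module's model
        self.lists = {}                 # list adding module's state

    def capture(self):
        return self.camera.read()

    def recognize(self, video):
        return self.recognizer(video)   # -> (user_id, item_id, qty)

    def add_to_list(self, user_id, item_id, qty):
        cart = self.lists.setdefault(user_id, {})
        cart[item_id] = cart.get(item_id, 0) + qty

dev = ItemDeterminationDevice(camera=None, recognizer=None)
dev.add_to_list("u001", "water", 2)
print(dev.lists)                        # {'u001': {'water': 2}}
```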
9. An unmanned supermarket item determination apparatus, comprising:
one or more processors;
a memory for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the unmanned supermarket item determination method according to any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the unmanned supermarket item determination method according to any one of claims 1 to 7.
CN201910836187.1A (priority date 2019-09-05; filing date 2019-09-05) — CN110647825A (en), withdrawn: Method, device and equipment for determining unmanned supermarket articles and storage medium

Priority Applications (1)

CN201910836187.1A (priority date: 2019-09-05; filing date: 2019-09-05) — Method, device and equipment for determining unmanned supermarket articles and storage medium

Publications (1)

Publication Number: CN110647825A — Publication Date: 2020-01-03

Family

ID=69010351

Family Applications (1)

CN201910836187.1A — CN110647825A (en), status: withdrawn

Country Status (1)

CN: CN110647825A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274489A (en) * 2020-03-25 2020-06-12 北京百度网讯科技有限公司 Information processing method, apparatus, equipment and storage medium
CN111274489B (en) * 2020-03-25 2023-12-15 北京百度网讯科技有限公司 Information processing method, device, equipment and storage medium
CN111680654A (en) * 2020-06-15 2020-09-18 杭州海康威视数字技术股份有限公司 Personnel information acquisition method, device and equipment based on article picking and placing event
CN111680654B (en) * 2020-06-15 2023-10-13 杭州海康威视数字技术股份有限公司 Personnel information acquisition method, device and equipment based on article picking and placing event
CN112712657A (en) * 2020-12-23 2021-04-27 网银在线(北京)科技有限公司 Monitoring method, device, monitoring system, monitoring equipment and storage medium
CN113706227A (en) * 2021-11-01 2021-11-26 微晟(武汉)技术有限公司 Goods shelf commodity recommendation method and device

Similar Documents

Publication Title
US11501523B2 (en) Goods sensing system and method for goods sensing based on image monitoring
US11790433B2 (en) Constructing shopper carts using video surveillance
JP7677481B2 (en) Store device, store system, store management method, and program
TWI778030B (en) Store apparatus, store management method and program
CN110647825A (en) Method, device and equipment for determining unmanned supermarket articles and storage medium
US11176597B2 (en) Associating shoppers together
CN111263224B (en) Video processing method and device and electronic equipment
US20230027382A1 (en) Information processing system
JP7545801B2 (en) Information processing system, method and program for controlling information processing system
JP2022539920A (en) Method and apparatus for matching goods and customers based on visual and gravity sensing
CN112215167B (en) Intelligent store control method and system based on image recognition
CN110648186B (en) Data analysis method, device, equipment and computer readable storage medium
JP7540430B2 (en) Information processing device, information processing method, and program
EP3901841B1 (en) Settlement method, apparatus, and system
CN112307864A (en) Method, device and human-computer interaction system for determining target object
EP3474183A1 (en) System for tracking products and users in a store
EP3629276A1 (en) Context-aided machine vision item differentiation
US20230005348A1 (en) Fraud detection system and method
CN111260685B (en) Video processing method and device and electronic equipment
CN110689389A (en) Computer vision-based shopping list automatic maintenance method and device, storage medium and terminal
CN111178860A (en) Settlement method, device, equipment and storage medium for unmanned convenience store
CN111783509B (en) Automatic settlement method, device, system and storage medium
WO2019077561A1 (en) Device for detecting the interaction of users with products arranged on a stand or display rack of a store
CN110659955A (en) Multi-user shopping management method, device, equipment and storage medium
JP2025000739A (en) Information processing device, information processing method, and program

Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
WW01 — Invention patent application withdrawn after publication (application publication date: 2020-01-03)