
CN111167731B - Product sorting method, product sorting system and intelligent sorting robot - Google Patents


Info

Publication number
CN111167731B
CN111167731B (application CN201911013064.4A)
Authority
CN
China
Prior art keywords
image
sorting
coding
region
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911013064.4A
Other languages
Chinese (zh)
Other versions
CN111167731A (en)
Inventor
李淼
闫琳
张少华
于天水
鲁阳
付中涛
房小立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Cobot Technology Co ltd
Original Assignee
Wuhan Cobot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Cobot Technology Co ltd
Priority to CN201911013064.4A
Publication of CN111167731A
Application granted
Publication of CN111167731B
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B07 SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00 Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/34 Sorting according to other particular properties
    • B07C5/342 Sorting according to other particular properties according to optical properties, e.g. colour
    • B07C5/3422 Sorting according to optical properties using video scanning devices, e.g. TV-cameras
    • B07C5/36 Sorting apparatus characterised by the means used for distribution
    • B07C5/361 Processing or control devices therefor, e.g. escort memory
    • B07C2501/00 Sorting according to a characteristic or feature of the articles or material to be sorted
    • B07C2501/0063 Using robots

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of product sorting and provides a product sorting method, a product sorting system, and an intelligent sorting robot. The method comprises the following steps: obtaining an image of a product on a product line; identifying a face pixel region and region feature information from the image respectively; searching out, from a preset feature template library according to the region feature information, the unique feature template matched with the face pixel region; and positioning, on the image, the key pose information corresponding to the region feature information according to the unique feature template.

Description

Product sorting method, product sorting system and intelligent sorting robot
Technical Field
The invention relates to the technical field of product sorting, in particular to a product sorting method, a product sorting system and an intelligent sorting robot.
Background
In the related art, a product enters an acquisition area as it is conveyed by the product line. In the acquisition area, an imaging terminal intermittently shoots multiple images of the product. From two consecutively shot images, a controller calculates in real time the product's coordinates and moving distance in the imaging terminal's coordinate system. After judging that the two images contain the same product pixel region, the controller converts the coordinates from the imaging terminal's coordinate system into the sorting equipment's coordinate system and uses the moving distance to update them to the current coordinates. When the current coordinates indicate that the product is within grabbing range, the controller plans a moving track from the current coordinates and preset coordinates and sends an instruction to the sorting equipment, which executes a grabbing action so that the product is sorted off the product line.
However, constrained by the operations of processing two images, updating coordinates, judging the product pixel region, judging the grabbing range, planning the track, and so on, the controller needs a long time to issue the instruction. The delayed instruction in turn delays the grabbing action, so the product is sorted off the product line late and sorting efficiency drops.
Disclosure of Invention
To address the delay with which existing image-based sorting methods remove products from a product line, the invention provides an image-based product sorting method, a product sorting system, and an intelligent sorting robot.
The invention provides an image-based product sorting method in a first aspect, which comprises the following steps:
acquiring an image of a product on a product line;
respectively identifying a face pixel region and region characteristic information from the image;
searching out a unique feature template matched with the face pixel region from a preset feature template library according to the region feature information;
positioning key pose information corresponding to the region feature information on the image according to the unique feature template;
sorting out the products from the product line according to the key pose information;
a second aspect of the present invention provides an image-based product sorting system, comprising: a vision subsystem and a sorting subsystem coupled to the vision subsystem;
the vision subsystem is used for acquiring images of products on a product line;
the sorting subsystem is used for acquiring the images; respectively identifying a face pixel region and region characteristic information from the image; searching out a unique feature template matched with the area pixel region from a preset feature template library according to the region feature information; positioning key pose information corresponding to the region feature information on the image according to the unique feature template; sorting the products from the product line according to the key pose information.
A third aspect of the present invention provides an intelligent sorting robot configured to store at least one instruction, program, code set, or instruction set, which is loaded and executed to implement the operational steps of the image-based product sorting method of the first aspect.
The product sorting method, product sorting system, and intelligent sorting robot provided by the invention have the following beneficial effects: using the face pixel region and the region feature information belonging to the same image, the unique feature template is searched out from the preset feature template library accurately and quickly; using the unique feature template and the region feature information, the key pose information is positioned on the image accurately and quickly; and the positioning time of the key pose information is thereby shortened.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a method for sorting products based on images according to the present invention;
fig. 2 is a schematic diagram of an architecture of an image-based product sorting system according to the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the drawings, which are provided to illustrate, not to limit, the scope of the invention.
Example one
As shown in fig. 1, an image-based product sorting method includes the following operations:
step 10, acquiring an image of a product on a product line;
step 20, identifying a face pixel area and area characteristic information from the image respectively;
step 30, searching out a unique feature template matched with the face pixel region from a preset feature template library according to the region feature information;
step 40, positioning key pose information corresponding to the regional characteristic information on the image according to the unique characteristic template;
and step 50, sorting the products from the product line according to the key pose information.
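The five steps above can be sketched as a minimal pipeline. All function and key names below are hypothetical placeholders standing in for the models described later in the embodiments, not the patent's actual implementation:

```python
# Hypothetical sketch of the five-step pipeline (steps 10-50).
def identify_regions(image):
    # Step 20: split the image into the face pixel region and its feature info.
    return image["face"], image["features"]

def search_unique_template(library, face_region, features):
    # Step 30: pick the single template matching the region feature info.
    return library[features["texture"]]

def locate_key_pose(template, features):
    # Step 40: key pose = centre-point coordinate plus a rotation angle.
    return {"center": template["center"], "angle": features["angle"]}

def sort_product(image, library):
    face, feats = identify_regions(image)
    tmpl = search_unique_template(library, face, feats)
    return locate_key_pose(tmpl, feats)  # step 50 would act on this pose

image = {"face": [[1]], "features": {"texture": "boxA", "angle": 30.0}}
library = {"boxA": {"center": (120.0, 80.0, 45.0)}}
pose = sort_product(image, library)
```

In step 50, the pose returned here would be handed to the sorting equipment, as described in the later embodiments.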
In some embodiments, the product may be a medicine box, the product line may be a conveyor belt, and the top surface of the product may be provided with a bar code or a two-dimensional code.
In some specific implementations, the region feature information includes mask information, a positioning window, and a texture feature symbol, where the mask information and the positioning window are used to position the face pixel region on the image. The mask information may include a mask on which pixels within the face pixel region are represented by the value 0 and pixels outside it by the value 1; the positioning window may be a rectangular bounding box circumscribing the face pixel region; and the texture feature symbol may include a character string describing the texture of the face pixel region.
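A minimal sketch of deriving the positioning window (the circumscribing rectangle) from such a mask. Note the inverted convention stated above: pixels inside the face region carry 0, pixels outside carry 1:

```python
# Sketch: compute the circumscribing rectangle of all 0-valued (in-region)
# pixels of a mask, per the 0-inside / 1-outside convention described above.
def positioning_window(mask):
    """Return (row_min, col_min, row_max, col_max) of the 0-valued pixels."""
    rows = [r for r, row in enumerate(mask) for v in row if v == 0]
    cols = [c for row in mask for c, v in enumerate(row) if v == 0]
    return min(rows), min(cols), max(rows), max(cols)

mask = [
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]
window = positioning_window(mask)  # -> (1, 1, 2, 2)
```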
In some specific implementations, each feature template in the preset feature template library includes a standard mask, a standard texture feature symbol, and a standard feature map. The key pose information comprises a center-point three-dimensional coordinate and a rotation angle associated with it; together they position the pose of the center point of the face pixel region.
Using the face pixel region and the region feature information belonging to the same image, the unique feature template is searched out from the preset feature template library accurately and quickly; using the unique feature template and the region feature information, the key pose information is positioned on the image accurately and quickly, shortening the positioning time of the key pose information.
As an optional implementation manner, step 20 specifically includes the following steps:
step 21, respectively obtaining a pre-trained first deep learning model, a pre-trained second deep learning model and a pre-trained third deep learning model;
step 22, respectively extracting a face pixel region, mask information and a positioning window on the image according to the first deep learning model;
step 23, extracting a coding pixel area on the image according to the second deep learning model;
step 24, extracting the texture feature symbol in the face pixel region according to the third deep learning model;
and step 25, extracting the coding feature symbol in the coding pixel region according to the third deep learning model.
In some embodiments, the first deep learning model includes an input layer, a first convolutional layer, a second convolutional layer, a third convolutional layer, and an output layer. The image is fed from the input layer to the first convolutional layer, which identifies the region of interest corresponding to the top surface of the product on the image, inputs that region to the output layer as the face pixel region, and also inputs it to the second and third convolutional layers. The second convolutional layer extracts the mask information from the face pixel region and inputs it to the output layer; the third convolutional layer extracts the positioning window from the face pixel region and inputs it to the output layer. The output layer outputs the face pixel region, the mask information, and the positioning window respectively.
In some specific implementations, step 22 and step 23 are executed in parallel to quickly and roughly identify the two pixel regions from the image, shortening their identification time; step 24 and step 25 are likewise executed in parallel to quickly and accurately identify the two feature symbols from those regions, shortening their identification time and improving image recognition efficiency.
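The parallel execution of the two region-extraction steps can be sketched with a thread pool. The extractor functions are dummies standing in for the first and second deep learning models:

```python
# Sketch of running the two extraction stages concurrently, as the
# description suggests. Real extractors would be the deep learning models.
from concurrent.futures import ThreadPoolExecutor

def extract_face_region(image):
    return "face-region"      # placeholder for the first model's output

def extract_coding_region(image):
    return "coding-region"    # placeholder for the second model's output

with ThreadPoolExecutor(max_workers=2) as pool:
    face_future = pool.submit(extract_face_region, "img")
    code_future = pool.submit(extract_coding_region, "img")
    results = (face_future.result(), code_future.result())
```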
In some specific implementations, the coding pixel region lies within the face pixel region on the image. The region feature information may further include a coding feature symbol identifying the coding pixel region; the coding pixel region is the region containing all pixel points of the barcode and/or two-dimensional code, and the coding feature symbol may include the number of all barcode pixel points and/or all two-dimensional-code pixel points.
In some specific implementations, the first deep learning model may adopt a Mask R-CNN image segmentation model, the second deep learning model may adopt a YOLO semantic segmentation model, and the third deep learning model may adopt a ResNet image detection model.
In some specific implementations, the ResNet image detection model comprises a first deep learning network and a second deep learning network. The face pixel region is input to the first network, which extracts the texture feature symbol from it; the coding pixel region is input to the second network, which extracts the coding feature symbol from it.
Step 25 specifically comprises the following steps: enhancing the coding pixel region according to the contrast enhancement sub-model to obtain a coding enhancement region; and decoding the coding enhancement region according to the barcode decoding sub-model to obtain the coding feature symbol.
In some specific implementations, the ResNet image detection model may include an image contrast enhancement sub-model and a barcode decoding sub-model. The image contrast enhancement sub-model may employ an adaptive image contrast enhancement algorithm, and the barcode decoding sub-model may employ a barcode and/or two-dimensional-code decoding algorithm; the coding feature symbol may then include the decoded information of all barcode and/or two-dimensional-code pixel points. Because the coding enhancement region has higher contrast than the coding pixel region, the coding feature symbol can be decoded from it quickly, reducing the probability of decoding failure.
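The adaptive enhancement algorithm itself is not specified in the text; as an illustrative stand-in, a simple min-max contrast stretch of grayscale values shows the kind of preprocessing the sub-model performs before decoding:

```python
# Placeholder for the contrast-enhancement sub-model: stretch grayscale
# values of the coding pixel region to the full 0-255 range. The patent's
# adaptive algorithm is not given, so this is illustration only.
def stretch_contrast(pixels):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0] * len(pixels)  # flat region: nothing to stretch
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

coding_region = [100, 110, 120, 130]       # low-contrast grayscale values
enhanced = stretch_contrast(coding_region)  # -> [0, 85, 170, 255]
```

A decoder would then operate on `enhanced`, where bar/space transitions are easier to threshold.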
As an optional implementation manner, step 30 specifically includes the following steps: searching a candidate characteristic template set matched with the texture characteristic symbol and the coding characteristic symbol from a preset characteristic template library; and searching out a unique feature template matched with the face pixel region from the candidate feature template set.
In some specific implementations, the ResNet image detection model further comprises a feature template search sub-model. The face pixel region, the texture feature symbol, and the coding feature symbol are input to this sub-model, which first searches the preset feature template library for every candidate feature template matching the texture feature symbol and the coding feature symbol; gathering all candidates into a set avoids omissions, so the candidate feature template set is searched comprehensively. The sub-model then searches that set for the unique feature template matching the face pixel region; it may include a SIFT image search-and-match algorithm so that exactly one feature template is found, improving the search accuracy.
In some implementations, the feature template search sub-model may include an ECC (entropy correlation coefficient) algorithm that calculates an ECC value between the face pixel region and each candidate in the candidate feature template set and determines the candidate with the maximum ECC value to be the unique feature template matching the face pixel region.
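A sketch of this "score every candidate, keep the maximum" selection. The ECC formula itself is not given in the text, so Pearson correlation over flattened pixel values is used here purely as a stand-in score:

```python
# Stand-in for the ECC scoring step: score each candidate template against
# the face pixel region and return the best-scoring one.
import math

def correlation(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in a)
                    * sum((y - mean_b) ** 2 for y in b))
    return num / den if den else 0.0

def unique_template(face_region, candidates):
    return max(candidates, key=lambda k: correlation(face_region, candidates[k]))

face = [1.0, 2.0, 3.0, 4.0]
candidates = {"A": [4.0, 3.0, 2.0, 1.0],   # anti-correlated
              "B": [1.1, 2.1, 2.9, 4.2]}   # nearly identical
best = unique_template(face, candidates)   # -> "B"
```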
As an optional implementation manner, step 40 specifically includes the following steps: extracting the mask information and the positioning window from the region feature information; positioning the face pixel region on the image according to the mask information and the positioning window; positioning the center-point three-dimensional coordinate in the face pixel region according to the unique feature template; and solving the rotation angle from the center-point three-dimensional coordinate.
In some specific implementations, the mask information, the positioning window, and the unique feature template are input to an image pose measurement model. The model positions the face pixel region bounded by the mask information and the positioning window on the image, calibrates a center point within that region using the unique feature template, measures the calibrated center point's three-dimensional coordinate on the image, and takes it as the center-point three-dimensional coordinate.
In some specific implementations, the abscissa and ordinate of the center-point three-dimensional coordinate are input to an angle calculation model, which solves the rotation angle from them, so the rotation angle can be obtained quickly and simply.
In some embodiments, the angle calculation model is specifically expressed as: angle = arctan(y/x), where angle denotes the rotation angle, arctan denotes the arctangent function, x denotes the abscissa, and y denotes the ordinate.
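In code, `math.atan2` is a natural way to evaluate this formula: unlike a literal `arctan(y / x)`, it avoids division by zero when x = 0 and keeps the correct quadrant. The patent text itself states only arctan(y/x):

```python
# The angle calculation model: angle = arctan(y / x), implemented with
# atan2 for robustness at x == 0 (an implementation choice, not stated
# in the text).
import math

def rotation_angle_deg(x, y):
    return math.degrees(math.atan2(y, x))

angle = rotation_angle_deg(1.0, 1.0)  # -> 45.0
```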
As an optional implementation manner, step 50 specifically includes the following steps: acquiring a preset homogeneous transformation matrix; extracting the center-point three-dimensional coordinate and the rotation angle from the key pose information; transforming the center-point three-dimensional coordinate with the preset homogeneous transformation matrix to obtain a coordinate in the sorting equipment's coordinate system; driving the sorting equipment, according to that coordinate, to grab the product so that the product is grabbed off the product line; and driving the sorting equipment holding the product to rotate so that the product is rotated by the rotation angle.
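The coordinate transformation step can be sketched as a 4x4 homogeneous matrix applied to the center point. The matrix values below are illustrative (a 90-degree rotation about z plus a translation), not the patent's calibrated matrix:

```python
# Sketch: map the centre-point coordinate into the sorting equipment's
# frame with a preset 4x4 homogeneous transformation matrix.
def transform(matrix, point):
    x, y, z = point
    vec = (x, y, z, 1.0)  # homogeneous coordinates
    out = [sum(m * v for m, v in zip(row, vec)) for row in matrix]
    return tuple(out[:3])

# Illustrative matrix: rotate 90 degrees about z, translate by (10, 20, 0).
T = [
    [0.0, -1.0, 0.0, 10.0],
    [1.0,  0.0, 0.0, 20.0],
    [0.0,  0.0, 1.0,  0.0],
    [0.0,  0.0, 0.0,  1.0],
]
equip_xyz = transform(T, (1.0, 2.0, 3.0))  # -> (8.0, 21.0, 3.0)
```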
Example two
As shown in fig. 2, an image-based product sorting system includes a vision subsystem and a sorting subsystem coupled to it. The vision subsystem is used for acquiring images of products on a product line. The sorting subsystem is used for acquiring the image; identifying the face pixel region and the region feature information from the image respectively; searching out, from a preset feature template library according to the region feature information, the unique feature template matched with the face pixel region; positioning the key pose information corresponding to the region feature information on the image according to the unique feature template; and sorting the product off the product line according to the key pose information.
The vision subsystem comprises a power input, a light source, a camera, an acquisition card, and the like; the sorting subsystem comprises a controller, a computer, sorting equipment, and the like. For example, the sorting equipment may be a mobile robot or an industrial robot, and the controller is electrically connected to the acquisition card to couple the vision subsystem to the sorting subsystem.
The power input supplies power to the camera, the acquisition card, the computer, and the sorting equipment; the light source supplements light for products running on the product line; the camera acquires the image of the product on the product line and inputs it to the acquisition card; the acquisition card inputs the image to the controller; and the controller inputs the image to the computer.
The computer is used for storing the first, second, and third deep learning models respectively; extracting the face pixel region, the mask information, and the positioning window on the image according to the first deep learning model; extracting the coding pixel region on the image according to the second deep learning model; extracting the texture feature symbol in the face pixel region according to the third deep learning model; and extracting the coding feature symbol in the coding pixel region according to the third deep learning model.
The computer is specifically configured to: enhance the coding pixel region according to the contrast enhancement sub-model to obtain a coding enhancement region, and decode the coding enhancement region according to the barcode decoding sub-model to obtain the coding feature symbol.
The computer is also used for searching the preset feature template library for the candidate feature template set matched with the texture feature symbol and the coding feature symbol, and searching the candidate feature template set for the unique feature template matched with the face pixel region.
The computer is also used for extracting the mask information and the positioning window from the region feature information; positioning the face pixel region on the image according to the mask information and the positioning window; positioning the center-point three-dimensional coordinate in the face pixel region according to the unique feature template; and solving the rotation angle from the center-point three-dimensional coordinate.
The computer is also used for storing the preset homogeneous transformation matrix; extracting the center-point three-dimensional coordinate and the rotation angle from the key pose information; and transforming the center-point three-dimensional coordinate with the preset homogeneous transformation matrix to obtain the coordinate in the sorting equipment's coordinate system.
The computer is also used for generating a grabbing instruction according to the coordinates under the coordinate system of the sorting equipment and inputting the grabbing instruction to the controller; the controller is also used for inputting the grabbing instruction to the sorting equipment; and the sorting equipment is used for grabbing the products according to the grabbing instructions so that the products are grabbed from the product line by the sorting equipment.
The computer is also used for generating a rotation instruction according to the rotation angle and inputting it to the controller; the controller is also used for inputting the rotation instruction to the sorting equipment; and the sorting equipment, while holding the product, rotates according to the rotation instruction so that the product is rotated by the rotation angle.
EXAMPLE III
An intelligent sorting robot comprises a base, a support, a camera, a mechanical sorting arm, a memory, an encoder, and a driver; the mechanical sorting arm comprises a robotic arm and a clamping jaw installed at the end of the robotic arm.
The bottom of the support and the bottom of the robotic arm are respectively installed on top of the base; the camera is installed at the top of the support; the clamping jaw is installed at the end of the robotic arm; the memory, the encoder, and the driver are all installed inside the hollow base; the encoder is electrically connected to the memory, the driver, and the camera respectively; and the driver is electrically connected to the robotic arm.
A memory configured to store at least one instruction, program, code set, or instruction set, which is loaded and executed by the encoder so that the intelligent sorting robot implements the operational steps of the image-based product sorting method of Embodiment one.
A camera configured to capture an image of the product on the product line and input the image to the memory.
An encoder configured to read the image, the first, second, and third deep learning models, and the preset homogeneous transformation matrix from the memory respectively; identify the face pixel region and the region feature information from the image respectively; search out, from the preset feature template library according to the region feature information, the unique feature template matched with the face pixel region; and position the key pose information corresponding to the region feature information on the image according to the unique feature template.
The encoder is further configured to respectively extract the three-dimensional coordinates of the central point and the rotation angle from the key pose information, perform coordinate transformation on the three-dimensional coordinates of the central point according to a preset homogeneous transformation matrix to obtain coordinates under a coordinate system of the sorting equipment, and respectively input the coordinates and the rotation angle under the coordinate system of the sorting equipment to the driver.
A driver configured to drive the mechanical sorting arm, according to the coordinate in the robot coordinate system, to grab the product so that the product is grabbed off the product line, and then to drive the arm holding the product to rotate so that the product is rotated by the rotation angle. Grabbing and then rotating sorts the product step by step and reduces the difficulty of sorting.
The reader should understand that, in this specification, references to the terms "aspect", "embodiment", "implementation", "particular manner", etc. mean that a particular feature, step, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. The terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features; a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature.
In this specification, the terminology above need not refer to the same embodiment or example, and the particular features, steps, or characteristics described may be combined in any suitable manner in one or more embodiments or examples; those skilled in the art may also combine features of different embodiments or examples described herein, provided they do not conflict with one another.

Claims (8)

1. A method for image-based sorting of products, comprising:
acquiring an image of a product on a product line;
respectively identifying a face pixel area and area characteristic information from the image;
searching out a unique feature template matched with the face pixel region from a preset feature template library according to the region feature information;
positioning key pose information corresponding to the region feature information on the image according to the unique feature template;
sorting out the products from the product line according to the key pose information;
the region characteristic information comprises mask information, a positioning window, a texture characteristic symbol and a coding characteristic symbol; identifying a face pixel region and region feature information from the image respectively, specifically comprising:
respectively acquiring a pre-trained first deep learning model, a pre-trained second deep learning model and a pre-trained third deep learning model;
respectively extracting the face pixel region, the mask information and the positioning window on the image according to the first deep learning model, wherein the mask information and the positioning window are used for positioning the face pixel region on the image;
extracting a coding pixel region on the image according to the second deep learning model;
extracting the texture feature symbol in the face pixel region according to the third deep learning model;
extracting the coding feature symbol in the coding pixel region according to the third deep learning model;
wherein, on the image, the coding pixel region is distributed within the face pixel region; the region feature information further comprises the coding feature symbol used for identifying the coding pixel region; the coding pixel region is a region comprising all pixel points corresponding to a bar code and/or a two-dimensional code, and the coding feature symbol comprises the number of all bar code pixel points and/or the number of all two-dimensional code pixel points.
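Claim 1 defines the coding feature symbol as the count of bar code and/or two-dimensional code pixel points inside the coding pixel region. A minimal sketch of that counting step, assuming the second deep learning model outputs one boolean segmentation mask per code type (the mask shapes and values below are purely illustrative, not part of the claims):

```python
import numpy as np

def coding_feature_symbol(barcode_mask: np.ndarray, qr_mask: np.ndarray) -> dict:
    """Count the pixels belonging to each code type, as claim 1 defines the
    coding feature symbol (hypothetical inputs: boolean masks produced by
    the second deep learning model)."""
    return {
        "barcode_pixels": int(barcode_mask.sum()),
        "qr_pixels": int(qr_mask.sum()),
    }

# toy 8x8 masks standing in for real segmentation output
barcode = np.zeros((8, 8), dtype=bool)
barcode[2:4, 1:7] = True          # 2 rows x 6 cols = 12 pixels
qr = np.zeros((8, 8), dtype=bool)
qr[5:8, 5:8] = True               # 3x3 block = 9 pixels

symbol = coding_feature_symbol(barcode, qr)
print(symbol)  # {'barcode_pixels': 12, 'qr_pixels': 9}
```

Downstream, this pixel-count pair narrows the template search before any pixel-level matching is attempted.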
2. The image-based product sorting method according to claim 1, wherein the third deep learning model comprises an image contrast enhancement sub-model and a barcode decoding sub-model; extracting the coding feature symbol in the coding pixel region according to the third deep learning model specifically comprises:
enhancing the coding pixel region according to the image contrast enhancement sub-model to obtain a coding enhancement region;
and decoding the coding enhancement region according to the barcode decoding sub-model to obtain the coding feature symbol.
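Claim 2's contrast enhancement sub-model is a trained network; a simple stand-in that conveys the same intent — spreading a low-contrast coding region across the full grayscale range so bars and spaces separate cleanly before decoding — is a min-max stretch (illustrative only, not the patented sub-model):

```python
import numpy as np

def enhance_contrast(region: np.ndarray) -> np.ndarray:
    """Min-max contrast stretch of a grayscale coding region to 0..255;
    an illustrative stand-in for the trained enhancement sub-model."""
    lo, hi = int(region.min()), int(region.max())
    if hi == lo:
        # flat region: nothing to stretch
        return np.zeros_like(region, dtype=np.uint8)
    stretched = (region.astype(np.float64) - lo) / (hi - lo) * 255.0
    return stretched.round().astype(np.uint8)

# a low-contrast 1-D "scanline" of a barcode: values huddled in 100..140
scanline = np.array([100, 140, 100, 140, 140, 100], dtype=np.uint8)
enhanced = enhance_contrast(scanline)
print(enhanced.tolist())  # [0, 255, 0, 255, 255, 0]
```

After a stretch like this, an off-the-shelf barcode decoder has a far easier job recovering the coding feature symbol.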
3. The image-based product sorting method according to claim 1 or 2, wherein searching out a unique feature template matched with the face pixel region from a preset feature template library according to the region feature information specifically comprises:
searching out a candidate feature template set matched with the texture feature symbol and the coding feature symbol from the preset feature template library;
and searching the unique feature template matched with the face pixel region from the candidate feature template set.
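The two-stage search of claim 3 — first narrow the preset library to a candidate set by the texture and coding feature symbols, then pick the single best match against the face pixel region — can be sketched as follows (the template record fields and the agreement score below are assumptions for illustration, not the patented matching criterion):

```python
def search_unique_template(templates, texture_sym, coding_sym, face_region):
    """Two-stage lookup: symbol filtering, then best-match selection."""
    # stage 1: cheap filtering by the two feature symbols
    candidates = [t for t in templates
                  if t["texture"] == texture_sym and t["coding"] == coding_sym]
    if not candidates:
        return None
    # stage 2: toy similarity -- fraction of positions where template
    # pattern and face region agree; the real system would use a
    # learned or correlation-based score
    def score(t):
        return sum(a == b for a, b in zip(t["pattern"], face_region)) / len(face_region)
    return max(candidates, key=score)

library = [
    {"id": "A", "texture": "t1", "coding": "c1", "pattern": [1, 0, 1, 1]},
    {"id": "B", "texture": "t1", "coding": "c1", "pattern": [0, 0, 1, 1]},
    {"id": "C", "texture": "t2", "coding": "c1", "pattern": [1, 0, 1, 1]},
]
best = search_unique_template(library, "t1", "c1", [1, 0, 1, 1])
print(best["id"])  # A
```

The point of the first stage is efficiency: the pixel-level comparison only runs over the handful of templates whose symbols already match.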
4. The image-based product sorting method according to any one of claims 1-2, wherein the key pose information comprises a center point three-dimensional coordinate and a rotation angle associated with the center point three-dimensional coordinate; positioning key pose information corresponding to the region feature information on the image according to the unique feature template, which specifically comprises:
extracting the mask information and the positioning window from the region feature information;
positioning the face pixel region on the image according to the mask information and the positioning window;
positioning the three-dimensional coordinate of the central point in the face pixel region according to the unique feature template;
and solving the rotation angle according to the three-dimensional coordinate of the central point.
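Claim 4 locates the center point and then solves the rotation angle. One common way to recover an in-plane angle for a segmented region — offered here only as an illustrative substitute for the template-based solver in the claim — is the principal-axis angle computed from second-order central moments of the region mask:

```python
import math
import numpy as np

def principal_angle(mask: np.ndarray) -> float:
    """In-plane orientation (degrees) of a binary region from its
    second-order central moments about the region's center point."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()          # region center point
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    return math.degrees(0.5 * math.atan2(2 * mu11, mu20 - mu02))

# a thin 45-degree diagonal stripe as the "face pixel region"
mask = np.eye(6, dtype=bool)
print(round(principal_angle(mask), 1))  # 45.0
```

The angle produced this way pairs naturally with the center point coordinate: together they give the grasp pose the sorting equipment needs.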
5. The image-based product sorting method according to any one of claims 1-2, wherein sorting the products from the product line according to the key pose information comprises:
acquiring a preset homogeneous transformation matrix;
respectively extracting a three-dimensional coordinate and a rotation angle of a central point from the key pose information;
carrying out coordinate transformation on the three-dimensional coordinate of the central point according to the preset homogeneous transformation matrix, to obtain coordinates in the coordinate system of the sorting equipment;
driving the sorting equipment to grab the product according to the coordinates in the sorting equipment coordinate system, so that the product is grabbed from the product line by the sorting equipment;
and driving the sorting equipment that grasps the product to rotate according to the rotation angle, so that the product is rotated by the sorting equipment through the rotation angle.
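The coordinate transformation in claim 5 maps the camera-frame center point into the sorting equipment's frame with the preset homogeneous transformation matrix. A minimal sketch (the matrix values below stand in for a real hand-eye calibration result and are purely illustrative):

```python
import numpy as np

def camera_to_sorter(point_cam, T):
    """Map a camera-frame point into the sorting equipment frame using a
    4x4 homogeneous transformation matrix T."""
    p = np.append(np.asarray(point_cam, dtype=float), 1.0)  # homogeneous form
    return (T @ p)[:3]

# example T: rotate 90 degrees about z, then translate by (100, 0, 50) mm
T = np.array([[0.0, -1.0, 0.0, 100.0],
              [1.0,  0.0, 0.0,   0.0],
              [0.0,  0.0, 1.0,  50.0],
              [0.0,  0.0, 0.0,   1.0]])
print(camera_to_sorter([10.0, 20.0, 30.0], T).tolist())  # [80.0, 10.0, 80.0]
```

Working in homogeneous coordinates lets one matrix multiplication apply both the rotation and the translation of the camera-to-robot calibration at once.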
6. An image-based product sorting system, comprising: a vision subsystem and a sorting subsystem coupled to the vision subsystem;
the vision subsystem is used for acquiring images of products on a product line;
the sorting subsystem is used for acquiring the images; respectively identifying a face pixel region and region feature information from the image; searching out a unique feature template matched with the face pixel region from a preset feature template library according to the region feature information; positioning key pose information corresponding to the region feature information on the image according to the unique feature template; and sorting out the products from the product line according to the key pose information;
the region feature information comprises mask information, a positioning window, a texture feature symbol and a coding feature symbol; the sorting subsystem is specifically configured to:
respectively acquiring a pre-trained first deep learning model, a pre-trained second deep learning model and a pre-trained third deep learning model;
respectively extracting the face pixel region, the mask information and the positioning window on the image according to the first deep learning model, wherein the mask information and the positioning window are used for positioning the face pixel region on the image;
extracting a coding pixel region on the image according to the second deep learning model;
extracting the texture feature symbol in the face pixel region according to the third deep learning model;
extracting the coding feature symbol in the coding pixel region according to the third deep learning model;
wherein, on the image, the coding pixel region is distributed within the face pixel region; the region feature information further comprises the coding feature symbol used for identifying the coding pixel region; the coding pixel region is a region comprising all pixel points corresponding to a bar code and/or a two-dimensional code, and the coding feature symbol comprises the number of all bar code pixel points and/or the number of all two-dimensional code pixel points.
7. The image-based product sorting system according to claim 6, wherein the key pose information comprises a center point three-dimensional coordinate and a rotation angle associated with the center point three-dimensional coordinate; the sorting subsystem is further specifically configured to:
extracting the mask information and the positioning window from the region feature information;
positioning the face pixel region on the image according to the mask information and the positioning window;
positioning the three-dimensional coordinate of the central point in the face pixel region according to the unique feature template;
and solving the rotation angle according to the three-dimensional coordinate of the central point.
8. An intelligent sorting robot, characterized in that the robot is configured to store at least one instruction, program, code set, or instruction set, which is loaded and executed to implement the steps of the image-based product sorting method according to any one of claims 1 to 5.
CN201911013064.4A 2019-10-23 2019-10-23 Product sorting method, product sorting system and intelligent sorting robot Active CN111167731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911013064.4A CN111167731B (en) 2019-10-23 2019-10-23 Product sorting method, product sorting system and intelligent sorting robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911013064.4A CN111167731B (en) 2019-10-23 2019-10-23 Product sorting method, product sorting system and intelligent sorting robot

Publications (2)

Publication Number Publication Date
CN111167731A CN111167731A (en) 2020-05-19
CN111167731B true CN111167731B (en) 2022-11-08

Family

ID=70624570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911013064.4A Active CN111167731B (en) 2019-10-23 2019-10-23 Product sorting method, product sorting system and intelligent sorting robot

Country Status (1)

Country Link
CN (1) CN111167731B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111712A (en) * 2021-03-11 2021-07-13 稳健医疗用品股份有限公司 AI identification positioning method, system and device for bagged product
CN114653629B (en) * 2022-02-23 2025-01-21 深圳市如本科技有限公司 Sorting method based on visual recognition, intelligent sorting system and readable storage medium
CN117358622B (en) * 2023-12-08 2024-04-16 格力大松(宿迁)生活电器有限公司 Method, device and system for detecting indoor and outdoor units of air conditioner

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103889879A (en) * 2011-10-19 2014-06-25 克朗设备公司 Identifying, matching and tracking multiple objects in a sequence of images
CN107150032A * 2016-03-04 2017-09-12 上海电气集团股份有限公司 Workpiece identification and sorting device and method based on multiple image acquisition devices
CN108480227A * 2018-03-30 2018-09-04 烟台维度机器人有限公司 Apple sorting method and system based on image recognition
CN108748146A * 2018-05-30 2018-11-06 武汉库柏特科技有限公司 Robot hand-eye calibration method and system
CN109701901A * 2018-12-30 2019-05-03 杭州翰融智能科技有限公司 Material sorting system and method

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
US6701001B1 (en) * 2000-06-20 2004-03-02 Dunkley International, Inc. Automated part sorting system
US7403872B1 (en) * 2007-04-13 2008-07-22 Gii Acquisition, Llc Method and system for inspecting manufactured parts and sorting the inspected parts
US8774455B2 (en) * 2011-03-02 2014-07-08 Raf Technology, Inc. Document fingerprinting
CN105654092B * 2015-11-25 2019-08-30 小米科技有限责任公司 Feature extraction method and device
CN106874914B * 2017-01-12 2019-05-14 华南理工大学 Visual servo control method for industrial robotic arm based on deep convolutional neural networks
US10376927B2 (en) * 2017-11-15 2019-08-13 Darren Davison Object sorting devices
CN108555908B (en) * 2018-04-12 2020-07-28 同济大学 A method for gesture recognition and picking of stacked workpieces based on RGBD cameras
CN110125036B (en) * 2019-04-25 2020-12-22 广东工业大学 A self-identification sorting method based on template matching
CN110232710B (en) * 2019-05-31 2021-06-11 深圳市皕像科技有限公司 Article positioning method, system and equipment based on three-dimensional camera
CN110302981B * 2019-06-17 2021-03-23 华侨大学 Online grabbing method and system for solid waste sorting

Non-Patent Citations (1)

Title
A Brief Review of Passive Image Forensics Techniques; Lin Xiang et al.; Engineering; 2018-02-15 (No. 01); 66-89 *

Also Published As

Publication number Publication date
CN111167731A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
US11049278B2 (en) System and method for visual identification, and system and method for classifying and sorting
CN111523610B (en) Article identification method for efficient labeling of samples
EP3975123A1 (en) Map constructing method, positioning method and system, wireless communication terminal, and computer-readable medium
EP3499414B1 (en) Lightweight 3d vision camera with intelligent segmentation engine for machine vision and auto identification
US7831098B2 (en) System and method for visual searching of objects using lines
CN111167731B (en) Product sorting method, product sorting system and intelligent sorting robot
CN111523545B (en) Article searching method combined with depth information
CN103530590A (en) DPM (direct part mark) two-dimensional code recognition system
CN117292193B (en) Multi-station intelligent logistics conveying system
Kaymak et al. Implementation of object detection and recognition algorithms on a robotic arm platform using raspberry pi
Lamon et al. Environmental modeling with fingerprint sequences for topological global localization
CN112149690A (en) A traceability method and traceability system based on biological image feature recognition
CN107490346A Vision-based three-dimensional measurement modeling method for RFID multi-tag networks
Streiff et al. 3D3L: Deep learned 3D keypoint detection and description for LiDARs
CN113989604A (en) A tire DOT information recognition method based on end-to-end deep learning
CN109993715A Robot vision image preprocessing system and image processing method
CN117870570A (en) Automatic marking method, system, equipment and medium for three-dimensional measurement out-of-tolerance range
CN107291813B (en) Example searching method based on semantic segmentation scene
CN107895166B Robust object recognition method based on geometric hashing with feature descriptors
CN114332062B (en) Loop detection method and device, loop detection robot, and storage medium
Yuan et al. Fast QR code detection based on BING and AdaBoost-SVM
CN115810197A (en) Multi-mode electric power form recognition method and device
Zhao et al. Visual odometry-A review of approaches
Li et al. Online workpieces recognition for the robotic spray-painting production line with a low-cost RGB-D camera
CN109359705B (en) System and method for searching misplaced books

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant