
CN110796669A - Vertical frame positioning method and equipment - Google Patents


Info

Publication number
CN110796669A
CN110796669A
Authority
CN
China
Prior art keywords
vertical frame
detection area
frame detection
mobile phone
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911033630.8A
Other languages
Chinese (zh)
Inventor
徐鹏
沈圣远
常树林
姚巨虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yueyi Network Information Technology Co Ltd
Original Assignee
Shanghai Yueyi Network Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yueyi Network Information Technology Co Ltd filed Critical Shanghai Yueyi Network Information Technology Co Ltd
Priority to CN201911033630.8A priority Critical patent/CN110796669A/en
Publication of CN110796669A publication Critical patent/CN110796669A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The method comprises: obtaining an appearance image of a mobile phone to be detected; performing vertical frame detection on the appearance image to obtain a vertical frame detection area; sequentially performing pixel expansion and pixel segmentation on the vertical frame detection area to obtain a segmented vertical frame detection area; and performing pixel clustering on the segmented vertical frame detection area to obtain the frame position of the vertical frame of the mobile phone. By detecting, segmenting and aggregating the region of the mobile phone with deep learning and filtering out image information outside the vertical frame, the vertical frame of the mobile phone is accurately positioned, which helps reduce the misidentification rate of defects in the vertical frame area.

Description

Vertical frame positioning method and equipment
Technical Field
The present application relates to the field of computers, and in particular, to a vertical frame positioning method and device.
Background
In the prior art, vertical frame positioning is mainly based on traditional image algorithms: fine positioning of the vertical frame is obtained through color space conversion, filtering, edge extraction and region aggregation. Because traditional image processing depends heavily on threshold selection, and second-hand mobile phones differ to varying degrees in color, appearance, degree of aging and so on, a fixed threshold is difficult to specify. Traditional image processing is therefore ill-suited to inspecting second-hand mobile phones, and problems such as inaccurate detection of the phone frame arise. How to overcome this inaccuracy, better accommodate the variability of second-hand mobile phones, and realize positioning detection of the second-hand mobile phone frame is thus a direction the industry needs to research.
Disclosure of Invention
An object of the present application is to provide a method and an apparatus for positioning a vertical frame, so as to solve the problem of inaccurate positioning of the vertical frame in the prior art.
According to one aspect of the application, the vertical frame positioning method comprises the following steps:
acquiring an appearance image of a mobile phone to be detected;
carrying out vertical frame detection on the appearance image to obtain a vertical frame detection area;
sequentially carrying out pixel expansion and pixel segmentation on the vertical frame detection area to obtain a segmented vertical frame detection area;
and carrying out pixel clustering on the divided vertical frame detection area to obtain the frame position of the vertical frame of the mobile phone.
Further, in the vertical frame positioning method, performing vertical frame detection on the appearance image to obtain a vertical frame detection area, including:
acquiring a vertical frame detection model, wherein the vertical frame detection model is determined by the residual network resnet50;
and performing vertical frame detection on the appearance image based on the vertical frame detection model to obtain a vertical frame detection area.
Further, in the vertical frame positioning method, sequentially performing pixel expansion and pixel segmentation on the vertical frame detection area to obtain a segmented vertical frame detection area, including:
and expanding a preset number of pixels outwards around the vertical frame detection area, and performing pixel segmentation on the expanded vertical frame detection area through a convolutional neural network U-net to obtain the segmented vertical frame detection area.
Further, in the vertical frame positioning method, performing pixel clustering on the divided vertical frame detection area to obtain the frame position of the vertical frame of the mobile phone, includes:
performing pixel clustering on all pixel points in the divided vertical frame detection area, and connecting all points corresponding to the vertical frame in the divided vertical frame detection area after clustering together to obtain the area of the vertical frame of the mobile phone;
and intercepting the maximum external rectangle of the area of the vertical frame of the mobile phone to obtain the frame position of the vertical frame of the mobile phone.
Further, in the vertical frame positioning method, performing pixel clustering on all pixel points in the divided vertical frame detection area includes:
judging whether pixel points of the divided vertical frame detection area are in the vertical frame of the mobile phone or not;
if yes, reserving corresponding pixel points and pixel values thereof in the divided vertical frame detection area;
if not, setting the corresponding pixel points in the divided vertical frame detection area to be black.
Further, in the vertical frame positioning method, the preset number of pixels is 100 pixels.
According to another aspect of the present application, there is also provided a computer readable medium having computer readable instructions stored thereon, which, when executed by a processor, cause the processor to implement the method of any one of the above.
According to another aspect of the present application, there is also provided a vertical frame positioning apparatus including:
one or more processors;
a computer-readable medium for storing one or more computer-readable instructions,
when executed by the one or more processors, cause the one or more processors to implement a method as in any one of the above.
Compared with the prior art, the present application obtains an appearance image of a mobile phone to be detected; performs vertical frame detection on the appearance image to obtain a vertical frame detection area; sequentially performs pixel expansion and pixel segmentation on the vertical frame detection area to obtain a segmented vertical frame detection area; and performs pixel clustering on the segmented vertical frame detection area to obtain the frame position of the vertical frame of the mobile phone. By detecting, segmenting and aggregating the region of the mobile phone with deep learning and filtering out image information outside the vertical frame, the vertical frame of the mobile phone is accurately positioned, which helps reduce the misidentification rate of defects in the vertical frame area.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow diagram of a vertical frame positioning method in accordance with an aspect of the subject application;
FIG. 2 illustrates an appearance image of a mobile phone to be tested in a vertical bezel positioning method according to an aspect of the present application;
FIG. 3 illustrates a schematic diagram of a vertical frame detection area in a vertical frame positioning method according to an aspect of the subject application;
FIG. 4 is a schematic diagram illustrating an enlarged vertical frame detection area of pixels in a vertical frame positioning method according to an aspect of the present application;
FIG. 5 illustrates a schematic diagram of a partitioned vertical frame detection area in a vertical frame positioning method according to an aspect of the subject application;
the same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in computer readable media, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media does not include transitory media, such as modulated data signals and carrier waves.
Fig. 1 is a schematic flowchart of a vertical frame positioning method according to an aspect of the present application, applied to the process of positioning the vertical frame of a mobile phone. The method includes steps S11 through S14, specifically:
step S11, acquiring an appearance image of the mobile phone to be detected; here, the mobile phone may be a used mobile phone, a mobile phone returned to a factory due to a problem of factory quality, a recycled mobile phone, or the like, and the used mobile phone may have various differences in various aspects such as color, appearance, aging degree, and the like. The appearance image of the mobile phone can include, but is not limited to, images for showing the appearance of the mobile phone, such as a front view, a left view, a right view, a rear view, a top view, and a bottom view of the mobile phone.
Step S12, carrying out vertical frame detection on the appearance image to obtain a vertical frame detection area; here, the purpose of the vertical frame detection is to detect a vertical frame detection area, which is an approximate area where the vertical frame is located, and after the vertical frame detection, coordinates of different pixel points corresponding to the vertical frame detection area, a confidence degree corresponding to the vertical frame detection area, and the like can be obtained.
Step S13, sequentially performing pixel expansion and pixel segmentation on the vertical frame detection area to obtain a segmented vertical frame detection area. First, pixel expansion processing is performed on the vertical frame detection area to obtain a pixel-expanded vertical frame detection area, preparing it for pixel segmentation; the pixel-expanded vertical frame detection area is then subjected to pixel segmentation processing to obtain the segmented vertical frame detection area. After pixel segmentation, it can be determined intuitively whether each pixel in the vertical frame detection area belongs to the vertical frame area.
And step S14, carrying out pixel clustering on the divided vertical frame detection area to obtain the frame position of the vertical frame of the mobile phone.
The steps S11 to S14 are performed to detect, segment, and aggregate the regions of the mobile phone in a deep learning manner, and filter out image information outside the vertical frame, so as to realize accurate positioning of the vertical frame of the mobile phone, and facilitate reduction of error recognition rate of defects in the vertical frame region.
For example, first, an appearance image A of a mobile phone to be detected is obtained as shown in fig. 2, where the mobile phone to be detected may be a second-hand mobile phone with appearance defects. Then, vertical frame detection is performed on the appearance image to obtain the vertical frame detection area A1 shown in fig. 3; the purpose of the vertical frame detection is to detect the vertical frame detection area A1, i.e. the approximate area where the vertical frame is located, and the detection result includes the coordinates of the pixel points corresponding to the vertical frame detection area, the confidence score corresponding to the detection area, and the like. Then, pixel expansion processing is performed on the vertical frame detection area A1 to obtain the pixel-expanded vertical frame detection area A2 shown in fig. 4, so that pixel segmentation can subsequently be performed on it; the pixel-expanded vertical frame detection area A2 is then subjected to pixel segmentation processing to obtain the segmented vertical frame detection area A3 shown in fig. 5. After pixel segmentation of the pixel-expanded vertical frame detection area A2, it can be determined whether each pixel in the vertical frame detection area belongs to the vertical frame area. Finally, pixel clustering is performed on the segmented vertical frame detection area A3 to obtain the frame position D of the vertical frame of the mobile phone, whose four coordinates are respectively (x1, y1), (x1, y2), (x2, y1) and (x2, y2). The vertical frame of the mobile phone can thereby be accurately positioned, which helps reduce the misidentification rate of defects in the vertical frame area.
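The four stages just walked through can be sketched end to end. This is a minimal toy sketch, not the patent's implementation: the threshold-based detector and segmenter stand in for the resnet50 and U-net models, images are nested lists of grayscale values, and all function names are hypothetical.

```python
# Toy sketch of steps S11-S14. The detector/segmenter below are
# illustrative threshold stand-ins, NOT the resnet50 / U-net models
# described in the patent text.

def detect_frame_region(image, threshold=128):
    """Step S12 stand-in: coarse region = bounding box of bright pixels."""
    coords = [(x, y) for y, row in enumerate(image)
              for x, v in enumerate(row) if v >= threshold]
    xs, ys = [x for x, _ in coords], [y for _, y in coords]
    return min(xs), min(ys), max(xs), max(ys)

def expand_region(box, margin, width, height):
    """Step S13a: grow the box by `margin` pixels, clamped to the image."""
    x1, y1, x2, y2 = box
    return (max(0, x1 - margin), max(0, y1 - margin),
            min(width - 1, x2 + margin), min(height - 1, y2 + margin))

def segment_pixels(image, box, threshold=128):
    """Step S13b stand-in: keep frame pixels inside the box, black out the rest."""
    x1, y1, x2, y2 = box
    return [[v if (x1 <= x <= x2 and y1 <= y <= y2 and v >= threshold) else 0
             for x, v in enumerate(row)] for y, row in enumerate(image)]

def frame_position(segmented):
    """Step S14: bounding rectangle of the surviving (non-black) pixels."""
    coords = [(x, y) for y, row in enumerate(segmented)
              for x, v in enumerate(row) if v > 0]
    xs, ys = [x for x, _ in coords], [y for _, y in coords]
    return min(xs), min(ys), max(xs), max(ys)
```

Chaining the four calls on a toy image yields the (x1, y1, x2, y2) frame position the text denotes by D.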
Next to the foregoing embodiment of the present application, the step S12 performs vertical frame detection on the appearance image to obtain a vertical frame detection area, including:
obtaining a vertical frame detection model, wherein the vertical frame detection model is determined by the residual network resnet50.
And performing vertical frame detection on the appearance image based on the vertical frame detection model to obtain a vertical frame detection area.
For example, obtaining the vertical frame detection model established based on resnet50 may specifically include the following. First, resnet50 is improved: after pruning based on resnet50, 2 convolution kernels replace 1 convolution kernel inside each residual network block. Secondly, at least one training appearance image of a recycled mobile phone is acquired, i.e., manually annotated training appearance images of 1000 mobile phones. Then, vertical frame detection prediction is performed for each mobile phone with the improved resnet50 to obtain the prediction result T of the vertical frame detection area of the vertical frame indicated by each phone's training appearance image; the corresponding real result S is obtained at the same time, and the difference V between the prediction result T and the real result S is calculated for each phone. Next, the difference is input into the vertical frame detection model M established based on the improved resnet50 and the parameters of M are adjusted, realizing continuous training and optimization of the model, which makes it better at obtaining deeper features when performing vertical frame detection on the appearance image of the mobile phone. Finally, vertical frame detection is performed on the appearance image based on the vertical frame detection model M to obtain the vertical frame detection area A1 shown in FIG. 3, which helps achieve accurate positioning of the vertical frame.
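The text does not specify how the difference V between the prediction result T and the real result S is computed. One common metric for comparing two rectangular detection areas is intersection-over-union (IoU); the sketch below uses that metric under that assumption, with boxes as (x1, y1, x2, y2) tuples.

```python
def box_iou(t, s):
    """Intersection-over-union of two boxes (x1, y1, x2, y2).

    One plausible form of the difference measure V between a predicted
    area T and a ground-truth area S; the patent does not name the
    metric, so this is an illustrative assumption."""
    ix1, iy1 = max(t[0], s[0]), max(t[1], s[1])
    ix2, iy2 = min(t[2], s[2]), min(t[3], s[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(t) + area(s) - inter
    return inter / union if union else 0.0
```

An IoU of 1.0 means the predicted and real areas coincide; values near 0 would drive the parameter adjustment described above.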
Next, in the foregoing embodiment of the present application, the step S13 sequentially performs pixel expansion and pixel division on the vertical frame detection area to obtain a divided vertical frame detection area, including:
and expanding a preset number of pixels outwards around the vertical frame detection area, and performing pixel segmentation on the expanded vertical frame detection area through a convolutional neural network U-net to obtain the segmented vertical frame detection area. Here, the predetermined number of pixels may be any number of pixels, and in a preferred embodiment of an aspect of the present application, the predetermined number of pixels may be preferably 100 pixels.
For example, with the preset number set to 100, 100 pixels are expanded outwards around the vertical frame detection area A1 shown in fig. 3 to obtain the pixel-expanded vertical frame detection area A2 shown in fig. 4; pixel segmentation is then performed on A2 through the convolutional neural network U-net to obtain the segmented vertical frame detection area A3 shown in fig. 5, which helps achieve accurate positioning of the subsequent vertical frame. The image output by U-net pixel segmentation has the same size as the image before segmentation, and for each pixel point it can be determined whether it belongs to the target detection object: if so, the pixel value of the point is retained; if not, the pixel value is set to zero, i.e., the point is set to black. For example, for an input original image of 128 × 128, feature maps of successively reduced resolution, e.g. 64 × 64, 32 × 32 and 16 × 16, are obtained through multiple layers of convolution; successive upsampling then restores 32 × 32, 64 × 64 and 128 × 128 images, and iteration through a loss function makes the position of the vertical frame (with non-frame positions black) clearly distinguishable in the final 128 × 128 image. Because the 128 × 128 input image is consistent in size with the 128 × 128 output image, the convolutional neural network U-net is mainly used to confirm whether each pixel point in the image to be processed belongs to the target detection object (for example, the position of the vertical frame).
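The per-pixel rule described above — keep the value of a pixel the segmenter assigns to the target, otherwise set it to black — can be sketched as a simple mask application. The nested-list image representation and the function name are illustrative; this is not the U-net itself, only the rule applied to its output mask.

```python
def apply_segmentation(image, mask):
    """Apply a binary segmentation mask to a same-sized grayscale image:
    keep the pixel value where mask is truthy (pixel belongs to the frame),
    set it to 0 (black) otherwise."""
    return [[v if m else 0 for v, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]
```

Note the output has the same size as the input, matching the U-net property the text emphasizes.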
Next, in the foregoing embodiment of the present application, the step S14 performing pixel clustering on the divided vertical frame detection area to obtain a frame position of the vertical frame of the mobile phone includes:
and performing pixel clustering on all pixel points in the divided vertical frame detection area, and connecting all points corresponding to the vertical frame in the divided vertical frame detection area after clustering together to obtain the area of the vertical frame of the mobile phone. And intercepting the maximum external rectangle of the area of the vertical frame of the mobile phone to obtain the frame position of the vertical frame of the mobile phone, thereby realizing the accurate positioning of the vertical frame of the mobile phone and being beneficial to reducing the error recognition rate of the defects of the area of the vertical frame.
For example, suppose all the pixel points in the divided vertical frame detection area A3 shown in fig. 5 are a1, a2, a3, a4, a5, ..., an. Pixel clustering is performed on all of them, and all the points corresponding to the vertical frame in the clustered divided vertical frame detection area are connected together to obtain the area A4 of the vertical frame of the mobile phone; the maximum external rectangle of the area A4 is then intercepted to obtain the frame position D of the vertical frame of the mobile phone, thereby realizing accurate positioning of the vertical frame of the mobile phone and helping reduce the misidentification rate of defects in the vertical frame area.
Further, the pixel clustering of all the pixel points in the divided vertical frame detection area includes:
judging whether pixel points of the divided vertical frame detection area are in the vertical frame of the mobile phone or not;
if yes, reserving corresponding pixel points and pixel values thereof in the divided vertical frame detection area;
if not, setting the corresponding pixel points in the divided vertical frame detection area to be black, namely setting the pixel values of the pixel points in the area of the non-vertical frame to be 0.
For example, when pixel clustering is performed on all the pixel points a1, a2, a3, a4, a5, ..., an in the divided vertical frame detection area A3 shown in fig. 5, it is determined whether each pixel point a1, a2, a3, a4, a5, ..., an is within the vertical frame of the mobile phone. If so, the corresponding pixel point and its pixel value in the divided vertical frame detection area are retained; if not, the corresponding pixel point is set to black, i.e., its pixel value is set to 0. The pixel points of the non-vertical-frame region, e.g. a2, a4, a5, a7, ..., an-1, are set to black. Then, all the points corresponding to the vertical frame in the clustered divided vertical frame detection area A3 shown in fig. 5 are connected together to obtain the area A4 of the vertical frame of the mobile phone. Finally, the maximum external rectangle of the area A4 of the vertical frame of the mobile phone is intercepted by calling a related function in the Open Source Computer Vision Library (OpenCV) to obtain the frame position D of the vertical frame of the mobile phone; the four corner values of the intercepted maximum external rectangle are x1, x2, y1 and y2, so the four coordinates corresponding to the frame position D are respectively (x1, y1), (x1, y2), (x2, y1) and (x2, y2). The vertical frame of the mobile phone can thereby be accurately positioned, which helps reduce the misidentification rate of defects in the vertical frame area.
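The connect-then-bound step above can be sketched in pure Python: a BFS pass groups the retained (non-black) pixels into connected components, and the bounding rectangle of the largest component stands in for the OpenCV maximum-external-rectangle call the text mentions. The 4-connectivity and "largest component = the frame" choices are illustrative assumptions, not the patent's specification.

```python
from collections import deque

def largest_component_rect(mask):
    """Group non-black pixels of a 2D 0/1 mask into 4-connected components
    and return the bounding rectangle (x1, y1, x2, y2) of the largest one.

    A pure-Python stand-in for the OpenCV bounding-rectangle call;
    4-connectivity and largest-component selection are assumptions."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = None
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                comp, q = [], deque([(x, y)])  # BFS over one component
                seen[y][x] = True
                while q:
                    cx, cy = q.popleft()
                    comp.append((cx, cy))
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] \
                                and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((nx, ny))
                if best is None or len(comp) > len(best):
                    best = comp
    xs, ys = [p[0] for p in best], [p[1] for p in best]
    return min(xs), min(ys), max(xs), max(ys)
```

The returned (x1, y1, x2, y2) corresponds to the four corner values from which the coordinates (x1, y1), (x1, y2), (x2, y1), (x2, y2) of the frame position D are formed.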
According to another aspect of the present application, there is also provided a computer readable medium having stored thereon computer readable instructions, which, when executed by a processor, cause the processor to implement the vertical frame positioning method as described above.
According to another aspect of the present application, there is also provided a vertical bezel positioning apparatus, comprising:
one or more processors;
a computer-readable medium for storing one or more computer-readable instructions,
when executed by the one or more processors, cause the one or more processors to implement the vertical frame positioning method as described above.
Here, for details of each embodiment of the device, reference may be made to the corresponding parts of the embodiments of the vertical frame positioning method, which are not repeated here.
In a practical application scenario of the vertical frame positioning method provided by the application, during the recycling and inspection of a second-hand mobile phone, the appearance image A of the mobile phone to be detected is first obtained as shown in fig. 2; the mobile phone to be detected may be a second-hand mobile phone with appearance defects.
Then, obtaining the vertical frame detection model established based on resnet50 may specifically include the following. First, resnet50 is improved: after pruning based on resnet50, 2 convolution kernels replace 1 convolution kernel inside each residual network block. Secondly, at least one training appearance image of a recycled mobile phone is acquired, i.e., manually annotated training appearance images of 1000 mobile phones. Then, vertical frame detection prediction is performed for each mobile phone with the improved resnet50 to obtain the prediction result T of the vertical frame detection area of the vertical frame indicated by each phone's training appearance image; the corresponding real result S is obtained at the same time, and the difference V between the prediction result T and the real result S is calculated for each phone. Next, the difference is input into the vertical frame detection model M established based on the improved resnet50 and the parameters of M are adjusted, realizing continuous training and optimization of the model, which makes it better at obtaining deeper features when performing vertical frame detection on the appearance image of the mobile phone. Finally, vertical frame detection is performed on the appearance image based on the vertical frame detection model M to obtain the vertical frame detection area A1 shown in FIG. 3, which helps achieve accurate positioning of the vertical frame.
Then, the preset number of pixel expansion is set to 100, that is, 100 pixels are expanded outwards around the vertical frame detection area a1 shown in fig. 3, so as to obtain a pixel expanded vertical frame detection area a2 shown in fig. 4, and the pixel segmentation is performed on the expanded vertical frame detection area a2 through a convolutional neural network U-net, so as to obtain a segmented vertical frame detection area A3 shown in fig. 5, which is beneficial to achieving accurate positioning of a subsequent vertical frame.
Then, pixel clustering is performed on all the pixel points a1, a2, a3, a4, a5, ..., an in the divided vertical frame detection area A3 shown in fig. 5. It is determined whether each pixel point a1, a2, a3, a4, a5, ..., an is within the vertical frame of the mobile phone; if so, the corresponding pixel point and its pixel value in the divided vertical frame detection area are retained; if not, the corresponding pixel point is set to black, i.e., its pixel value is set to 0. The pixel points of the non-vertical-frame region, e.g. a2, a4, a5, a7, ..., an-1, are set to black. Then, all the points corresponding to the vertical frame in the clustered divided vertical frame detection area A3 shown in fig. 5 are connected together to obtain the area A4 of the vertical frame of the mobile phone.
Finally, the maximum external rectangle of the area A4 of the vertical frame of the mobile phone is intercepted by calling a related function in the Open Source Computer Vision Library (OpenCV) to obtain the frame position D of the vertical frame of the mobile phone. The four corner values of the intercepted maximum external rectangle are x1, x2, y1 and y2, so the four coordinates corresponding to the frame position D are respectively (x1, y1), (x1, y2), (x2, y1) and (x2, y2). The vertical frame of the mobile phone can thereby be accurately positioned, which helps reduce the misidentification rate of defects in the vertical frame area.
In summary, the appearance image of the mobile phone to be detected is acquired; vertical frame detection is performed on the appearance image to obtain a vertical frame detection area; pixel expansion and pixel segmentation are performed in sequence on the vertical frame detection area to obtain a segmented vertical frame detection area; and pixel clustering is performed on the segmented vertical frame detection area to obtain the frame position of the vertical frame of the mobile phone. By detecting, segmenting and clustering the mobile phone region through deep learning and filtering out the image information outside the vertical frame, the vertical frame of the mobile phone is accurately positioned, which helps reduce the false identification rate of defects in the vertical frame area.
It should be noted that the present application may be implemented in software and/or in a combination of software and hardware, for example, using Application Specific Integrated Circuits (ASICs), a general purpose computer or any other similar hardware device. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM, a magnetic or optical drive, or a diskette. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform the various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (8)

1. A method for positioning a vertical frame, the method comprising:
acquiring an appearance image of a mobile phone to be detected;
carrying out vertical frame detection on the appearance image to obtain a vertical frame detection area;
sequentially carrying out pixel expansion and pixel segmentation on the vertical frame detection area to obtain a segmented vertical frame detection area;
and carrying out pixel clustering on the divided vertical frame detection area to obtain the frame position of the vertical frame of the mobile phone.
2. The method of claim 1, wherein performing vertical frame detection on the appearance image to obtain a vertical frame detection area comprises:
acquiring a vertical frame detection model, wherein the vertical frame detection model is determined by a residual network resnet50;
and performing vertical frame detection on the appearance image based on the vertical frame detection model to obtain a vertical frame detection area.
3. The method of claim 2, wherein performing pixel expansion and pixel segmentation on the vertical frame detection area in sequence to obtain a segmented vertical frame detection area comprises:
and expanding a preset number of pixels outwards around the vertical frame detection area, and performing pixel segmentation on the expanded vertical frame detection area through a convolutional neural network U-net to obtain the segmented vertical frame detection area.
4. The method of claim 3, wherein performing pixel clustering on the divided vertical frame detection area to obtain a frame position of the vertical frame of the mobile phone comprises:
performing pixel clustering on all pixel points in the divided vertical frame detection area, and connecting all points corresponding to the vertical frame in the divided vertical frame detection area after clustering together to obtain the area of the vertical frame of the mobile phone;
and intercepting the maximum external rectangle of the area of the vertical frame of the mobile phone to obtain the frame position of the vertical frame of the mobile phone.
5. The method of claim 4, wherein pixel clustering all pixels in the partitioned vertical frame detection region comprises:
judging whether pixel points of the divided vertical frame detection area are in the vertical frame of the mobile phone or not;
if yes, reserving corresponding pixel points and pixel values thereof in the divided vertical frame detection area;
if not, setting the corresponding pixel points in the divided vertical frame detection area to be black.
6. The method according to any one of claims 3 to 5, wherein the predetermined number of pixels is 100 pixels.
7. A computer readable medium having computer readable instructions stored thereon, which, when executed by a processor, cause the processor to implement the method of any one of claims 1 to 6.
8. An apparatus for positioning a vertical frame, the apparatus comprising:
one or more processors;
a computer-readable medium for storing one or more computer-readable instructions,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
CN201911033630.8A 2019-10-28 2019-10-28 Vertical frame positioning method and equipment Pending CN110796669A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911033630.8A CN110796669A (en) 2019-10-28 2019-10-28 Vertical frame positioning method and equipment


Publications (1)

Publication Number Publication Date
CN110796669A true CN110796669A (en) 2020-02-14

Family

ID=69441663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911033630.8A Pending CN110796669A (en) 2019-10-28 2019-10-28 Vertical frame positioning method and equipment

Country Status (1)

Country Link
CN (1) CN110796669A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197492A (en) * 2019-05-23 2019-09-03 山东师范大学 A kind of cardiac MRI left ventricle dividing method and system
CN110287950A (en) * 2019-06-05 2019-09-27 北京字节跳动网络技术有限公司 Target detection and the training method of target detection model, device and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Kuang Gangyao et al.: "Theory, Algorithms and Applications of Synthetic Aperture Radar Target Detection", National University of Defense Technology Press *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11989710B2 (en) 2018-12-19 2024-05-21 Ecoatm, Llc Systems and methods for vending and/or purchasing mobile phones and other electronic devices
US11843206B2 (en) 2019-02-12 2023-12-12 Ecoatm, Llc Connector carrier for electronic device kiosk
US11798250B2 (en) 2019-02-18 2023-10-24 Ecoatm, Llc Neural network based physical condition evaluation of electronic devices, and associated systems and methods
US12223684B2 (en) 2019-02-18 2025-02-11 Ecoatm, Llc Neural network based physical condition evaluation of electronic devices, and associated systems and methods
CN111833300A (en) * 2020-06-04 2020-10-27 西安电子科技大学 A method and device for defect detection of composite material components based on generative adversarial learning
CN111833300B (en) * 2020-06-04 2023-03-14 西安电子科技大学 Composite material component defect detection method and device based on generation countermeasure learning
US11922467B2 (en) 2020-08-17 2024-03-05 ecoATM, Inc. Evaluating an electronic device using optical character recognition
US12033454B2 (en) 2020-08-17 2024-07-09 Ecoatm, Llc Kiosk for evaluating and purchasing used electronic devices
US12271929B2 (en) 2020-08-17 2025-04-08 Ecoatm Llc Evaluating an electronic device using a wireless charger
CN112819788A (en) * 2021-02-01 2021-05-18 上海悦易网络信息技术有限公司 Image stability detection method and device
CN112819788B (en) * 2021-02-01 2023-02-07 上海万物新生环保科技集团有限公司 Image stability detection method and device

Similar Documents

Publication Publication Date Title
CN110796669A (en) Vertical frame positioning method and equipment
CN110827244A (en) Method and equipment for detecting appearance flaws of electronic equipment
CN110827247B (en) Label identification method and device
CN110827249A (en) Electronic equipment backboard appearance flaw detection method and equipment
CN110827246A (en) Electronic equipment frame appearance flaw detection method and equipment
RU2541353C2 (en) Automatic capture of document with given proportions
CN111612781A (en) A screen defect detection method, device and head-mounted display device
CN111091123A (en) Text region detection method and equipment
CN111340752A (en) Screen detection method and device, electronic equipment and computer readable storage medium
CN111291661B (en) Method and equipment for identifying text content of icon in screen
CN110796646A (en) Method and device for detecting defects of screen area of electronic device
CN110675399A (en) Screen appearance flaw detection method and equipment
CN107292318A (en) Image significance object detection method based on center dark channel prior information
US9046496B2 (en) Capturing method for images with different view-angles and capturing system using the same
CN111210473A (en) Mobile phone contour positioning method and equipment
CN115880288B (en) Detection method, system and computer equipment for electronic element welding
CN111325717A (en) Mobile phone defect position identification method and equipment
CN112085022A (en) Method, system and equipment for recognizing characters
CN111046746A (en) License plate detection method and device
CN111462098A (en) Method, device, equipment and medium for detecting overlapping of shadow areas of object to be detected
CN111028195B (en) Example segmentation based redirected image quality information processing method and system
CN113573137A (en) Video canvas boundary detection method, system, terminal equipment and storage medium
CN118366167A (en) Character defect detection method and related equipment
CN112052859A (en) A method and device for precise positioning of license plate in free scene
CN111292374B (en) Method and equipment for automatically plugging and unplugging USB interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 1101-1103, No. 433, Songhu Road, Yangpu District, Shanghai

Applicant after: Shanghai wanwansheng Environmental Protection Technology Group Co.,Ltd.

Address before: Room 1101-1103, No. 433, Songhu Road, Yangpu District, Shanghai

Applicant before: SHANGHAI YUEYI NETWORK INFORMATION TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200214
