CN115439854B - Scanning display method based on interconnection of scanning pen and intelligent terminal - Google Patents
Scanning display method based on interconnection of scanning pen and intelligent terminal
- Publication number
- CN115439854B CN115439854B CN202211078719.8A CN202211078719A CN115439854B CN 115439854 B CN115439854 B CN 115439854B CN 202211078719 A CN202211078719 A CN 202211078719A CN 115439854 B CN115439854 B CN 115439854B
- Authority
- CN
- China
- Prior art keywords
- text
- data
- scanning
- image
- interconnection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/1444—Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/19007—Matching; Proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/26—Techniques for post-processing, e.g. correcting the recognition result
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to the field of scanning and recognition, and discloses a scanning display method based on the interconnection of a scanning pen and an intelligent terminal, which comprises the following steps: Step 1: establishing a database and importing text data into a terminal interconnection platform; Step 2: starting the scanning end and projecting a light-shadow coverage area onto the text region to be recognized; Step 3: manually adjusting the projection angle and pre-positioning; Step 4: setting a maximum text accommodation threshold; Step 5: adjusting the angle of the projection end until the shadow coverage reaches the maximum text accommodation threshold; Step 6: dividing the regions in which readable information exists and marking them; Step 7: judging whether to start association matching. By projecting light onto the text, the method performs range recognition and recognizes text data over a large area; the angle of the projection end is adjusted automatically according to the orientation of the text, the amount of text to be read can be set as needed, adjustment is flexible, manual handheld scanning is eliminated, interference factors introduced during manual operation are reduced, recognition accuracy is ensured, and the method has a good market prospect.
Description
Technical Field
The invention relates to the technical field of scanning and recognition, and in particular to a scanning display method based on the interconnection of a scanning pen and an intelligent terminal.
Background
The traditional scanner inputs an image of printed text into a computer by scanning and then uses OCR software to convert the image into text, but recognition accuracy is very limited because of the typesetting format and other factors. Scanning pens currently on the market fall mainly into two types. The first takes a bar code as the scanning object and identifies the type of article it represents by decoding the bar code; it is mainly used in warehouses, supermarkets and similar places. The second type of scanning pen consists of an optical system, a linear-array image sensor and a roller: during scanning, the continuous rotation of the roller turns one-dimensional scan data into two-dimensional image data, from which the scanned characters are recognized.
However, although the scanning pen of the prior art can scan printed text, its scanning speed is limited and it is inconvenient to use when a large area of text needs to be recognized. Because scanning is operated by hand, human factors easily introduce deviations during recognition, so the recognition result becomes inaccurate, and mechanical abrasion between the roller and the paper surface shortens the service life of the product.
Disclosure of Invention
(I) Technical problems to be solved
In view of the deficiencies of the prior art, the invention provides a scanning display method based on the interconnection of a scanning pen and an intelligent terminal, which effectively solves the following problems: the scanning pen of the prior art can scan printed characters, but its scanning speed is limited and it is inconvenient to use when a large area of text needs to be recognized; scanning is operated by hand, so human factors easily introduce deviations during recognition and make the result inaccurate; and mechanical abrasion between the roller and the paper surface shortens the service life of the product.
(II) Technical solution
To achieve the above object, the present invention is realized by the following technical solution:
the invention discloses a scanning display method based on interconnection of a scanning pen and an intelligent terminal, which comprises the following steps:
step1: establishing a database, and importing text data to a terminal interconnection platform;
step2: starting a scanning end, and projecting a light shadow coverage area in a text area to be identified;
step3: manually adjusting the projection angle and pre-positioning;
step4: setting a maximum text accommodation threshold;
step5: adjusting the angle of the projection end until the shadow coverage reaches a maximum text accommodation threshold;
step6: dividing an area in which readable information exists, and marking;
step7: judging whether to start association matching;
step8: if association matching is not started, acquiring image data of the marked area, processing it through a microprocessor, performing text reconstruction on the scan data, generating text data and submitting it to a display end;
step9: if association matching is started, automatically dividing paragraphs, extracting part of the characters, and matching them against the text data stored in the terminal interconnection platform;
step10: after error correction verification, selecting the portion of text content with the largest matching coefficient, pushing it at the display end, and synchronously submitting the content associated with the matched text;
the error correction verification process in Step10 includes the following steps:
step101: starting error correction verification, and receiving extracted data;
step102: judging whether the extracted data is consistent with the locally stored data;
step103: if consistent, continuing to operate according to the preset configuration;
step104: if not consistent, marking the text paragraphs in which the recognized text differs from the locally stored text;
step105: covering the differing original text paragraphs with the local data, and submitting the generated text to the display end as the priority recommendation;
step106: directly translating the differing original text paragraphs on the basis of the recognized image data, generating text as the secondary recommendation, and submitting it to the display end.
Furthermore, the database in Step1 serves as a local storage end for storing all recorded data and for remotely receiving and transmitting the data of the terminal interconnection platform.
Further, the adjustment of the projection angle in Step3 means adjusting, with the text region to be scanned as the reference, the irradiation angle of the optical component used for data recognition.
Still further, the maximum text accommodation threshold in Step4 can be set either automatically by the program or by manual editing.
Furthermore, in dividing the readable-information areas in Step6, the light-shadow coverage area is first recognized by the image sensor, the areas with readable features are marked, and the remaining areas are screened out. In the character recognition process, the image is binarized: a potential histogram of the scanned image is constructed, the threshold is calculated by potential-function clustering, and adaptive threshold segmentation is completed by combining a global threshold with a local threshold, so that the text background and the text characters are distinguished.
Furthermore, the microprocessor in Step8 directly receives the signal output by the CMOS area array image sensor and performs the corresponding image processing function.
Further, the association matching operation in Step9 performs data transmission and reception through the wireless network.
Further, in Step8, after the scan data is obtained, an image stitching algorithm is used for composition. The column projection function can be written as
C_i(j) = Σ_k b_i(k, j),
where C_i(j) denotes the projection function of the i-th frame of the scanned image, C_{i+1}(j) denotes the projection function of the (i+1)-th frame, the argument j denotes the column number of the image, and b_i(k, j) equals 1 when the pixel in row k, column j of the i-th frame is black and 0 otherwise, so that C_i(j) is the sum of black pixels in the j-th column of image data of the i-th frame.
(III) Beneficial effects
Compared with the prior art, the technical solution provided by the invention has the following beneficial effects:
1. The invention adds a range-recognition measure that projects light onto the text, so that text data is recognized over a large area; the angle of the projection end can be adjusted automatically according to the orientation of the text, the amount of text to be read can be set as needed, and adjustment is flexible. Manual handheld scanning is no longer required, which is extremely convenient, reduces the interference factors of manual operation, ensures recognition accuracy, and gives the method a good market prospect.
2. The invention does not come into direct contact with the carrier of the text to be recognized, which reduces the mechanical abrasion present in existing devices, greatly prolongs the service life, and effectively increases the speed of use.
3. In combination with the intelligent terminal interconnection platform, the invention performs association matching of the recognized content against the local storage end; by analyzing and matching the existing data, similar results can be obtained quickly and the other content associated with the matched text can be submitted synchronously, which greatly reduces the time the user spends in actual operation, saves effort, and ensures good user satisfaction.
4. By adding error-correction verification measures, the invention can identify errors in the original text, correct and mark them, and provide a direct translation at the error location, so that the user can compare and correct the errors in time, helping the user quickly resolve translation confusion.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is evident that the drawings in the following description are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a flow chart of the error correction verification process of the present invention;
FIG. 3 is a schematic diagram of an architecture of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention is further described below with reference to examples.
Example 1
The scanning display method based on the interconnection of a scanning pen and an intelligent terminal in this embodiment, as shown in FIG. 1 and FIG. 3, comprises the following steps:
step1: establishing a database, and importing text data to a terminal interconnection platform;
step2: starting a scanning end, and projecting a light shadow coverage area in a text area to be identified;
step3: manually adjusting the projection angle and pre-positioning;
step4: setting a maximum text accommodation threshold;
step5: adjusting the angle of the projection end until the shadow coverage reaches a maximum text accommodation threshold;
step6: dividing an area in which readable information exists, and marking;
step7: judging whether to start association matching;
step8: if association matching is not started, acquiring image data of the marked area, processing it through a microprocessor, performing text reconstruction on the scan data, generating text data and submitting it to a display end;
step9: if association matching is started, automatically dividing paragraphs, extracting part of the characters, and matching them against the text data stored in the terminal interconnection platform;
step10: after error correction verification, selecting the portion of text content with the largest matching coefficient, pushing it at the display end, and synchronously submitting the content associated with the matched text.
In this embodiment, the database in Step1 is used as the local storage end to store all recorded data and to remotely receive and transmit the data of the terminal interconnection platform.
The projection angle in Step3 is adjusted with the text region to be scanned as the reference, adjusting the irradiation angle of the optical component used for data recognition.
The maximum text accommodation threshold in Step4 can be set either automatically by the program or by manual editing.
In dividing the readable-information areas in Step6, the light-shadow coverage area is first recognized by the image sensor, the areas with readable features are marked, and the remaining areas are screened out. In the character recognition process, the image is binarized: a potential histogram of the scanned image is constructed, the threshold is calculated by potential-function clustering, and adaptive threshold segmentation is completed by combining a global threshold with a local threshold, so that the text background and the text characters are distinguished.
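The combined global/local thresholding just described can be sketched in Python as follows. This is only an illustrative example, not the patented implementation: the image mean stands in for the potential-function clustering threshold, and the window size and bias are assumed tuning values.

```python
import numpy as np

def binarize_combined(gray: np.ndarray, window: int = 25, bias: float = 10.0) -> np.ndarray:
    """Separate text characters from background by combining a global
    threshold with a local (windowed mean) threshold.

    `gray` is a 2-D uint8 image; the global threshold used here is a simple
    stand-in for the potential-function clustering threshold in the text.
    """
    global_t = gray.mean()

    # Local threshold: mean brightness in a sliding window around each pixel,
    # computed with an integral image so no explicit loops are needed.
    pad = window // 2
    padded = np.pad(gray.astype(np.float32), pad, mode="edge")
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))                      # zero border for window sums
    h, w = gray.shape
    s = (ii[window:window + h, window:window + w]
         - ii[:h, window:window + w]
         - ii[window:window + h, :w]
         + ii[:h, :w])
    local_t = s / (window * window)

    # A pixel counts as a text (black) pixel only if it is darker than both thresholds.
    text_mask = (gray < global_t) & (gray < local_t - bias)
    return text_mask.astype(np.uint8)                      # 1 = text character, 0 = background
```

Requiring a pixel to pass both thresholds is one simple way of "combining" the global and local thresholds; other combination rules are equally possible.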
The microprocessor in Step8 directly receives the signals output by the CMOS area-array image sensor and completes the corresponding image processing functions.
The association matching operation in Step9 transmits and receives data through a wireless network.
In this embodiment, a database is established on the terminal interconnection platform. During scanning, large-area automatic recognition is carried out by projection, and the scanning threshold is set as required. When association matching is not required, recognition is performed directly and the data are obtained; when association matching is required, the recognized information is compared with the data in the local database, a recommended answer is given, and the remaining associated content is submitted synchronously.
A large amount of image data is generated during scanning, forming a sequence of scanned images. For the same scanned object, the slower the scanning speed, the larger the data volume; to reduce the amount of image data, the correlation analysis of inter-frame image data is completed using methods such as an exclusive-OR (XOR) operation algorithm.
Example 2
In addition, this embodiment provides the process of error correction verification, as shown in FIG. 2, which includes the following steps:
step101: starting error correction verification, and receiving extracted data;
step102: judging whether the extracted data is consistent with the locally stored data;
step103: if consistent, continuing to operate according to the preset configuration;
step104: if not consistent, marking the text paragraphs in which the recognized text differs from the locally stored text;
step105: covering the differing original text paragraphs with the local data, and submitting the generated text to the display end as the priority recommendation;
step106: directly translating the differing original text paragraphs on the basis of the recognized image data, generating text as the secondary recommendation, and submitting it to the display end.
When this embodiment is implemented, the positions of possible errors are marked and corrected during association matching; at the same time, the error positions are directly translated, and both results are submitted to the display end.
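A minimal sketch of this comparison flow, assuming paragraph-aligned inputs and a caller-supplied translation function (both are assumptions made only for illustration), could look like this:

```python
import difflib

def verify_and_recommend(recognized_paras, local_paras, translate):
    """Compare recognized paragraphs with the locally stored text, mark the
    paragraphs that differ, and build the two recommendations of Steps 104-106.

    `translate` is a caller-supplied function (an assumption of this sketch);
    the priority recommendation overlays local data onto differing paragraphs,
    the secondary recommendation keeps the recognized text plus a direct
    translation of the differing paragraphs.
    """
    marked = []                      # indices of paragraphs where recognition and local data disagree
    priority, secondary = [], []
    for idx, (rec, loc) in enumerate(zip(recognized_paras, local_paras)):
        similarity = difflib.SequenceMatcher(None, rec, loc).ratio()
        if similarity < 0.999:                                           # any difference counts as a mismatch
            marked.append(idx)
            priority.append(loc)                                         # Step 105: cover with local data
            secondary.append(f"{rec}  [translation: {translate(rec)}]")  # Step 106: direct translation
        else:
            priority.append(rec)
            secondary.append(rec)
    return marked, priority, secondary
```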
Because the font sizes of the scanned objects may differ, the scanned image may contain several rows of characters; to facilitate subsequent character recognition, an automatic line removal technique is used to retain one row of characters and remove the other characters from the scanned image.
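One possible realization of such automatic line removal uses a horizontal (row-wise) projection of the binarized image to keep only the band of rows that carries the most text pixels; the gap-based band splitting below is an assumed heuristic, not the patent's method.

```python
import numpy as np

def keep_single_text_row(binary: np.ndarray) -> np.ndarray:
    """Retain only the dominant row of characters in a binarized frame
    (1 = text pixel), blanking out the other rows of text."""
    row_profile = binary.sum(axis=1)                # black-pixel count per image row
    text_rows = np.flatnonzero(row_profile > 0)
    if text_rows.size == 0:
        return binary

    # Split the text rows into bands separated by empty rows.
    splits = np.flatnonzero(np.diff(text_rows) > 1) + 1
    bands = np.split(text_rows, splits)

    # Keep the band with the most text pixels; erase everything else.
    best = max(bands, key=lambda band: row_profile[band].sum())
    cleaned = np.zeros_like(binary)
    cleaned[best.min():best.max() + 1, :] = binary[best.min():best.max() + 1, :]
    return cleaned
```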
Example 3
In this embodiment, after the scan data is acquired in Step8, an image stitching algorithm is used for composition. The column projection function can be written as
C_i(j) = Σ_k b_i(k, j),
where C_i(j) denotes the projection function of the i-th frame of the scanned image, C_{i+1}(j) denotes the projection function of the (i+1)-th frame, the argument j denotes the column number of the image, and b_i(k, j) equals 1 when the pixel in row k, column j of the i-th frame is black and 0 otherwise, so that C_i(j) is the sum of black pixels in the j-th column of image data of the i-th frame.
In this embodiment, since each frame of the scanned image is 128×80 pixels, j ranges from 0 to 127. Because the front part of the (i+1)-th frame generally overlaps the rear part of the i-th frame, extracting the motion vector amounts to finding the coordinate offset of the overlapping image between the two frames. This offset can be obtained by performing a correlation operation on the projection functions: the foremost part of the projection function C_{i+1}(j) of the (i+1)-th frame is first selected and correlated against the projection function C_i(j) of the i-th frame.
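The projection-based overlap search can be illustrated with the following sketch. It computes the column projection C_i(j) for each binarized frame and slides the foremost part of C_{i+1}(j) along C_i(j); a sum of absolute differences is used here as a simple stand-in for the correlation operation, and front_width and max_shift are assumed tuning values for a 128-column frame.

```python
import numpy as np

def column_projection(binary: np.ndarray) -> np.ndarray:
    """C_i(j): number of black (text) pixels in column j of one binarized frame."""
    return binary.sum(axis=0)

def estimate_column_offset(prev_bin: np.ndarray, curr_bin: np.ndarray,
                           front_width: int = 32, max_shift: int = 96) -> int:
    """Estimate how far the (i+1)-th frame is shifted relative to the i-th frame
    by sliding the foremost part of C_{i+1}(j) along C_i(j)."""
    c_prev = column_projection(prev_bin).astype(np.int32)
    c_curr = column_projection(curr_bin).astype(np.int32)
    front = c_curr[:front_width]                   # foremost part of the new frame

    best_offset, best_score = 0, None
    for offset in range(0, min(max_shift, c_prev.size - front_width) + 1):
        score = np.abs(c_prev[offset:offset + front_width] - front).sum()
        if best_score is None or score < best_score:
            best_offset, best_score = offset, score
    return best_offset                             # column where the new frame starts to overlap the old one
```

A stitched strip can then be assembled by appending, for each new frame, only the columns that lie beyond the estimated overlap.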
In summary, a database is established on the terminal interconnection platform; during scanning, large-area automatic recognition is carried out by projection, and the scanning threshold is set as required. When association matching is not required, recognition is performed directly and the data are obtained; when association matching is required, the recognized information is compared with the data in the local database, a recommended answer is given, and the remaining associated content is submitted synchronously. Range recognition is performed by projecting light onto the text, so there is no direct contact with the carrier of the text to be recognized; text data is recognized over a large area, the angle of the projection end is adjusted automatically according to the orientation of the text, the amount of text to be read is set as needed, adjustment is flexible, and manual handheld scanning is not required. A large amount of image data is generated during scanning, forming a sequence of scanned images; for the same scanned object, the slower the scanning speed, the larger the data volume, and to reduce the amount of image data, the correlation analysis of inter-frame image data is completed using methods such as an exclusive-OR operation algorithm;
during association matching, the positions of possible errors are marked and corrected, and at the same time the error positions are directly translated; the two results are submitted to the display end so that the user can compare them and correct any errors in time. Because the font sizes of the scanned objects may differ, the scanned image may contain several rows of characters; to facilitate subsequent character recognition, an automatic line removal technique is used to retain one row of characters and remove the other characters from the scanned image;
since each frame of the scanned image is 128×80 pixels, j ranges from 0 to 127. Because the front part of the (i+1)-th frame generally overlaps the rear part of the i-th frame, extracting the motion vector amounts to finding the coordinate offset of the overlapping image between the two frames, which can be obtained by performing a correlation operation on the projection functions: the foremost part of the projection function C_{i+1}(j) of the (i+1)-th frame is first selected and correlated against the projection function C_i(j) of the i-th frame.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; while the invention has been described in detail with reference to the foregoing embodiments, it will be appreciated by those skilled in the art that variations may be made in the techniques described in the foregoing embodiments, or equivalents may be substituted for elements thereof; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (8)
1. A scanning display method based on interconnection of a scanning pen and an intelligent terminal is characterized by comprising the following steps:
step1: establishing a database, and importing text data to a terminal interconnection platform;
step2: starting a scanning end, and projecting a light shadow coverage area in a text area to be identified;
step3: manually adjusting the projection angle and pre-positioning;
step4: setting a maximum text accommodation threshold;
step5: adjusting the angle of the projection end until the shadow coverage reaches a maximum text accommodation threshold;
step6: dividing an area in which readable information exists, and marking;
step7: judging whether to start association matching;
step8: if association matching is not started, acquiring image data of the marked area, processing it through a microprocessor, performing text reconstruction on the scan data, generating text data and submitting it to a display end;
step9: if association matching is started, automatically dividing paragraphs, extracting part of the characters, and matching them against the text data stored in the terminal interconnection platform;
step10: after error correction verification, selecting the portion of text content with the largest matching coefficient, pushing it at the display end, and synchronously submitting the content associated with the matched text;
the error correction verification process in Step10 includes the following steps:
step101: starting error correction verification, and receiving extracted data;
step102: judging whether the extracted data is consistent with the locally stored data;
step103: if consistent, continuing to operate according to the preset configuration;
step104: if not consistent, marking the text paragraphs in which the recognized text differs from the locally stored text;
step105: covering the differing original text paragraphs with the local data, and submitting the generated text to the display end as the priority recommendation;
step106: directly translating the differing original text paragraphs on the basis of the recognized image data, generating text as the secondary recommendation, and submitting it to the display end.
2. The scan display method based on interconnection of a scan pen and an intelligent terminal according to claim 1, wherein the database in Step1 is used as a local storage end for storing all recorded data, and for remotely receiving and transmitting data of the terminal interconnection platform.
3. The scan display method based on interconnection of a scanning pen and an intelligent terminal according to claim 1, wherein the adjustment of the projection angle in Step3 is based on a text region to be scanned, and the irradiation angle of an optical component for identifying data is adjusted.
4. The scan display method based on interconnection of a scan pen and an intelligent terminal according to claim 1, wherein the setting mode of the maximum text accommodation threshold in Step4 comprises: program automatic setting and manual editing setting.
5. The scanning display method based on interconnection of a scanning pen and an intelligent terminal according to claim 1, wherein, in dividing the readable-information areas in Step6, the light-shadow coverage area is first recognized by the image sensor, the areas with readable features are marked, and the remaining areas are screened out; in the character recognition process, the image is binarized, a potential histogram of the scanned image is constructed, the threshold is calculated by potential-function clustering, and adaptive threshold segmentation is completed by combining a global threshold with a local threshold, so that the text background and the text characters are distinguished.
6. The method for displaying the scan based on the interconnection of the scanning pen and the intelligent terminal according to claim 1, wherein the microprocessor in Step8 directly receives the signal output by the CMOS area array image sensor and performs the corresponding image processing function.
7. The scan display method based on interconnection of a scanning pen and an intelligent terminal according to claim 1, wherein the association matching operation in Step9 is performed for data transmission and data reception through a wireless network.
8. The scan display method based on interconnection of a scan pen and an intelligent terminal according to claim 1, wherein, after the scan data is acquired in Step8, an image stitching algorithm is used for composition, with the column projection function written as
C_i(j) = Σ_k b_i(k, j),
where C_i(j) denotes the projection function of the i-th frame of the scanned image, C_{i+1}(j) denotes the projection function of the (i+1)-th frame, the argument j denotes the column number of the image, and b_i(k, j) equals 1 when the pixel in row k, column j of the i-th frame is black and 0 otherwise, so that C_i(j) is the sum of black pixels in the j-th column of image data of the i-th frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211078719.8A CN115439854B (en) | 2022-09-05 | 2022-09-05 | Scanning display method based on interconnection of scanning pen and intelligent terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115439854A CN115439854A (en) | 2022-12-06 |
CN115439854B true CN115439854B (en) | 2023-05-02 |
Family
ID=84247636
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211078719.8A Active CN115439854B (en) | 2022-09-05 | 2022-09-05 | Scanning display method based on interconnection of scanning pen and intelligent terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115439854B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN201051266Y (en) * | 2007-07-06 | 2008-04-23 | 湖南优立达数码科技有限公司 | Portable scan learning device |
CN101241395A (en) * | 2007-08-22 | 2008-08-13 | 中国人民解放军国防科学技术大学 | scan input method |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5748807A (en) * | 1992-10-09 | 1998-05-05 | Panasonic Technologies, Inc. | Method and means for enhancing optical character recognition of printed documents |
JP4083724B2 (en) * | 2004-09-30 | 2008-04-30 | 株式会社東芝 | Character reader |
CN101770569A (en) * | 2008-12-31 | 2010-07-07 | 汉王科技股份有限公司 | Dish name recognition method based on OCR |
CN103927352A (en) * | 2014-04-10 | 2014-07-16 | 江苏唯实科技有限公司 | Chinese business card OCR (optical character recognition) data correction system utilizing massive associated information of knowledge base |
US9679179B2 (en) * | 2015-08-31 | 2017-06-13 | I.R.I.S. | Method for processing information from a hand-held scanning device |
BE1023388B1 (en) * | 2015-08-31 | 2017-03-01 | I.R.I.S. S.A. | Method and system for correcting an image from a hand scanner |
JP2017054285A (en) * | 2015-09-09 | 2017-03-16 | ルネサスエレクトロニクス株式会社 | Scan system, terminal device, and scan method |
CN107358227A (en) * | 2017-06-29 | 2017-11-17 | 努比亚技术有限公司 | A kind of mark recognition method, mobile terminal and computer-readable recording medium |
CN109492644A (en) * | 2018-10-16 | 2019-03-19 | 深圳壹账通智能科技有限公司 | A kind of matching and recognition method and terminal device of exercise image |
CN112052846A (en) * | 2020-08-18 | 2020-12-08 | 安徽淘云科技有限公司 | Control method and control device of scanning pen, electronic equipment and readable storage medium |
CN113449649A (en) * | 2021-06-29 | 2021-09-28 | 东莞市小精灵教育软件有限公司 | Scanning display method and system based on interconnection of scanning pen and intelligent terminal |
CN114494679B (en) * | 2021-12-10 | 2024-12-31 | 上海精密计量测试研究所 | A double-layer PDF generation and proofreading method and device |
CN113986018B (en) * | 2021-12-30 | 2022-08-09 | 江西影创信息产业有限公司 | Vision impairment auxiliary reading and learning method and system based on intelligent glasses and storage medium |
CN114781997B (en) * | 2022-04-06 | 2024-08-23 | 中国矿业大学 | Intelligent inspection system and implementation method for special construction scheme of dangerous engineering |
- 2022-09-05: CN CN202211078719.8A, granted as CN115439854B (en) — Active
Also Published As
Publication number | Publication date |
---|---|
CN115439854A (en) | 2022-12-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |