CN110969159B - Image recognition method and device and electronic equipment - Google Patents
- Publication number
- CN110969159B (application CN201911087426.4A)
- Authority
- CN
- China
- Prior art keywords
- position information
- target
- acquisition device
- image acquisition
- image
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
Abstract
Embodiments of the disclosure provide an image recognition method, an image recognition device and an electronic device, belonging to the field of image recognition. The method comprises: acquiring first projection point position information and second projection point position information formed by a first image acquisition device and a second image acquisition device, respectively, for a target point; determining target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device and the position information of the second image acquisition device; searching, in images formed by the first image acquisition device and the second image acquisition device, for a target image corresponding to the target position information; and identifying, in the target image, target information corresponding to the target point. This processing scheme improves the accuracy of image recognition.
Description
Technical Field
The disclosure relates to the technical field of image recognition, and in particular relates to an image recognition method, an image recognition device and electronic equipment.
Background
The touch-and-talk pen lets children play a variety of targeted games and activities that continuously stimulate the senses of touch, sight and hearing, enriching children's experience, increasing their interest and promoting the development of the brain. The pen is small and light and very portable, and can be used anytime and anywhere: it reads aloud wherever it is pointed, adding sound to otherwise dry text, so that the content of books becomes richer, reading and learning become more interesting, and teaching through play is fully realized.
The touch-and-talk pen can be described as a high-technology learning tool that breaks through traditional thinking: combining point-and-read with listening and speaking raises children's interest in learning, stimulates the development of the right brain, and lets them absorb textbook knowledge while having fun, so that improving learning results is no longer a problem. Small and easy to carry, it can be used both at school and outside class.
At present, one text point-reading scheme for educational products, "read where you point", is based on capture by a monocular camera: a finger-model algorithm performs image matching to capture the finger and find the text or characters it points to. This scheme has low precision, and when the color contrast between the finger and the text is not obvious, the finger recognition rate is very low and accurate point reading is impossible.
Disclosure of Invention
In view of the above, embodiments of the present disclosure provide an image recognition method, an image recognition device, and an electronic device, so as to at least partially solve the problems in the prior art.
In a first aspect, an embodiment of the present disclosure provides an image recognition method, including:
acquiring first projection point position information and second projection point position information formed by a first image acquisition device and a second image acquisition device, respectively, for a target point;
determining target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device and the position information of the second image acquisition device;
searching, in images formed by the first image acquisition device and the second image acquisition device, for a target image corresponding to the target position information; and
identifying, in the target image, target information corresponding to the target point.
According to a specific implementation of the embodiments of the present disclosure, acquiring the first projection point position information and the second projection point position information formed by the first image acquisition device and the second image acquisition device for the target point includes:
establishing a first spatial coordinate system with the center of the projection plane of the first image acquisition device as the coordinate origin;
establishing a second spatial coordinate system with the center of the projection plane of the second image acquisition device as the coordinate origin;
and determining the first projection point position information and the second projection point position information based on the first spatial coordinate system and the second spatial coordinate system.
According to a specific implementation of the embodiments of the present disclosure, determining the first projection point position information and the second projection point position information based on the first spatial coordinate system and the second spatial coordinate system includes:
acquiring a first projection coordinate of the target point on a projection plane of the first image acquisition device;
acquiring a second projection coordinate of the target point on a projection plane of the second image acquisition device;
and determining the first projection point position information and the second projection point position information based on the first projection coordinates and the second projection coordinates respectively.
According to a specific implementation of the embodiments of the present disclosure, determining the target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device and the position information of the second image acquisition device includes:
acquiring first camera position information of the first image acquisition device and second camera position information of the second image acquisition device, respectively;
determining a first included angle formed by the target point on the first image acquisition device based on the first camera position information, the first projection point position information and the position information of the first image acquisition device;
determining a second included angle formed by the target point on the second image acquisition device based on the second camera position information, the second projection point position information and the position information of the second image acquisition device;
and determining the position of the target point based on the first included angle and the second included angle.
According to a specific implementation manner of the embodiment of the present disclosure, the determining the position of the target point based on the first angle and the second angle includes:
determining a first distance from the target point to the first image acquisition device and a second distance from the target point to the second image acquisition device based on the first included angle and the second included angle;
based on the first distance and the second distance, a position of the target point is determined.
According to a specific implementation manner of the embodiment of the present disclosure, searching, in the images formed by the first image acquisition device and the second image acquisition device, a target image corresponding to the target position information includes:
searching images formed by the first image acquisition device and the second image acquisition device in a preset range by taking the target position information as a center;
and taking the image formed by the first image acquisition device and the second image acquisition device in the searched preset range as the target image.
According to a specific implementation manner of the embodiment of the present disclosure, the identifying, in the target image, target information corresponding to the target point includes:
performing image recognition on the content in the target image to form image recognition information;
and selecting information closest to the target point from the image identification information to form the target information.
According to a specific implementation manner of the embodiment of the present disclosure, after identifying the target information corresponding to the target point in the target image, the method further includes:
and playing the target information in a voice mode.
In a second aspect, an embodiment of the present disclosure provides an image recognition apparatus, including:
the acquisition module is configured to acquire first projection point position information and second projection point position information formed by the first image acquisition device and the second image acquisition device, respectively, for the target point;
a determining module, configured to determine target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device and the position information of the second image acquisition device;
the searching module is used for searching a target image corresponding to the target position information in the images formed by the first image acquisition device and the second image acquisition device;
and the identification module is used for identifying target information corresponding to the target point in the target image.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image recognition method of the first aspect or any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure also provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the image recognition method of the first aspect or any implementation manner of the first aspect.
In a fifth aspect, embodiments of the present disclosure also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the image recognition method of the first aspect or any implementation of the first aspect.
The image recognition scheme in the embodiments of the present disclosure comprises: acquiring first projection point position information and second projection point position information formed by a first image acquisition device and a second image acquisition device, respectively, for a target point;
determining target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device and the position information of the second image acquisition device; searching, in images formed by the first image acquisition device and the second image acquisition device, for a target image corresponding to the target position information; and identifying, in the target image, target information corresponding to the target point. This processing scheme can effectively improve the accuracy of image recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present disclosure; other drawings may be obtained from these drawings by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a flowchart of an image recognition method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an image recognition method according to an embodiment of the disclosure;
FIG. 3 is a flowchart of another image recognition method according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of another image recognition method according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image recognition device according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Other advantages and effects of the present disclosure will become readily apparent to those skilled in the art from the following disclosure, which describes embodiments of the present disclosure by way of specific examples. It will be apparent that the described embodiments are merely some, but not all embodiments of the present disclosure. The disclosure may be embodied or practiced in other different specific embodiments, and details within the subject specification may be modified or changed from various points of view and applications without departing from the spirit of the disclosure. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the following claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, one skilled in the art will appreciate that one aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such apparatus may be implemented and/or such methods practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments merely illustrate the basic concepts of the disclosure by way of illustration, and only the components related to the disclosure are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided in order to provide a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides an image recognition method. The image recognition method provided in this embodiment may be performed by a computing device, which may be implemented as software, or as a combination of software and hardware, and may be integrally provided in a server, a client, or the like.
Referring to fig. 1 and 2, the image recognition method in the embodiment of the disclosure may include the following steps:
s101, respectively acquiring first projection point position information and second projection point position information formed by a first image acquisition device and a second image acquisition device aiming at a target point.
At present, one text point-reading scheme for educational products, "read where you point", is based on capture by a monocular camera: a finger-model algorithm performs image matching to capture the finger and find the text or characters it points to. This scheme has low precision, and when the color contrast between the finger and the text is not obvious, the finger recognition rate is very low and accurate point reading is impossible. With the scheme of the present disclosure, finger positioning is accurate and is not affected by ambient light or by the contrast between the finger and the book, so the text point-reading function is realized.
Specifically, during image acquisition, a first image acquisition device and a second image acquisition device may be provided, where the first image acquisition device may be one camera of a binocular camera pair and the second image acquisition device may be the other. The target point may be the tip of a pointing object (for example, a finger or a stylus) with which the user points at a target object (for example, a book or the screen of a point-reading device). The first projection point position information and the second projection point position information can be obtained by acquiring the positions of the projection points that the target point forms on the first image acquisition device and the second image acquisition device.
S102, determining target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device and the position information of the second image acquisition device.
After the first projection point position information, the second projection point position information, the position information of the first image acquisition device and the position information of the second image acquisition device are obtained, the target point can be accurately spatially positioned based on the information, so that the position information of the target point relative to the first image acquisition device and the second image acquisition device is obtained.
Specifically, referring to FIG. 2, O1 and O2 are the optical centers of the two cameras (the first image acquisition device and the second image acquisition device), and their placement positions and angles are known. P1 and P2 are the projection points of the target point P on the two camera imaging planes, i.e. the points at which P appears in the photographs. From the placement positions of the first and second image acquisition devices and the positions of the points P1 and P2, the three-dimensional position of the point P in space is easy to calculate, and the target position information of the target point is thereby determined.
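The triangulation just described can be sketched in code. The following is a minimal illustration, not the patent's implementation: it assumes an idealized rectified stereo pair in which both cameras face the same direction, separated by a known baseline, with a common focal length expressed in pixels; all function and parameter names are illustrative.

```python
def triangulate(x1, y1, x2, y2, baseline, focal):
    """Recover the 3D position of a point P from its projection points
    P1 = (x1, y1) and P2 = (x2, y2) on two rectified image planes.

    baseline: distance between the two camera centers O1 and O2.
    focal:    focal length of both cameras, in pixels.
    Returns (x, y, z) in the first camera's coordinate system.
    """
    disparity = x1 - x2
    if disparity == 0:
        raise ValueError("zero disparity: point is at infinity")
    z = focal * baseline / disparity  # depth along the optical axis
    x = x1 * z / focal                # horizontal offset
    y = y1 * z / focal                # vertical offset
    return (x, y, z)
```

For example, with a 0.1 m baseline and an 800-pixel focal length, a 40-pixel disparity corresponds to a depth of about 2 m.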
S103, searching a target image corresponding to the target position information in images formed by the first image acquisition device and the second image acquisition device.
In addition to the image of the target point, the first image acquisition device and the second image acquisition device simultaneously acquire images of other objects in the field of view. For example, when a user performs a point-reading operation with a finger or a stylus on a point-reading book or on the screen of a point-reading machine, the first and second image acquisition devices acquire, besides the image of the target point, part or all of the readable content on the book or screen, and this content can then be searched.
For this purpose, the images formed by the first image acquisition device and the second image acquisition device may be searched within a preset range centered on the target position information formed by the target point, and the image within that preset range is taken as the target image corresponding to the target position information. The target image may contain all of the readable content in the current view, or only part of it.
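The search step above amounts to clamping a window of a preset radius, centered on the target's projected position, to the image bounds. A minimal sketch; the function name, the pixel-based window and the coordinate conventions are assumptions for illustration:

```python
def crop_search_region(image_shape, center, radius):
    """Return a (left, top, right, bottom) window of the given radius
    around the target position, clamped to the image bounds.

    image_shape: (height, width) of the acquired image, in pixels.
    center:      (cx, cy) projected position of the target point.
    """
    height, width = image_shape
    cx, cy = center
    left = max(cx - radius, 0)
    top = max(cy - radius, 0)
    right = min(cx + radius, width)
    bottom = min(cy + radius, height)
    return (left, top, right, bottom)
```

The clamping matters near the page edges, where the preset range would otherwise extend outside the acquired image.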
S104, identifying target information corresponding to the target point in the target image.
The target information corresponding to the target point can be obtained by recognizing the content in the target image; depending on the content indicated by the target point, the target information may be characters, pictures, or a combination thereof.
Through the content in the embodiment, the first image acquisition device and the second image acquisition device are utilized to jointly determine the position of the target, so that the recognition accuracy of the target point is improved.
Referring to fig. 3, according to a specific implementation of the embodiments of the present disclosure, acquiring the first projection point position information and the second projection point position information formed by the first image acquisition device and the second image acquisition device for the target point includes:
s301, a first space coordinate system is established by taking the center of a projection plane of the first image acquisition device as a coordinate origin.
The first spatial coordinate system may include coordinates in three spatial directions of x, y, and z, and the projection plane of the first image capturing device may be taken as the x, y direction, and the projection plane perpendicular to the first image capturing device may be taken as the z direction.
S302, a second space coordinate system is established by taking the center of the screen projection plane of the second image acquisition device as the origin of coordinates.
The second spatial coordinate system may include coordinates in three spatial directions of x, y, and z, and the projection plane of the first image capturing device may be taken as the x, y direction, and the projection plane perpendicular to the second image capturing device may be taken as the z direction.
S303, determining the first projection point position information and the second projection point position information based on the first space coordinate system and the second space coordinate system.
Specifically, a first projection coordinate of the target point on a projection plane of the first image acquisition device may be obtained; acquiring a second projection coordinate of the target point on a projection plane of the second image acquisition device; and determining the first projection point position information and the second projection point position information based on the first projection coordinates and the second projection coordinates respectively.
With the above-described embodiments, the spatial position of the target point can be determined based on the set first and second spatial coordinate systems.
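As a hedged sketch of steps S301 to S303: if the center of each projection plane is the coordinate origin, a pixel position can be converted into projection coordinates by subtracting half the image size. The axis conventions (x to the right, y upwards) are assumptions for illustration:

```python
def projection_coords(pixel, image_size):
    """Convert a pixel position (origin at the top-left corner) into
    coordinates in a frame whose origin is the center of the projection
    plane, with x pointing right and y pointing up."""
    px, py = pixel
    width, height = image_size
    return (px - width / 2.0, height / 2.0 - py)
```

Applying the same conversion to each camera's image yields the first and second projection point positions in their respective coordinate systems.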
Referring to fig. 4, according to a specific implementation of the embodiments of the present disclosure, determining the target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device and the position information of the second image acquisition device includes:
s401, respectively acquiring first camera position information and second camera position information contained in the first image acquisition device and the second image acquisition device;
s402, determining a first included angle formed by the target point on the first image acquisition device based on the first camera position information, the first projection point position information and the position information of the first image acquisition device;
s403, determining a second included angle formed by the target point on the second image acquisition device based on the second camera position information, the second projection point position information and the position information of the second image acquisition device;
s404, determining the position of the target point based on the first included angle and the second included angle.
Specifically, a first distance between the target point and the first image acquisition device and a second distance between the target point and the second image acquisition device may be determined based on the first angle and the second angle, and a position of the target point may be determined based on the first distance and the second distance.
In this way, the position of the target point can be accurately calculated.
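Steps S401 to S404 can be sketched as a planar triangulation by the law of sines: the two image acquisition devices and the target point form a triangle whose base is the line joining the two devices, so the first and second included angles determine the first and second distances. A minimal illustration under that assumption, with angles measured from the baseline in radians:

```python
import math

def distances_from_angles(baseline, angle1, angle2):
    """Given the first and second included angles that the target point
    forms at the two image acquisition devices, measured from the line
    joining them, return the first and second distances (law of sines)."""
    apex = math.pi - angle1 - angle2  # angle of the triangle at the target
    if apex <= 0:
        raise ValueError("angles do not form a triangle")
    d1 = baseline * math.sin(angle2) / math.sin(apex)
    d2 = baseline * math.sin(angle1) / math.sin(apex)
    return d1, d2
```

The two distances, together with the known device positions, fix the position of the target point.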
According to a specific implementation manner of the embodiment of the present disclosure, searching, in the images formed by the first image acquisition device and the second image acquisition device, a target image corresponding to the target position information includes: searching images formed by the first image acquisition device and the second image acquisition device in a preset range by taking the target position information as a center; and taking the image formed by the first image acquisition device and the second image acquisition device in the searched preset range as the target image.
According to a specific implementation of the embodiments of the present disclosure, identifying, in the target image, the target information corresponding to the target point includes: performing image recognition on the content in the target image to form image recognition information; and selecting, from the image recognition information, the information closest to the target point to form the target information.
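Selecting the information closest to the target point can be sketched as a nearest-center search over the recognition results. The result format, (text, center) pairs, is an assumption of what an OCR step might return, for illustration only:

```python
def nearest_information(target, recognition_results):
    """Return the recognized text whose center lies closest to the
    target point.

    target:              (x, y) position of the target point.
    recognition_results: list of (text, (x, y)) pairs, e.g. OCR output.
    """
    tx, ty = target

    def squared_distance(item):
        _, (x, y) = item
        return (x - tx) ** 2 + (y - ty) ** 2

    return min(recognition_results, key=squared_distance)[0]
```

Squared distance is sufficient for comparison, so no square root is needed.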
According to a specific implementation manner of the embodiment of the present disclosure, after identifying the target information corresponding to the target point in the target image, the method further includes: and playing the target information in a voice mode.
Corresponding to the above method embodiment, referring to fig. 5, the embodiment of the present disclosure further provides an image recognition apparatus 50, including:
an acquiring module 501, configured to acquire first projection point position information and second projection point position information formed by the first image acquisition device and the second image acquisition device, respectively, for the target point;
a determining module 502, configured to determine target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device and the position information of the second image acquisition device;
a searching module 503, configured to search, in images formed by the first image capturing device and the second image capturing device, a target image corresponding to the target position information;
an identification module 504 is configured to identify target information corresponding to the target point in the target image.
For the parts of this embodiment that are not described in detail, refer to the content of the foregoing method embodiment; they are not repeated here.
Referring to fig. 6, an embodiment of the present disclosure also provides an electronic device 60, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image recognition method of the foregoing method embodiments.
The disclosed embodiments also provide a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the image recognition method in the foregoing method embodiments.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the image recognition method in the foregoing method embodiments.
Referring now to fig. 6, a schematic diagram of an electronic device 60 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device 60 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. Various programs and data necessary for the operation of the electronic device 60 are also stored in the RAM 603. The processing device 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 60 to communicate with other devices wirelessly or by wire to exchange data. While an electronic device 60 having various means is shown, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 601.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire at least two internet protocol addresses; send a node evaluation request comprising the at least two internet protocol addresses to a node evaluation device, wherein the node evaluation device selects an internet protocol address from the at least two internet protocol addresses and returns it; and receive the internet protocol address returned by the node evaluation device; wherein the acquired internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: receive a node evaluation request comprising at least two internet protocol addresses; select an internet protocol address from the at least two internet protocol addresses; and return the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
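The request/selection exchange described in the two preceding paragraphs can be sketched as follows. This is a minimal illustrative Python sketch, not part of the disclosure: the function name `select_edge_node`, the dictionary layout of the request, and the deterministic selection policy are all assumptions, since the disclosure leaves the selection criterion unspecified.

```python
def select_edge_node(node_evaluation_request):
    """Hypothetical node-evaluation step: choose one internet protocol
    address from the at-least-two addresses carried by the request and
    return it.  The policy used here (lexicographically smallest
    address) is only a deterministic placeholder."""
    addresses = node_evaluation_request["addresses"]
    if len(addresses) < 2:
        raise ValueError("request must carry at least two IP addresses")
    return min(addresses)  # placeholder selection policy

# The requesting side then treats the returned address as indicating
# an edge node in the content distribution network.
chosen = select_edge_node({"addresses": ["10.0.0.2", "10.0.0.1"]})
```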
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote computer case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of a unit does not in any way constitute a limitation of the unit itself; for example, the first acquisition unit may also be described as "a unit that acquires at least two internet protocol addresses".
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
The foregoing describes merely specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto; any changes or substitutions readily conceivable by those skilled in the art within the technical scope of the present disclosure are intended to be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (9)
1. An image recognition method, comprising:
respectively acquiring first projection point position information and second projection point position information formed by a first image acquisition device and a second image acquisition device for a target point;
determining target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device, and the position information of the second image acquisition device, which specifically includes: acquiring first camera position information and second camera position information contained in the first image acquisition device and the second image acquisition device, respectively;
determining a first included angle formed by the target point on the first image acquisition device based on the first camera position information, the first projection point position information and the position information of the first image acquisition device;
determining a second included angle formed by the target point on the second image acquisition device based on the second camera position information, the second projection point position information and the position information of the second image acquisition device;
determining the position of the target point based on the first included angle and the second included angle, which specifically includes: determining a first distance from the target point to the first image acquisition device and a second distance from the target point to the second image acquisition device based on the first included angle and the second included angle; determining the position of the target point based on the first distance and the second distance;
searching, in images formed by the first image acquisition device and the second image acquisition device, for a target image corresponding to the target position information within a preset range centered on the target position information of the target point, wherein the images formed by the first image acquisition device and the second image acquisition device comprise the image of the target point and images of other objects within the acquisition field of view;
target information corresponding to the target point is identified in the target image.
2. The method according to claim 1, wherein the respectively acquiring the first projection point position information and the second projection point position information formed by the first image acquisition device and the second image acquisition device for the target point includes:
establishing a first space coordinate system by taking the center of a projection plane of the first image acquisition device as a coordinate origin;
establishing a second space coordinate system by taking the center of the projection plane of the second image acquisition device as a coordinate origin;
and determining the first projection point position information and the second projection point position information based on the first space coordinate system and the second space coordinate system.
3. The method of claim 2, wherein the determining the first projection point position information and the second projection point position information based on the first space coordinate system and the second space coordinate system comprises:
acquiring a first projection coordinate of the target point on a projection plane of the first image acquisition device;
acquiring a second projection coordinate of the target point on a projection plane of the second image acquisition device;
and determining the first projection point position information and the second projection point position information based on the first projection coordinates and the second projection coordinates respectively.
4. The method according to claim 1, wherein the searching for the target image corresponding to the target position information in the images formed by the first image acquisition device and the second image acquisition device includes:
searching images formed by the first image acquisition device and the second image acquisition device in a preset range by taking the target position information as a center;
and taking the image formed by the first image acquisition device and the second image acquisition device in the searched preset range as the target image.
5. The method of claim 1, wherein the identifying target information corresponding to the target point in the target image comprises:
performing image recognition on the content in the target image to form image recognition information;
and selecting, from the image recognition information, information closest to the target point to form the target information.
6. The method according to claim 1, wherein after identifying target information corresponding to the target point in the target image, the method further comprises:
and playing the target information in a voice mode.
7. An image recognition apparatus, comprising:
the acquisition module is used for respectively acquiring first projection point position information and second projection point position information formed by the first image acquisition device and the second image acquisition device for the target point;
the determining module is configured to determine target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device, and the position information of the second image acquisition device, which specifically includes: acquiring first camera position information and second camera position information contained in the first image acquisition device and the second image acquisition device, respectively;
determining a first included angle formed by the target point on the first image acquisition device based on the first camera position information, the first projection point position information and the position information of the first image acquisition device;
determining a second included angle formed by the target point on the second image acquisition device based on the second camera position information, the second projection point position information and the position information of the second image acquisition device;
determining the position of the target point based on the first included angle and the second included angle, which specifically includes: determining a first distance from the target point to the first image acquisition device and a second distance from the target point to the second image acquisition device based on the first included angle and the second included angle; determining the position of the target point based on the first distance and the second distance;
the searching module is used for searching, in the images formed by the first image acquisition device and the second image acquisition device, for a target image corresponding to the target position information within a preset range centered on the target position information of the target point, wherein the images formed by the first image acquisition device and the second image acquisition device comprise the image of the target point and images of other objects within the acquisition field of view;
and the identification module is used for identifying target information corresponding to the target point in the target image.
8. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image recognition method of any one of the preceding claims 1-6.
9. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the image recognition method of any one of the preceding claims 1-6.
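The triangulation recited in claims 1-3 — two included angles derived from the projection-point positions, two camera-to-target distances, then the target position — can be sketched as follows. This is a minimal pinhole-model illustration under assumed geometry (two parallel cameras on a common baseline), not the patented implementation; `triangulate`, `baseline`, `focal_length`, `u1`, and `u2` are illustrative names introduced here.

```python
import math

def triangulate(baseline, focal_length, u1, u2):
    """Estimate a target point's position from two horizontal
    projection-point offsets (u1, u2), one per camera.

    Assumed geometry: camera 1 at (0, 0), camera 2 at (baseline, 0),
    both with optical axes along +y.  A simplified sketch of the
    claimed steps, not the patented implementation.
    """
    # First included angle: between the baseline and the ray from
    # camera 1 through its projection point.
    alpha = math.atan2(focal_length, u1)
    # Second included angle, measured at camera 2.
    beta = math.atan2(focal_length, -u2)
    gamma = math.pi - alpha - beta        # angle at the target point
    # First distance (camera 1 to target), by the law of sines; the
    # second distance would follow symmetrically from sin(alpha).
    d1 = baseline * math.sin(beta) / math.sin(gamma)
    # Target position in camera 1's coordinate system.
    return d1 * math.cos(alpha), d1 * math.sin(alpha)

# A target at (0.5, 1.0) with unit baseline and focal length projects
# to offsets u1 = 0.5 and u2 = -0.5 on the two image planes.
x, y = triangulate(baseline=1.0, focal_length=1.0, u1=0.5, u2=-0.5)
```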
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911087426.4A CN110969159B (en) | 2019-11-08 | 2019-11-08 | Image recognition method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911087426.4A CN110969159B (en) | 2019-11-08 | 2019-11-08 | Image recognition method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110969159A CN110969159A (en) | 2020-04-07 |
CN110969159B true CN110969159B (en) | 2023-08-08 |
Family
ID=70030570
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911087426.4A Active CN110969159B (en) | 2019-11-08 | 2019-11-08 | Image recognition method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110969159B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111781585B (en) * | 2020-06-09 | 2023-07-18 | 浙江大华技术股份有限公司 | Method for determining firework setting-off position and image acquisition equipment |
CN111753715B (en) * | 2020-06-23 | 2024-06-21 | 广东小天才科技有限公司 | Method and device for shooting test questions in click-to-read scene, electronic equipment and storage medium |
CN112489120B (en) * | 2021-02-04 | 2021-04-27 | 中科长光精拓智能装备(苏州)有限公司 | Image recognition method and system for multi-angle image |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2163847A1 (en) * | 2007-06-15 | 2010-03-17 | Kabushiki Kaisha Toshiba | Instrument for examining/measuring object to be measured |
CN105373266A (en) * | 2015-11-05 | 2016-03-02 | 上海影火智能科技有限公司 | Novel binocular vision based interaction method and electronic whiteboard system |
CN107545260A (en) * | 2017-09-25 | 2018-01-05 | 上海电机学院 | A kind of talking pen character identification system based on binocular vision |
CN109598755A (en) * | 2018-11-13 | 2019-04-09 | 中国科学院计算技术研究所 | Harmful influence leakage detection method based on binocular vision |
CN109753554A (en) * | 2019-01-14 | 2019-05-14 | 广东小天才科技有限公司 | Searching method based on three-dimensional space positioning and family education equipment |
CN110096993A (en) * | 2019-04-28 | 2019-08-06 | 深兰科技(上海)有限公司 | The object detection apparatus and method of binocular stereo vision |
JP6573419B1 (en) * | 2018-09-26 | 2019-09-11 | Ubtech Robotics Corp Ltd | Positioning method, robot and computer storage medium |
- 2019-11-08 CN CN201911087426.4A patent/CN110969159B/en active Active
Non-Patent Citations (1)
Title |
---|
Design and Testing Algorithm for Real Time Text Images: Rehabilitation Aid for Blind; Rutvi Prajapati; International Journal of Science Technology & Engineering; pp. 275-278 *
Also Published As
Publication number | Publication date |
---|---|
CN110969159A (en) | 2020-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11721073B2 (en) | Synchronized, interactive augmented reality displays for multifunction devices | |
CN108415705B (en) | Webpage generation method and device, storage medium and equipment | |
CN110969159B (en) | Image recognition method and device and electronic equipment | |
CN104199906A (en) | Recommending method and device for shooting region | |
CN111862349A (en) | Virtual brush implementation method and device and computer readable storage medium | |
CN112270242B (en) | Track display method and device, readable medium and electronic equipment | |
CN113784045B (en) | Focusing interaction method, device, medium and electronic equipment | |
CN110990728A (en) | Method, device and equipment for managing point of interest information and storage medium | |
CN113784046A (en) | Follow-up shooting method, device, medium and electronic equipment | |
CN113191257A (en) | Order of strokes detection method and device and electronic equipment | |
CN112231023A (en) | Information display method, device, equipment and storage medium | |
EP3863235B1 (en) | Method and apparatus for processing data | |
CN110619615A (en) | Method and apparatus for processing image | |
CN112991147B (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN112492381B (en) | Information display method and device and electronic equipment | |
CN114117092A (en) | Remote cooperation method, device, electronic equipment and computer readable medium | |
CN111429544A (en) | Vehicle body color processing method and device and electronic equipment | |
CN111460334A (en) | Information display method and device and electronic equipment | |
CN112346630B (en) | State determination method, device, equipment and computer readable medium | |
CN111354070A (en) | Three-dimensional graph generation method and device, electronic equipment and storage medium | |
CN110110695B (en) | Method and apparatus for generating information | |
CN112395826B (en) | Text special effect processing method and device | |
CN110070600B (en) | Three-dimensional model generation method, device and hardware device | |
CN117991967A (en) | Virtual keyboard interaction method, device, equipment, storage medium and program product | |
CN117784921A (en) | Data processing method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||