CN108095465A - Image processing method and device
- Publication number
- CN108095465A CN108095465A CN201810052882.4A CN201810052882A CN108095465A CN 108095465 A CN108095465 A CN 108095465A CN 201810052882 A CN201810052882 A CN 201810052882A CN 108095465 A CN108095465 A CN 108095465A
- Authority
- CN
- China
- Prior art keywords
- image
- replacement
- target object
- instruction
- keyword
- Prior art date
- Legal status: Pending
Classifications
-
- A—HUMAN NECESSITIES
- A47—FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
- A47G—HOUSEHOLD OR TABLE EQUIPMENT
- A47G1/00—Mirrors; Picture frames or the like, e.g. provided with heating, lighting or ventilating means
- A47G1/14—Photograph stands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1815—Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
The present invention provides an image processing method. A replacement instruction for a target object in an image is obtained, a replacement object is acquired according to the description information of the replacement object, and the position information of the target object in the image is obtained. The target object in the image is then replaced and displayed according to the replacement object and the position information. The target object of the current image can thus be automatically changed to the replacement object specified by the user according to the user's instruction, which is convenient to operate and improves the interest of the image display.
Description
Technical Field
The present invention relates to the field of display technologies, and in particular, to an image processing method and apparatus.
Background
With the continuing development of digital image processing technology, the electronic picture frame has become a popular new type of picture display device. It provides functions such as picture display, storage, and playback, and can be placed in various locations to display pictures.
The pictures or paintings displayed by existing electronic picture frames usually have a themed background. However, this background is fixed and cannot be changed; if a user happens to want to see the effect of replacing the background, the user can only imagine it.
Disclosure of Invention
The invention provides an image processing method and an image processing apparatus, which solve the problem that objects in an image displayed by an electronic picture frame cannot be changed.
In one aspect, an image processing method applied to an electronic picture frame is provided, including:
obtaining a replacement instruction of a target object in an image; the replacement instruction contains description information of a replacement object;
acquiring the replacement object according to the description information of the replacement object;
obtaining position information of the target object in the image;
and replacing and displaying the target object in the image according to the replacement object and the position information.
Further, before the obtaining of the replacement instruction for the target object in the image, the method further includes: identifying an object in the image, the object comprising the target object; and establishing and storing the description information of the object, wherein the description information comprises the object name.
Further, the step of obtaining a replacement instruction for the target object in the image includes: receiving a voice signal; performing semantic recognition on the voice signal, and determining a first keyword and a second keyword; determining a target object matched with the first keyword according to the pre-stored description information of the object; wherein the second keyword includes description information of the replacement object.
Further, after obtaining the replacement instruction for the target object in the image, the method further includes: identifying the target object in the image.
Further, the step of obtaining a replacement instruction for the target object in the image includes: receiving a voice signal; performing semantic recognition on the voice signal, and determining a first keyword and a second keyword; wherein the first keyword includes description information of the target object, and the second keyword includes description information of the replacement object.
Further, the step of identifying the target object in the image comprises: identifying objects in the image, and determining description information of each object in the image, wherein the objects comprise the target object; and according to the description information of each object in the image, taking the object matched with the first keyword in the image as a target object.
Further, the step of acquiring the replacement object according to the description information of the replacement object includes: extracting the replacement object from a standby image library, or acquiring the replacement object from the Internet; wherein the standby image library stores replacement objects and their description information in advance.
Further, after the target object in the image is replaced and displayed, the method further includes: receiving a recovery instruction from the user to restore the replacement object to the target object; the recovery instruction contains a preset recovery keyword.
In another aspect, an image processing apparatus is also provided, including:
the instruction acquisition module is used for acquiring a replacement instruction of a target object in the image; the replacement instruction contains description information of a replacement object;
the replacement object acquisition module is used for acquiring the replacement object according to the description information of the replacement object;
the position information acquisition module is used for acquiring the position information of the target object in the image;
and the replacement module is used for replacing and displaying the target object in the image according to the replacement object and the position information.
In another aspect, an electronic picture frame is provided, which includes a picture frame assembly, a processor, a memory, and a computer program stored in the memory and executable on the processor, and when the computer program is executed by the processor, the steps of the image processing method are implemented.
Compared with the prior art, the invention has the following advantages:
The invention provides an image processing method and apparatus. A replacement instruction for a target object in an image is obtained, the replacement object is acquired according to the description information of the replacement object, and the position information of the target object in the image is obtained. The target object in the image is then replaced and displayed according to the replacement object and the position information. The target object of the current image can thus be automatically changed to the replacement object specified by the user according to the user's instruction, which is convenient and quick to operate and improves the interest of the image display.
Drawings
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of another image processing method according to an embodiment of the present invention;
Fig. 3 is a flowchart of another image processing method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an image displayed by an electronic picture frame before the background is changed according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of an image displayed by an electronic picture frame after the background is changed according to an embodiment of the present invention;
Fig. 6 is a block diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
In the description of the present invention, "a plurality" means two or more unless otherwise specified; the terms "upper", "lower", "left", "right", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing and simplifying the description, but do not indicate or imply that the machine or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; may be directly connected or indirectly connected through an intermediate. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The following detailed description of embodiments of the invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Referring to fig. 1, a flowchart of an image processing method according to an embodiment of the present invention is shown. The image processing method can be applied to an electronic picture frame, and comprises the following steps:
step 101, obtaining a replacement instruction for a target object in an image.
Specifically, the replacement instruction may be a voice signal, or a touch signal, a text signal, or another type of instruction signal. Preferably, the replacement instruction is a voice signal. In this case, obtaining the replacement instruction for the target object in the image may include receiving the voice signal, performing semantic recognition on the voice signal, and determining a first keyword and a second keyword, so that the target object and the replacement object can be determined from the first keyword and the second keyword. For example, after the replacement instruction for the target object in the image is obtained, the target object matching the first keyword may be determined according to the pre-stored description information of each object. For the image displayed by the electronic picture frame shown in Fig. 4, image recognition may determine that the objects contained in the image are a wall, a table, and a soccer ball. If the first keyword is "wall", the wall object in the image is taken as the target object.
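As an illustration of how the first keyword could be matched against pre-stored object descriptions, the following is a minimal Python sketch; the data layout and the `find_target` helper are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch: match the first keyword against pre-stored object descriptions.
# The description store layout and the helper name are illustrative assumptions.

OBJECT_DESCRIPTIONS = [
    # one entry per object recognized in the displayed image
    {"name": "wall", "bbox": (0, 0, 800, 300)},
    {"name": "table", "bbox": (150, 300, 500, 200)},
    {"name": "soccer ball", "bbox": (600, 420, 80, 80)},
]

def find_target(first_keyword: str, descriptions=OBJECT_DESCRIPTIONS):
    """Return the stored description whose name matches the first keyword."""
    for entry in descriptions:
        if entry["name"] == first_keyword:
            return entry
    return None

target = find_target("wall")
if target is not None:
    print("target object:", target["name"], "at", target["bbox"])
```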
In practical applications, as one possible implementation, before the replacement instruction for the target object in the image is obtained, the objects in the image may be identified in advance, the objects including the target object. After each object in the image is identified, description information of each object is established and stored, the description information including the object name. The computationally intensive image-processing work is thus completed in advance, so that once the replacement instruction for the target object in the image is obtained, the target object matching the instruction can be determined quickly from the pre-stored description information of each object. That is, the description information of each object contained in the image may be confirmed in advance, before a replacement instruction for a target object in the image is obtained.
As another possible implementation, the target object in the image may be identified after obtaining the replacement instruction for the target object in the image. Thus, images in which description information of each object is not previously created and stored can be identified. For example, if the image is an image that has just been imported into an electronic frame, and description information has not yet been created and stored, the object included in the image can be identified in this manner. That is, after obtaining a replacement instruction for a target object in an image, description information of each object contained in the image is confirmed.
Step 102, acquiring the replacement object according to the description information of the replacement object.
Specifically, after the replacement instruction for the target object in the image is obtained, a replacement object matching the second keyword may be acquired according to the second keyword. In practical applications, the replacement object may be extracted from a standby image library, or acquired from the Internet. The standby image library stores replacement objects and their description information in advance.
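A minimal sketch of this lookup order follows, assuming a simple dictionary-backed standby image library; the `STANDBY_LIBRARY` contents and the `fetch_from_internet` stub are illustrative assumptions standing in for whatever storage and retrieval service the device would actually use.

```python
# Minimal sketch: look up the replacement object in a standby image library first,
# then fall back to retrieving it from the Internet. Names are illustrative assumptions.

STANDBY_LIBRARY = {
    # second keyword -> path of a stored replacement image
    "sea": "/frame/library/sea.jpg",
    "forest": "/frame/library/forest.jpg",
}

def fetch_from_internet(keyword: str) -> str:
    """Placeholder for an online image search; the real retrieval API is device-specific."""
    raise NotImplementedError(f"online retrieval for '{keyword}' not configured")

def get_replacement(second_keyword: str) -> str:
    if second_keyword in STANDBY_LIBRARY:
        return STANDBY_LIBRARY[second_keyword]
    return fetch_from_internet(second_keyword)

print(get_replacement("sea"))  # found locally: /frame/library/sea.jpg
```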
Step 103, obtaining the position information of the target object in the image.
Specifically, the position information of the target object in the image can be obtained through an image recognition technique such as edge detection. For example, in the image displayed by the electronic picture frame shown in Fig. 4, if the target object is the wall, the position of the wall in the image can be determined according to the characteristics of the wall, so that the replacement object can be placed accurately at the position of the target object.
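As one possible realisation of the edge-detection step, the sketch below uses OpenCV's Canny detector and contour extraction (OpenCV 4 API) to estimate a bounding box for the target region; the thresholds and the assumption that the largest contour corresponds to the target object are illustrative only.

```python
# Minimal sketch: estimate the target object's position with Canny edges + contours.
# Thresholds and the "largest contour is the target" assumption are illustrative.
import cv2

def locate_target(image_path: str):
    img = cv2.imread(image_path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                      # edge map of the scene
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)          # assume target = largest region
    x, y, w, h = cv2.boundingRect(largest)
    return x, y, w, h                                     # position information

print(locate_target("frame_display.jpg"))
```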
Step 104, replacing and displaying the target object in the image according to the replacement object and the position information.
Specifically, after the target object, the replacement object, and the position information of the target object are obtained, the target object may be replaced with the replacement object and displayed according to the position information of the target object. For example, after the wall in the image displayed by the electronic picture frame shown in Fig. 4 is replaced with the sea, the effect shown in Fig. 5 is displayed.
In summary, the image processing method provided by the embodiment of the present invention obtains the replacement instruction for the target object in the image, acquires the replacement object according to the description information of the replacement object, and obtains the position information of the target object in the image. The target object in the image is then replaced and displayed according to the replacement object and the position information. The target object of the current image can thus be automatically changed to the replacement object specified by the user according to the user's instruction, which is convenient and quick to operate and improves the interest of the image display.
Referring to fig. 2, a flowchart of another image processing method provided by the embodiment of the invention is shown. The image processing method comprises the following steps:
step 201, identifying an object in the image, and establishing and storing description information of the object.
Specifically, before the replacement instruction for the target object in the image is obtained, the objects in the image may be identified in advance. After each object in the image is identified, description information of each object is established and stored, the description information including the object name and position information. The computationally intensive image-processing work is thus completed in advance, which improves the response speed once the replacement instruction for the target object in the image is obtained. For example, the electronic picture frame may first obtain the image shown in Fig. 4, then identify the objects in the image, and establish and store their description information in advance. The stored name of each object serves as a standard entry.
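A minimal sketch of how the per-object standard entries could be built and stored ahead of time follows; the `detect_objects` recognizer stub and the JSON file layout are illustrative assumptions, since the patent does not prescribe a specific recognizer or storage format.

```python
# Minimal sketch: build and store description information (name + position) for each
# object in the displayed image. `detect_objects` is a hypothetical recognizer stub.
import json

def detect_objects(image_path: str):
    """Stand-in for any object recognizer; returns (name, bounding box) pairs."""
    return [("wall", (0, 0, 800, 300)),
            ("table", (150, 300, 500, 200)),
            ("soccer ball", (600, 420, 80, 80))]

def build_description_store(image_path: str, store_path: str = "descriptions.json"):
    entries = [{"name": name, "bbox": bbox} for name, bbox in detect_objects(image_path)]
    with open(store_path, "w", encoding="utf-8") as f:
        json.dump(entries, f, ensure_ascii=False, indent=2)  # persisted standard entries
    return entries

print(build_description_store("frame_display.jpg"))
```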
In step 202, a replacement instruction for the target object in the image is obtained.
Specifically, when the replacement instruction for the target object in the image is voice information, the replacement instruction may consist of a first instruction and a second instruction issued in sequence, where the first instruction is used to determine the first keyword and the second instruction is used to determine the second keyword. The replacement instruction may also be a single third instruction, from which the first keyword and the second keyword are determined together. That is, the voice information may be two groups of nouns spoken one after another, or a complete sentence containing nouns and a verb.
In practical applications, as one possible implementation, the first instruction and the second instruction of the user may be received one after another in a preset order; that is, the user's voice instructions are received in stages, and the first keyword and the second keyword are distinguished according to a preset rule. The voice instruction received first may be treated as the first instruction, and the voice instruction received second as the second instruction. For example, as shown in Fig. 4, when the background of the image displayed by the electronic picture frame is a wall and the user wants to change the background to the sea, the user may first speak a word meaning "wall" and then a word meaning "sea". When the word meaning "wall" is received it is taken as the first instruction, and when the word meaning "sea" is received it is taken as the second instruction.
As another possible implementation, a third instruction of the user may be received; that is, the user's voice instruction is received all at once, and the first keyword and the second keyword are determined from the overall semantics of the third instruction. For example, as shown in Fig. 4, when the background of the image displayed by the electronic picture frame is a wall and the user wants to change the background to the sea, the third instruction may be "change the wall to the sea".
Step 203, determining the target object and the replacement object according to the replacement instruction.
Specifically, if the user's voice instruction is received as a first instruction and a second instruction in a preset order: when the first instruction is received, the voice data corresponding to the first instruction is recognized as text, the recognized text is segmented into words to obtain at least one first entry, and a first keyword semantically matching the first entry is selected from the standard entries. When the second instruction is received, the voice data corresponding to the second instruction is recognized as text, the recognized text is segmented to obtain a second entry, and a second keyword semantically matching the second entry is selected from the standard entries. The first entry and the second entry are nouns. For example, if the received first instruction is "I want to replace the brick wall", the instruction is recognized as text and segmented, the noun "brick wall" is obtained as the first entry, and "wall", which semantically matches the first entry, is selected from the standard entries as the first keyword. If the received second instruction is "change it to sea water", the instruction is recognized as text and segmented, the noun "sea water" is obtained as the second entry, and "sea", which semantically matches the second entry, is selected from the standard entries as the second keyword. After the first keyword is determined, the target object matching the first keyword is determined according to the pre-stored description information of each object in the image. After the second keyword is determined, the replacement object matching the second keyword is extracted from the standby image library, or retrieved from the Internet.
If the user's voice instruction is received as a single third instruction: when the third instruction is received, the voice data corresponding to the third instruction is recognized as text, and the recognized text is segmented into words to obtain at least one third entry, where the entries obtained by segmentation include nouns and a verb. According to the verb among the entries and the position of each noun relative to the verb, the nouns are divided into a first entry and a second entry. A first keyword semantically matching the first entry and a second keyword semantically matching the second entry are then selected from the standard entries. For example, if the received third instruction is "change the brick wall to sea water", the voice data corresponding to the third instruction is recognized as text and segmented to obtain the verb "change" and the nouns "brick wall" and "sea water". From the verb it can be determined that "brick wall" is the first entry and "sea water" is the second entry. The first keyword "wall" matching the semantics of the first entry and the second keyword "sea" matching the semantics of the second entry are then selected from the standard entries. After the first keyword is determined, the target object matching the first keyword is determined according to the pre-stored description information of each object in the image. After the second keyword is determined, the replacement object matching the second keyword is extracted from the standby image library, or retrieved from the Internet.
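A minimal sketch of the keyword-determination logic for both instruction forms is given below, assuming the speech has already been recognized as text; the small synonym table, the verb marker, and the helper names are illustrative assumptions and not part of the patent.

```python
# Minimal sketch: derive the first and second keywords from recognized text.
# The synonym table, verb marker, and tokenization are illustrative assumptions.

STANDARD_ENTRIES = {"wall": {"brick wall", "wall"},
                    "sea": {"sea water", "sea", "ocean"}}

def match_standard_entry(noun: str):
    """Select the standard entry (keyword) that semantically matches the noun."""
    for keyword, synonyms in STANDARD_ENTRIES.items():
        if noun in synonyms:
            return keyword
    return None

def parse_two_step(first_text: str, second_text: str):
    """First/second instruction form: each utterance contributes one noun."""
    return match_standard_entry(first_text), match_standard_entry(second_text)

def parse_single(third_text: str, verb: str = " is changed to "):
    """Third-instruction form: split the sentence on the verb to order the nouns."""
    left, _, right = third_text.partition(verb)
    return match_standard_entry(left.strip()), match_standard_entry(right.strip())

print(parse_two_step("brick wall", "sea water"))           # ('wall', 'sea')
print(parse_single("brick wall is changed to sea water"))  # ('wall', 'sea')
```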
Step 204, obtaining the position information of the target object in the image.
Specifically, the position information of the target object in the image can be obtained through an image recognition technique such as edge detection.
Step 205, replacing and displaying the target object in the image according to the replacement object and the position information.
After the target object, the replacement object, and the position information of the target object are obtained, the target object at that position may be directly replaced with the replacement object. Alternatively, the whole image may first be processed according to the target object, the replacement object, and the position information, and the original image then replaced with the processed image.
Specifically, the replacement object may be cropped and scaled according to the size of the target object to obtain an image of the same size as the target object, and the cropped replacement object is then used to replace and display the target object. Alternatively, according to the position information, the replacement object may be combined with the area of the original image outside the target object to generate and display the replaced image. For example, after the wall in the image displayed by the electronic picture frame shown in Fig. 4 is replaced with the sea, the effect shown in Fig. 5 is displayed.
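The cropping-and-scaling variant could be realised as in the following sketch, which resizes the replacement to the target's bounding box and pastes it back over that region; treating the target as a rectangular region, and the file paths used, are simplifying assumptions for illustration.

```python
# Minimal sketch: scale the replacement to the target's bounding box and paste it in.
# Treating the target region as a rectangle is a simplification for illustration.
import cv2

def replace_region(image_path, replacement_path, bbox, out_path="replaced.jpg"):
    x, y, w, h = bbox                                   # position information of the target
    scene = cv2.imread(image_path)
    replacement = cv2.imread(replacement_path)
    patch = cv2.resize(replacement, (w, h))             # scale replacement to target size
    scene[y:y + h, x:x + w] = patch                     # overwrite the target region
    cv2.imwrite(out_path, scene)
    return out_path

replace_region("frame_display.jpg", "/frame/library/sea.jpg", (0, 0, 800, 300))
```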
Step 206, receiving a recovery instruction from the user to restore the replacement object to the target object.
Specifically, the recovery instruction contains a preset recovery keyword. For example, the recovery keyword may be set in advance to "restore the original appearance"; when the user speaks this command, the electronic picture frame resumes displaying the image as it was before the background was changed. In practical applications, the recovery of the image background can also be controlled by a threshold time: when the time since the image was changed exceeds the threshold time, the replacement object is automatically restored to the target object.
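A minimal sketch of the two recovery paths described above (a preset keyword and a threshold time) follows; the state class, the chosen keyword string, the one-hour threshold, and the print stand-in for the display call are illustrative assumptions.

```python
# Minimal sketch: restore the original image either when the preset recovery keyword is
# heard or after a threshold time has elapsed. Display handling is an illustrative stub.
import time

RECOVERY_KEYWORD = "restore the original appearance"
THRESHOLD_SECONDS = 3600            # assumed: auto-restore one hour after the change

class FrameState:
    def __init__(self, original_path):
        self.original_path = original_path
        self.changed_at = None       # set when a replacement is displayed

    def on_replacement_shown(self):
        self.changed_at = time.time()

    def maybe_restore(self, recognized_text: str = "") -> bool:
        expired = (self.changed_at is not None and
                   time.time() - self.changed_at > THRESHOLD_SECONDS)
        if recognized_text == RECOVERY_KEYWORD or expired:
            print("displaying", self.original_path)   # stand-in for the display call
            self.changed_at = None
            return True
        return False

state = FrameState("frame_display.jpg")
state.on_replacement_shown()
state.maybe_restore("restore the original appearance")   # True: original is shown again
```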
In summary, the image processing method provided in this embodiment of the present invention identifies the objects in the image, establishes and stores their description information, and obtains the replacement instruction for the target object in the image. The target object, the replacement object, and the position information of the target object in the image are determined according to the replacement instruction, and the target object in the image is then replaced and displayed according to the replacement object and the position information. The background of the current image can thus be automatically changed to the scene specified by the user according to the user's instruction, which is convenient and quick to operate and improves the interest of the image display. Moreover, because the computationally intensive image-processing work is completed before the replacement instruction for the target object is obtained, the response speed to the replacement instruction is effectively improved.
Referring to fig. 3, a flowchart of another image processing method according to an embodiment of the present invention is shown. The image processing method comprises the following steps:
In step 301, a replacement instruction for a target object in an image is obtained.
Specifically, when the replacement instruction for the target object in the image is voice information, the replacement instruction may consist of a first instruction and a second instruction issued in sequence, where the first instruction is used to determine the first keyword and the second instruction is used to determine the second keyword. The replacement instruction may also be a single third instruction, from which the first keyword and the second keyword are determined together.
In practical applications, before the user's replacement instruction is received, a trigger instruction of the user may be received to trigger execution of the subsequent steps. The trigger instruction contains a preset trigger keyword. For example, after receiving the trigger instruction, the electronic picture frame may take the voice instruction containing a noun that is received first as the first instruction, and the voice instruction containing a noun that is received afterwards as the second instruction.
Step 302, identifying the target object in the image.
After semantic analysis of the replacement instruction yields the first keyword, the objects in the image can be identified and the description information of each object in the image determined, the objects including the target object. According to the description information of each object in the image, the object in the image matching the first keyword is taken as the target object.
Specifically, features of the object corresponding to the first keyword may be sought in the image, and the target object and its position information can then be determined from those features. For example, the color of a wall in an image is typically light and relatively uniform, so color, area, and similar attributes can be used as basic features for finding wall regions. In addition, the color styles of the main subject and the background in an image usually differ considerably, so the image can be divided into at least two parts according to color style, and the part that matches the characteristics of a wall is selected; the region belonging to the wall is thereby identified as the target object.
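One way to use color and area as the basic features, as described above, is sketched below: the image is thresholded for light, low-saturation pixels and the largest such connected region is taken as the wall. The HSV thresholds and the "largest light region is the wall" rule are illustrative assumptions.

```python
# Minimal sketch: find the wall region by its light, low-saturation color and large area.
# HSV thresholds and the "largest light region is the wall" rule are illustrative.
import cv2
import numpy as np

def find_wall_mask(image_path: str):
    img = cv2.imread(image_path)
    if img is None:
        return None
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # light (high value) and weakly colored (low saturation) pixels
    mask = cv2.inRange(hsv, np.array([0, 0, 160]), np.array([180, 60, 255]))
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if num <= 1:
        return None
    # component 0 is the mask background; pick the largest remaining component
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return (labels == largest).astype(np.uint8) * 255     # binary mask of the wall region

mask = find_wall_mask("frame_display.jpg")
```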
Step 303, acquiring the replacement object.
After the replacement instruction for the target object in the image is obtained, a replacement object matching the second keyword may be acquired according to the second keyword. In practical applications, the replacement object may be extracted from the standby image library, or acquired from the Internet. The standby image library stores replacement objects and their description information in advance.
Step 304, replacing and displaying the target object in the image according to the replacement object and the position information.
Specifically, when there are at least two target objects, instead of changing one target object and then replacing the remaining target objects with the same replacement object one by one, the regions of the image other than the target objects may be re-recognized, and all the target objects replaced at once with the replacement object matching the second keyword. This effectively ensures the continuity of the replacement object in the changed image and further improves the display effect of the image in the electronic picture frame.
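To preserve that continuity across several target regions, the replacement can be resized once to the full image and copied only where a combined mask of all target regions is set, as in this sketch; the mask inputs are assumed to come from a segmentation step like the one above, and the file paths are illustrative.

```python
# Minimal sketch: replace all target regions in one pass so the replacement texture
# stays continuous across them. Masks are assumed to come from the segmentation step.
import cv2
import numpy as np

def replace_all_targets(scene_path, replacement_path, masks, out_path="replaced.jpg"):
    scene = cv2.imread(scene_path)
    h, w = scene.shape[:2]
    replacement = cv2.resize(cv2.imread(replacement_path), (w, h))  # one full-size layer
    union = np.zeros((h, w), dtype=np.uint8)
    for m in masks:                                  # combine every target region mask
        union = cv2.bitwise_or(union, m)
    scene[union > 0] = replacement[union > 0]        # copy the same layer into all regions
    cv2.imwrite(out_path, scene)
    return out_path
```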
Step 305, receiving a recovery instruction from the user to restore the replacement object to the target object.
Specifically, the recovery instruction contains a preset recovery keyword. For example, the recovery keyword may be set in advance to "restore the original appearance"; when the user speaks this command, the electronic picture frame resumes displaying the image as it was before the background was changed. In practical applications, the recovery of the image background can also be controlled by a threshold time: when the time since the image was changed exceeds the threshold time, the replacement object is automatically restored to the target object.
In summary, in the image processing method according to this embodiment of the present invention, the replacement instruction for the target object in the image is obtained, the target object in the image is identified according to the first keyword, the replacement object is acquired according to the second keyword, and the target object in the image is replaced and displayed according to the replacement object and the position information. By speaking in turn the nouns corresponding to the target object and the replacement object, the user can have the background of the current image automatically changed to the specified scene, which is convenient and quick to operate and improves the interest of the image display. Furthermore, the original appearance of the image whose background has been changed can be quickly restored through the recovery instruction, so that the user can compare the image before and after the change.
Referring to fig. 6, a block diagram of an image processing apparatus according to an embodiment of the present invention is shown. The image processing apparatus includes: an instruction acquisition module 61, a replacement object acquisition module 62, a position information acquisition module 63, and a replacement module 64.
The instruction acquisition module 61 is configured to obtain a replacement instruction for the target object in the image, the replacement instruction containing description information of the replacement object;
the replacement object acquisition module 62 is configured to acquire the replacement object according to the description information of the replacement object;
the position information acquisition module 63 is configured to obtain the position information of the target object in the image;
and the replacement module 64 is configured to replace and display the target object in the image according to the replacement object and the position information.
In summary, in the image processing apparatus according to the embodiment of the present invention, the instruction acquisition module 61 obtains the replacement instruction for the target object in the image, the replacement object acquisition module 62 acquires the replacement object according to the description information of the replacement object, and the position information acquisition module 63 obtains the position information of the target object in the image. The replacement module 64 then replaces and displays the target object in the image according to the replacement object and the position information. The target object of the current image can thus be automatically changed to the replacement object specified by the user according to the user's instruction, which is convenient and quick to operate and improves the interest of the image display.
An embodiment of the invention also provides an electronic picture frame, which comprises a picture frame assembly, a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the steps of the image processing method are implemented. The electronic picture frame can be placed in any suitable position, such as on a cabinet, a tea table, or a desk, or may be hung on a wall by a bracket. The picture frame assembly comprises components such as an easel and a display screen. In practical use, the electronic picture frame provides both an image recognition function and a voice recognition function.
In addition, other display devices with display screens may be used instead of the electronic picture frame; any product or component with a display function, such as a mobile phone, tablet computer, television, notebook computer, digital photo frame, or navigation device, falls within the scope of the embodiments of the present invention.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The image processing method and apparatus provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help in understanding the method and its core idea. At the same time, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Claims (10)
1. An image processing method applied to an electronic picture frame, characterized by comprising the following steps:
obtaining a replacement instruction of a target object in an image; the replacement instruction contains description information of a replacement object;
acquiring the replacement object according to the description information of the replacement object;
obtaining position information of the target object in the image;
and replacing and displaying the target object in the image according to the replacement object and the position information.
2. The image processing method according to claim 1, further comprising, before the obtaining of the replacement instruction for the target object in the image:
identifying an object in the image, the object comprising the target object;
and establishing and storing the description information of the object, wherein the description information comprises the object name.
3. The image processing method according to claim 2, wherein the step of obtaining a replacement instruction for the target object in the image comprises:
receiving a voice signal;
performing semantic recognition on the voice signal, and determining a first keyword and a second keyword;
determining a target object matched with the first keyword according to the pre-stored description information of the object;
wherein the second keyword includes description information of the replacement object.
4. The image processing method according to claim 1, further comprising, after obtaining the replacement instruction for the target object in the image:
identifying the target object in the image.
5. The image processing method according to claim 4, wherein the step of obtaining a replacement instruction for the target object in the image comprises:
receiving a voice signal;
performing semantic recognition on the voice signal, and determining a first keyword and a second keyword;
wherein the first keyword includes description information of the target object, and the second keyword includes description information of the replacement object.
6. The image processing method according to claim 5, wherein the step of identifying the target object in the image comprises:
identifying objects in the image, and determining description information of each object in the image, wherein the objects comprise the target object;
and according to the description information of each object in the image, taking the object matched with the first keyword in the image as a target object.
7. The image processing method according to claim 1, wherein the step of acquiring the replacement object based on the description information of the replacement object comprises:
extracting the replacement object from a standby image library; or,
acquiring the replacement object from the Internet;
the spare image library is stored with a replacement object and description information of the replacement object in advance.
8. The image processing method according to any one of claims 1 to 7, further comprising, after the replacement display of the target object in the image:
receiving a recovery instruction from a user to restore the replacement object to the target object; the recovery instruction comprises a preset recovery keyword.
9. An image processing apparatus characterized by comprising:
the instruction acquisition module is used for acquiring a replacement instruction of a target object in the image; the replacement instruction contains description information of a replacement object;
the replacement object acquisition module is used for acquiring the replacement object according to the description information of the replacement object;
the position information acquisition module is used for acquiring the position information of the target object in the image;
and the replacement module is used for replacing and displaying the target object in the image according to the replacement object and the position information.
10. An electronic picture frame, characterized by comprising a picture frame assembly, a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810052882.4A CN108095465A (en) | 2018-01-19 | 2018-01-19 | A kind of image processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810052882.4A CN108095465A (en) | 2018-01-19 | 2018-01-19 | A kind of image processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108095465A true CN108095465A (en) | 2018-06-01 |
Family
ID=62218713
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810052882.4A Pending CN108095465A (en) | 2018-01-19 | 2018-01-19 | A kind of image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108095465A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109377502A (en) * | 2018-10-15 | 2019-02-22 | 深圳市中科明望通信软件有限公司 | A kind of image processing method, image processing apparatus and terminal device |
CN111045618A (en) * | 2018-10-15 | 2020-04-21 | 广东美的白色家电技术创新中心有限公司 | Product display method, device and system |
CN112784090A (en) * | 2019-11-04 | 2021-05-11 | 阿里巴巴集团控股有限公司 | Image processing method, object searching method, computer device, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1570968A (en) * | 2004-01-18 | 2005-01-26 | 马堃 | Digital image synthesizing method by reusing electronic photo frame |
CN105160695A (en) * | 2015-06-30 | 2015-12-16 | 广东欧珀移动通信有限公司 | Picture processing method and mobile terminal |
CN106204435A (en) * | 2016-06-27 | 2016-12-07 | 北京小米移动软件有限公司 | Image processing method and device |
CN106331476A (en) * | 2016-08-18 | 2017-01-11 | 努比亚技术有限公司 | Image processing method and device |
CN106791370A (en) * | 2016-11-29 | 2017-05-31 | 北京小米移动软件有限公司 | A kind of method and apparatus for shooting photo |
CN107085823A (en) * | 2016-02-16 | 2017-08-22 | 北京小米移动软件有限公司 | Face image processing process and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180601 |
|
RJ01 | Rejection of invention patent application after publication |