Detailed Description
The following description of the embodiments of the present application is made clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of protection of the application.
The terms "first", "second", and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used may be interchanged where appropriate, so that the embodiments of the application may be practiced in orders other than those specifically illustrated or described herein. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
According to an aspect of the embodiments of the present application, an image processing method is provided. As an optional implementation, the image processing method may be applied to, but is not limited to, the environment shown in fig. 1, where fig. 1 is a schematic diagram of a hardware environment of the image processing method according to an embodiment of the present application. As shown in fig. 1, the electronic terminal 10 includes an input part 100, an image acquisition part 200, a processor 300, and a display part 400.
In this embodiment, the electronic terminal 10 obtains an image to be processed inputted through the input part 100 or the image acquisition part 200, and displays the image to be processed in a first preview window 410 of the display part 400. The processor 300 determines the image features of the image to be processed, and at least one image template corresponding to the image features is displayed in a second preview window 420 of the display part 400. A first input acting on the at least one image template is received, and in response to the first input, the processor 300 performs image processing on the image to be processed according to the image template selected by the first input to obtain a target image, which is displayed in the first preview window 410. In this way, image processing is performed flexibly according to the image features of the image to be processed, reliance on manual operation by the user is reduced, and the convenience of image processing is improved.
In this embodiment, the terminal system of the electronic terminal includes, but is not limited to, an Android system, an iOS system, a Linux system, and the like, and the terminal system of the electronic terminal is not limited in this embodiment.
According to an embodiment of the present application, an image processing method is provided. As shown in fig. 2, the method specifically may include the following steps:
S202, obtaining image features of an image to be processed;
S204, displaying at least one image template corresponding to the image features;
S206, receiving a first input acting on at least one image template;
S208, responding to the first input, and performing image processing on the image to be processed according to the image template selected by the first input so as to obtain a target image;
S210, displaying the target image.
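The following Python sketch illustrates how steps S202 to S210 fit together as one flow. It is not part of the application; the recognizer, template library, and all names are illustrative assumptions, and the recognition and processing functions are stand-ins for real image analysis.

```python
# Minimal sketch of the S202-S210 flow; all names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ImageTemplate:
    name: str
    feature: str  # image feature the template is associated with


TEMPLATE_LIBRARY = [
    ImageTemplate("heart_collage", feature="gesture:love"),
    ImageTemplate("smile_sticker", feature="expression:happy"),
]


def get_image_features(image) -> str:
    """S202: stand-in for gesture/mouth-shape/expression recognition."""
    return "gesture:love"  # a real recognizer would analyze the pixels


def matching_templates(feature: str) -> list[ImageTemplate]:
    """S204: collect the templates associated with the recognized feature."""
    return [t for t in TEMPLATE_LIBRARY if t.feature == feature]


def process(image, template: ImageTemplate):
    """S208: apply the selected template (collage, component, crop, ...)."""
    return f"{image} processed with {template.name}"


if __name__ == "__main__":
    image = "photo_001"
    feature = get_image_features(image)       # S202
    candidates = matching_templates(feature)  # S204, shown to the user
    selected = candidates[0]                  # S206: first input selects one
    target = process(image, selected)         # S208
    print(target)                             # S210: display the target image
```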
In this embodiment, the image to be processed may be an image or a video frame in any format; it may be acquired in real time by the electronic terminal, or may be a designated image input into a preset application program. The format and the acquisition mode of the image to be processed are not limited in this embodiment.
In this embodiment, the image features include, but are not limited to, gesture features, color features, and facial features, where the facial features include, but are not limited to, expression features, mouth shape features, and the like. The above is merely an example, and the image features are not limited in this embodiment.
In this embodiment, the image features of the image to be processed are obtained by performing image recognition on the image to be processed. Recognition modes for the image features include, but are not limited to, object recognition, text recognition, expression recognition, and other image recognition modes; specific image feature recognition methods are mature in the prior art and are not described in detail herein.
As an optional technical solution, in this embodiment, performing image recognition on the image to be processed includes, but is not limited to, performing image recognition on a plurality of images to be processed to obtain the same image features of the plurality of images to be processed, and determining an image template corresponding to the same image features. The term "plurality of images" as used herein refers to two or more images to be processed. Specifically, similar image features of the plurality of images to be processed are determined as the same image features, an image template corresponding to the same image features is then determined, and unified or batch processing is performed on the plurality of images to be processed. The plurality of images to be processed include, but are not limited to, a plurality of photos taken in a continuous shooting mode of a camera, a plurality of photos in a photo album selected by the user, or a plurality of consecutive video frames in a video.
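As a rough illustration of the batch case, the sketch below groups images by their recognized feature so that images sharing a feature can be processed with one template. The recognizer and image identifiers are placeholders, not anything defined by the application.

```python
# Illustrative sketch of batch processing: images whose recognized features are
# the same share one template. The recognizer and identifiers are assumptions.
from collections import defaultdict


def recognize(image_id: str) -> str:
    # Placeholder recognizer; a real one would run gesture/expression detection.
    fake_results = {"burst_1": "gesture:love", "burst_2": "gesture:love",
                    "burst_3": "expression:happy"}
    return fake_results[image_id]


def group_by_feature(image_ids):
    groups = defaultdict(list)
    for image_id in image_ids:
        groups[recognize(image_id)].append(image_id)
    return groups


# Photos from a continuous-shooting burst: the two "love" gestures are handled
# together with one template, the remaining photo with another.
print(dict(group_by_feature(["burst_1", "burst_2", "burst_3"])))
```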
It should be noted that the image template is determined according to the image features of the image to be processed; that is, the image template is associated with the image features. For example, in one example, when the expression feature in the image to be processed is identified as "happy", the image template of the image to be processed is determined as the related image template corresponding to "happy"; in another example, when the gesture feature in the image to be processed is identified as "loving", the image template of the image to be processed is determined as the related image template corresponding to "loving".
In a specific application scenario, the image features of the image to be processed displayed in a first preview window are acquired, where the first preview window may be a preview interface of a mobile phone camera or an editing preview interface of the image to be processed in a preset application program, and is used for displaying all or part of the image to be processed. After the image features of the image to be processed are acquired, at least one image template corresponding to the image features is displayed in a second preview window, where the second preview window is used for displaying the image template. The image template may be a template of a target image, such as a jigsaw template, a component template, or a cropping template. The jigsaw template is a template formed by splicing a plurality of images; the component template is a template for adding preset components (such as flowers, the sun, text, and the like) to the image; and the cropping template is a template for cropping the image according to a preset shape or size. The above image templates are merely examples, and the image template is not limited in this embodiment.
In this embodiment, the first input includes, but is not limited to, a click, long-press, or slide selection operation acting on a virtual key or a preset control on a touch display screen, or a click, long-press, or slide operation on a physical key. In particular, the first input acts on an image template displayed in the second preview window. By receiving the first input, the image template selected by the user can be determined, and image processing can be performed on the image to be processed using the image template selected by the user.
Specifically, the image processing modes include, but are not limited to, stitching, adding components, cropping, adding filters, and the like; in this embodiment, the image processing mode of the image to be processed is determined mainly according to the image template.
Finally, the target image is displayed in the first preview window, where the target image is an image obtained by performing image processing on the image to be processed according to the corresponding image template. The first preview window further includes cancel and save controls for saving the target image or canceling the editing of the image to be processed.
According to this embodiment, the image features of the image to be processed are determined, at least one image template corresponding to the image features is displayed, a first input acting on the at least one image template is received, and in response to the first input, image processing is performed on the image to be processed according to the image template selected by the first input to obtain a target image, which is then displayed. Flexible image processing of the image to be processed according to the image features is thus realized, reliance on manual operation by the user is reduced, and the convenience of image processing is improved.
Optionally, in this embodiment, acquiring the image features of the image to be processed includes, but is not limited to: determining an image acquisition mode of a preset application program in the electronic terminal, performing image acquisition according to the image acquisition mode, displaying the image to be processed, and performing image recognition on the image to be processed to obtain the image features.
Specifically, in this embodiment, image acquisition is performed after the image acquisition mode is selected to obtain the image to be processed, the image to be processed is displayed in the first preview window, and image recognition is performed on the image to be processed. In one example, a photographing mode is selected in the camera application program of a mobile phone; after photographing, the photo is previewed on the photo preview interface, and image recognition is performed on the previewed photo to obtain its image features.
After the image acquisition mode is set, an image can be acquired and the image to be processed can be directly recognized to obtain the image features. The user does not need to manually select an image, the image features of the image to be processed are acquired directly, and the convenience of image processing is improved.
Optionally, in the present embodiment, the image features include at least one of gesture features, mouth shape features, expression features.
Specifically, performing image recognition on the image to be processed to obtain the image features includes, but is not limited to: performing gesture recognition on the image to be processed to obtain gesture features of the image to be processed, or performing mouth shape recognition on the image to be processed to obtain mouth shape features of the image to be processed, or performing face recognition on the image to be processed to obtain expression features of the image to be processed. In this embodiment, the image recognition of the image to be processed may also combine gesture recognition, mouth shape recognition, and face recognition to obtain the corresponding image features respectively. According to this embodiment, different image features can be obtained based on different recognition modes, so that different image templates can be obtained, which increases the diversity of image processing.
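A minimal sketch of combining several recognizers into one feature set is given below; the three recognizer functions are placeholders standing in for real gesture, mouth-shape, and expression models, and are not APIs from the application.

```python
# Sketch of combining several recognizers; the recognizers are placeholders.
def recognize_gesture(image):
    return "love"        # e.g. the output of a hand-sign classifier


def recognize_mouth_shape(image):
    return "open"        # e.g. a landmark-based mouth-shape model


def recognize_expression(image):
    return "happy"       # e.g. a face-expression model


def image_features(image) -> dict:
    """Each recognizer contributes one feature; any of them may drive the
    template query, so different recognition modes yield different templates."""
    return {
        "gesture": recognize_gesture(image),
        "mouth_shape": recognize_mouth_shape(image),
        "expression": recognize_expression(image),
    }


print(image_features("photo_001"))
```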
Optionally, in this embodiment, displaying at least one image template corresponding to the image features includes, but is not limited to: querying a preset database for an image template matching the image features; displaying the at least one image template if at least one image template matching the image features exists; and displaying prompt information if no image template matching the image features exists, where the prompt information is used to indicate that no image template matching the image features exists.
Specifically, in this embodiment, after the image features of the image to be processed are determined, an image template matching the image features is queried in a preset database; that is, the image template is queried based on the image features. The preset database may be a database of a cloud server or a local database of the electronic terminal, which is not limited in this embodiment. If an image template matching the image features is found, the image template is displayed in the second preview window, for example, in the photo template preview interface of the camera. If not, prompt information is displayed in the second preview window, where the prompt information is used to prompt that no image template matching the image features exists. In this embodiment, the number of image templates displayed in the second preview window is not limited and may be set according to practical experience.
By this embodiment, an image template matching the image features is queried in the preset database and the query result is displayed, so that the matching degree between the image template used for image processing and the image to be processed can be improved, thereby realizing flexible processing of the image to be processed.
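The sketch below shows the query step in miniature: a plain dict stands in for the preset database (local or cloud-side), and the result is either a list of templates to show or a prompt message. The feature keys and template names are illustrative assumptions.

```python
# Sketch of the query step: look the feature up in a "preset database" (here a
# plain dict) and either return matching templates or signal a prompt.
PRESET_DATABASE = {
    "gesture:love": ["heart_collage", "double_heart_collage"],
    "expression:happy": ["sunshine_sticker"],
}


def query_templates(feature: str):
    templates = PRESET_DATABASE.get(feature, [])
    if templates:
        return {"show": templates}  # displayed in the second preview window
    return {"prompt": f"No image template matches feature '{feature}'."}


print(query_templates("gesture:love"))
print(query_templates("gesture:thumbs_up"))
```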
Optionally, in this embodiment, after the preset database is queried for an image template matching the image features, the method further includes, but is not limited to: if no image template matching the image features exists, acquiring the use frequencies corresponding to a plurality of image templates in the preset database, selecting a preset number of target image templates according to the use frequencies corresponding to the plurality of image templates, and displaying the preset number of target image templates.
Specifically, in this embodiment, if no image template matching the image features exists in the preset database, the image templates in the preset database are sorted according to their use frequencies, a preset number of target image templates are obtained according to the sorted order, and the target image templates are displayed in the second preview window. It should be noted that the preset number may be set according to practical experience, which is not limited in this embodiment.
In another example, in the absence of an image template matching the image features, a similar image feature whose similarity to the image features is greater than a preset threshold is determined, the preset database is queried for an image template matching the similar image feature, and that image template is displayed.
By this embodiment, in the case where no image template matching the image features exists, image templates are pushed according to their use frequencies, so that the pushed image templates can adapt to the use habits of the user, and the pushing accuracy of image templates is improved.
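A small sketch of the frequency-based fallback follows; the usage counts and the preset number are made-up illustrative values.

```python
# Sketch of the fallback when no template matches: rank templates in the
# preset database by use frequency and push the top few. Counts are made up.
def fallback_by_frequency(usage_counts: dict, preset_number: int = 3):
    ranked = sorted(usage_counts, key=usage_counts.get, reverse=True)
    return ranked[:preset_number]


usage_counts = {"heart_collage": 42, "grid_collage": 17,
                "sunshine_sticker": 8, "film_frame": 3}
# -> ['heart_collage', 'grid_collage', 'sunshine_sticker']
print(fallback_by_frequency(usage_counts))
```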
Optionally, in this embodiment, the image template includes a component template, the component template includes at least one preset component, and performing image processing on the image to be processed according to the image template selected by the first input includes, but is not limited to: receiving a second input acting on the at least one preset component, where the second input is used to adjust the position of the preset component, and in response to the second input, determining the position of the preset component in the image to be processed according to the second input.
Specifically, in this embodiment, in the case where the image template selected by the first input is a component template, the component template includes at least one preset component. For example, in the image processing interface schematically shown in fig. 3a, the image 302 to be processed is displayed in a first preview window 310 of the graphical user interface of the electronic terminal, the image template 304, the image template 306, and the image template 308 are displayed in a second preview window 320, and the image template 306 is selected by the first input. In fig. 3b, a second input acting on the preset component 3060 in the image template 306 is received; the preset component 3060 is selected and dragged, and its position relative to the image 302 to be processed is adjusted.
Through this embodiment, the position of the preset component relative to the image to be processed is adjusted by receiving the second input acting on the preset component, so that image processing can follow the user's input and the use experience of the user is improved.
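The sketch below illustrates the second input as a simple drag that updates a component's position relative to the image to be processed. The coordinate scheme and names are illustrative assumptions, not part of the application.

```python
# Sketch of the second input: a preset component in a component template is
# dragged, and its position relative to the image to be processed is updated.
from dataclasses import dataclass


@dataclass
class PresetComponent:
    name: str
    x: int
    y: int

    def drag_to(self, x: int, y: int):
        """Second input: move the component to the dropped position."""
        self.x, self.y = x, y


flower = PresetComponent("flower", x=10, y=10)
flower.drag_to(240, 360)  # user drags the component over the subject
print(flower)             # final position composited onto the target image
```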
Optionally, in this embodiment, the image template includes a jigsaw template, the jigsaw template includes N image windows distributed at preset positions, the image windows are used for filling images, and performing image processing on the image to be processed according to the image template selected by the first input includes, but is not limited to: receiving a third input acting on the jigsaw template, and in response to the third input, filling the image to be processed into the N image windows respectively, where N is a positive integer greater than 1.
Specifically, in this embodiment, taking the image feature being a gesture feature as an example, as shown in fig. 4a, image recognition is performed on the image 40 to be processed in the first preview window 410, and the image feature of the image 40 to be processed is determined to be a gesture feature. As shown in fig. 4b, matching jigsaw templates are queried according to the gesture feature of the image 40 to be processed, the jigsaw template 402 (a heart-shaped template consisting of several image windows) and the jigsaw template 404 (a double-heart template consisting of several image windows) matching the gesture feature are displayed in the second preview window 420, and a third input acting on the jigsaw template 402 is received. As shown in fig. 4c, the target image 42 processed by the jigsaw template 402 is displayed in the first preview window, and the image windows in the target image 42 are each filled with the image 40 to be processed.
Through this embodiment, the jigsaw template is selected through the third input and the image to be processed is processed according to the jigsaw template, so that image processing can follow the user's input and the use experience of the user is improved.
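The following minimal sketch shows the third input filling every window of a selected jigsaw template with the same image to be processed, as in the fig. 4c description. The window count and names are assumptions.

```python
# Sketch of the third input: every window of the selected jigsaw template is
# filled with the image to be processed (cf. the heart-shaped template example).
def fill_jigsaw(template_windows: int, image_to_process: str) -> list[str]:
    return [image_to_process] * template_windows


heart_template_windows = 5  # N > 1, layout fixed by the template
print(fill_jigsaw(heart_template_windows, "photo_040"))
```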
Optionally, in this embodiment, filling the image to be processed into the N image windows respectively in response to the third input includes, but is not limited to: receiving a fourth input acting on M of the image windows; in response to the fourth input, filling a first image to be processed into the M image windows, the first image to be processed being the current image to be processed; receiving a fifth input acting on P of the image windows; and in response to the fifth input, controlling the electronic terminal to collect Q second images to be processed and filling the Q second images to be processed into the P image windows, where M, P, and Q are positive integers, the sum of M and P is N, and Q is less than or equal to P.
In the above example, when the image template obtained by matching according to the image features is a jigsaw template, the image to be processed may be filled into the plurality of image windows of the jigsaw template. In this embodiment, different images to be processed may also be filled into different image windows. After the jigsaw template is determined for the current image to be processed, a pre-stored image may be selected, or an image acquired in real time may be filled into an image window, by selecting different image windows. In this example, the number of image windows filled with the image to be processed is not limited and may be set according to practical experience.
In one example, as in the image processing interface schematic shown in fig. 5, a selected jigsaw template 500 is displayed in the first preview window 50, where the jigsaw template includes an image window 502 and an image window 504. Photo A is dragged to the image window 502 through a drag operation to fill photo A into the image window 502. By clicking the image window 504, a photographing window 52 and an import window 54 are displayed in the first preview window; selecting the photographing window 52 calls the camera to take a new photo, which is then filled into the image window 504, while selecting the import window 54 accesses the photos stored in the photo album, and the selected photo is filled into the image window 504.
Through this embodiment, flexible handling of the image windows in the jigsaw template is realized, which increases the diversity of the image processing process.
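The sketch below illustrates the fourth and fifth inputs together: M windows take the current image, the remaining P windows take Q newly captured or imported images, with M + P = N and Q ≤ P. The concrete values and names are illustrative assumptions.

```python
# Sketch of the fourth and fifth inputs: M windows take the current image, the
# remaining P windows take Q newly captured (or imported) images, M + P = N.
def fill_windows(n: int, current_image: str, captured_images: list[str],
                 m: int) -> list:
    p = n - m
    if len(captured_images) > p:
        raise ValueError("Q must not exceed P")
    windows = [current_image] * m                    # fourth input: M windows
    windows += captured_images                       # fifth input: Q new images
    windows += [None] * (p - len(captured_images))   # any windows still empty
    return windows


print(fill_windows(n=3, current_image="photo_A",
                   captured_images=["new_shot_1"], m=2))
```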
In this embodiment, the jigsaw template is determined according to first image features of a third to-be-processed image, the third to-be-processed image being the current to-be-processed image, and the step of filling the to-be-processed image into the N image windows respectively in response to the third input further includes, but is not limited to: obtaining R fourth to-be-processed images, obtaining second image features corresponding to the R fourth to-be-processed images respectively, and determining a target jigsaw template among the jigsaw templates according to the total number R+1 of the third to-be-processed image and the R fourth to-be-processed images, the first image features, and the R second image features, where R is a positive integer.
Specifically, after a plurality of jigsaw templates is determined through the gesture features of the first photo, if R more photos continue to be taken, the matched jigsaw templates can be further screened after shooting ends by combining the gesture features, the image features of the R photos, and the total number R+1, so that the number of image windows required by the jigsaw template corresponds to the total number R+1 of photos actually taken by the user.
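As a rough illustration of this screening step, the sketch below keeps only the candidate templates whose window count equals R+1 and whose associated feature matches the recognized features. The candidate records are assumptions used solely to show the filtering.

```python
# Sketch of narrowing candidate jigsaw templates once R more photos exist:
# keep templates whose window count equals R + 1 and whose feature matches.
def select_target_template(candidates, total_photos: int, features: set):
    return [t for t in candidates
            if t["windows"] == total_photos and t["feature"] in features]


candidates = [{"name": "heart", "windows": 5, "feature": "gesture:love"},
              {"name": "double_heart", "windows": 3, "feature": "gesture:love"}]
print(select_target_template(candidates, total_photos=3,
                             features={"gesture:love"}))
```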
Optionally, in this embodiment, filling the to-be-processed image into the N image windows respectively in response to the third input includes, but is not limited to: receiving a sixth input, where the sixth input acts on two of the N image windows, and in response to the sixth input, exchanging the to-be-processed images in the two image windows.
In a specific application scenario, as shown in the image processing interface schematic diagram in fig. 6, a target image 62 edited by a jigsaw template is displayed in the first preview window 60, where the target image includes an image window 610 and an image window 612; the image filled in the image window 610 is an image 620, and the image filled in the image window 612 is an image 622. In one example, by selecting the image window 610 and the image window 612 and clicking a preset virtual button 64, the images in the two image windows are exchanged. In another example, the to-be-processed image in the image window 610 may be selected and dragged into the image window 612 to exchange the to-be-processed images in the two image windows.
Through this embodiment, the positions of the to-be-processed images in two image windows of the jigsaw template are exchanged and adjusted through the sixth input acting on the two image windows.
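A minimal sketch of the sixth input follows: the contents of two windows are swapped, whether triggered by a swap button or by dragging one window onto the other. The window list is an illustrative stand-in.

```python
# Sketch of the sixth input: exchange the images filled into two windows of the
# jigsaw template (swap button or drag-and-drop). The list is a stand-in.
def swap_windows(windows: list, i: int, j: int) -> list:
    windows[i], windows[j] = windows[j], windows[i]
    return windows


print(swap_windows(["image_620", "image_622"], 0, 1))
```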
Optionally, in this embodiment, performing image processing on the image to be processed according to the image template selected by the first input in response to the first input to obtain the target image further includes, but is not limited to: performing image processing on the image to be processed according to the image template selected by the first input to obtain a first image; receiving a seventh input acting on a preset control, where the seventh input is used to select a filter for the first image; and in response to the seventh input, processing the first image according to the filter to obtain the target image.
In a specific application scenario, as shown in the image processing interface schematic diagram in fig. 7, a first image 710 obtained after processing based on the image template is displayed in the first preview window 70; a filter is then selected in the third preview window 72 through a seventh input, and the first image 710 is processed according to the filter.
Through this embodiment, the first image obtained after the image to be processed is processed based on the image template can be further processed according to the filter effect selected by the seventh input, which enriches the image processing modes.
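The sketch below shows the seventh input as a lookup into a filter table applied on top of the template result. The filter names and string-based "processing" are purely illustrative; real filters would adjust pixel values.

```python
# Sketch of the seventh input: after the template yields a first image, a
# filter chosen in the third preview window is applied to yield the target
# image. The filter table is illustrative only.
FILTERS = {
    "warm": lambda img: f"{img}+warm_tone",
    "mono": lambda img: f"{img}+black_and_white",
}


def apply_filter(first_image: str, filter_name: str) -> str:
    return FILTERS[filter_name](first_image)


first_image = "photo_040_with_heart_template"
print(apply_filter(first_image, "warm"))  # target image in the first preview window
```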
Optionally, in this embodiment, after performing image processing on the image to be processed according to the image template selected by the first input in response to the first input to obtain the target image, the method further includes, but is not limited to: setting the image features and the image template selected by the first input as a common template.
Specifically, in this embodiment, after the user finishes processing the image to be processed according to the image template, the image features and the image template are saved as a common template, so as to learn the user's habits. Specifically, the facial features, gesture features, expression features, and the like of the user can be saved, and in subsequent image processing, the corresponding common template is quickly found according to the image features, which improves the search speed for image templates and the user experience.
Optionally, in this embodiment, after the image features and the image template selected by the first input are set as the common template, the method further includes, but is not limited to: displaying the common template in the second preview window when the image features corresponding to a third to-be-processed image are the same as the image features corresponding to the common template.
In a specific application scenario, after the image features and the image template selected by the first input are set as the common template, if, in a subsequent image processing process, the image features of the image to be processed are the same as the image features corresponding to the common template, the common template is directly displayed in the second preview window. It should be noted that, in this embodiment, the third to-be-processed image refers to an image processed after the image template has been saved as the common template, that is, image processing has already been completed for another to-be-processed image having the same image features before the third to-be-processed image is processed.
Alternatively, in this embodiment, when the image features corresponding to the third to-be-processed image are the same as the image features corresponding to the common template, the common template is preferentially displayed or highlighted in the second preview window, so as to improve the pushing efficiency of the image template.
Under the condition that the image features corresponding to the third to-be-processed image are the same as the image features corresponding to the common template, the common template is displayed in the second preview window, so that the pushing efficiency of the common template is improved, and the user experience is improved.
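A small sketch of the common-template mechanism follows: the recognized feature and chosen template are stored after editing, and later images with the same feature hit this store (shown or highlighted first) before the ordinary database query. The dict-based storage and names are illustrative assumptions.

```python
# Sketch of the "common template" mechanism: store the feature/template pair
# after editing and look it up first for later images with the same feature.
COMMON_TEMPLATES: dict[str, str] = {}


def save_common_template(feature: str, template_name: str) -> None:
    COMMON_TEMPLATES[feature] = template_name


def lookup(feature: str):
    # Common template first (shown or highlighted in the second preview
    # window); a miss would fall back to the ordinary database query.
    return COMMON_TEMPLATES.get(feature)


save_common_template("gesture:love", "heart_collage")
print(lookup("gesture:love"))  # 'heart_collage' is pushed immediately next time
```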
According to this embodiment, the image features of the image to be processed are determined, at least one image template corresponding to the image features is displayed in the second preview window, a first input acting on the at least one image template is received, and in response to the first input, image processing is performed on the image to be processed according to the image template selected by the first input to obtain a target image, which is displayed in the first preview window. Flexible image processing of the image to be processed according to the image features is thus realized, reliance on manual operation by the user is reduced, and the convenience of image processing is improved. This solves the problem in the related art that image editing is inconvenient because images are selected and edited manually by the user and cannot be flexibly edited and laid out according to the features in the images.
It should be noted that, in the image processing method provided in the embodiments of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiments of the present application, the image processing method provided in the embodiments of the present application is described by taking an image processing apparatus executing the image processing method as an example.
According to another aspect of an embodiment of the present application, an image processing apparatus is provided, as shown in fig. 8, which may specifically include:
1) An acquiring unit 802, configured to acquire an image feature of an image to be processed;
2) A first display unit 804, configured to display at least one image template corresponding to the image feature;
3) A receiving unit 806 for receiving a first input acting on the at least one image template;
4) A processing unit 808, configured to respond to the first input, and perform image processing on the image to be processed according to an image template selected by the first input, so as to obtain a target image;
5) And a second display unit 810 for displaying the target image.
Optionally, in the present embodiment, the image features include at least one of gesture features, mouth shape features, and expression features.
Alternatively, in the present embodiment, the first display unit 804 includes:
1) The query module is used for querying an image template matched with the image characteristics in a preset database;
2) A second display module, configured to display at least one image template in the second preview window if there is at least one image template matching the image feature;
3) And the prompt module is used for displaying prompt information in a preset window if no image template matching the image feature exists, where the prompt information is used to indicate that no image template matching the image feature exists.
Optionally, in this embodiment, the first display unit 804 further includes:
1) The second determining module is used for acquiring, after the preset database is queried for an image template matching the image feature, the use frequencies corresponding to a plurality of image templates in the preset database if no image template matching the image feature exists;
2) The selecting module is used for selecting a preset number of target image templates according to the use frequencies respectively corresponding to the plurality of image templates;
3) And the third display module is used for displaying the target image templates with the preset number in the second preview window.
Optionally, in this embodiment, the image template includes a preset component, where the processing unit 808 includes:
1) The first receiving module is used for receiving a second input acting on the preset component, wherein the second input is used for adjusting the position of the preset component;
2) And the third determining module is used for responding to the second input and determining the position of the preset component in the image to be processed according to the second input.
Optionally, in this embodiment, the image template includes a jigsaw template, where the jigsaw template includes N image windows distributed according to preset positions, and the image windows are used to fill an image, and the processing unit 808 includes:
1) A second receiving module for receiving a third input acting on the puzzle template;
2) And the filling module is used for respectively filling the images to be processed into the N image windows in response to the third input, wherein N is a positive integer greater than 1.
Optionally, in this embodiment, the filling module includes:
1) A first receiving sub-module for receiving a fourth input acting on M of said image windows;
2) The first filling submodule is used for responding to the fourth input and filling a first image to be processed into the M image windows, wherein the first image to be processed is a current image to be processed;
3) A second receiving sub-module for receiving a fifth input acting on P of said image windows;
4) The image acquisition sub-module is used for responding to the fifth input and controlling the electronic terminal to acquire Q second images to be processed;
5) A second filling sub-module, configured to fill the Q second images to be processed into the P image windows,
where M, P, and Q are positive integers, the sum of M and P is N, and Q is less than or equal to P.
The image processing device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc., and the non-mobile electronic device may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, etc., and the embodiments of the present application are not limited in particular.
The image processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image processing device provided in the embodiment of the present application can implement each process implemented by the image processing device in the method embodiments of fig. 1 to fig. 7, and in order to avoid repetition, a description is omitted here.
The image processing device provided by this embodiment is used to determine the image features of an image to be processed, display at least one image template corresponding to the image features, receive a first input acting on the at least one image template, perform, in response to the first input, image processing on the image to be processed according to the image template selected by the first input to obtain a target image, and display the target image. Flexible image processing of the image to be processed according to the image features is thus realized, reliance on manual operation by the user is reduced, and the convenience of image processing is improved, thereby solving the problem in the related art that image editing is inconvenient because images are selected and edited manually by the user and cannot be flexibly edited and laid out according to the features in the images.
Optionally, an embodiment of the present application further provides an electronic device, including a processor 910, a memory 909, and a program or instruction stored in the memory 909 and executable on the processor 910. When executed by the processor 910, the program or instruction implements each process of the above image processing method embodiment and can achieve the same technical effects; to avoid repetition, details are not described here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to, a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for powering the various components, and the power source may be logically connected to the processor 910 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components, which is not described in detail here.
A processor 910, configured to acquire an image feature of an image to be processed;
a display unit 906, configured to display at least one image template corresponding to the image feature;
a user input unit 907 for receiving a first input acting on the at least one image template;
A processor 910, configured to respond to the first input, and perform image processing on the image to be processed according to an image template selected by the first input, so as to obtain a target image;
And a display unit 906 for displaying the target image.
It should be appreciated that in the embodiments of the present application, the input unit 904 may include a graphics processor (Graphics Processing Unit, GPU) 9041 and a microphone 9042, where the graphics processor 9041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 909 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 910 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor may alternatively not be integrated into the processor 910.
The electronic device provided by this embodiment is used to acquire the image features of an image to be processed, display at least one image template corresponding to the image features, receive a first input acting on the at least one image template, perform, in response to the first input, image processing on the image to be processed according to the image template selected by the first input to obtain a target image, and display the target image. Flexible image processing of the image to be processed according to the image features is thus realized, reliance on manual operation by the user is reduced, and the convenience of image processing is improved, thereby solving the problem in the related art that image editing is inconvenient because images are selected and edited manually by the user and cannot be flexibly edited and laid out according to the features in the images.
An embodiment of the present application further provides a readable storage medium on which a program or instruction is stored; when executed by a processor, the program or instruction implements each process of the above image processing method embodiment and can achieve the same technical effects; to avoid repetition, details are not described here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement each process of the above image processing method embodiment and achieve the same technical effects; to avoid repetition, details are not described here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the statement "comprises a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.