Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 is a flowchart illustrating the steps of an image processing method according to an embodiment of the present invention. As shown in Fig. 1, the method may include:
Step 101, acquiring a frame of preview image collected by the camera.
In the embodiment of the invention, a user can shoot with a mobile terminal that includes a camera. Specifically, when the user opens the camera application of the mobile terminal, the camera of the mobile terminal captures the picture of the current environment; at this time, each frame captured by the camera is a preview image. In addition, when multiple frames of preview images continuously captured by the camera are combined, a previewed dynamic picture can also be obtained.
For example, in one implementation, when a user opens the camera application of the mobile terminal and clicks the photographing button, a self-portrait image is taken and stored. The self-portrait image may be a frame of preview image collected by the camera, and subsequent image processing may be performed on it. Alternatively, a history image may be extracted from the album of the mobile terminal as the preview image for subsequent image processing.
In addition, in another implementation, a user opens the camera application of the mobile terminal to take a self-portrait. Before the shooting button is pressed, the camera continuously collects images while the user adjusts the posture or the shooting angle; the picture displayed by the camera application is similar to dynamic shooting, so the user can see the effect of each adjustment in the dynamic picture. At this time, the camera continuously collects multiple frames, and each collected frame can serve as a preview image, so subsequent image processing can be performed on every collected preview frame. When the user presses the shooting button, the last frame displayed by the camera application is stored as the final picture, that is, the picture obtained by the user's self-portrait.
And step 102, determining a target area in the preview image.
In the embodiment of the present invention, after the preview image is acquired, the user may further determine a target area in the preview image to perform a correction process on the target area, where the target area may be a certain area in the preview image or the entire preview image.
Specifically, there may be various implementations for determining the target area in the preview image. In one implementation, the preview image includes a shooting subject and a shooting background. The shooting subject is the part that the user wants to stand out when shooting; it can be a person, an animal, a building, or the like. In practical application, the preview image may contain one or more shooting subjects, and the camera application of the mobile terminal can automatically separate the shooting subjects from the shooting background so that the user can modify either. For example, the camera application can automatically identify faces and provide corresponding correction schemes such as face slimming; it can automatically identify the eyes within a face and provide corresponding correction schemes such as eye enlarging or cosmetic pupils; and it can automatically identify the shooting background and apply correction schemes such as blurring to it.
Further, in another implementation, the user may also make the selection manually: the mobile terminal selects the target area in the preview image by receiving a selection operation of the user on the flexible display screen. The selection operation may include dragging with a finger a selection frame whose range can be enlarged or reduced, or dividing the preview image into a plurality of sub-areas so that the user selects one of the sub-areas as the correction target by a touch operation.
Step 103, receiving a first bending input of the user to the flexible display screen.
In the embodiment of the invention, the display screen of the mobile terminal is a flexible display screen: a deformable, bendable display device made of soft materials. The flexible display screen adopts a plastic substrate instead of the common glass substrate, and a protective film is adhered to its back surface by means of thin-film packaging technology, so that the screen is bendable and not easy to break. Compared with a rigid display screen, a flexible display screen can additionally realize functions such as bending and folding on top of the display and touch functions.
Specifically, in the embodiment of the present invention, the flexible display screen may support a three-dimensional touch technology (Force-Touch). Force-Touch senses the generation and change of pressure, converts it into electrical data, and generates an instruction from that data, so that the applied pressure indirectly triggers the instruction. Through Force-Touch, the flexible display screen can sense the bending strength the user applies to the screen and invoke the different functions that correspond to it.
A plurality of pressure sensors may be symmetrically arranged below the flexible display screen, each pressure sensor being composed of a plurality of capacitance modules. The pressure sensors are attached below the flexible display screen, with a gap between each sensor and the metal support beneath it. The flexible display screen has a certain toughness; when the user bends it with a finger, a slight deformation is produced, the gap to the metal support below changes, and the self-inductance capacitance changes accordingly. Because the capacitance change differs with the bending position, the bending force, and the gap change, the pressure sensor can identify and compute this change to determine the bending position and force, and can further calculate a bending coefficient of the flexible display screen from the bending force. The bending coefficient indicates the degree of bending of the material: the greater the degree (angle) of bending, the larger the value of the bending coefficient.
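The sensing chain just described (capacitance change → bending position and force → bending coefficient) can be sketched as follows. This is a minimal illustration, not the patented implementation: the one-delta-per-sensor input format and the `sensitivity` scale factor are assumptions introduced here.

```python
def estimate_bend(capacitance_deltas, sensitivity=0.05):
    """Estimate the bending position and a bending coefficient from the
    self-inductance capacitance changes reported by the row of pressure
    sensors under the flexible display screen.

    capacitance_deltas: one capacitance change per sensor, left to right.
    sensitivity: assumed scale factor mapping the peak capacitance change
    to a bending coefficient (larger bend angle -> larger coefficient).
    """
    peak = max(capacitance_deltas)             # strongest local deformation
    position = capacitance_deltas.index(peak)  # sensor index = bend position
    coefficient = peak * sensitivity           # degree of bending
    return position, coefficient
```

A sharper bend produces a larger capacitance change at the sensors nearest the fold, so both the bending position and the bending coefficient fall out of the same reading.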
Step 104, determining a target adjustment parameter corresponding to a first bending parameter of the first bending input.
In this step, when a user performs a first bending input on the flexible display screen, the pressure sensors arranged below the screen start a pressure-touch detection function. The bending operation produces force and a slight deformation in the bending direction below the flexible display screen, so the pressure sensors can accurately sense the pressure value and bending direction of the current bend and transmit them to the CPU processing module of the mobile terminal. The CPU processing module computes a corresponding bending coefficient from the received pressure value, matches it against the preset correspondence between bending coefficients and adjustment parameters, and determines the corresponding target adjustment parameter.
The mobile terminal stores the preset correspondence between the bending coefficient and the adjustment parameter. The correspondence can be linear, with the adjustment parameter increasing linearly as the bending coefficient increases. In the image processing method applied to the flexible display screen, the embodiment of the invention uses the bending degree of the first bending input to accurately determine the corresponding target adjustment parameter. The target adjustment parameter can be applied to the target area in real time for correction, and the correction effect can be displayed to the user in real time, so the user observes the effect while performing the first bending input. When the correction effect meets the user's requirement, the user can stop the first bending input; at that moment, the last adjustment parameter mapped before the input stopped is taken as the target adjustment parameter, and the corresponding corrected image is displayed or stored as the final correction result.
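The real-time mapping described above reduces to a single function: the bending coefficient is converted to an adjustment parameter through the preset linear relationship. This is a sketch under stated assumptions; the constants `k`, `b`, and the `max_value` clamp are illustrative, not values from the disclosure.

```python
def adjustment_parameter(alpha, k=1.5, b=0.0, max_value=100.0):
    """Map a bending coefficient alpha to an adjustment parameter beta
    through the preset linear relationship beta = k*alpha + b, clamped
    so that an extreme bend does not over-correct the image."""
    return min(k * alpha + b, max_value)
```

Called once per sensed bending update, this yields the live preview behavior: as the user deepens the bend, alpha grows and the mapped correction amplitude grows with it.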
For example, referring to Fig. 2, which shows a processing interface diagram of an image processing method according to an embodiment of the present invention, a user takes a self-portrait through a mobile terminal that includes a camera and a flexible display screen 10. For an acquired frame of preview image 20, the camera application automatically identifies a face area 201 as the target area through its face recognition function. It is preset that bending the flexible display screen 10 toward the user performs face-slimming adjustment, and bending it away from the user performs face-widening adjustment. Referring to scheme a, the user bends the flexible display screen 10 through a first bending input toward the user; a first target adjustment parameter is mapped, and face-slimming correction is performed on the face area 201. Referring further to scheme b, the user increases the bending degree of the same toward-the-user first bending input, so the bending coefficient in scheme b is larger than in scheme a, the mapped second target adjustment parameter is larger than the first, and the face-slimming effect on the face area 201 in scheme b is more obvious. Similarly, if the user bends the flexible display screen 10 away from the user, the size of the cheeks in the face area 201 can be increased from the original; and when the flexible display screen 10 returns to its original shape, the unprocessed face area 201 is displayed.
Step 105, performing image processing on the target area according to the target adjustment parameter, and outputting a target image.
In the embodiment of the invention, the corresponding parameters in the target area can be adjusted according to the target adjustment parameter, and the target area to which the parameter is applied, together with the whole preview image, is displayed through the flexible display screen. The user judges whether a satisfactory effect has been achieved; if not, the user continues to adjust through a first bending input of larger or smaller amplitude until the effect is satisfactory, and the target image is output.
If the preview image is a shot photo or a history photo in the album, the user can store the corrected photo once the correction made through the first bending input is satisfactory. If the preview image comprises multiple frames collected by the camera in real time, the final target adjustment parameter can be applied to each preview frame once the user is satisfied with the correction, until the user presses the photographing button; the last frame displayed by the camera application is then stored as the final picture, that is, the picture obtained by the user's self-portrait.
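For the live-preview case, the flow above amounts to applying the final target adjustment parameter to every incoming frame until the shutter fires. A sketch, where `apply_correction` and `shutter_pressed` are hypothetical callbacks standing in for the camera application's correction pipeline and button handling:

```python
def preview_capture(frames, target_param, apply_correction, shutter_pressed):
    """Apply the target adjustment parameter to each preview frame in turn;
    the last corrected frame displayed when the shutter is pressed is the
    one saved as the final photo."""
    corrected = None
    for frame in frames:
        corrected = apply_correction(frame, target_param)
        if shutter_pressed(frame):
            break
    return corrected
```

The design point is that the correction is per-frame and stateless: the stored target adjustment parameter alone carries the user's chosen effect across frames.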
In summary, in the image processing method provided in the embodiment of the present invention, a frame of preview image collected by a camera is acquired; a target area in the preview image is determined; a first bending input of the user to the flexible display screen is received; a target adjustment parameter corresponding to a first bending parameter of the first bending input is determined; and image processing is performed on the target area according to the target adjustment parameter to output the target image. In this way, the target area in the preview image is corrected through a first bending input to the flexible display screen, without requiring a touch operation, and different bending amplitudes determine different target adjustment parameters, so the correction process is simple and convenient.
Fig. 3 is a flowchart of steps of another image processing method according to an embodiment of the present invention, and as shown in fig. 3, the method may include:
step 201, acquiring a frame of preview image acquired by the camera.
The implementation manner of this step is similar to the implementation process of step 101 described above, and the embodiment of the present invention is not described in detail here.
Step 202, determining a target area in the preview image.
The implementation manner of this step is similar to the implementation process of step 102 described above, and the embodiment of the present invention is not described in detail here.
Specifically, in one implementation of the present invention, step 202 can be implemented by the following sub-steps 2021 to 2023:
sub-step 2021, receiving a touch input of a user on the preview image.
Sub-step 2022, acquiring the touch position of the touch input.
In the embodiment of the present invention, as a specific implementation of selecting the target area in the preview image, a touch input of the user on the preview image is received; the touch input is usually a click operation, and the touch position corresponding to it is determined. The touch position accurately locates the position of the target area that the user wants to determine.
Sub-step 2023, determining a region within a preset range of the touch position as the target region.
Further, in this step, the touch position acquired in sub-step 2022 may be taken as the center to establish a target area of a preset range, so that a subsequent correction operation is performed on that target area.
For example, referring to Fig. 4, which shows a processing interface diagram of another image processing method provided by an embodiment of the present invention, for a preview image displayed on the flexible display screen 10, the user may perform a click operation on the preview image. The flexible display screen 10 records the touch point n of the click operation through the pressure sensor and establishes a circular target area 101 with radius nm, so that a subsequent correction operation is performed on the circular target area 101. If the user further performs a press-touch operation after the click, the flexible display screen 10 can record the pressing force through the pressure sensor and determine the length of the radius nm from that force: the larger the pressing force, the longer the radius nm of the circular target area 101. The user can therefore obtain circular target areas 101 of different sizes through different pressing degrees, which increases the flexibility of selecting the circular target area.
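The touch-plus-press selection of Fig. 4 can be sketched as below. The base radius and the force-to-radius scale factor are assumptions for illustration; the disclosure only states that a larger pressing force yields a longer radius.

```python
import math

def circular_target_area(touch_point, press_force,
                         base_radius=40.0, force_scale=20.0):
    """Build the circular target area: centered at the touch point n,
    with a radius (the segment nm) that grows with the pressing force
    of the press-touch operation."""
    cx, cy = touch_point
    radius = base_radius + force_scale * press_force

    def contains(px, py):
        # A pixel belongs to the target area if it lies within the circle.
        return math.hypot(px - cx, py - cy) <= radius

    return radius, contains
```

The returned `contains` predicate is one simple way to hand the region to a later correction step without materializing a pixel mask.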
Specifically, in another implementation manner of the present invention, the preview image includes N sub-regions, where N is an integer greater than 1, and step 202 can be implemented by the following sub-steps 2024 to 2025:
in another implementation manner of the present invention, the whole preview image may be divided into N selectable sub-areas, each of the sub-areas is independent from another and can be selected through a selection operation, and in addition, for a preview image of a currently common rectangular structure, the shape of the sub-area may also be a rectangle, and the number of the sub-areas may be increased or decreased according to the complexity of the preview image, which is not limited in the embodiment of the present invention.
Substep 2024, receiving a second bend input to the flexible display screen.
Substep 2025, determining the region in the preview image corresponding to the second bending parameter of the second bending input as the target region.
Specifically, in this step, after the preview image is divided into a plurality of sub-regions, an operation of selecting a sub-region may be performed; this operation may be a second bending input to the flexible display screen, and the second bending parameter may be a bending direction. Because the flexible display screen can sense the position, direction, and force of a user's bending operation through its Force-Touch technology, the mobile terminal may store a preset correspondence between bending directions and sub-regions. The bending direction of the detected second bending parameter is then matched against this preset correspondence and mapped to the corresponding target sub-region, and the target sub-region is used as the target region.
For example, referring to Fig. 5, which shows a processing interface diagram of another image processing method according to an embodiment of the present invention, a preview image 20 displayed on the flexible display screen 10 is divided into six sub-areas c, d, e, f, g, and h, whose corresponding second bending directions are upper left, middle upper, upper right, lower left, middle lower, and lower right. If the user wants to select sub-area c, which contains the portrait, the user may perform the second bending input toward the upper left, so that sub-area c is selected. It should be noted that the same effect is achieved regardless of whether the second bending input bends toward or away from the user, which is not limited in the embodiment of the present invention.
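The six-way selection of Fig. 5 is a plain lookup from bending direction to sub-area. The direction names used as keys are assumptions standing in for whatever labels the sensing layer reports:

```python
# Preset correspondence between second bending directions and the
# sub-areas c-h of Fig. 5 (top row: c, d, e; bottom row: f, g, h).
DIRECTION_TO_SUBAREA = {
    "upper-left": "c", "upper-middle": "d", "upper-right": "e",
    "lower-left": "f", "lower-middle": "g", "lower-right": "h",
}

def select_subarea(bend_direction):
    """Map the detected second bending direction to the target sub-area;
    returns None when the direction matches no preset entry."""
    return DIRECTION_TO_SUBAREA.get(bend_direction)
```

Because only the direction is keyed, bends toward and away from the user in the same direction resolve to the same sub-area, matching the note above.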
Further, after the preview image is divided into a plurality of sub-regions, the operation of selecting a sub-region may also be a click operation, and the user only needs to click a certain sub-region, so that the selection of the sub-region can be completed.
Specifically, in another implementation manner of the present invention, step 202 can be implemented by the following sub-steps 2026 to 2027:
Sub-step 2026, acquiring the adjustment type for the preview image.
In another implementation of the embodiment of the present invention, a fixed number of adjustment types is generally included, so that the preview image is modified according to the adjustment type. For example, the adjustment types may include leg slimming, face slimming, eye enlarging, skin whitening, and the like. Each adjustment type comprises its own logic for selecting a target area in the image: a leg-slimming type can automatically identify the legs of a human body in the image, a face-slimming or whitening type can automatically identify the human face, and an eye-enlarging type can further automatically identify the eyes within the face; automatic feature recognition in images is by now a mature technology.
Therefore, in a specific implementation, the adjustment type for the preview image may be obtained by, for example, popping up an adjustment-type selection menu when the user enters the correction interface, and determining the corresponding type from the user's selection.
Sub-step 2027, determining the region corresponding to the adjustment type in the preview image as a target region.
In this step, according to the correction type determined by the user, a target area corresponding to the correction type may be further determined.
For example, if the user selects the background-blurring adjustment type, the camera application of the mobile terminal may automatically separate the shooting subject from the shooting background and use the shooting background as the target area so that the user can blur it. If the user selects the face-slimming adjustment type, the camera application may automatically recognize the human face, use it as the target area, and provide the corresponding face-slimming correction scheme.
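The type-to-region step can be sketched as a dispatch over detector callbacks. The detectors themselves (face recognition, subject/background separation) are assumed to exist elsewhere and appear here only as placeholders:

```python
def target_region_for_type(adjustment_type, image, detectors):
    """Look up the detector registered for the chosen adjustment type
    (e.g. 'face-slimming' -> face detector, 'background-blur' ->
    subject/background separator) and run it on the preview image to
    obtain the target region."""
    detector = detectors.get(adjustment_type)
    if detector is None:
        raise ValueError(f"no detector registered for {adjustment_type!r}")
    return detector(image)
```

Keeping the mapping in a table makes adding a new adjustment type a matter of registering one more detector, consistent with the "fixed number of adjustment types" described above.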
Step 203, receiving a first bending input of the user to the flexible display screen.
The implementation manner of this step is similar to the implementation process of step 103 described above, and the embodiment of the present invention is not described in detail here.
Step 204, determining the adjustment direction of the target parameter according to the bending direction.
Optionally, the first bending parameter includes a bending direction. In this step, because the flexible display screen can sense the position, direction, and force of the user's bending operation through its Force-Touch technology, a corresponding bending direction may be determined for the first bending input; the bending direction may be away from the user or toward the user. The adjustment direction of the target parameter is then determined from the bending direction; in the embodiment of the present invention, the adjustment direction may be an enhancement adjustment or a reduction adjustment.
Step 205, determining the adjustment degree of the target parameter according to the bending degree.
Optionally, the first bending parameter includes a bending degree. The adjustment degree of the target parameter may be the correction amplitude applied when the target area is corrected along the adjustment direction. When a first bending input for the flexible display screen is received, the adjustment degree of the corresponding target adjustment parameter may be determined from the bending coefficient corresponding to the first bending input and the preset correspondence between bending coefficients and adjustment parameters.
Optionally, the first bending parameter and the adjustment parameter are in a linear corresponding relationship;
wherein the value of the adjustment parameter increases with the increase of the value of the first bending parameter.
In the embodiment of the present invention, the bending coefficient and the adjustment parameter may be in a linear correspondence: if the bending coefficient is α and the adjustment parameter is β, then β = kα + b, where k is the degree of linear correlation and b is a correction coefficient. After repeated adjustment, the user can tune the effect to a satisfactory degree, that is, determine a target correction reference, and the bending coefficient corresponding to that reference can be stored. By storing a large number of user-satisfied target correction references β together with the screen-adapted bending coefficients α, the linear correlation degree k and the correction coefficient b are continuously corrected, so that eventually, when the user bends the screen to a suitable extent, the correction effect reaches a degree that satisfies the user.
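The continuous correction of k and b from stored (α, β) reference pairs can be realized as an ordinary least-squares fit. The disclosure does not name a fitting method, so least squares is an assumption; any estimator of the line β = kα + b would serve:

```python
def fit_linear_mapping(samples):
    """Fit beta = k*alpha + b by ordinary least squares over the stored
    (bending coefficient, target correction reference) pairs, refreshing
    the preset correspondence as more user-satisfied references accrue."""
    n = len(samples)
    sum_a = sum(a for a, _ in samples)
    sum_b = sum(b for _, b in samples)
    sum_ab = sum(a * b for a, b in samples)
    sum_a2 = sum(a * a for a, _ in samples)
    k = (n * sum_ab - sum_a * sum_b) / (n * sum_a2 - sum_a ** 2)
    b = (sum_b - k * sum_a) / n
    return k, b
```

Refitting after each stored reference lets the mapping drift toward the individual user's habits, which is the calibration effect the paragraph above describes.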
And step 206, performing image processing on the target area according to the target adjustment parameter, and outputting a target image.
In this step, because the flexible display screen can sense the position, direction, and force of the user's bending operation through its Force-Touch technology, the bending direction corresponding to the first bending input may be determined; the bending direction may be away from the user or toward the user, which determines the correction direction. The adjustment degree of the target adjustment parameter corresponding to the first bending input is further determined, which determines the correction strength. Image processing is then performed on the target area according to the correction direction and correction strength, and the target image is output.
Optionally, step 206 may further include:
Sub-step 2061, if the adjustment direction is the first direction, correcting the target area according to the first correction rule and the adjustment degree of the target parameter, to obtain the corrected image.
Sub-step 2062, if the adjustment direction is the second direction, correcting the target area according to the second correction rule and the adjustment degree of the target parameter, to obtain the corrected image.
Wherein the first direction is opposite to the second direction.
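Sub-steps 2061 and 2062 reduce to a two-way dispatch on the adjustment direction. Here `slim` and `widen` are hypothetical callbacks standing in for the first and second correction rules:

```python
def apply_by_direction(direction, degree, slim, widen):
    """The first direction (toward the user) applies the first correction
    rule; the opposite second direction applies the second rule, each
    scaled by the adjustment degree of the target parameter."""
    if direction == "toward-user":
        return slim(degree)
    return widen(degree)
```

Because the two rules act in opposite senses (e.g. face slimming versus face widening), a user can undo an over-correction simply by bending the other way.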
For example, referring to Fig. 6, which shows a processing interface diagram of another image processing method according to an embodiment of the present invention, in scheme i the adjustment direction is the first direction, bending toward the user, and the first correction rule may be set to perform face-slimming correction on the face area 201 according to the target adjustment parameter. In scheme j the adjustment direction is the second direction, bending away from the user, and the second correction rule may be set to perform face-widening correction on the face area 201 according to the target adjustment parameter. Schemes i and j thus achieve different correction effects triggered by first bending inputs in different bending directions.
Therefore, the embodiment of the invention can map one bending direction to a correction that enhances an effect and the opposite bending direction to a correction that weakens it, achieving controllability of the correction effect.
In summary, in another image processing method provided in the embodiment of the present invention, a frame of preview image collected by a camera is acquired; a target area in the preview image is determined; a first bending input of the user to the flexible display screen is received; a target adjustment parameter corresponding to a first bending parameter of the first bending input is determined; and image processing is performed on the target area according to the target adjustment parameter to output the target image. No touch operation on the flexible display screen of the mobile terminal is needed: the target area in the preview image is corrected through the first bending input to the flexible display screen, different target adjustment parameters are determined according to different bending amplitudes of the first bending input, and different adjustment directions are further determined according to the bending direction of the first bending input, so that different correction effects are achieved. This improves the multi-angle, all-around adjustment of the image; the operation mode of the first bending input is simple and convenient, and the correction process is simplified.
Fig. 7 is a block diagram of a mobile terminal according to an embodiment of the present invention, and as shown in fig. 7, the mobile terminal 30 includes:
an obtaining module 301, configured to obtain a frame of preview image acquired by the camera.
A first determining module 302, configured to determine a target area in the preview image.
The receiving module 303 is configured to receive a first bending input of the flexible display screen from a user.
A second determining module 304, configured to determine a target adjustment parameter corresponding to the first bending parameter of the first bending input.
And a modification module 305, configured to perform image processing on the target area according to the target adjustment parameter, and output a target image.
In summary, in the mobile terminal provided in the embodiment of the present invention, the obtaining module can obtain a frame of preview image collected by the camera; the first determining module can determine a target area in the preview image; the receiving module can receive a first bending input of the user to the flexible display screen; the second determining module can determine a target adjustment parameter corresponding to a first bending parameter of the first bending input; and the correction module can process the target area according to the target adjustment parameter and output the target image. The invention realizes correction of the target area in the preview image through a first bending input to the flexible display screen, and determines different target adjustment parameters according to different bending amplitudes of that input, thereby achieving different correction effects and improving the multi-angle, all-around adjustment of the image; the operation mode of the first bending input is simple and convenient, and the correction process is simplified.
Fig. 8 is a block diagram of another mobile terminal according to an embodiment of the present invention, and as shown in fig. 8, the mobile terminal 40 includes:
an obtaining module 401, configured to obtain a frame of preview image acquired by the camera;
a first determining module 402, configured to determine a target area in the preview image;
optionally, the first determining module 402 includes:
the first receiving sub-module 4021 is configured to receive a touch input of a user on the preview image;
a position obtaining sub-module 4022, configured to obtain a touch position of the touch input;
the first determining sub-module 4023 is configured to determine an area including a preset range of the touch position as a target area.
Optionally, the preview image includes N sub-regions, where N is an integer greater than 1; the first determining module 402, comprising:
the second receiving sub-module 4024 is used for receiving a second bending input to the flexible display screen;
a second determining sub-module 4025, configured to determine, as a target region, a region in the preview image corresponding to the second bending parameter of the second bending input.
Optionally, the first determining module 402 includes:
an obtaining sub-module 4026, configured to obtain an adjustment type for the preview image;
a third determining sub-module 4027, configured to determine a region corresponding to the adjustment type in the preview image as a target region.
A receiving module 403, configured to receive a first bending input of the flexible display screen from a user;
a second determining module 404, configured to determine a target adjustment parameter corresponding to a first bending parameter of the first bending input;
optionally, the first bending parameter includes a bending direction; the second determining module 404 may further include:
a direction determining sub-module 4041, configured to determine an adjustment direction of the target adjustment parameter according to the bending direction.
Optionally, the first bending parameter includes a bending degree; the second determining module 404 may further include:
an adjustment degree determining sub-module 4042, configured to determine an adjustment degree of the target adjustment parameter according to the bending degree.
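Taken together, the two sub-modules above can be sketched as follows: the bending direction determines the sign of the adjustment, and the bending degree determines its magnitude. The direction convention (outward positive) and the scale factor are assumed values not fixed by the patent.

```python
# Hypothetical sketch: deriving a signed target adjustment parameter
# from the first bending parameter. The +1/-1 direction convention and
# the scale factor are assumptions.

def target_adjustment(bend_direction, bend_degree, scale=0.5):
    """bend_direction: +1 (outward bend) or -1 (inward bend), an assumed
    convention. bend_degree: bending amplitude, e.g. in degrees."""
    adjustment_direction = 1 if bend_direction > 0 else -1
    adjustment_degree = bend_degree * scale
    return adjustment_direction * adjustment_degree
```

Bending in opposite directions thus produces adjustments of opposite sign, and a larger bending amplitude produces a larger adjustment.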
A correction module 405, configured to perform image processing on the target area according to the target adjustment parameter and output a target image.
Optionally, the first bending parameter and the adjustment parameter have a linear correspondence, wherein the value of the adjustment parameter increases as the value of the first bending parameter increases.
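The linear correspondence can be sketched as follows. The slope and intercept are assumed values; a positive slope is what guarantees that the adjustment parameter increases with the first bending parameter, as the text requires.

```python
# Hypothetical sketch of the linear correspondence between the first
# bending parameter and the adjustment parameter. slope and intercept
# are assumed values, not fixed by the patent; slope > 0 makes the
# mapping monotonically increasing.

def adjustment_from_bending(bend_param, slope=2.0, intercept=0.0):
    """Linear map: adjustment = slope * bending + intercept."""
    return slope * bend_param + intercept
```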
In summary, the mobile terminal provided in the embodiment of the present invention operates as follows: a frame of preview image acquired by the camera is obtained; a target area in the preview image is determined; if a first bending input to the flexible display screen is received, a corresponding target adjustment parameter is determined according to the bending coefficient of the first bending input and a preset correspondence between bending coefficients and adjustment parameters; the bending direction of the first bending input is further determined, and a correction rule corresponding to that bending direction is selected; finally, the target area is corrected with the determined correction rule according to the target adjustment parameter to obtain a corrected image. The invention requires no touch operation on the flexible display screen of the mobile terminal: the first bending input to the flexible display screen corrects the target area in the preview image, different target adjustment parameters are determined according to different bending amplitudes of the first bending input, and different adjustment directions are determined according to the bending direction of the first bending input. Different correction effects are thus achieved, multi-angle and omnidirectional adjustment of the image is improved, and the simple, convenient operation mode of the first bending input simplifies the correction process.
An embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of the image processing method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Fig. 9 is a block diagram of a mobile terminal according to another embodiment of the present invention. The mobile terminal 500 shown in fig. 9 includes: at least one processor 501, memory 502, at least one network interface 504, a user interface 503, and a camera 506. The various components in the mobile terminal 500 are coupled together by a bus system 505. It is understood that the bus system 505 is used to enable connection communications between these components. The bus system 505 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 505 in FIG. 9.
The user interface 503 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or flexible screen, among others).
It is to be understood that the memory 502 in embodiments of the present invention may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 502 of the systems and methods described in connection with the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 502 stores elements, executable modules or data structures, or a subset thereof, or an expanded set thereof as follows: an operating system 5021 and application programs 5022.
The operating system 5021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application 5022 includes various applications, such as a media player (MediaPlayer), a Browser (Browser), and the like, for implementing various application services. The program for implementing the method according to the embodiment of the present invention may be included in the application program 5022.
In the embodiment of the present invention, by calling a program or instruction stored in the memory 502 (specifically, a program or instruction stored in the application 5022), the processor 501 is configured to: obtain a frame of preview image acquired by the camera; determine a target area in the preview image; receive a first bending input of a user to the flexible display screen; determine a target adjustment parameter corresponding to a first bending parameter of the first bending input; and perform image processing on the target area according to the target adjustment parameter and output a target image.
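The five steps the processor performs can be chained as in the following sketch, in which each step is an illustrative callable stub rather than the actual camera, area-detection, bending-input, or correction logic; the function and parameter names are assumptions.

```python
# Illustrative sketch of the processing chain; each argument is a stub
# standing in for real camera, detection, bending-input, and correction
# logic (none of which this sketch implements).

def process_preview(capture_frame, determine_area, read_bending,
                    param_for_bending, correct):
    preview = capture_frame()             # 1. obtain a frame of preview image
    area = determine_area(preview)        # 2. determine the target area
    bend = read_bending()                 # 3. receive the first bending input
    param = param_for_bending(bend)       # 4. determine the target adjustment parameter
    return correct(preview, area, param)  # 5. process the target area, output the target image
```

A caller would supply its own implementations for each stage, for example lambdas during testing.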
The method disclosed in the above-mentioned embodiments of the present invention may be applied to the processor 501, or implemented by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 501. The processor 501 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 502, and the processor 501 reads the information in the memory 502 and completes the steps of the method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this disclosure. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The mobile terminal 500 can implement the processes implemented by the mobile terminal in the foregoing embodiments, and in order to avoid repetition, the detailed description is omitted here.
In the embodiment of the present invention, the mobile terminal 500 may obtain a frame of preview image collected by the camera; determine a target area in the preview image; receive a first bending input of a user to the flexible display screen; determine a target adjustment parameter corresponding to a first bending parameter of the first bending input; and perform image processing on the target area according to the target adjustment parameter to output a target image.
Fig. 10 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention.
The mobile terminal includes: a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method described above.
The mobile terminal further includes: a readable storage medium, on which a computer program is stored; the computer program, when executed by a processor, implements the steps of the image processing method described above.
Specifically, the mobile terminal 600 in fig. 10 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal 600 in fig. 10 includes a radio frequency (RF) circuit 610, a memory 620, an input unit 630, a display unit 640, a processor 660, an audio circuit 670, a wireless fidelity (WiFi) module 680, a power supply 690, and a camera 6110.
The input unit 630 may be used, among other things, to receive numeric or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 600. Specifically, in the embodiment of the present invention, the input unit 630 may include a touch panel 631. The touch panel 631 may collect touch operations performed by a user on or near it (e.g., operations performed on the touch panel 631 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 631 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 660, and can receive and execute commands sent by the processor 660. The touch panel 631 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 631, the input unit 630 may also include other input devices 632, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
Among other things, the display unit 640 may be used to display information input by the user or information provided to the user and various menu interfaces of the mobile terminal 600. The display unit 640 may include a display panel 641; optionally, the display panel 641 may be configured in the form of a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display.
It should be noted that the touch panel 631 may cover the display panel 641 to form a touch display screen; when the touch display screen detects a touch operation on or near it, the touch operation is transmitted to the processor 660 to determine the type of the touch event, and the processor 660 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen comprises an application program interface display area and a common control display area. The arrangement of these two display areas is not limited; any arrangement that distinguishes the two, such as a top-bottom or left-right arrangement, may be used. The application program interface display area may be used to display the interface of an application. Each interface may contain at least one interface element, such as an icon and/or a widget desktop control of an application. The application program interface display area may also be an empty interface that does not contain any content. The common control display area is used for displaying frequently used controls, such as setting buttons, interface numbers, scroll bars, and application icons like phone book icons.
The processor 660 is a control center of the mobile terminal 600; it connects the various parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the mobile terminal 600 and processes data by running or executing software programs and/or modules stored in the first memory 621 and calling data stored in the second memory 622, thereby monitoring the mobile terminal 600 as a whole. Optionally, the processor 660 may include one or more processing units.
In the embodiment of the present invention, the processor 660 is configured to obtain a frame of preview image acquired by the camera by calling a software program and/or a module stored in the first memory 621 and/or data stored in the second memory 622; determining a target area in the preview image; receiving a first bending input of a user on a flexible display screen, and determining a target adjustment parameter corresponding to the first bending parameter of the first bending input; and according to the target adjustment parameter, carrying out image processing on the target area and outputting a target image.
It can be seen that, in the embodiment of the present invention, the mobile terminal may: acquire a frame of preview image collected by the camera; determine a target area in the preview image; receive a first bending input of a user to the flexible display screen; determine a target adjustment parameter corresponding to a first bending parameter of the first bending input; and perform image processing on the target area according to the target adjustment parameter to output a target image.
For the above device embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As those skilled in the art will readily appreciate, any combination of the above embodiments is possible, and any such combination is therefore an embodiment of the present invention; for reasons of space, however, not every combination is detailed herein.
The image processing methods provided herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The structure required to construct a system incorporating aspects of the present invention will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the image processing method according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website, or provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.