CN113986407A - Cover generation method and device and computer storage medium - Google Patents
Cover generation method and device and computer storage medium
- Publication number
- CN113986407A (application CN202010735047.8A)
- Authority
- CN
- China
- Prior art keywords
- original image
- cover
- video
- gallery
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F9/451 — Execution arrangements for user interfaces (G06F9/00 Arrangements for program control; G06F9/06 using stored programs; G06F9/44 Arrangements for executing specific programs)
- G06F16/5866 — Retrieval of still image data characterised by using metadata generated manually, e.g. tags, keywords, comments, location and time information (G06F16/00 Information retrieval; G06F16/50 of still image data; G06F16/58 Retrieval characterised by using metadata)
- G06F16/7867 — Retrieval of video data characterised by using metadata generated manually, e.g. tags, keywords, comments, title and artist information, user ratings (G06F16/00 Information retrieval; G06F16/70 of video data; G06F16/78 Retrieval characterised by using metadata)
- G06F3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element (G06F3/00 Input arrangements; G06F3/01 for interaction between user and computer; G06F3/048 Interaction techniques based on GUIs)
Abstract
The embodiments of this application disclose a cover generation method, a cover generation apparatus, and a computer storage medium. The method comprises the following steps: acquiring a first object and an original image corresponding to a video or gallery, wherein the original image is derived from the video or gallery and the first object comprises one or more of the following: text or images; determining a placement position of the first object according to an attribute of the original image, the attribute including information indicating the position of a subject object in the original image; and generating the cover of the video or gallery by superimposing the first object on the original image at the placement position. Because the placement position of the first object is determined dynamically from the attributes of the original image and the cover is generated accordingly, the position of the first object in the generated cover is flexible and adapts to the original image.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a cover generation method and apparatus, and a computer storage medium.
Background
In the era of the mobile internet, content-based products represented by short videos are emerging, and their content is becoming richer and more attractive. With the commercial rollout of the fifth generation mobile communication technology (5th Generation Mobile Communication Technology, 5G), an era of pervasive video is arriving. Users are no longer satisfied with merely watching high-quality video content on their terminals; they also have higher expectations and requirements for the viewing experience. Video content and the pursuit of the best experience are therefore trending toward large screens (including product forms such as televisions and smart screens).
For video content, film distributors usually provide high-quality movie posters for online and offline channels. However, to suit promotion by major cinema chains, these posters are typically in a vertical (portrait) format, and their content mostly revolves around the film's theatrical release. Neither the content nor the dimensions are suitable for direct use on terminal devices with landscape-oriented large screens, nor can they be applied directly to the terminal's day-to-day operational activities. Yet large-screen terminal devices need exactly this kind of poster-like promotional form, with strong visual impact and appeal to users. How to automatically generate video posters on large-screen terminal devices, display them in a better way, and attract users so as to improve the conversion rate is therefore a valuable proposition.
Moreover, with the proliferation of terminal devices such as mobile phones and tablet computers, and of self-media, users are generating a great amount of rich media content. Helping users generate high-quality, artistically laid-out covers from their pictures and videos is therefore becoming increasingly important.
Disclosure of Invention
The technical problem to be solved by the present application is how to generate a high-quality cover for a video or a gallery.
In a first aspect, the present application provides a cover generation method, including: determining a placement position of a first object according to an attribute of an original image, and superimposing the first object on the original image at that placement position to generate the cover of the video or gallery.
The "cover" in this application includes the cover of a photo album, gallery, or electronic book (or magazine), and also includes posters, advertisement images, or other types of promotional graphics, as described in the background. The application does not limit the video format, the image formats contained in the gallery, or the content of the video or gallery; for example, an electronic book stored as a set of pictures can also be regarded as a gallery. The original image comes from the video or gallery for which the cover is to be generated.
In one implementation, the original image may be a frame of the video or a picture in the gallery. In another implementation, the original image is not identical to any original frame or picture, but its content is derived from one. Alternatively, the original image may be a poster prototype: not necessarily an actual still from the movie, but, for example, a depiction of an actor in a scene resembling the movie, or an image that abstractly represents the movie's content.
The first object may comprise text or an image describing the video or gallery. The text includes textual content displayed superimposed on the original image, including but not limited to different types of text such as title, subtitle, time, and price. The image includes graphics or pictures superimposed on the original image, including but not limited to line patterns, icons, and graphics matching the image content. In addition, the first object may include decorative text or images, such as lines or stamps, or a combination of two or more of the above. The text can be superimposed on the image in a specific style matching the content, including but not limited to text in different fonts, graphically styled text (such as stamp-style or timestamp-style text), text with different texture styles (such as metal or mottled textures), and text with different font sizes, weights, and arrangements.
The attributes of the original image include information indicating the position of the subject object in the original image; that is, the placement position of the first object is determined from the position of the subject object. In the prior art, the position of the first object in the cover is fixed (for example, always at the lower-left corner of the cover). Because the position of the subject object varies across different original images, the position of the first object in a cover generated by the embodiments of this application is flexible and adapts to the original image, which improves the quality of the cover and optimizes its display effect.
In one implementation, the first object may change dynamically over time or with the content of the original image. For example, the attributes of the text or image may change dynamically with time, or the text or image may be replaced with different text or images in a specified style.
In one implementation, the first object is superimposed on the original image at a location that does not overlap the subject object. Non-overlap with the subject object includes, but is not limited to, the following cases: the first object does not overlap the subject object at all; the first object does not overlap the key features of the subject object (for example, if the subject object is a person, the key feature is the face); or the first object overlaps a portion of the subject object but does not interfere with the presentation of its key features.
As an alternative embodiment, the position of the subject object in the original image is determined from the first contour of the subject object in the original image, and the position of the subject object includes its two-dimensional coordinates. In this embodiment, determining the placement position of the first object according to the attribute of the original image includes: determining the placement position of the first object from the first contour of the subject object in the original image. In one implementation, the cover generation apparatus performs object detection on the original image to determine the region (for example, a rectangle) occupied by the subject object, and then performs contour detection on that region to determine the first contour of the subject object; the position the first contour occupies in the original image is the position of the subject object. If the position of the subject object is expressed as its two-dimensional coordinates, the placement position of the first object may be any position that does not completely overlap those coordinates. Specifically, this may include: a position that does not overlap the two-dimensional coordinates of the subject object at all, or a position that does not overlap the two-dimensional coordinates of the subject object's key region, i.e., the region containing its key features.
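As an illustration only (the patent does not specify the detection or contour models; the description elsewhere mentions deep learning models), the detect-then-outline step could be sketched in Python with OpenCV, using a saliency map as a hypothetical stand-in for the object detector (requires opencv-contrib-python):

```python
# A minimal sketch of the detect-then-outline step; the saliency model is an
# assumed stand-in for whatever subject detector a real implementation uses.
import cv2
import numpy as np

def subject_contour(image_bgr: np.ndarray) -> np.ndarray:
    """Return the largest foreground contour as an (N, 1, 2) point array."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(image_bgr)
    if not ok:
        raise RuntimeError("saliency computation failed")
    sal_u8 = (sal_map * 255).astype(np.uint8)
    # Threshold the saliency map and extract contours; the largest contour
    # approximates the "first contour" of the subject object.
    _, mask = cv2.threshold(sal_u8, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise RuntimeError("no contour found")
    return max(contours, key=cv2.contourArea)

# The bounding rectangle of the contour stands in for the subject object's
# two-dimensional coordinates in the original image:
# x, y, w, h = cv2.boundingRect(subject_contour(cv2.imread("frame.png")))
```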
It can be seen that, in this embodiment, the first object is superimposed on the original image without blocking at least the key region of the subject object, and possibly without blocking the subject object at all; that is, a cover generated in this way preserves the integrity of the subject object in the original image.
On the basis of the previous embodiment, as an alternative embodiment, determining the placement position of the first object according to the first contour of the subject object includes:
determining the first contour of the subject object in the original image and a second contour of the first object; and determining the placement position of the first object according to the coincidence rate between the first contour and the second contour when the first object is superimposed at each candidate position on the original image.
For determining the first contour of the subject object in the original image, refer to the description in the previous embodiment. Because the first object as referred to in this application denotes its content, while its effect in the cover depends not only on content but also on layout and size, the second contour of the first object is determined from the layout and size of the first object.
By choosing as the placement position a position at which the coincidence rate between the first contour and the second contour is small when the first object is superimposed on the original image, this embodiment ensures that the placement position of the first object does not completely overlap the two-dimensional coordinates of the subject object.
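A minimal sketch of this placement search, assuming the two contours have been rasterised to binary masks and candidate positions are sampled on a grid (the stride and the choice of the minimum-coincidence position are illustrative, not taken from the patent):

```python
# Pick the position where the first object's mask overlaps the subject's
# mask least; masks are boolean numpy arrays.
import numpy as np

def best_placement(subject_mask: np.ndarray,
                   object_mask: np.ndarray,
                   stride: int = 32) -> tuple[int, int]:
    """subject_mask: HxW bool; object_mask: hxw bool. Returns (top, left)."""
    H, W = subject_mask.shape
    h, w = object_mask.shape
    obj_area = max(int(object_mask.sum()), 1)
    best_rate, best_pos = float("inf"), (0, 0)
    for top in range(0, H - h + 1, stride):
        for left in range(0, W - w + 1, stride):
            window = subject_mask[top:top + h, left:left + w]
            # Coincidence rate: fraction of the first object's contour area
            # that would fall on the subject object at this position.
            rate = np.logical_and(window, object_mask).sum() / obj_area
            if rate < best_rate:
                best_rate, best_pos = rate, (top, left)
    return best_pos
```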
On the basis of any one of the foregoing embodiments, as an alternative embodiment, determining the placement position of the first object according to the attribute of the original image includes:
determining the placement position of the first object according to the attributes of the original image and the aesthetic effect of the second object obtained when the first object is superimposed at each candidate position on the original image.
Here, the aesthetic effect of the second object includes the symmetry and/or sense of stability of the second object. It will be appreciated that the second object comprises the cover of the video or gallery; that is, the cover that is ultimately generated is determined from the second object. In one implementation, the second object is the cover to be generated, which may be called the target image relative to the original image.
In this embodiment, the position at which superimposing the first object on the original image yields the second object with the better aesthetic effect is used as the placement position of the first object; in other words, the second object with the better aesthetic effect becomes the cover of the video or gallery, so the generated cover has a better aesthetic effect.
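Purely as an illustration of how such a score might be computed (the patent does not define symmetry or stability numerically; the mirror-overlap symmetry, centre-of-mass stability, and equal weighting below are assumptions):

```python
# Illustrative aesthetic score for a candidate composite (the "second object").
import numpy as np

def aesthetic_score(composite_mask: np.ndarray, w_sym: float = 0.5) -> float:
    """composite_mask: HxW bool covering the subject plus the overlaid object."""
    mirrored = composite_mask[:, ::-1]
    inter = np.logical_and(composite_mask, mirrored).sum()
    union = max(int(np.logical_or(composite_mask, mirrored).sum()), 1)
    symmetry = inter / union                      # 1.0 = perfectly symmetric
    ys = np.nonzero(composite_mask)[0]
    # A lower centre of visual mass reads as more "stable"; normalise to [0,1].
    stability = ys.mean() / composite_mask.shape[0] if ys.size else 0.0
    return w_sym * symmetry + (1.0 - w_sym) * stability
```

The candidate position with the highest score would then be chosen as the placement position.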
On the basis of any one of the foregoing embodiments, as an optional embodiment, the method further includes:
acquiring a content tag of the video or gallery, and determining the object format corresponding to that content tag as the target object format of the first object, according to a correspondence between content tags and object formats;
correspondingly, generating the cover of the video or gallery by superimposing the first object on the original image at the placement position comprises:
superimposing the first object on the original image in the target object format at the placement position to generate the cover of the video or gallery.
Here, the content tag indicates the type of content of the video or gallery for which the cover is to be generated, and the target object format indicates the layout of the sub-objects included in the first object and one or more of the following attributes of the first object: size, font, or color.
In the prior art, the object format of the first object in a cover is fixed. Because the object format in this embodiment is determined from the content tag of the video or gallery, the object format of the first object in the generated cover is flexible and adapts to the type of content for which the cover is generated, which improves the quality of the cover and optimizes its display effect.
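A hypothetical sketch of the correspondence between content tags and object formats (the patent only states that such a correspondence exists; the tag names and format fields below are invented for illustration):

```python
# Assumed tag-to-format table; keys and fields are illustrative only.
TAG_TO_FORMAT = {
    "romance":   {"layout": "title-over-subtitle", "font": "serif",      "color": "#F2C1C1"},
    "action":    {"layout": "title-left",          "font": "bold-sans",  "color": "#FFD700"},
    "landscape": {"layout": "title-bottom",        "font": "light-sans", "color": "#FFFFFF"},
}

def target_object_format(content_tag: str) -> dict:
    # Fall back to a neutral format when the tag has no entry.
    return TAG_TO_FORMAT.get(content_tag,
                             {"layout": "title-bottom", "font": "sans",
                              "color": "#FFFFFF"})
```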
On the basis of any one of the foregoing embodiments, as an alternative embodiment, determining the placement position of the first object according to the attribute of the original image includes:
acquiring the illumination intensity of the environment in which the terminal device is located, and detecting whether the first color attribute of the original image matches that illumination intensity; if it does, determining the placement position of the first object according to the attribute of the original image.
The terminal device is the device on which the generated cover is displayed; that is, covers generated by this method are shown on the terminal device. The first color attribute of the original image includes any one or more of the hue, saturation, and lightness of the original image.
For any image in a given lighting environment, changing any one or more of its hue, saturation, or lightness changes how the image is presented, and the user perceives it differently. This embodiment therefore proceeds to determine the placement position of the first object (and the subsequent steps of the method) only when the first color attribute of the original image matches the illumination intensity, ensuring that a cover generated from the original image presents well when displayed on the terminal device and that the user has a good visual experience when viewing it.
On the basis of the previous embodiment, as an optional embodiment, the method further includes:
and if the first color attribute of the original image is not matched with the illumination intensity, performing image processing on the original image according to the illumination intensity to obtain a new original image, and determining the placement position of the first object according to the attribute of the original image.
Even when the first color attribute of the original image does not match the illumination intensity, this embodiment can process the original image according to the illumination intensity to obtain a new original image whose first color attribute does match. The method then continues, based on the new original image, with determining the placement position of the first object and the subsequent steps, ensuring that the cover generated from the new original image presents well when displayed on the terminal device and gives the user a good visual experience.
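As an illustrative sketch of this match-then-adjust logic, assuming the first color attribute is reduced to mean lightness (the HSV value channel) and that the lux-to-lightness mapping, tolerance, and gamma correction are stand-ins for the unspecified image processing:

```python
# Compare the image's mean lightness with the ambient reading and, on a
# mismatch, gamma-correct toward an assumed target; constants are invented.
import cv2
import numpy as np

def match_illumination(image_bgr: np.ndarray, ambient_lux: float,
                       tol: float = 30.0) -> np.ndarray:
    value = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)[:, :, 2]
    mean_v = float(np.clip(value.mean(), 1.0, 254.0))
    # Assumed mapping: brighter environments want brighter covers.
    target_v = float(np.clip(ambient_lux / 500.0 * 255.0, 40.0, 220.0))
    if abs(mean_v - target_v) <= tol:
        return image_bgr                      # first color attribute matches
    gamma = np.log(target_v / 255.0) / np.log(mean_v / 255.0)
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return cv2.LUT(image_bgr, lut)            # the "new original image"
```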
As an alternative embodiment, when the target object format does not indicate the size of the first object, generating the cover of the video or gallery by superimposing the first object on the original image at the placement position includes:
acquiring size information of the subject object in the original image, determining the object size corresponding to that size information as the target object size of the first object according to a correspondence between image sizes and object sizes, and superimposing the first object on the original image at the target object size and placement position to generate the cover of the video or gallery.
Wherein the target object size is used to indicate the size of the image in the first object and/or the font size of the text in the first object.
Compared with the prior art, in which the size of the first object in the cover is fixed, the target object size in this embodiment is determined from the size information of the subject object in the original image, so the size of the first object in the generated cover is flexible and adapts to the size of the subject object, which improves the quality of the cover and optimizes its display effect.
As an alternative embodiment, when the target object format does not indicate the font of the text in the first object, generating the cover of the video or gallery by superimposing the first object on the original image at the placement position includes:
acquiring a content tag of the video or gallery, determining the object font corresponding to that content tag as the target object font of the first object according to a correspondence between content tags and object fonts, and superimposing the first object on the original image in the target object font at the placement position to generate the cover of the video or gallery.
Wherein the content tag is used for indicating the type of the video or the content of the gallery, and the target object font is used for indicating the font of the text in the first object.
Compared with the prior art, in which the font of the first object in the cover is fixed, the target object font in this embodiment is determined from the content tag of the video or gallery for which the cover is generated, so the font of the first object in the generated cover is flexible and adapts to the type of that content, which improves the quality of the cover and optimizes its display effect.
As an alternative embodiment, when the target object format does not indicate the color of the first object, generating the cover of the video or gallery by superimposing the first object on the original image at the placement position includes:
determining a second color attribute of the original image, determining the target object color of the first object according to the second color attribute, and superimposing the first object on the original image in the target object color at the placement position to generate the cover of the video or gallery.
Wherein the second color attribute comprises a dominant color of the original image.
Compared with the prior art, in which the color of the first object in the cover is fixed, the target object color in this embodiment is determined from the second color attribute of the original image, so the color of the first object in the generated cover is flexible and adapts to the dominant color of the original image, which improves the quality of the cover and optimizes its display effect.
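For illustration, the dominant color (the second color attribute) could be estimated by k-means clustering over a downsampled copy of the image; the cluster count and sample size below are assumptions, since the patent does not specify the extraction method:

```python
# Estimate the dominant color as the centre of the most populated cluster.
import cv2
import numpy as np

def dominant_color(image_bgr: np.ndarray, k: int = 5) -> tuple[int, int, int]:
    small = cv2.resize(image_bgr, (64, 64), interpolation=cv2.INTER_AREA)
    pixels = small.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    counts = np.bincount(labels.ravel(), minlength=k)
    b, g, r = centers[counts.argmax()]
    return int(r), int(g), int(b)   # dominant color as an (R, G, B) triple
```

The target object color could then be chosen, for example, as a contrasting color of this dominant color so that the first object remains legible.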
On the basis of any one of the foregoing embodiments, as an optional embodiment, the cover of the video or gallery is displayed on the terminal device according to a first dynamic effect parameter, which indicates an entry effect and/or an exit effect of the cover.
Compared with the prior art, in which the cover is displayed statically on the terminal device, this embodiment displays the cover dynamically by setting an entry effect and/or exit effect, which optimizes the display effect of the cover and the visual perception of the user viewing it.
On the basis of the previous embodiment, as an alternative embodiment, the first dynamic effect parameter is determined according to the content tag of the video or the gallery.
Because this embodiment determines the first dynamic effect parameter dynamically from the content tag, the dynamic effect of the cover (its entry and/or exit effect) is flexible and adapts to the type of content for which the cover is generated, further optimizing the display effect of the cover and the user's visual perception.
On the basis of any one of the foregoing embodiments, as an optional embodiment, before the cover generation apparatus performs the determination of the placement position of the first object according to the attribute of the original image, the method further includes:
acquiring an original image set, wherein the original image set comprises a plurality of original images;
correspondingly, determining the placement position of a first object according to the attribute of the original image and generating the target file by superimposing the first object on the original image at the placement position comprise:
determining a plurality of placement positions of the first object according to the attributes of the plurality of original images, superimposing the first object on each of the original images at the corresponding placement position to generate a plurality of covers of the video or gallery, and determining a target cover from among the plurality of covers.
The target cover is the cover displayed on the terminal device. Compared with the prior art, in which the cover is generated from a single selected original image, generating multiple covers from multiple original images and then determining the target cover to be displayed from among them enlarges the selection range for the final cover.
On the basis of the previous embodiment, as an alternative embodiment, determining the target cover from the plurality of covers includes:
a target cover is determined from the plurality of covers based on the aesthetic effects of the plurality of covers.
Here, the aesthetic effect of the covers includes their symmetry and/or sense of stability. By determining the target cover according to aesthetic effect, this embodiment selects the cover with the better aesthetic effect for final display on the terminal device, which improves the quality of the cover and optimizes its display effect.
On the basis of the two preceding embodiments, as an optional embodiment, the method further includes:
acquiring the display duration of the currently displayed target cover; and if that duration exceeds a preset duration threshold, determining a new target cover from the plurality of covers.
The new target cover is then displayed on the terminal device. Specifically, when the display duration of the currently displayed target cover exceeds the preset duration threshold, the cover generation apparatus switches to displaying the new target cover.
Compared with the prior art, in which only one cover is generated and displayed, this embodiment displays different covers by switching between them periodically, which optimizes the display effect of the covers and the visual perception of the user viewing them.
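A minimal sketch of the timed switching, with the duration threshold and the display hook left as placeholders (the patent leaves both unspecified):

```python
# Rotate through candidate covers once the current one has been shown
# longer than the threshold; threshold and display callback are assumptions.
import time

def rotate_covers(covers: list, threshold_s: float = 30.0, display=print):
    """covers: candidate covers, best first; loops until interrupted."""
    i = 0
    display(covers[i])
    shown_at = time.monotonic()
    while True:
        time.sleep(1.0)
        if time.monotonic() - shown_at > threshold_s:
            i = (i + 1) % len(covers)        # determine a new target cover
            display(covers[i])
            shown_at = time.monotonic()
```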
On the basis of any one of the preceding embodiments, as an alternative embodiment, after the cover of the video or gallery is displayed on the terminal device, the first object in the cover is displayed dynamically according to a second dynamic effect parameter.
Dynamically displaying the first object according to the second dynamic effect parameter means that, while the user views the cover on the terminal device, the display effect of the first object (including its display position and other properties) changes.
In the prior art, once a cover is generated, its display effect is fixed. Because in this embodiment the display effect of the first object can change after the cover is generated, the display effect of the cover as a whole can be optimized.
On the basis of the previous embodiment, as an optional embodiment, the second dynamic effect parameter is determined from user displacement information. The second dynamic effect parameter includes the direction and/or speed of movement of the first object relative to the original image, and the user displacement information includes any one or more of the change in angle of the user's eyes relative to the terminal device, the change in their distance, and the duration over which the change occurs.
Because the second dynamic effect parameter, and hence the dynamic display effect of the first object, is determined from the user displacement information, the dynamic display effect the user observes adapts to the user's displacement, so this embodiment optimizes the visual perception of the user viewing the cover.
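As a sketch of how the second dynamic effect parameter might be derived from the user displacement information (the parallax direction, gain, and scaling rule below are assumptions; the patent states only that the movement direction and/or speed follow from the angle change, distance change, and duration):

```python
# Derive a per-second drift and scale for the first-object layer from the
# user's displacement relative to the terminal; the gains are invented.
def second_dynamic_effect(angle_delta_deg: float,
                          distance_delta_m: float,
                          duration_s: float,
                          gain: float = 2.0) -> tuple[float, float]:
    """Returns (horizontal drift in px/s, relative scale change per second)."""
    duration_s = max(duration_s, 1e-3)
    # Drift opposite to the viewing-angle change gives a layer-dislocation
    # (parallax) effect between the first object and the original image.
    drift = -gain * angle_delta_deg / duration_s
    # Moving closer (negative distance change) slightly enlarges the layer.
    scale = -0.05 * distance_delta_m / duration_s
    return drift, scale
```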
As an alternative embodiment, the first object is dynamically changed. In particular, the first object dynamically changes according to time or according to the content of the original image.
In a second aspect, the present application further provides a cover generation method, including: determining a format of a first object according to the content of an original image, the first object comprising one or more of the following: text or images, the format indicating the layout of one or more sub-objects comprised by the first object, or one or more of the following attributes of the first object: size, font, or color, the original image being derived from a video or gallery; and superimposing the first object on the original image in that format to generate the cover of the video or gallery.
In one implementation, determining the format of the first object according to the content of the original image includes: acquiring a content tag of the original image, the video, or the gallery, the content tag indicating the type of content of the original image, video, or gallery; and determining the format of the first object according to a correspondence between content tags and object formats.
In another implementation, determining a format of the first object according to the content of the original image includes: performing image recognition on the original image to determine the content of the original image; and determining the format of the first object according to the corresponding relation between the content and the object format.
In one implementation, the first object dynamically changes. In particular, the first object dynamically changes according to time or according to the content of the original image.
For beneficial effects and further implementations, refer to the first aspect above.
In a third aspect, the present application provides a cover generation apparatus, which may be any one of a terminal device, a chip in the terminal device, a network device, and a chip in the network device. In particular, the cover generation apparatus includes one or more modules for implementing the method of any of the foregoing aspects or implementations. Illustratively, the cover page generating apparatus includes a processing module for determining a placement position of a first object based on attributes of an original image, the attributes including information indicating a position of a subject object in the original image, the original image originating from a video or a gallery, the first object including one or more of: text or images; the processing module is further used for generating the video or the cover of the gallery by superposing the first object on the original image according to the placement position.
As an alternative embodiment, the position of the subject object in the original image is determined according to the first contour of the subject object in the original image, and the position of the subject object includes two-dimensional coordinates of the subject object. In this embodiment, when the processing module determines the placement position of the first object according to the attribute of the original image, the processing module is specifically configured to:
and determining the placement position of the first object according to the first contour of the main object in the original image, wherein the placement position is a position which is not completely overlapped with the two-dimensional coordinates of the main object.
As an optional implementation manner, when the processing module determines the placement position of the first object according to the first contour of the main object, the processing module is specifically configured to:
determining a first contour of a subject object in the original image and a second contour of the first object, the second contour being determined according to a layout and a size of the first object;
and determining the placement position of the first object according to the coincidence rate of the first contour and the second contour when the first object is superposed on each position of the original image.
As an optional implementation manner, when the processing module determines the placement position of the first object according to the attribute of the original image, the processing module is specifically configured to:
and determining the placement position of the first object according to the attributes of the original image and the aesthetic effect of a second object obtained when the first object is superposed on each position on the original image, wherein the aesthetic effect of the second object comprises the symmetry and/or stability of the second object, and the second object comprises the video or the front cover of the gallery.
As an optional implementation, the apparatus further comprises:
the receiving and sending module is used for acquiring a content tag of the video or the gallery, wherein the content tag is used for indicating the type of the content of the video or the gallery;
the processing module is further configured to determine, according to a correspondence between content tags and object formats, an object format corresponding to the content tags of the video or the gallery as a target object format of the first object, where the target object format is used to indicate a layout of a plurality of sub-objects included in the first object and one or more of the following attributes of the first object: size, font, or color;
correspondingly, when the processing module superimposes the first object on the original image according to the placement position to generate the video or the front cover of the gallery, the processing module is specifically configured to:
and superposing the first object on the original image in the target object format according to the placement position to generate the video or the cover of the gallery.
As an optional implementation manner, the transceiver module is further configured to obtain an illumination intensity of an environment where the terminal device is located, where the terminal device is configured to display the video or a cover of the gallery;
when the processing module determines the placement position of the first object according to the attribute of the original image, the processing module is specifically configured to:
detecting whether a first color attribute of the original image is matched with the illumination intensity, wherein the first color attribute comprises any one or more of hue, saturation and brightness of the original image;
determining the placement location of the first object according to the attributes of the original image is performed when the first color attribute of the original image matches the illumination intensity.
As an optional implementation manner, the processing module is further configured to, when the first color attribute of the original image does not match the illumination intensity, perform image processing on the original image according to the illumination intensity to obtain a new original image, and perform determining the placement position of the first object according to the attribute of the original image.
As an optional implementation manner, the transceiver module is further configured to obtain size information of a subject object in the original image;
the processing module is further configured to determine, according to a correspondence between an image size and an object size, an object size corresponding to size information of a main object in the original image as a target object size of the first object, where the target object size is used to indicate a size of an image in the first object and/or a font size of a text in the first object;
correspondingly, when the processing module superimposes the first object on the original image according to the placement position to generate the video or the front cover of the gallery, the processing module is specifically configured to:
and superposing the first object on the original image according to the placement position in the target object size to generate the video or the cover of the gallery.
As an optional implementation manner, the transceiver module is further configured to obtain a content tag of the video or the gallery, where the content tag is used to indicate a type to which the content of the video or the gallery belongs;
the processing module is further configured to determine, according to a correspondence between a content tag and an object font, an object font corresponding to the content tag of the video or the gallery as a target object font corresponding to the first object, where the target object font is used to indicate a font of a text in the first object;
correspondingly, when the processing module superimposes the first object on the original image according to the placement position to generate the video or the front cover of the gallery, the processing module is specifically configured to:
and superposing the first object on the original image in the target object font according to the placement position to generate the video or the cover of the gallery.
As an optional implementation manner, when the processing module superimposes the first object on the original image according to the placement position to generate the video or the front cover of the gallery, the processing module is specifically configured to:
determining a second color attribute of the original image, and determining a target object color corresponding to the first object according to the second color attribute, wherein the second color attribute comprises a dominant color of the original image;
and superposing the first object on the original image in the target object color according to the placement position to generate the video or the cover of the gallery.
As an alternative implementation manner, the video or the front cover of the gallery is used for displaying in the terminal device according to a first dynamic effect parameter, and the first dynamic effect parameter is used for indicating an entering effect and/or an exiting effect of the video or the front cover of the gallery.
As an alternative embodiment, the first dynamic parameter is determined according to the content tag of the video or the gallery.
As an optional implementation manner, the transceiver module is further configured to acquire an original image set, where the original image set includes a plurality of original images;
correspondingly, the processing module determines a placement position of a first object according to the attribute of the original image, and when the first object is superimposed on the original image according to the placement position to generate a target file, the processing module is specifically configured to:
determining a plurality of placing positions of the first object according to the attributes of the plurality of original images, and respectively superposing the first object on the plurality of original images according to the placing positions to generate a plurality of covers of the video or the gallery;
a target cover is determined from the plurality of covers, the target cover being for display in the terminal device.
As an optional implementation, the transceiver module is further configured to obtain the display duration of the currently displayed target cover;
the processing module is further configured to determine a new target cover from the plurality of covers when the display duration of the currently displayed target cover exceeds a preset duration threshold, the new target cover being for display in the terminal device.
As an alternative embodiment, after the video or the front cover of the gallery is displayed in the terminal device, the first object in the front cover is dynamically displayed according to the second dynamic effect parameter.
As an optional implementation manner, the second dynamic effect parameter is determined according to user displacement information, the second dynamic effect parameter includes a moving direction and/or a moving speed of the first object relative to the original image, and the user displacement information includes any one or more of an angle change amount, a distance change amount, and a time length for forming a change of the user's eyes relative to the terminal device.
As an alternative embodiment, the first object is dynamically changed. In particular, the first object dynamically changes according to time or according to the content of the original image.
It should be noted that, when the apparatus is a terminal device or a network device, the processing module may be a processor, and the transceiver module may be a transceiver; the terminal device or network device may further include a storage module, which may be a memory, configured to store instructions that the processing module executes. When the apparatus is a chip in a terminal device or a chip in a network device, the processing module may be a processor and the transceiver module may be an input/output interface, a pin, a circuit, or the like; the processing module executes instructions stored in a storage module, which may be located within the chip (e.g., a register or a cache) or outside the chip within the terminal device or network device (e.g., a Read-Only Memory (ROM) or a Random Access Memory (RAM)). Based on the same inventive concept, the principles by which the cover generation apparatus solves the problem, its advantages, and its implementation can all be found in the method of the first aspect, its possible implementations, and their beneficial effects; repeated details are not repeated.
In a fourth aspect, the present application provides a computer-readable storage medium storing instructions that, when executed, cause a terminal device or a network device to perform the method of any of the foregoing aspects and their possible implementations, with the corresponding beneficial effects; repeated details are not repeated.
In a fifth aspect, the present application provides a computer program or computer program product storing instructions that, when executed, enable a terminal device or a network device to perform the method of any of the foregoing aspects and their possible implementations, with the corresponding beneficial effects; repeated details are not repeated.
Drawings
FIG. 1 is a schematic flow chart of a cover generation method provided in an embodiment of the present application;
FIG. 2 is an exemplary raw image provided by an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating different effects exhibited when brightness of an image varies from strong to weak according to an embodiment of the present disclosure;
fig. 4(a) to fig. 4(f) are schematic diagrams illustrating the presentation effect of the object format according to the embodiment of the present application;
FIG. 5 is a first mask image corresponding to the original image shown in FIG. 2 according to an embodiment of the present disclosure;
FIGS. 6(a) and 6(b) are comparative illustrations of an exemplary first object and its corresponding second mask image provided by an embodiment of the present application;
FIG. 7 is an exemplary cover provided in embodiments of the present application;
FIG. 8 is another exemplary cover provided in accordance with an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating the switching effect between a currently displayed target cover and a new target cover according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a dynamic layer dislocation effect of a first object moving relative to an original image according to an embodiment of the present disclosure;
FIG. 11 is a block diagram of a cover generation apparatus according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of another cover generation apparatus according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described in detail below with reference to the drawings.
Referring to fig. 1, fig. 1 is a schematic flowchart of a cover generation method provided in an embodiment of the present application. The cover generation method is executed by a cover generation apparatus to generate the cover of a video or gallery, and the generated cover is displayed by a terminal device. The cover generation apparatus may be any one of a network device, a chip in a network device, the terminal device, or a chip in the terminal device. For brevity, the description below takes the network device and the terminal device as examples: an operation described as performed by the cover generation apparatus as a network device can also be understood as performed by a chip in the network device, and likewise for the terminal device and a chip in the terminal device.
The network device may be a server, such as a background server capable of providing high-performance computing power and storage power.
The terminal device may be user equipment (UE) with a display screen, such as a large-screen device, a smart screen, a full-screen device, a mobile phone, a television, a tablet computer, a watch, a smart speaker, or a vehicle-mounted terminal.
In the embodiments of the present application, the terminal device may have a variety of capabilities, such as sensors, a camera, a processor, speakers, and communication interfaces. The sensors may include, but are not limited to, an illumination sensor and an infrared sensor: the illumination sensor senses the lighting of the environment in which the terminal device is located to obtain the illumination intensity of that environment, and the infrared sensor senses information such as the position of the user in front of the terminal device.
In addition, the terminal device may support the user controlling it through voice, vision, a remote control, or other means, and can recognize user actions based on the camera. The camera can identify the user's body movements and positions (such as the position of a hand) in order to recognize the user's actions, and can detect the user's position relative to the screen (such as directly in front, to the left, or to the right).
The cover generation method shown in fig. 1 may include steps S11 to S13, which are described in detail below.
S11: acquire an original image and a first object corresponding to the video or gallery.
When the cover generation apparatus is the terminal device, the video or gallery may be a local video or gallery, i.e., one stored in local memory such as the internal memory of the terminal device. Specifically, the local video may include a video shot by the terminal device or a video downloaded from the internet (a server) and stored on the terminal device, and the local gallery may include photographs taken by the terminal device or pictures downloaded from the internet (a server) and stored on the terminal device.
As an alternative, when the video or gallery is stored in the local memory, the terminal device may automatically generate a cover page generation instruction, which is used to instruct the terminal device to generate a cover page of the video or gallery. For example, when the terminal device finishes shooting a video or downloads a gallery, the terminal device may automatically generate a cover page generation instruction.
As another alternative, the terminal device may generate a cover page generation instruction based on a user operation, where the cover page generation instruction is used to instruct the terminal device to generate a cover page of a video or gallery indicated by the user operation. For example, when a cover of a local album needs to be generated, the user may touch a cover generation button corresponding to the local album on the album directory interface of the terminal device. When a touch operation for the cover generation button is detected, the terminal device may generate a cover generation instruction.
When the cover page generation device is the network device, the video or the gallery may be a video or a gallery in a cloud. The video or the gallery in the cloud may be a video or a gallery stored in a network device such as a server.
In this case, the network device may receive a cover generation instruction from the terminal device via the network. The instruction may include an identifier of the video or gallery (e.g., a video ID or gallery ID) and instructs the network device to generate the cover of that video or gallery. For example, when a user wants to browse information about a movie on a web page on which the movie's cover is displayed, clicking the button that opens the page causes the terminal device to generate a cover generation instruction containing the movie's identifier and send it to the network device.
The cover generation instruction received by the network device from the terminal device may further include input information that may be needed to generate a cover. The input information may include, but is not limited to, names of videos or galleries (e.g., movie names, album names), cover size information, cover type information, illumination intensity of an environment where the terminal device is located, user preference information, and the like.
When the cover of the video or the gallery needs to be generated, the cover generation device acquires an original image and a first object corresponding to the video or the gallery.
In the embodiment of the present application, the original image is derived from the video or gallery, and one representation of the gallery may be a photo album. Specifically, when the cover page generation method is used for generating a cover page of a video, the original image may be a screenshot of the video or a cover page image provided by a third-party service provider for the video; when the cover page generation method is used for generating a cover page of a gallery, the original image may be an image contained in the gallery or a cover page image provided by a third-party service provider for the gallery. Generally, the original image does not include any text, as shown in fig. 2.
When the cover page generating device is the terminal device, the acquiring, by the terminal device, the original image corresponding to the video or the gallery may specifically include: if a cover page of the video needs to be generated, the terminal device may use an image captured from the video by the user or a cover page image provided by a third-party service provider for the video as an original image corresponding to the video, or the terminal device may capture a screenshot of the video according to a preset rule (for example, capture an image with vivid characters in the video) and use the captured image as the original image corresponding to the video; if a cover page of the gallery needs to be generated, the terminal device may use a cover page image marked by a user in the gallery or a cover page image provided by a third-party service provider for the gallery as an original image corresponding to the gallery, or the terminal device may select a designated image (e.g., a first image or a last image in the gallery) in the gallery as the original image corresponding to the gallery according to a preset rule.
When the cover generation device is the network device, the acquiring, by the network device, of the original image corresponding to the video or the gallery may specifically include: sending the identifier of the video or the gallery to a multimedia file repository, and receiving the original image corresponding to the video or the gallery from the multimedia file repository. On receiving the identifier of the video or the gallery, the multimedia file repository may determine the corresponding original image according to the stored correspondence between file identifiers and images, and send the original image to the network device.
It will be appreciated that, for any image, if one or more of its color attributes are changed under the same lighting conditions, the image will exhibit a different presentation effect. For example, under the same lighting, changing the lightness of an image while keeping its hue and saturation unchanged makes the image look different, as shown in fig. 3. Therefore, as an optional implementation, when acquiring the original image corresponding to the video or the gallery, the cover generation apparatus may further acquire the illumination intensity of the environment where the terminal device is located and a first color attribute of the original image; detect whether the first color attribute of the original image matches that illumination intensity; and, if so, proceed to step S12.
The illumination of the environment where the terminal device is located may be, for example, outdoor natural light, indoor or outdoor lamp light, or other artificial lighting. If the cover generation apparatus is the terminal device, the illumination intensity may be obtained by the terminal device through its illumination sensor. If the cover generation apparatus is the network device, the network device may obtain the illumination intensity of the environment where the terminal device is located by: reading it from the input information; or sending an illumination information acquisition request to the terminal device, in response to which the terminal device measures the illumination intensity through its illumination sensor and returns it to the network device.
The first color attribute may include any one or more of hue (Hue), saturation (Saturation), and value (Value). Acquiring the first color attribute of the original image may specifically include: reading the original image in HSV mode to obtain the H, S, and V components of each pixel, and then summing and averaging the H, S, and V components over all pixels to obtain the hue, saturation, and value of the original image.
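For illustration, a minimal sketch of this averaging follows, assuming Python with OpenCV (which the application does not mandate); note that OpenCV scales H to 0-179 and S, V to 0-255:

```python
import cv2

def mean_hsv(image_path: str):
    """Read an image in HSV mode and average the H, S, V components
    of all pixels to obtain the image-level hue, saturation, and value."""
    bgr = cv2.imread(image_path)                # OpenCV reads BGR by default
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)  # H in [0,179], S/V in [0,255]
    h, s, v = cv2.split(hsv)
    # Sum each component over all pixels and divide by the pixel count.
    return float(h.mean()), float(s.mean()), float(v.mean())
```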
The cover generation apparatus detecting whether the first color attribute of the original image matches the illumination intensity of the environment where the terminal device is located may specifically include: checking the first color attribute against the illumination intensity according to a preset correspondence between color attributes and illumination intensities.
The preset correspondence between color attributes and illumination intensities is obtained from experimental data. It specifies that a first range of a color attribute corresponds to a second range of illumination intensity: when the illumination intensity of the environment where the terminal device is located falls within the second range and the first color attribute of the original image falls within the first range, the user obtains good visual perception when viewing the original image on the terminal device, i.e., the first color attribute of the original image matches the illumination intensity of the environment where the terminal device is located.
For example, suppose a first range of lightness corresponds to a second range of illumination intensity, where the first range is 70% to 90% and the second range is 700 to 900 lux. When the illumination intensity of the environment where the terminal device is located is 800 lux, if the lightness of the original image is 80%, the lightness of the original image matches the illumination intensity; if the average lightness of the original image is 30%, the lightness of the original image does not match the illumination intensity.
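This range-based check can be sketched as a simple lookup; the single table row below is the illustrative pairing from the example above, not a normative correspondence:

```python
# Illustrative correspondence: (illumination range in lux) -> (lightness range in %)
LIGHTNESS_FOR_ILLUMINANCE = [
    ((700, 900), (70, 90)),   # example from the text; further rows are assumed
]

def lightness_matches(illuminance_lux: float, lightness_pct: float) -> bool:
    """Return True when the image lightness falls inside the range paired
    with the range containing the measured ambient illumination."""
    for (lo_lux, hi_lux), (lo_pct, hi_pct) in LIGHTNESS_FOR_ILLUMINANCE:
        if lo_lux <= illuminance_lux <= hi_lux:
            return lo_pct <= lightness_pct <= hi_pct
    return False  # no row covers this illuminance: treat as no match
```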
If the first color attribute of the original image does not match the illumination intensity of the environment in which the terminal device is located, the cover generation apparatus may perform image processing on the original image according to the illumination intensity to obtain a new original image, and then proceed to step S12.
The cover generation apparatus may acquire one or more original images. When multiple original images are acquired, the cover generation apparatus may generate the cover of the video or the gallery only from the original image(s) whose first color attribute matches the illumination intensity of the environment where the terminal device is located.
In embodiments of the present application, the first object may comprise text and/or images that illustrate the video or gallery.
When the cover generation device is the terminal device, the acquiring, by the terminal device, of the first object corresponding to the video or the gallery may specifically include: acquiring the first object corresponding to the video or the gallery from a local memory.
For example, when the cover generation method is used to generate a cover of a local video, the terminal device may use, as the first object corresponding to the local video, information stored in the local memory such as the Chinese name of the video: "Travel", the English name: "Travel", the photographer name: "Xiaoming", the model name: "Xiaoshan", the shooting time: "2020.05.07", the video duration: "01:25", the shooting location: "Nan'an, Chongqing", and the video description: "person, landscape".
When the cover generation device is the network device, the acquiring, by the network device, of the first object corresponding to the video or the gallery may specifically include: sending the identifier of the video or the gallery to a multimedia file repository, and receiving the first object corresponding to the video or the gallery from the multimedia file repository. On receiving the identifier of the video or the gallery, the multimedia file repository may determine the corresponding first object according to the stored correspondence between file identifiers and objects, and send the first object to the network device.
For example, when the cover generation method is used to generate a cover of a movie, the network device may use, as the first object corresponding to the movie, the information received from the multimedia file repository: the Chinese title: "Ying", the English title: "Shadow", the actor name: "Deng Chao", the director name: "Zhang Yimou", the subtitle: "A shadow is not delicate; it has an unyielding spirit", the critical praise: ""Ying" explores personal consciousness, shifting from Zen-like calm to martial display; its imagery is moist, as if grown out of water, with a strong flavor of the Jiangnan water country.", the award information: "Winner of Best Director, Best Visual Effects, Best Art Direction, Best Makeup & Costume Design, Best Leading Actor, and Best Leading Actress at the 55th Golden Horse Awards", and the movie synopsis: "The film tells the story of a body double, secretly imprisoned since the age of eight, who strives to win back his freedom.".
Further, the cover generation apparatus may determine a target object format of the first object.
As an optional implementation, the target object format may be a preset unified object format.
As another optional implementation, the cover generation apparatus determining the target object format of the first object may specifically include: acquiring the content tag of the video or the gallery, and, according to a correspondence between content tags and object formats, determining the object format corresponding to that content tag as the target object format of the first object.
The content tag indicates the type to which the content of the video or gallery belongs. For example, content tags of a video may include, but are not limited to, drama, inspirational, suspense, youth, costume drama, etc., and content tags of a gallery may include, but are not limited to, people, travel, landscape, graduation, birthday, etc.
The target object format indicates the layout of the first object and one or more of the following attributes of the first object: size, font, or color. Optionally, the layout of the first object may be the layout of a plurality of sub-objects included in the first object. The target object format may also include decorative attachments, which may include, but are not limited to, line styles, stamps, and other attachment styles.
Decorative attachments are used mainly on the basis of content tags matched to relevant elements in the picture or video. Different attachment types establish different mappings with the content tags and with the screen space where the first object is placed; these mappings determine whether an attachment needs to be used and how it is applied. The correspondence between graphic icon attachments and content tags can be as shown in table 1.
Decorative attachments include, but are not limited to, three types: graphic icons, line styles, and stamp attachments.
Common graphic icon attachments include, but are not limited to, location, time, weather, object identification, scene identification, and the like.
TABLE 1 correspondence between graphic icon attachments and content tags
When it is recognized that the picture or video contains a scene tag plus a typical content tag, a graphic decorative attachment may be superimposed in the first object.
The scene tag describes the theme expressed in the current picture or video, including but not limited to birthday, graduation, night, party, action, daily life, wedding, holiday, weekend, New Year, portrait, and the like. For example, when the scene tag "party" and the content tag "wineglass" are both matched, a "wineglass" graphic attachment can be superimposed on the picture or video.
The decorative attachment is displayed at a position set by a preset attachment template, and its display size is determined by the preset size in that template (see the left part of fig. 4(a)).
If the precondition of a matched scene tag plus content tag is not satisfied, the corresponding attachment is not displayed in the generated first object (see the right part of fig. 4(a)).
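Since table 1 is not reproduced here, the following sketch only encodes a hypothetical rule set of the same shape (a scene tag plus a content tag gating a graphic attachment); every entry is an assumption except the "party"/"wineglass" example given above:

```python
# Hypothetical rules in the spirit of table 1: (scene tag, content tag) -> attachment
ATTACHMENT_RULES = {
    ("party", "wineglass"): "wineglass_icon",  # example from the text
    ("birthday", "cake"): "cake_icon",         # assumed entry
}

def pick_attachment(scene_tag: str, content_tags: list[str]):
    """Return a graphic attachment only when the scene tag + content tag
    precondition is met; otherwise no attachment is displayed."""
    for tag in content_tags:
        attachment = ATTACHMENT_RULES.get((scene_tag, tag))
        if attachment is not None:
            return attachment
    return None
```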
As an optional implementation, when the cover generation method is used to generate a cover of a video, the correspondence between content tags and object formats may be a correspondence between the video main tag and object formats, as shown in table 2. The main tag of a video is the content tag ranked first among all content tags of the video.
Table 2 Correspondence between video main tag and object format

| Video main tag | Object format |
| <Drama> or <Classic> | First object format |
| <Youth> or <Love> | Second object format |
| <Funny> or <Comedy> | Third object format |
| <Literary> or <Human-interest> | Fourth object format |
According to the correspondence between the video main tag and the object format shown in table 2, when the main tag of the video is <Drama>, the cover generation apparatus may determine the preset first object format as the target object format of the first object.
For example, the first object format, the second object format, the third object format and the fourth object format may be as shown in fig. 4(b) to 4(e), respectively.
In other implementations, if the original image is a dynamically changing image or a video, the text or image placed on it (i.e., the first object) may also change with the content of the original image. Such changes include, but are not limited to: changes in text content and text style, and changes in image form and image style. Text styles include, but are not limited to: font, font size, font weight, line height, color, etc.; image styles include, but are not limited to, size, color, transparency, etc.
A change of the original image may be triggered by different external factors, including but not limited to: sliding switches of the original picture in different directions, different light-and-shadow effects exhibited by the same original picture over time, visually different backgrounds of the original picture, and so on.
A change of the original image may be detected based on image recognition technology (detecting different backgrounds in the original image) or based on content tag matching, such as matching a time tag. When the trigger condition is satisfied, the text or image attributes change accordingly. For example, when the "time" content tag is detected, the placed text content and image change with it, as shown in fig. 4(f): the text content changes from "dawn" to "nightfall" and the decorative attachment from "sun" to "moon". The change may appear at the same or different positions of the same original image, or between different original images of the same scene; alternatively, a new cover containing the changed image may be generated and the original cover deleted.
As another optional implementation, the cover generation apparatus determining the target object format of the first object may specifically include: determining a first data type corresponding to the text objects in the first object, and determining, according to a correspondence between data types and object formats, the object format corresponding to the first data type as the target object format of the first object.
For example, when the first object includes text objects such as the Chinese title: "Ying", the English title: "Shadow", the subtitle: "A shadow is not delicate; it has an unyielding spirit", and the director name: "Zhang Yimou", the cover generation apparatus may determine {Chinese title, English title, subtitle, director name} as the first data type.
The correspondence between the data type and the object format may be as shown in table 3, for example.
TABLE 3 Correspondence between data types and object formats

| Data type | Object format |
| Chinese title, English title, actor name, and director name | First object format |
| Chinese title, English title, and director name | Second object format |
| Chinese title, English title, and actor name | Third object format |
| Chinese title, actor name, and subtitle | Fourth object format |
According to the correspondence between the data types and the object formats as shown in table 3, since the first data type includes the data type corresponding to the second object format, the cover page generating apparatus may determine the second object format as the target object format of the first object.
As an optional implementation, when the target object format is not used to indicate the size of the first object, the cover page generating apparatus may further obtain size information of the main object in the original image, and determine an object size corresponding to the size information of the main object in the original image as the target object size of the first object according to a corresponding relationship between the image size and the object size. Wherein the target object size is used to indicate the size of the image object in the first object and/or the font size of the text object in the first object.
It should be noted that, because the objects and/or the layouts of the objects corresponding to the respective object formats are different, the correspondence between the image size and the object size may also be different in the respective object formats. Thus, the determining, by the cover page generating apparatus, the object size corresponding to the size information of the main object in the original image as the target object size of the first object according to the corresponding relationship between the image size and the object size may specifically include: and determining the object size corresponding to the size information of the main object in the original image as the target object size of the first object according to the corresponding relation between the image size and the object size in the target object format.
The correspondence between the image size and the object size may include a correspondence between the image size and a font size of the text object and/or a correspondence between the image size and a size of the image object.
For example, the correspondence between image size and object size in the target object format shown in fig. 4(c) may be as shown in table 4. The cover generation apparatus can then directly determine the font size of each text object in the first object according to table 4. For example, when the size of the subject object in the original image is in the first size range, the cover generation apparatus may determine the font sizes of the Chinese title, the English title, and the director name as No. 2, No. 5, and No. 5 (Chinese typographic sizes), respectively.
TABLE 4 Correspondence between image size and object size

| Subject object size in original image | Chinese title font size | English title font size | Director name font size |
| First size range | No. 2 | No. 5 | No. 5 |
| Second size range | Small 2 | No. 5 | Small 5 |
As an optional implementation manner, when the target object format is not used to indicate the font of the text object in the first object, the cover page generating apparatus may further obtain the content tag of the video or the gallery, and determine the object font corresponding to the content tag of the video or the gallery as the target object font corresponding to the first object according to the correspondence between the content tag and the object font. Wherein the content tag is used for indicating the type of the content of the video or the gallery, and the target object font is used for indicating the font of the text object in the first object.
It should be noted that, because the objects corresponding to the respective object formats and/or the layouts of the objects are different, the correspondence between the content tags and the object fonts may also be different in the respective object formats. Thus, the determining, by the cover generation apparatus, the object font corresponding to the content tag of the video or the gallery as the target object font of the first object according to the corresponding relationship between the content tag and the object font may specifically include: and determining the object font corresponding to the content label of the video or the gallery as the target object font of the first object according to the corresponding relation between the content label and the object font in the target object format.
It should be further noted that the correspondence between content tags and object fonts may include, for each content tag, a correspondence between the content tag sets associated with that tag and object fonts.
For example, the correspondence between the content tag sets associated with the tag <drama> and object fonts in the target object format shown in fig. 4(c) may be as shown in table 5. In table 5, the content tags matching the content tag set in the first column are determined by the matching rule in the second column. For example, for the content tag set {<drama>, <youth>, <love>, <Chinese>} in the second row, the matching rule is <drama> + all, so a match requires <drama>, <youth>, <love>, and <Chinese> to all be present; for the content tag set {<drama>, <funny>, <comedy>} in the third row, the matching rule is <drama> + either, so a match requires <drama> and at least one of <funny> and <comedy>; for the content tag set {<drama>} in the eighth row, the matching rule is all, so a match requires <drama>.
TABLE 5 correspondence between content tags and object fonts
The Chinese fonts in the table are exemplary; at generation time, similar fonts may be substituted depending on font licensing. For example, a HanYi handwriting-style font may be replaced by another handwriting font of similar style, and a HanYi Song-style font by another Song typeface of similar style. In general, the replacement should keep the same font category and a similar font weight.
In addition, among the content tags of the video or the gallery, the tags ranked first, second, …, and n-th are respectively the main tag, the second tag, …, and the n-th tag of the video or the gallery.
The cover generation apparatus may match the content tags of the video or the gallery against the content tag sets associated with the main tag in the target object format; if the matching succeeds, the target object font of the first object is determined from the matching result. If the matching fails, matching proceeds backwards in order (i.e., against the content tag sets associated with the second tag, the third tag, and so on) until it succeeds, and the target object font is then determined from the matching result. If no content tag set associated with any tag is matched, a preset default font is determined as the target object font of the first object. Here, the content tag sets associated with a given content tag are those included in the correspondence between that tag and object fonts.
For example, when the content tags of the video include <drama> and <inspirational>, the cover generation apparatus may match them against the eight content tag sets shown in table 5 according to the matching rules in table 5. Since <drama> and <inspirational> match {<drama>, <classic>, <inspirational>, <emotion>, <affection>, <love>}, the cover generation apparatus may determine the fonts of the Chinese title, the English title, and the director name as, for example, a bold display typeface, "Source Han Serif", and a regular script typeface, respectively.
Alternatively, the cover generation apparatus may match the content tags of the video or the gallery against the content tag sets associated with every tag in the target object format simultaneously. If the content tags match exactly one content tag set, the font corresponding to that set is determined as the target object font of the first object; if no set is matched, a preset default font is determined as the target object font of the first object.
If the content tags match at least two content tag sets, the cover generation apparatus may determine, among the fonts corresponding to those sets, the font backed by the most sets as the target object font of the first object. For example, if the content tags successfully match three content tag sets whose Chinese title fonts are font A, font A, and font B respectively, the cover generation apparatus may determine font A as the font of the Chinese title in the first object.
If the content tags match at least two content tag sets and several fonts among them are each backed by the same, largest number of sets, the cover generation apparatus may determine the target object font of the first object from those fonts according to the order in which the content tags appear. For example, if the content tags include <drama>, <youth>, <love>, and <Chinese>, and they successfully match the two content tag sets {<drama>, <youth>} and {<drama>, <love>, <Chinese>}, whose Chinese title fonts are font A and font B respectively, then, since <youth> appears earlier among the content tags than <love>, the cover generation apparatus may determine font A as the font of the Chinese title in the first object.
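A sketch of this simultaneous-matching variant follows, with the most-backed-font rule and the tag-order tie-break; the (tag set, font) pair representation is an assumption:

```python
from collections import Counter

def pick_title_font(content_tags, matched_sets, default_font="default"):
    """matched_sets: list of (tag_set, font) pairs that already matched.
    The font backed by the most sets wins; ties are broken by which tied
    set contains the earliest-appearing distinguishing content tag."""
    if not matched_sets:
        return default_font
    counts = Counter(font for _tags, font in matched_sets)
    top = max(counts.values())
    tied = {font for font, c in counts.items() if c == top}
    if len(tied) == 1:
        return tied.pop()
    tied_sets = [(set(tags), font) for tags, font in matched_sets if font in tied]
    common = set.intersection(*(tags for tags, _ in tied_sets))
    order = {tag: i for i, tag in enumerate(content_tags)}

    def earliest(entry):
        tags, _font = entry
        # Position of the earliest tag that distinguishes this set.
        positions = [order[t] for t in tags - common if t in order]
        return min(positions) if positions else len(content_tags)

    return min(tied_sets, key=earliest)[1]
```

On the <drama>, <youth>, <love>, <Chinese> example above, the tie between font A and font B is resolved in favor of font A, because <youth> (backing font A) appears before <love>.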
As an optional implementation manner, when the target object format is not used to indicate the color of the first object, the cover generation apparatus may further determine a second color attribute of the original image, and determine a background color of the placement area of the first object in the original image and a target object color corresponding to the first object according to the second color attribute. Wherein the second color attribute comprises a dominant color of the original image.
The cover generation apparatus determining the background color of the placement area of the first object in the original image according to the second color attribute may specifically include: determining the dominant color of the original image as that background color.
The dominant color of the original image can be determined by the cover generation apparatus through color extraction. Specifically, the cover generation apparatus may sample the colors of the original image with an online color sampler such as ColorPicker, obtain the eight colors with the highest proportions in the original image (Top 8), and determine the color with the highest proportion as the dominant color of the original image.
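ColorPicker is an online tool; a rough offline equivalent of the Top-8 sampling, assuming OpenCV and coarse color quantization, might look like this:

```python
from collections import Counter

import cv2

def dominant_color(image_path: str, step: int = 32):
    """Quantize pixel colors into coarse buckets, rank them by proportion,
    and return the color with the highest share as (R, G, B)."""
    bgr = cv2.imread(image_path)
    quantized = (bgr // step) * step + step // 2   # bucket centers
    counts = Counter(map(tuple, quantized.reshape(-1, 3)))
    top8 = counts.most_common(8)                   # Top-8 colors by share
    (b, g, r), _n = top8[0]                        # highest-proportion color
    return int(r), int(g), int(b)
```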
The target object color may include a first text color for the title text in the first object and/or a second text color for the other text. The title text includes the Chinese title and/or the English title in the first object, and the other text includes the text objects other than the title text.
The cover generation apparatus determining the target object color of the first object according to the second color attribute may specifically include: determining the first text color for the title text in the first object according to the dominant color of the original image and the background color of the placement area of the first object, and determining the second text color for the other text in the first object according to the first text color.
For example, when the original image has an obvious hue tendency, the cover generation apparatus may generate a first color and a second color, where the first color is a bright color with the dominant hue, 20% saturation, and 100% lightness, and the second color is a dark color with the dominant hue, 100% saturation, and 20% lightness. The cover generation apparatus can then compare the first and second colors with the background color to obtain their contrast, and select the bright or the dark color as the first text color according to the contrast.
When the lightness of the dominant color of the original image is less than 25% or its saturation is less than 10%, the original image is considered to have no obvious hue tendency; otherwise it is considered to have an obvious hue tendency.
When the original image has no obvious hue tendency, if the lightness of the dominant color is less than 40%, or the lightness is greater than 40% and the saturation difference is greater than 60%, the cover generation apparatus may determine white (100% lightness) as the first text color; if the lightness of the dominant color is greater than 40% and the saturation difference is less than 60%, the cover generation apparatus may determine black (0% lightness) as the first text color.
Once the first text color is determined, the cover generation apparatus can generate the second text color from it. In an optional embodiment, the second text color has the same hue and saturation as the first text color, and its lightness is 80%.
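The color rules above can be sketched as follows, with HSV components normalized to [0, 1]; note that the "saturation difference" test is simplified here to the difference between the dominant color's saturation and the background's, which is an assumption:

```python
import colorsys

def luma(rgb):
    """Perceived-brightness proxy used for the contrast comparison."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def pick_text_colors(dom_h, dom_s, dom_v, bg_rgb, bg_s):
    """Return (first_text_color, second_text_color) as RGB triples in [0,1]."""
    if dom_v < 0.25 or dom_s < 0.10:
        # No obvious hue tendency: choose white or black per the rules above.
        if dom_v < 0.40 or abs(dom_s - bg_s) > 0.60:
            first = (1.0, 1.0, 1.0)                      # white, 100% lightness
        else:
            first = (0.0, 0.0, 0.0)                      # black, 0% lightness
    else:
        bright = colorsys.hsv_to_rgb(dom_h, 0.20, 1.00)  # bright candidate
        dark = colorsys.hsv_to_rgb(dom_h, 1.00, 0.20)    # dark candidate
        # Pick whichever candidate contrasts more with the background color.
        first = max((bright, dark), key=lambda c: abs(luma(c) - luma(bg_rgb)))
    # Second text color: same hue and saturation as the first, 80% lightness.
    fh, fs, _fv = colorsys.rgb_to_hsv(*first)
    second = colorsys.hsv_to_rgb(fh, fs, 0.80)
    return first, second
```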
S12: the placement position of the first object is determined according to the attributes of the original image.
In an embodiment of the present application, the attribute of the original image includes information indicating a position of the subject object in the original image. Wherein the main body object may be determined by the cover generation apparatus through image object detection of the original image.
As an alternative embodiment, the position of the subject object in the original image is determined according to the first contour of the subject object in the original image, and the position of the subject object includes two-dimensional coordinates of the subject object. In this embodiment, in order to reduce the coverage of the main object in the original image by the first object, the determining, by the cover generation apparatus, the placement position of the first object according to the attribute of the original image may specifically include: and determining the placement position of the first object according to the first contour of the main object in the original image, wherein the placement position is a position which is not completely overlapped with the two-dimensional coordinates of the main object. For example, the placement position may be a position that does not overlap with the two-dimensional coordinates of the subject object in the original image at all, or the placement position may be a position that does not overlap with the two-dimensional coordinates of the key region of the subject object in the original image. The key area of the subject object may be an area where a key feature of the subject object is located.
The determining, by the cover generating device, the placement position of the first object according to the first contour of the main body object may specifically include: determining a first contour of a subject object in the original image and a second contour of the first object, the second contour being determined according to a layout and a size of the first object; and determining the placement position of the first object according to the coincidence rate of the first contour and the second contour when the first object is superposed on each position of the original image.
It should be noted that, since the first object mentioned in this application refers to the content of the first object, and the rendering effect of the first object in the cover is related to not only the content but also the layout and size of the first object, the second outline of the first object is determined according to the layout and size of the first object.
The determining, by the cover generating device, a first contour of a main object and a second contour of the first object in the original image, and determining a placement position of the first object according to a coincidence rate of the first contour and the second contour when the first object is superimposed at each position on the original image may specifically include: determining a first mask image corresponding to the original image and a second mask image corresponding to the first object, and determining the placement position of the first object according to the coincidence rate of the first mask image and the second mask image when the second mask image is superposed on each position of the first mask image.
Specifically, the cover generation device may determine a position where the second mask image is located when the coincidence ratio of the first mask image and the second mask image is minimum as the placement position of the first object.
The determining, by the cover generation apparatus, the first mask image corresponding to the original image may specifically include: recognizing a person, an animal or other types of subject objects in the original image based on image subject recognition technology and face recognition technology; and obtaining a main body outline of the main body object in the original image based on a morphological processing technology, and obtaining a first mask image corresponding to the original image according to the main body outline. The first mask image is a weighted mask with base map layout information. For example, for the original image shown in FIG. 2, the corresponding first mask image is shown in FIG. 5.
The cover generation apparatus determining the second mask image corresponding to the first object may specifically include: performing edge fitting on the outer contour of the first object in the target object format to obtain the second mask image corresponding to the first object. The second mask image is an element mask of the first object. For example, for the first object shown in fig. 6(a), the corresponding second mask image is shown in fig. 6(b).
The first mask image and the second mask image are denoted T and S, respectively. The cover generation apparatus needs to find a coordinate (x, y) in T and place S at (x, y) such that the coverage rate F of S over T is minimal (i.e., the coincidence rate F of T and S is minimal). The cover generation apparatus can thus construct an objective function: f1(x, y) = min{F}. By traversing all (x, y) in T, the cover generation apparatus can calculate the F corresponding to each (x, y).
Calculating the F corresponding to each (x, y) may specifically include: placing S at (x, y), computing the normalized pixel intersection of T and S, and taking that value as F. F ranges from 0 to 1, and the lower the value of F, the lower the coverage of T by S.
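A brute-force sketch of f1 over binary masks follows (the first mask is described above as weighted, which this simplifies away):

```python
import numpy as np

def best_placement(T: np.ndarray, S: np.ndarray):
    """Slide the element mask S over the base mask T and return the (x, y)
    minimizing the coverage rate F of S over T, i.e. f1(x, y) = min{F}."""
    th, tw = T.shape
    sh, sw = S.shape
    s_area = S.sum() or 1                        # avoid division by zero
    best_xy, best_f = (0, 0), float("inf")
    for y in range(th - sh + 1):
        for x in range(tw - sw + 1):
            # Fraction of S's pixels that land on the subject region of T.
            f = (T[y:y + sh, x:x + sw] * S).sum() / s_area
            if f < best_f:
                best_xy, best_f = (x, y), f
    return best_xy, best_f
```

This sliding intersection is just a cross-correlation of T with S, so for large masks it can be computed in one call with scipy.signal.correlate2d or via an FFT instead of the double loop.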
As another optional implementation, the determining, by the cover generation apparatus, the placement position of the first object according to the attribute of the original image may specifically include: the placement position of the first object is determined based on the aesthetic effect of the second object obtained when the first object is superimposed at various positions on the original image.
Specifically, the cover generating apparatus may determine a position where the first object is located when the aesthetic effect of the second object is best as the placement position of the first object.
It will be appreciated that the second object comprises a cover page of the video or gallery. The aesthetic effect of the second object may include, but is not limited to, symmetry (or balance), stability, contrast, white space, visual guidance, golden ratio, etc. of the second object.
The symmetry of the second object may include, but is not limited to, left-right symmetry, up-down symmetry, central symmetry, diagonal symmetry, etc., and serves to keep a sense of symmetric balance between the subject object in the original image and the first object. The stability of the second object refers to the visually stable feeling brought by the centripetal force and tension of a triangular composition formed by the subject object in the original image and the first object.
For example, when the original image itself has symmetry (e.g., central symmetry), the cover generation apparatus may superimpose the first object on the original image in a symmetric arrangement such that the second object obtained by the superimposition has the same symmetry (i.e., central symmetry) as the original image.
For another example, when the original image itself does not have symmetry, the cover generation apparatus may superimpose the first object on the original image so that, in the resulting second object, the position of the subject object in the original image is roughly symmetrical to the position of the first object. For example, the cover generation apparatus may superimpose the first object on the lower half of the original image when the subject object is in the upper half of the original image.
For another example, the cover generation apparatus may superimpose the first object on the original image so that, in the resulting second object, the subject object in the original image and the first object form a triangular composition.
In one particular embodiment, the aesthetic effect may be measured by an aesthetic score D. The cover generation apparatus can thus construct an objective function: f2(x, y) = max{D}. By traversing all (x, y) in the original image, the cover generation apparatus can calculate the D corresponding to each (x, y).
Calculating the D corresponding to each (x, y) may specifically include: superimposing the first object, in the target object format, at position (x, y) of the original image to obtain a second object; inputting the second object into a pre-trained deep learning model (such as a neural network model) so that the model scores the second object aesthetically; and receiving the score D output by the model. Aesthetic scoring of the second object evaluates aesthetic effects such as its symmetry and stability; D ranges from 0 to 1, and the higher the value of D, the better the aesthetic effect of the second object.
It is understood that, if the coverage of the subject object in the original image by the first object and the aesthetic effect of the second object obtained by superimposing the first object on the original image are considered at the same time, the cover generation apparatus may construct an objective function: f3(x, y) = max{D − F}. The function value of f3(x, y) can be calculated by referring to the above descriptions of the calculation of f1(x, y) and f2(x, y).
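Combining the two criteria, f3 can be sketched as follows; `coverage` and `aesthetic_score` are assumed callables standing in for the F computation and the pre-trained scoring model, neither of which is specified further by the application:

```python
def f3_placement(positions, coverage, aesthetic_score):
    """f3(x, y) = max{D - F}: pick the candidate position whose aesthetic
    score D, minus its coverage rate F, is largest.

    positions:       iterable of (x, y) candidates to traverse
    coverage:        function (x, y) -> F in [0, 1]  (as in f1)
    aesthetic_score: function (x, y) -> D in [0, 1]  (as in f2)
    """
    return max(positions, key=lambda p: aesthetic_score(*p) - coverage(*p))
```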
It should be noted that, if the cover generation apparatus has determined multiple candidate object formats for the first object before determining its position, the cover generation apparatus may calculate the objective function value corresponding to each object format in step S12, and select the object format used for generating the cover according to those values.
When the placement position of the first object is determined, the cover page generating device may apply the background color to the original image to update the original image.
S13: generate the cover of the video or the gallery by superimposing the first object on the original image according to the placement position.
In an embodiment of the application, the cover generation apparatus generating the cover of the video or the gallery by superimposing the first object on the original image according to the placement position may specifically include: superimposing the first object on the original image according to the placement position and any one or more of the target object format, the target object size, the target object font, and the target object color, to generate the cover of the video or the gallery. A video cover may be as shown in fig. 7, and a gallery cover as shown in fig. 8.
The representation form of the cover of the video or the gallery can be an image, namely, the original image and the first object are located in the same layer in the cover of the video or the gallery. Alternatively, the first object and the original image may be distributed in an upper layer and a lower layer in the cover of the video or the gallery.
Further, the cover generation means may output the cover of the video or gallery to cause the cover to be displayed in the terminal device. The form of the cover displayed in the terminal device includes, but is not limited to, full-screen display, card form display, partial area display on the display screen of the terminal device, and the like.
As an optional implementation, the cover generation apparatus outputting the cover so that it is displayed in the terminal device may specifically include: outputting the cover according to a first dynamic effect parameter so that the cover is displayed in the terminal device with the dynamic effect indicated by that parameter. The first dynamic effect parameter indicates an entry effect and/or an exit effect of the cover of the video or the gallery. Entry effects may include, but are not limited to, shutter, fly-in, box, diamond, and chessboard effects; exit effects may include, but are not limited to, fade-out, shrink, near-far flip, and sink effects.
The first dynamic effect parameter may be a preset uniform dynamic effect parameter. Alternatively, the first dynamic effect parameter may be determined from a content tag of the video or gallery. For example, the cover page generating device may determine, as the first dynamic effect parameter, the dynamic effect parameter corresponding to the content tag of the video or the gallery according to the correspondence between the content tag and the dynamic effect parameter.
As an optional implementation, the cover generation apparatus acquiring the original image corresponding to the video or the gallery may specifically include: acquiring an original image set corresponding to the video or the gallery, where the original image set includes a plurality of original images. In this implementation, by performing steps S12 and S13 multiple times, the cover generation apparatus can generate multiple covers from the original images and the first object. Specifically, the cover generation apparatus may determine multiple placement positions of the first object according to the attributes of the original images, and superimpose the first object on the original images according to those placement positions to generate multiple covers of the video or the gallery.
Further, the cover generation apparatus may determine a target cover from the multiple covers and output the target cover so that it is displayed in the terminal device.
The determining, by the cover generation apparatus, a target cover from the plurality of covers may specifically include: the target cover is determined from the plurality of covers based on the aesthetic effects of the plurality of covers, which may include, but are not limited to, symmetry, stability, contrast, whiteout, visual guidance, golden ratio, etc. of the plurality of covers.
In one particular embodiment, the aesthetic effect may be measured by an aesthetic score. The cover generation apparatus can therefore score the covers aesthetically to obtain their respective scores, and determine the target cover from the covers according to those scores.
The cover generation apparatus scoring the covers aesthetically to obtain their respective scores may specifically include: inputting the covers into a pre-trained deep learning model (such as a neural network model) so that the model scores them aesthetically, and receiving the scores output by the model for the covers. Aesthetic scoring of the covers evaluates aesthetic effects such as their symmetry and stability; the higher the score, the better the aesthetic effect of the cover.
The cover generation apparatus determining the target cover from the covers according to their respective scores may specifically include: determining the cover with the highest score as the target cover.
As an optional embodiment, the cover generation apparatus may further repeatedly perform: acquiring the display duration of the currently displayed target cover; and, if that duration exceeds a preset duration threshold, determining a new target cover from the multiple covers and outputting the new target cover so that it is displayed in the terminal device.
The cover generation apparatus determining a new target cover from the covers may specifically include: determining the new target cover from the covers according to their respective scores.
In this case, the cover generation apparatus may automatically output the covers cyclically in descending order of score. Specifically, when there are n covers and the currently displayed target cover is ranked i-th by score: if i ≠ n, the cover generation apparatus may determine the cover ranked (i + 1)-th as the new target cover; if i = n, the cover generation apparatus may determine the cover with the highest score as the new target cover.
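This cycling rule can be written directly; a minimal sketch, assuming the covers are kept in a list sorted by descending score:

```python
def next_target_cover(covers_by_score: list, current_index: int) -> int:
    """Return the index of the new target cover: the next-ranked cover,
    wrapping back to the highest-scored cover after the last one."""
    return (current_index + 1) % len(covers_by_score)
```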
As an alternative implementation, when the target cover is displayed in the terminal device, the user may perform an image switching operation on its display interface. When detecting an image switching operation by the user on the currently displayed target cover, the cover generation apparatus may determine a new target cover from the multiple covers according to their respective scores, and output the new target cover so that it is displayed in the terminal device. The user can perform the image switching operation by touching an image switching button on the display interface of the target cover or by clicking a side button of the terminal device.
In the embodiment of the present application, when a new target cover replaces the currently displayed target cover, the currently displayed target cover and the new target cover may respectively present the exit effect and the entry effect indicated by the first dynamic effect parameter, as shown in fig. 9.
As an alternative embodiment, after the cover of the video or the gallery is displayed in the terminal device, the cover generation apparatus may further acquire user displacement information; and, if the user displacement information satisfies the preset condition for moving the first object relative to the original image, determine a second dynamic effect parameter according to the user displacement information. The second dynamic effect parameter includes a moving direction and/or a moving speed of the first object relative to the original image. The first object can then be displayed dynamically in the terminal device according to the second dynamic effect parameter, presenting a layered, parallax-like effect of the first object moving relative to the original image, as shown in fig. 10.
In this embodiment of the application, the cover generation apparatus acquiring the user displacement information may specifically include: acquiring the initial position and the current position of the user, and determining the user displacement information from them. The cover generation apparatus may acquire the initial position of the user when outputting the cover, and acquire the current position of the user after outputting the cover.
The user displacement information indicates how the current position of the user has changed relative to the initial position. A user position (both the current position and the initial position) refers to the distance and angle of the user's eyes relative to a sensing device in the terminal device, where the sensing device includes a camera or an infrared sensor. The change includes any one or more of the angle change, the distance change, and the time taken for the change of the user's eyes relative to the sensing device.
As an alternative embodiment, the cover generation apparatus may periodically acquire the current position of the user after outputting the cover (or the target cover). In this embodiment, the time taken for the change is the period at which the current position of the user is acquired.
As an optional implementation, the cover generation apparatus determining the second dynamic effect parameter according to the user displacement information may specifically include: inputting the user displacement information (any one or more of the angle change, the distance change, and the time taken for the change) into a preset dynamic effect parameter generating function, and determining the resulting function value as the second dynamic effect parameter.
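One possible shape for such a generating function is sketched below; the gains and the exact mapping are assumptions, since the application does not fix a formula:

```python
def second_dynamic_effect(angle_delta_deg, distance_delta, elapsed_s,
                          direction_gain=1.0, speed_gain=0.5):
    """Map user displacement to the first object's movement direction and
    speed, yielding the parallax-style effect of fig. 10 (sketch only)."""
    direction = direction_gain * angle_delta_deg           # sign gives direction
    speed = speed_gain * abs(distance_delta) / max(elapsed_s, 1e-6)
    return direction, speed
```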
In the embodiments of the present application, the placement position of the first object is determined dynamically according to the attributes of the original image, and the cover of the video or the gallery is generated according to that placement position, so the placement position of the first object in the generated cover is flexible and adapted to the original image. Optionally, the object format, object size, object font, and object color of the first object may also be determined dynamically and used to generate the cover, so that these attributes of the first object in the generated cover are likewise flexible and adapted to the original image.
Referring to fig. 11, fig. 11 is a schematic frame diagram of a cover generation apparatus according to an embodiment of the present application. The cover page generation apparatus 1100 may be any one of a terminal device, a chip in the terminal device, a network device, and a chip in the network device. As shown in fig. 11, the cover sheet generating apparatus 1100 may include a transceiver module 1101 and a processing module 1102.
In a specific embodiment, the processing module 1102 is configured to determine a placement position of a first object according to an attribute of an original image, the attribute including information indicating a position of a subject object in the original image, the original image being derived from a video or a gallery, the first object including one or more of the following objects: text or images;
the processing module 1102 is further configured to generate the cover of the video or the gallery by superimposing the first object on the original image according to the placement position.
As an alternative embodiment, the position of the subject object in the original image is determined according to the first contour of the subject object in the original image, and the position of the subject object includes two-dimensional coordinates of the subject object. In this embodiment, when the processing module 1102 determines the placement position of the first object according to the attribute of the original image, it is specifically configured to:
determine the placement position of the first object according to the first contour of the subject object in the original image, where the placement position does not completely overlap the two-dimensional coordinates of the subject object.
As an optional implementation manner, when the processing module 1102 determines the placement position of the first object according to the first contour of the main object, it is specifically configured to:
determining a first contour of a subject object in the original image and a second contour of the first object, the second contour being determined according to a layout and a size of the first object;
and determining the placement position of the first object according to the coincidence rate of the first contour and the second contour when the first object is superposed on each position of the original image.
As an optional implementation manner, when the processing module 1102 determines the placement position of the first object according to the attribute of the original image, it is specifically configured to:
and determining the placement position of the first object according to the attributes of the original image and the aesthetic effect of a second object obtained when the first object is superimposed at each position on the original image, wherein the aesthetic effect of the second object comprises the symmetry and/or stability of the second object, and the second object comprises the cover of the video or the gallery.
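As one hedged sketch, the symmetry of the second object can be approximated by left-right pixel similarity and the stability by how low the occupied visual mass sits in the frame; both proxy measures and the equal weighting are assumptions for illustration only:

```python
# Illustrative aesthetic score for a candidate cover (the second object).
import numpy as np

def symmetry_score(cover: np.ndarray) -> float:
    """1.0 for a perfectly left-right symmetric cover, lower otherwise."""
    mirrored = cover[:, ::-1]
    return 1.0 - np.abs(cover.astype(float) - mirrored).mean() / 255.0

def stability_score(occupied_mask: np.ndarray) -> float:
    """Higher when the occupied area's centroid sits lower in the frame,
    which reads as visually 'grounded' and therefore stable."""
    rows = np.nonzero(occupied_mask)[0]
    if rows.size == 0:
        return 0.0
    return float(rows.mean()) / occupied_mask.shape[0]

def aesthetic_score(cover: np.ndarray, occupied_mask: np.ndarray) -> float:
    # Equal weighting of symmetry and stability (an assumed design choice).
    return 0.5 * symmetry_score(cover) + 0.5 * stability_score(occupied_mask)
```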
As an optional implementation manner, the transceiver module 1101 is configured to obtain a content tag of the video or the gallery, where the content tag is used to indicate a type to which the content of the video or the gallery belongs;
the processing module 1102 is further configured to determine, according to a correspondence between content tags and object formats, an object format corresponding to the content tags of the video or the gallery as a target object format of the first object, where the target object format is used to indicate a layout of a plurality of sub-objects included in the first object and one or more of the following attributes of the first object: size, font, or color;
correspondingly, when the processing module 1102 superimposes the first object on the original image according to the placement position to generate the cover of the video or the gallery, it is specifically configured to:
and superimposing the first object on the original image in the target object format according to the placement position to generate the cover of the video or the gallery.
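A minimal sketch of the correspondence between content tags and object formats could be a lookup table of the following kind; the tag names, layouts, fonts, and colors are invented for illustration:

```python
# Illustrative correspondence between content tags and object formats.
from typing import TypedDict

class ObjectFormat(TypedDict):
    layout: str   # arrangement of the first object's sub-objects
    size: int     # font size in points
    font: str
    color: str

FORMAT_TABLE: dict[str, ObjectFormat] = {
    "travel": {"layout": "title-over-subtitle", "size": 42,
               "font": "serif", "color": "#FFFFFF"},
    "sports": {"layout": "title-left", "size": 48,
               "font": "sans-bold", "color": "#FFD700"},
    "food":   {"layout": "title-centered", "size": 40,
               "font": "handwritten", "color": "#FF6347"},
}

DEFAULT_FORMAT: ObjectFormat = {"layout": "title-centered", "size": 40,
                                "font": "sans", "color": "#FFFFFF"}

def target_object_format(content_tag: str) -> ObjectFormat:
    """Look up the object format for the video's or gallery's content tag,
    falling back to a default when the tag is unknown."""
    return FORMAT_TABLE.get(content_tag, DEFAULT_FORMAT)
```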
As an optional implementation manner, the transceiver module 1101 is further configured to obtain the illumination intensity of the environment where a terminal device is located, where the terminal device is configured to display the cover of the video or the gallery;
when the processing module 1102 determines the placement position of the first object according to the attribute of the original image, it is specifically configured to:
detecting whether a first color attribute of the original image matches the illumination intensity, wherein the first color attribute comprises any one or more of the hue, saturation, and lightness of the original image;
and performing the determining of the placement position of the first object according to the attributes of the original image when the first color attribute of the original image matches the illumination intensity.
As an optional implementation manner, the processing module 1102 is further configured to, when the first color attribute of the original image does not match the illumination intensity, perform image processing on the original image according to the illumination intensity to obtain a new original image, and then perform the determining of the placement position of the first object according to the attributes of the original image.
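For illustration, the matching check and the corrective image processing might be sketched as follows, taking lightness as the first color attribute; the 300 lux threshold and the gamma values are assumptions, not parameters taken from the present application:

```python
# Illustrative check of whether the original image's lightness matches the
# ambient illumination, plus a gamma adjustment when it does not.
import numpy as np

def mean_lightness(rgb_image: np.ndarray) -> float:
    """Average HSV value (brightness) of the image, in [0, 1]."""
    return float((rgb_image / 255.0).max(axis=-1).mean())

def matches_illumination(rgb_image: np.ndarray, ambient_lux: float) -> bool:
    """Bright surroundings call for a brighter cover; dim ones for darker."""
    lightness = mean_lightness(rgb_image)
    return lightness >= 0.5 if ambient_lux > 300 else lightness <= 0.7

def adapt_to_illumination(rgb_image: np.ndarray,
                          ambient_lux: float) -> np.ndarray:
    """Gamma-correct the original image toward the ambient illumination,
    producing the 'new original image' used for placement determination."""
    gamma = 0.8 if ambient_lux > 300 else 1.25  # brighten vs. darken
    return (np.power(rgb_image / 255.0, gamma) * 255).astype(np.uint8)
```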
As an optional implementation manner, the transceiver module 1101 is further configured to obtain size information of the subject object in the original image;
the processing module 1102 is further configured to determine, according to a correspondence between image sizes and object sizes, the object size corresponding to the size information of the subject object in the original image as the target object size of the first object, where the target object size is used to indicate the size of an image in the first object and/or the font size of text in the first object;
correspondingly, when the processing module 1102 superimposes the first object on the original image according to the placement position to generate the cover of the video or the gallery, it is specifically configured to:
and superimposing the first object on the original image in the target object size according to the placement position to generate the cover of the video or the gallery.
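One hedged sketch of the correspondence between the subject object's size and the target object size follows; the area-ratio bands and font sizes are illustrative assumptions:

```python
# Illustrative mapping from subject-object size to the first object's
# target font size: the larger the subject looms in the frame, the smaller
# the overlaid text, so it does not compete with the subject for attention.
def target_font_size(subject_area_px: int, image_area_px: int) -> int:
    ratio = subject_area_px / image_area_px
    if ratio > 0.5:
        return 32
    if ratio > 0.25:
        return 44
    return 56
```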
As an optional implementation manner, the transceiver module 1101 is further configured to obtain a content tag of the video or the gallery, where the content tag is used to indicate a type to which the content of the video or the gallery belongs;
the processing module 1102 is further configured to determine, according to a correspondence between a content tag and an object font, an object font corresponding to the content tag of the video or the gallery as a target object font corresponding to the first object, where the target object font is used to indicate a font of a text in the first object;
correspondingly, when the processing module 1102 superimposes the first object on the original image according to the placement position to generate the cover of the video or the gallery, it is specifically configured to:
and superimposing the first object on the original image in the target object font according to the placement position to generate the cover of the video or the gallery.
As an optional implementation manner, when the processing module 1102 superimposes the first object on the original image according to the placement position to generate the cover of the video or the gallery, it is specifically configured to:
determining a second color attribute of the original image, and determining a target object color corresponding to the first object according to the second color attribute, wherein the second color attribute comprises a dominant color of the original image;
and superimposing the first object on the original image in the target object color according to the placement position to generate the cover of the video or the gallery.
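A sketch of deriving the target object color from the dominant color is given below. The complementary-hue heuristic is only one possible choice, since the application requires merely that the color be determined from the second color attribute:

```python
# Illustrative derivation of the target object color from the dominant
# color of the original image, using a complementary-hue heuristic.
import colorsys

def dominant_to_object_color(dominant_rgb: tuple) -> tuple:
    """Return a color for the first object that stands out against the
    dominant color: rotate the hue 180 degrees and keep it vivid and bright."""
    r, g, b = (c / 255.0 for c in dominant_rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + 0.5) % 1.0                   # complementary hue
    s = max(s, 0.6)                       # keep the object color vivid
    v = 0.95 if v < 0.5 else max(v, 0.9)  # and bright enough to read
    return tuple(int(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))
```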
As an alternative implementation manner, the cover of the video or the gallery is displayed in the terminal device according to a first dynamic effect parameter, and the first dynamic effect parameter is used for indicating an entry effect and/or an exit effect of the cover of the video or the gallery.
As an alternative embodiment, the first dynamic effect parameter is determined according to the content tag of the video or the gallery.
As an optional implementation manner, the transceiver module 1101 is further configured to acquire an original image set, where the original image set includes a plurality of original images;
correspondingly, when the processing module 1102 determines the placement position of the first object according to the attributes of the original image and superimposes the first object on the original image according to the placement position to generate a target file, it is specifically configured to:
determining a plurality of placement positions of the first object according to the attributes of the plurality of original images, and superimposing the first object on the plurality of original images respectively according to the placement positions to generate a plurality of covers of the video or the gallery;
and determining a target cover from the plurality of covers, the target cover being used for display in the terminal device.
As an optional implementation manner, the transceiver module 1101 is further configured to acquire the display duration of the currently displayed target cover;
the processing module 1102 is further configured to determine a new target cover from the plurality of covers when the display duration of the currently displayed target cover is greater than a preset duration threshold, where the new target cover is used for display in the terminal device.
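For illustration, rotating among the generated covers once the display-duration threshold is exceeded might be sketched as follows; the 30-second threshold and the round-robin policy are assumptions:

```python
# Illustrative rotation among the covers generated from the original image
# set: when the current target cover has been displayed longer than the
# preset duration threshold, a new target cover is selected.
import time

class CoverRotator:
    def __init__(self, covers: list, threshold_s: float = 30.0):
        self.covers = covers            # covers generated from the image set
        self.threshold_s = threshold_s  # preset duration threshold
        self.index = 0
        self.shown_at = time.monotonic()

    def current_cover(self):
        """Return the target cover to display, advancing round-robin when
        the current one has been shown longer than the threshold."""
        if time.monotonic() - self.shown_at > self.threshold_s:
            self.index = (self.index + 1) % len(self.covers)
            self.shown_at = time.monotonic()
        return self.covers[self.index]
```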
As an alternative embodiment, after the cover of the video or the gallery is displayed in the terminal device, the first object in the cover is dynamically displayed according to a second dynamic effect parameter.
As an optional implementation manner, the second dynamic effect parameter is determined according to user displacement information; the second dynamic effect parameter includes a moving direction and/or a moving speed of the first object relative to the original image, and the user displacement information includes any one or more of the angle change amount of the user's eyes relative to the terminal device, the distance change amount, and the duration over which the change is formed.
In another specific embodiment, the processing module 1102 is configured to determine a format of a first object according to the content of the original image, where the first object includes one or more of the following objects: text or an image, the format being indicative of a layout of one or more sub-objects comprised by the first object or one or more of the following properties of the first object: size, font, or color, the original image being derived from a video or gallery.
The processing module 1102 is further configured to overlay the first object with the original image in the format to generate a cover of the video or the gallery.
As an optional implementation manner, the transceiver module 1101 is configured to obtain a content tag of the original image, the video or the gallery, where the content tag is used to indicate a type to which the content of the original image, the video or the gallery belongs.
In this embodiment, when the processing module 1102 determines the format of the first object according to the content of the original image, it is specifically configured to:
and determining the format of the first object according to the corresponding relation between the content tag and the object format.
As another optional implementation manner, when the processing module 1102 determines the format of the first object according to the content of the original image, it is specifically configured to:
performing image recognition on the original image to determine the content of the original image;
and determining the format of the first object according to the corresponding relation between the content and the object format.
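A hedged sketch of this recognition-based variant follows; the classifier is abstracted as a callable, and the content labels and formats in the table are invented for illustration:

```python
# Illustrative recognition-based format determination: an image classifier
# predicts the content of the original image, and the format follows from
# a content-to-object-format correspondence.
from typing import Callable

CONTENT_FORMAT_TABLE = {
    "landscape": {"layout": "title-bottom", "size": 44,
                  "font": "serif", "color": "#FFFFFF"},
    "portrait":  {"layout": "title-side", "size": 40,
                  "font": "sans", "color": "#222222"},
}

def format_from_image(original_image,
                      classify: Callable[[object], str]) -> dict:
    """Run image recognition to determine the content of the original image,
    then map the recognized content to the first object's format."""
    content = classify(original_image)  # e.g. "landscape"
    return CONTENT_FORMAT_TABLE.get(content,
                                    {"layout": "title-centered", "size": 40,
                                     "font": "sans", "color": "#FFFFFF"})
```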
As an alternative embodiment, the first object is dynamically changed. In particular, the first object dynamically changes according to time or according to the content of the original image.
Based on the same inventive concept, the principle by which the cover generation apparatus 1100 provided in this embodiment solves the problems, and its beneficial effects, are similar to those of the cover generation method embodiment shown in fig. 1 of the present application; therefore, for the implementation of the cover generation apparatus 1100, reference may be made to the implementation of the cover generation method shown in fig. 1, and repeated details are not described again.
Referring to fig. 12, fig. 12 is a schematic frame diagram of another cover generation apparatus provided in an embodiment of the present application. The cover generation apparatus 1200 may specifically be a network device or a terminal device. As shown in fig. 12, the cover generation apparatus 1200 may include: a bus 1201, a processor 1202, a memory 1203, and an input/output interface 1204. The bus 1201 interconnects the processor 1202, the memory 1203, and the input/output interface 1204, allowing these elements to communicate with one another. The memory 1203 is used to store one or more computer programs comprising computer instructions. The input/output interface 1204 is used to control the communication connection between the cover generation apparatus 1200 and other devices.
In a particular embodiment, the processor 1202 is configured to invoke the computer instructions to perform:
determining a placement location of a first object based on attributes of an original image, the attributes including information indicating a location of a subject object in the original image, the original image originating from a video or a gallery, the first object including one or more of: text or images;
and generating the cover of the video or the gallery by superimposing the first object on the original image according to the placement position.
As an alternative embodiment, the position of the subject object in the original image is determined according to the first contour of the subject object in the original image, and the position of the subject object includes two-dimensional coordinates of the subject object. In this embodiment, the processor 1202 is configured to invoke the computer instructions to perform the following steps in particular when determining the placement position of the first object according to the attributes of the original image:
and determining the placement position of the first object according to the first contour of the subject object in the original image, wherein the placement position is a position that does not completely overlap the two-dimensional coordinates of the subject object.
As an alternative embodiment, the processor 1202 is configured to invoke the computer instructions to perform the steps of determining the placement position of the first object according to the first contour of the subject object, in particular:
determining a first contour of a subject object in the original image and a second contour of the first object, the second contour being determined according to a layout and a size of the first object;
and determining the placement position of the first object according to the coincidence rate of the first contour and the second contour when the first object is superimposed at each position on the original image.
As an alternative embodiment, when determining the placement position of the first object according to the attributes of the original image, the processor 1202 is configured to invoke the computer instructions to specifically perform:
and determining the placement position of the first object according to the attributes of the original image and the aesthetic effect of a second object obtained when the first object is superimposed at each position on the original image, wherein the aesthetic effect of the second object comprises the symmetry and/or stability of the second object, and the second object comprises the cover of the video or the gallery.
As an alternative embodiment, the processor 1202 is configured to invoke the computer instructions to further perform:
acquiring a content tag of the video or the gallery, wherein the content tag is used for indicating the type of the content of the video or the gallery;
determining an object format corresponding to the content tag of the video or the gallery as a target object format of the first object according to a corresponding relationship between the content tag and the object format, where the target object format is used to indicate a layout of a plurality of sub-objects included in the first object and one or more of the following attributes of the first object: size, font, or color;
correspondingly, when superimposing the first object on the original image according to the placement position to generate the cover of the video or the gallery, the processor 1202 is configured to invoke the computer instructions to specifically perform:
and superimposing the first object on the original image in the target object format according to the placement position to generate the cover of the video or the gallery.
As an alternative embodiment, when determining the placement position of the first object according to the attributes of the original image, the processor 1202 is configured to invoke the computer instructions to specifically perform:
acquiring the illumination intensity of the environment where the terminal device is located, wherein the terminal device is used for displaying the cover of the video or the gallery;
detecting whether a first color attribute of the original image matches the illumination intensity, wherein the first color attribute comprises any one or more of the hue, saturation, and lightness of the original image;
and if the first color attribute of the original image matches the illumination intensity, determining the placement position of the first object according to the attributes of the original image.
As an alternative embodiment, the processor 1202 is configured to invoke the computer instructions to further perform:
and if the first color attribute of the original image does not match the illumination intensity, performing image processing on the original image according to the illumination intensity to obtain a new original image, and then determining the placement position of the first object according to the attributes of the original image.
As an alternative embodiment, when superimposing the first object on the original image according to the placement position to generate the cover of the video or the gallery, the processor 1202 is configured to invoke the computer instructions to specifically perform:
acquiring size information of the subject object in the original image;
determining, according to a correspondence between image sizes and object sizes, the object size corresponding to the size information of the subject object in the original image as the target object size of the first object, wherein the target object size is used to indicate the size of an image in the first object and/or the font size of text in the first object;
and superimposing the first object on the original image in the target object size according to the placement position to generate the cover of the video or the gallery.
As an alternative embodiment, when superimposing the first object on the original image according to the placement position to generate the cover of the video or the gallery, the processor 1202 is configured to invoke the computer instructions to specifically perform:
acquiring a content tag of the video or the gallery, wherein the content tag is used for indicating the type of the content of the video or the gallery;
determining, according to the correspondence between content tags and object fonts, the object font corresponding to the content tag of the video or the gallery as the target object font of the first object, wherein the target object font is used to indicate the font of the text in the first object;
and superimposing the first object on the original image in the target object font according to the placement position to generate the cover of the video or the gallery.
As an alternative embodiment, when superimposing the first object on the original image according to the placement position to generate the cover of the video or the gallery, the processor 1202 is configured to invoke the computer instructions to specifically perform:
determining a second color attribute of the original image, and determining a target object color corresponding to the first object according to the second color attribute, wherein the second color attribute comprises a dominant color of the original image;
and superimposing the first object on the original image in the target object color according to the placement position to generate the cover of the video or the gallery.
As an alternative implementation manner, the cover of the video or the gallery is displayed in the terminal device according to a first dynamic effect parameter, and the first dynamic effect parameter is used for indicating an entry effect and/or an exit effect of the cover of the video or the gallery.
As an alternative embodiment, the first dynamic effect parameter is determined according to the content tag of the video or the gallery.
As an alternative embodiment, before determining the placement position of the first object according to the attributes of the original image, the processor 1202 is configured to invoke the computer instructions to further perform:
acquiring an original image set, wherein the original image set comprises a plurality of original images;
correspondingly, when determining the placement position of the first object according to the attributes of the original image and superimposing the first object on the original image according to the placement position to generate a target file, the processor 1202 is configured to invoke the computer instructions to specifically perform:
determining a plurality of placement positions of the first object according to the attributes of the plurality of original images, and superimposing the first object on the plurality of original images respectively according to the placement positions to generate a plurality of covers of the video or the gallery;
and determining a target cover from the plurality of covers, the target cover being used for display in the terminal device.
As an alternative embodiment, the processor 1202 is configured to invoke the computer instructions to further perform:
acquiring the display duration of the currently displayed target cover;
and if the display duration of the currently displayed target cover is greater than a preset duration threshold, determining a new target cover from the plurality of covers, wherein the new target cover is used for display in the terminal device.
As an alternative embodiment, after the cover of the video or the gallery is displayed in the terminal device, the first object in the cover is dynamically displayed according to a second dynamic effect parameter.
As an optional implementation manner, the second dynamic effect parameter is determined according to user displacement information; the second dynamic effect parameter includes a moving direction and/or a moving speed of the first object relative to the original image, and the user displacement information includes any one or more of the angle change amount of the user's eyes relative to the terminal device, the distance change amount, and the duration over which the change is formed.
As an alternative embodiment, the first object is dynamically changed. In particular, the first object dynamically changes according to time or according to the content of the original image.
In a particular embodiment, the processor 1202 is configured to invoke the computer instructions to perform:
determining a format of a first object from the content of the original image, the first object comprising one or more of: text or an image, the format being indicative of a layout of one or more sub-objects comprised by the first object or one or more of the following properties of the first object: size, font, or color, the original image being derived from a video or gallery;
and superimposing the first object with the original image in the format to generate the cover of the video or the gallery.
As an alternative embodiment, the processor 1202 is configured to invoke the computer instructions to further perform:
acquiring a content tag of the original image, the video or the gallery, wherein the content tag is used to indicate the type to which the content of the original image, the video or the gallery belongs;
in this embodiment, the processor 1202 is configured to invoke the computer instructions to perform the following specifically when determining the format of the first object according to the content of the original image:
and determining the format of the first object according to the corresponding relation between the content tag and the object format.
As another alternative embodiment, the processor 1202 is configured to invoke the computer instructions to perform the following specifically when determining the format of the first object according to the content of the original image:
performing image recognition on the original image to determine the content of the original image;
and determining the format of the first object according to the corresponding relation between the content and the object format.
As an alternative embodiment, the first object is dynamically changed. In particular, the first object dynamically changes according to time or according to the content of the original image.
The processor 1202 may be a Central Processing Unit (CPU). The memory 1203 may be any type of memory such as ROM, RAM, non-volatile random access memory, and the like.
Based on the same inventive concept, the principle by which the cover generation apparatus 1200 provided in this embodiment solves the problems, and its beneficial effects, are similar to those of the cover generation method embodiment shown in fig. 1 of the present application; therefore, for the implementation of the cover generation apparatus 1200, reference may be made to the implementation of the cover generation method shown in fig. 1, and repeated details are not described again.
It is to be understood that the drawings of the embodiments of the present application show only a simplified design of the cover generation apparatus described above. In practical applications, the cover generation apparatus is not limited to the above structure.
It should be noted that the processor referred to in the foregoing embodiments of the present application may be a CPU, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination of computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The memory may be integrated in the processor or may be provided separately from the processor.
The embodiment of the present application further provides a chip, which may be connected to the memory, and is configured to read and execute the software program stored in the memory, so as to implement any one of the methods in the foregoing method embodiments.
The embodiment of the present application further provides a computer storage medium storing computer-readable instructions; when a computer reads and executes the computer-readable instructions, any one of the methods in the foregoing method embodiments is performed.
Embodiments of the present application further provide a computer program product containing a software program, which, when run on a computer, causes the computer to perform any one of the methods according to the above-mentioned method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer instructions embodied therein.
In the above embodiments, the purpose, technical means, and advantageous effects of the present application are explained in detail. It should be understood that the above description is only exemplary of the present application and is not intended to limit its scope; any modification or variation made on the basis of the technical solution of the present application shall fall within its protection scope.
Claims (27)
1. A cover generation method, comprising:
determining a placement location of a first object based on attributes of an original image, the attributes including information indicating a location of a subject object in the original image, the original image originating from a video or a gallery, the first object including one or more of: text or images;
generating a cover of the video or the gallery by superimposing the first object on the original image according to the placement position.
2. The method of claim 1, wherein the location of the subject object in the original image is determined from a first contour of the subject object in the original image, the location of the subject object comprising two-dimensional coordinates of the subject object; the determining the placement position of the first object according to the attributes of the original image comprises:
and determining the placement position of the first object according to the first contour of the subject object in the original image, wherein the placement position is a position that does not completely overlap the two-dimensional coordinates of the subject object.
3. The method of claim 2, wherein determining the placement location of the first object from the first contour of the subject object comprises:
determining a first contour of a subject object in the original image and a second contour of the first object, the second contour being determined according to a layout and a size of the first object;
and determining the placement position of the first object according to the coincidence rate of the first contour and the second contour when the first object is superimposed at each position on the original image.
4. The method of any of claims 1 to 3, wherein determining the placement location of the first object based on the attributes of the original image comprises:
determining a placement position of the first object according to attributes of the original image and aesthetic effects of a second object obtained when the first object is superimposed on each position on the original image, wherein the aesthetic effects of the second object comprise symmetry and/or stability of the second object, and the second object comprises the video or a cover of the gallery.
5. The method according to any one of claims 1 to 4, further comprising:
acquiring a content tag of the video or the gallery, wherein the content tag is used for indicating the type of the content of the video or the gallery;
determining an object format corresponding to the content tag of the video or the gallery as a target object format of the first object according to a corresponding relationship between the content tag and the object format, wherein the target object format is used for indicating a layout of a plurality of sub-objects included in the first object and one or more of the following attributes of the first object: size, font, or color;
the generating a cover of the video or the gallery by superimposing the first object on the original image according to the placement position comprises:
and overlaying the first object on the original image in the target object format according to the placement position to generate a cover of the video or the gallery.
6. The method of any of claims 1 to 5, wherein determining the placement location of the first object based on the attributes of the original image comprises:
acquiring the illumination intensity of the environment where a terminal device is located, wherein the terminal device is used for displaying the cover of the video or the gallery;
detecting whether a first color attribute of the original image matches the illumination intensity, the first color attribute comprising any one or more of hue, saturation and lightness of the original image;
and if the first color attribute of the original image is matched with the illumination intensity, determining the placement position of the first object according to the attribute of the original image.
7. The method of claim 6, further comprising:
and if the first color attribute of the original image is not matched with the illumination intensity, performing image processing on the original image according to the illumination intensity to obtain a new original image, and determining the placement position of the first object according to the attribute of the original image.
8. The method of any one of claims 1 to 5, wherein the generating a cover of the video or the gallery by superimposing the first object on the original image according to the placement position comprises:
acquiring size information of the subject object in the original image;
determining, according to a correspondence between image sizes and object sizes, the object size corresponding to the size information of the subject object in the original image as a target object size of the first object, wherein the target object size is used to indicate the size of an image in the first object and/or the font size of text in the first object;
generating a cover of the video or the gallery by superimposing the first object on the original image in the target object size according to the placement position.
9. The method of any one of claims 1 to 5, wherein the generating a cover of the video or the gallery by superimposing the first object on the original image according to the placement position comprises:
acquiring a content tag of the video or the gallery, wherein the content tag is used for indicating the type of the content of the video or the gallery;
determining, according to a correspondence between content tags and object fonts, an object font corresponding to the content tag of the video or the gallery as a target object font of the first object, wherein the target object font is used to indicate a font of text in the first object;
and generating a cover of the video or the gallery by overlaying the first object on the original image in the target object font according to the placement position.
10. The method of any one of claims 1 to 5, wherein the generating a cover of the video or the gallery by superimposing the first object on the original image according to the placement position comprises:
determining a second color attribute of the original image, and determining a target object color corresponding to the first object according to the second color attribute, wherein the second color attribute comprises a dominant color of the original image;
generating a cover of the video or the gallery by superimposing the first object on the original image in the target object color according to the placement position.
11. The method according to any one of claims 1 to 10, wherein the cover of the video or the gallery is displayed in a terminal device according to a first dynamic effect parameter, and the first dynamic effect parameter is used for indicating an entry effect and/or an exit effect of the cover of the video or the gallery.
12. The method of claim 11, wherein the first dynamic effect parameter is determined according to a content tag of the video or the gallery.
13. The method according to any of claims 1 to 12, wherein prior to said determining the placement position of the first object from the properties of the original image, the method further comprises:
acquiring an original image set, wherein the original image set comprises a plurality of original images;
the determining a placement position of a first object according to the attribute of the original image, and superimposing the first object on the original image according to the placement position to generate a target file includes:
determining a plurality of placement positions of the first object according to attributes of the plurality of original images, and superimposing the first object on the plurality of original images respectively according to the placement positions to generate a plurality of covers of the video or the gallery;
determining a target cover from the plurality of covers, the target cover being used for display in a terminal device.
14. The method of claim 13, wherein the determining a target cover from the plurality of covers comprises:
determining a target cover from the plurality of covers based on an aesthetic effect of the plurality of covers, the aesthetic effect of the plurality of covers including symmetry and/or a sense of stability of the plurality of covers.
15. The method according to claim 13 or 14, characterized in that the method further comprises:
acquiring the display duration of a currently displayed target cover;
and if the display time length of the currently displayed target cover is greater than a preset time length threshold value, determining a new target cover from the covers, wherein the new target cover is used for being displayed in the terminal equipment.
16. The method according to any one of claims 1 to 10, wherein after the cover of the video or the gallery is displayed in a terminal device, the first object in the cover is dynamically displayed according to a second dynamic effect parameter.
17. The method according to claim 16, wherein the second dynamic effect parameter is determined according to user displacement information, the second dynamic effect parameter comprises a moving direction and/or a moving speed of the first object relative to the original image, and the user displacement information comprises any one or more of the angle change amount of the user's eyes relative to the terminal device, the distance change amount, and the duration over which the change is formed.
18. The method of any one of claims 1 to 17, wherein the first object is dynamically varied.
19. The method of claim 18, wherein the first object dynamically changes as a function of time or as a function of the content of the original image.
20. A cover generation method, comprising:
determining a format of a first object from the content of the original image, the first object comprising one or more of: text or an image, the format being indicative of a layout of one or more sub-objects comprised by the first object or one or more of the following properties of the first object: size, font, or color, the original image being derived from a video or gallery;
superimposing the first object with the original image in the format to generate a cover page of the video or the gallery.
21. The method of claim 20, wherein determining the format of the first object based on the content of the original image comprises:
acquiring a content tag of the original image, the video or the gallery, wherein the content tag is used for indicating a type of the content of the original image, the video or the gallery;
and determining the format of the first object according to the corresponding relation between the content tag and the object format.
22. The method of claim 20, wherein determining the format of the first object based on the content of the original image comprises:
performing image recognition on the original image to determine the content of the original image;
and determining the format of the first object according to the corresponding relation between the content and the object format.
23. The method of any of claims 20 to 22, wherein the first object is dynamically varied.
24. The method of claim 23, wherein the first object dynamically changes as a function of time or as a function of the content of the original image.
25. A cover creation apparatus for performing the cover creation method of any one of claims 1 to 24.
26. A cover creation apparatus comprising a processor and a storage medium storing instructions that, when executed by the processor, cause the apparatus to perform a cover creation method as claimed in any one of claims 1 to 24.
27. A computer-readable storage medium storing instructions that, when executed, cause a terminal device or a network device to perform the cover generation method of any one of claims 1 to 24.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010735047.8A CN113986407A (en) | 2020-07-27 | 2020-07-27 | Cover generation method and device and computer storage medium
Publications (1)
Publication Number | Publication Date |
---|---|
CN113986407A true CN113986407A (en) | 2022-01-28 |
Family
ID=79731579
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010735047.8A Pending CN113986407A (en) | 2020-07-27 | 2020-07-27 | Cover generation method and device and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113986407A (en) |
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106569763A (en) * | 2016-10-19 | 2017-04-19 | 华为机器有限公司 | An image display method and terminal |
CN107392974A (en) * | 2017-07-13 | 2017-11-24 | 北京金山安全软件有限公司 | Picture generation method and device and terminal equipment |
CN109729426A (en) * | 2017-10-27 | 2019-05-07 | 优酷网络技术(北京)有限公司 | A kind of generation method and device of video cover image |
CN107888845A (en) * | 2017-11-14 | 2018-04-06 | 腾讯数码(天津)有限公司 | A kind of method of video image processing, device and terminal |
CN108090463A (en) * | 2017-12-29 | 2018-05-29 | 腾讯科技(深圳)有限公司 | Object control method, apparatus, storage medium and computer equipment |
CN108550101A (en) * | 2018-04-19 | 2018-09-18 | 腾讯科技(深圳)有限公司 | Image processing method, device and storage medium |
CN108833939A (en) * | 2018-06-20 | 2018-11-16 | 北京优酷科技有限公司 | Generate the method and device of the poster of video |
CN108989609A (en) * | 2018-08-10 | 2018-12-11 | 北京微播视界科技有限公司 | Video cover generation method, device, terminal device and computer storage medium |
CN109257645A (en) * | 2018-09-11 | 2019-01-22 | 传线网络科技(上海)有限公司 | Video cover generation method and device |
CN109361852A (en) * | 2018-10-18 | 2019-02-19 | 维沃移动通信有限公司 | A kind of image processing method and device |
CN109816743A (en) * | 2018-12-19 | 2019-05-28 | 华为技术有限公司 | Generate the method and terminal device of identification pattern |
CN109992697A (en) * | 2019-03-27 | 2019-07-09 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN109996091A (en) * | 2019-03-28 | 2019-07-09 | 苏州八叉树智能科技有限公司 | Generate method, apparatus, electronic equipment and the computer readable storage medium of video cover |
CN110602554A (en) * | 2019-08-16 | 2019-12-20 | 华为技术有限公司 | Cover image determining method, device and equipment |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114708363A (en) * | 2022-04-06 | 2022-07-05 | 广州虎牙科技有限公司 | Game live broadcast cover generation method and server |
CN114708363B (en) * | 2022-04-06 | 2025-03-28 | 广州虎牙科技有限公司 | Game live broadcast cover generation method and server |
WO2024016103A1 (en) * | 2022-07-18 | 2024-01-25 | 京东方科技集团股份有限公司 | Image display method and apparatus |
CN116860247A (en) * | 2023-08-31 | 2023-10-10 | 江西省信息中心(江西省电子政务网络管理中心 江西省信用中心 江西省大数据中心) | User interface generation method and device, storage medium and electronic equipment |
CN116860247B (en) * | 2023-08-31 | 2023-11-21 | 江西省信息中心(江西省电子政务网络管理中心 江西省信用中心 江西省大数据中心) | User interface generation method and device, storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109618222B (en) | A kind of splicing video generation method, device, terminal device and storage medium | |
US10049477B1 (en) | Computer-assisted text and visual styling for images | |
CN108780389B (en) | Image retrieval for computing devices | |
CN111654635A (en) | Shooting parameter adjusting method and device and electronic equipment | |
US11256919B2 (en) | Method and device for terminal-based object recognition, electronic device | |
EP2560145A2 (en) | Methods and systems for enabling the creation of augmented reality content | |
US9176748B2 (en) | Creating presentations using digital media content | |
CN106203286B (en) | Augmented reality content acquisition method and device and mobile terminal | |
JP2006331393A (en) | Album creating apparatus, album creating method and program | |
WO2013023705A1 (en) | Methods and systems for enabling creation of augmented reality content | |
JP2022166078A (en) | Composing and realizing viewer's interaction with digital media | |
CN111541907B (en) | Article display method, apparatus, device and storage medium | |
US20220174237A1 (en) | Video special effect generation method and terminal | |
CN111638784B (en) | Facial expression interaction method, interaction device and computer storage medium | |
CN105519101A (en) | Recognition interfaces for computing devices | |
CN109658486B (en) | Image processing method and device, and storage medium | |
WO2020125481A1 (en) | Method for generating identification pattern, and terminal device | |
CN112232260A (en) | Subtitle area identification method, device, device and storage medium | |
CN113986407A (en) | Cover generation method and device and computer storage medium | |
CN114564131A (en) | A content publishing method, device, computer equipment and storage medium | |
CN109074680A (en) | Realtime graphic and signal processing method and system in augmented reality based on communication | |
CN112907702A (en) | Image processing method, image processing device, computer equipment and storage medium | |
US20230334791A1 (en) | Interactive reality computing experience using multi-layer projections to create an illusion of depth | |
CN111445439B (en) | Image analysis method, device, electronic equipment and medium | |
CN113794799A (en) | Video processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |