
CN108737852A - Video processing method, terminal, and device with storage function - Google Patents

Video processing method, terminal, and device with storage function

Info

Publication number
CN108737852A
Authority
CN
China
Prior art keywords
video frame, video, target, specific, processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810388139.6A
Other languages
Chinese (zh)
Inventor
刘晨晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tinno Mobile Technology Co Ltd
Shenzhen Tinno Wireless Technology Co Ltd
Original Assignee
Shenzhen Tinno Mobile Technology Co Ltd
Shenzhen Tinno Wireless Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tinno Mobile Technology Co Ltd, Shenzhen Tinno Wireless Technology Co Ltd filed Critical Shenzhen Tinno Mobile Technology Co Ltd
Priority to CN201810388139.6A priority Critical patent/CN108737852A/en
Publication of CN108737852A publication Critical patent/CN108737852A/en
Withdrawn legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

This application discloses a video processing method, a terminal, and a device with a storage function. The method includes: acquiring a video to be processed, where the video to be processed includes a plurality of video frames; extracting at least one specific video frame from the plurality of video frames in the video to be processed; and synthesizing all or part of the information in the specific video frame into each video frame before the specific video frame. In this way, the application can reduce the difficulty of making a video that contains clones of the same object.

Description

Video processing method, terminal and device with storage function
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video processing method, a terminal, and a device with a storage function.
Background
With the rise of social networks, users can upload all kinds of entertaining short videos to the platforms of various applications (e.g., WeChat, QQ, Douyin, Kuaishou, etc.). For example, inspired by the clone effects in Hollywood blockbusters such as the X-Men films and the shadow-clone technique in the Japanese anime Naruto, a user may wish to turn a captured video into a video in which several clones of the same object (e.g., a person, an animal, etc.) appear at the same time.
The inventor of the present application has found, over a long period of research and development, that the existing process of making such a video containing several clones of one object is complicated.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a video processing method, a terminal, and a device with a storage function, which can reduce the difficulty of making a video containing clones of the same object.
In order to solve the technical problem, the application adopts a technical scheme that: there is provided a video processing method, the processing method comprising: acquiring a video to be processed, wherein the video to be processed comprises a plurality of video frames; extracting at least one specific video frame from a plurality of video frames in the video to be processed; and synthesizing all or part of the information in the specific video frame into each video frame before the specific video frame.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a terminal comprising a processor, a memory, a communication circuit, and a display, wherein the processor is coupled to the memory, the communication circuit, and the display respectively, and the processor, the memory, the communication circuit, and the display, when in operation, are capable of implementing the steps of any one of the methods above.
In order to solve the above technical problem, the present application adopts another technical solution: there is provided a device having a storage function, on which program data are stored, which program data, when being executed by a processor, carry out the steps of any of the methods described above.
The beneficial effects of the present application are as follows. Different from the prior art, the video processing method provided by the present application extracts at least one specific video frame from the plurality of video frames of the video to be processed and synthesizes all or part of the information in the specific video frame into each video frame before the specific video frame; that is, each video frame before the specific video frame contains all or part of the information in the specific video frame, thereby achieving a clone effect. This way of producing a clone video is simple, reducing the difficulty of making a video that contains clones.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts. Wherein:
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a video processing method according to the present application;
FIG. 2 is a schematic flow chart of one embodiment of step S103 in FIG. 1;
FIG. 3 is a schematic flow chart of one embodiment of step S201 in FIG. 2;
FIG. 4 is a schematic flow chart of another embodiment of step S103 in FIG. 1;
Fig. 5 is a schematic diagram of an embodiment of processing a video to be processed by using the video processing method of the present application, where fig. 5 (a) is a schematic diagram of an embodiment of a video frame located before a first specific video frame, fig. 5 (b) is a schematic diagram of an embodiment of a video frame located between the first specific video frame and a second specific video frame, fig. 5 (c) is a schematic diagram of an embodiment of a video frame located between the second specific video frame and a third specific video frame, and fig. 5 (d) is a schematic diagram of an embodiment of a video frame located after the third specific video frame;
FIG. 6 is a schematic block diagram of an embodiment of a terminal of the present application;
fig. 7 is a schematic structural diagram of an embodiment of the device with a storage function according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a video processing method according to an embodiment of the present application, the video processing method including:
s101: and acquiring a video to be processed, wherein the video to be processed comprises a plurality of video frames.
Specifically, the video to be processed may be captured by a device with a video-capture function (e.g., a mobile phone, a camera, a tablet, etc.). The video to be processed is in essence made up of individual still pictures, which are referred to as video frames. Because the temporal resolution of the human eye is limited, when the number of video frames shown per unit time exceeds a certain threshold (for example, 25 video frames per second), the human eye perceives the pictures as continuous motion.
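For illustration only, this acquisition step can be sketched in Python with OpenCV; the library choice and the file name pending.mp4 are assumptions made for the sketch, not part of the disclosure:

```python
# Sketch of step S101: read a video to be processed into its video frames.
# OpenCV and the file name are assumptions for illustration only.
import cv2

def load_video_frames(path):
    """Return the list of still pictures (video frames) and the frame rate."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()  # each frame is one still picture (a BGR image)
        if not ok:
            break
        frames.append(frame)
    fps = cap.get(cv2.CAP_PROP_FPS)  # e.g. 25 fps suffices for perceived motion
    cap.release()
    return frames, fps

# frames, fps = load_video_frames("pending.mp4")  # hypothetical file name
```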
S102: at least one specific video frame is extracted from a plurality of video frames in a video to be processed.
Specifically, the number of specific video frames may be one, two, three, and so on, which the present application does not limit; different specific video frames appear at different times in the video to be processed.
In one embodiment, the specific video frame may be extracted by playing the video to be processed at slow, normal, or fast speed; when a video frame that the user considers necessary is reached, the user pauses playback and copies the video frame corresponding to the currently paused picture, thereby obtaining the specific video frame. In other application scenarios, the specific video frame may be extracted in other ways, which the present application does not limit.
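In code, pausing playback and copying the current picture amounts to recording the indices of the paused frames and copying those frames; a minimal sketch, assuming the user's pause points are available as frame indices:

```python
# Sketch of extracting specific video frames; the indices stand in for the
# user's pause points and are hypothetical.
def extract_specific_frames(frames, pause_indices):
    """Copy the video frames at the user's pause points."""
    return [frames[i].copy() for i in pause_indices]

# e.g. three pauses: specific = extract_specific_frames(frames, [120, 240, 360])
```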
S103: all or part of the information in the specific video frame is synthesized into each video frame before the specific video frame.
Specifically, in one embodiment, the information in a specific video frame that needs to be synthesized into each video frame before that specific video frame is defined as the target, and the information in the specific video frame that does not need to be synthesized is defined as the background. To obtain a better visual effect, in one application scenario the target is a part of the video to be processed whose form (e.g., posture, position, etc.) changes, and the background is the remaining part, whose form may or may not change. Referring to fig. 2, fig. 2 is a schematic flowchart of an embodiment of step S103 in fig. 1, where step S103 specifically includes:
s201: objects in a particular video frame are extracted.
In one application scenario, the target is the movable object that occupies the largest field of view in the video to be processed. It may be determined by visual observation or in other ways; for example, the picture area occupied by each movable object (e.g., person, animal, etc.) in the video to be processed can be calculated, and the object occupying the largest area selected as the target. Of course, in other application scenarios the target may be selected by the user, in which case it need not be the object occupying the largest field of view.
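One possible automation of this "largest field of view" heuristic is background subtraction followed by a contour-area comparison; the MOG2 subtractor and the area measure below are assumed choices, shown only as a sketch, not the disclosure's prescribed method:

```python
# Sketch of selecting the target as the movable object with the largest area.
import cv2
import numpy as np

def largest_moving_region(frames):
    """Return the contour of the largest moving region across the video."""
    subtractor = cv2.createBackgroundSubtractorMOG2()
    acc = None
    for frame in frames:
        fg = subtractor.apply(frame)                        # moving pixels
        fg = np.where(fg == 255, 255, 0).astype(np.uint8)   # drop shadow pixels
        acc = fg if acc is None else cv2.bitwise_or(acc, fg)
    contours, _ = cv2.findContours(acc, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)               # candidate target
```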
At present, a video to be processed generally consists of a sequence of color video frames, and the color information in these frames may interfere with extracting the target. Referring to fig. 3, fig. 3 is a flowchart of an embodiment of step S201 in fig. 2, where step S201 specifically includes:
s301: and converting the color image of the specific video frame into a gray-scale image.
Specifically, the purpose of converting the color image into the grayscale image is to remove color information interference and facilitate extracting the target, and any one of the prior art (e.g., averaging, weighted averaging, binarization, etc.) may be used as the method for converting the color image into the grayscale image, and the detailed description of the application is omitted here.
S302: and obtaining the area corresponding to the target from the gray-scale image.
Specifically, in one embodiment, the region corresponding to the target may be selected manually from the grayscale image. In another embodiment, at least two specific video frames are selected; because the object's form (e.g., posture, position, etc.) differs between them, the object in each specific video frame differs in form from the object in the other specific video frames, and the region corresponding to the object in a specific video frame can be obtained by simply superimposing (for example, differencing) the grayscale images.
S303: and extracting the target from the color image according to the region corresponding to the target.
In other embodiments, the object in the specific video frame may be extracted in step S201 in other ways, which the present application does not limit; for example, the color image of the specific video frame may be converted into a grayscale image, the object extracted from the grayscale image, and the extracted object then converted back from grayscale to color.
S202: the target is superimposed on each video frame preceding the particular video frame.
Specifically, in one application scenario, each video frame located before the specific video frame is defined as a first video frame. The target may be superimposed onto the first video frame in the same layer: the first video frame itself is changed, and part of its information is replaced by the information contained in the target. The target is located at a first position in the specific video frame and at a second position in the superimposed first video frame, and the first position coincides with the second position; that is, when the two positions are projected into the same coordinate system, their coordinate information is identical.
In another application scenario, the target and the first video frame may instead be placed in different layers: the target is arranged directly above or directly below the first video frame, and after superimposition the target is orthographically projected onto a second position in the first video frame, where the first position and the second position again coincide when projected into the same coordinate system. In other application scenarios, the target may be superimposed into the first video frame in still other ways, which the present application does not limit.
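A minimal sketch of the same-layer variant of S202, reusing the target and mask from the extraction sketch above: pixels of the first video frame inside the target's region are replaced by the target's pixels, so the first position and the second position coincide by construction.

```python
# Sketch of step S202 (same-layer variant): replace part of the first video
# frame's information with the target's information at the same coordinates.
def superimpose(first_frame, target, mask):
    out = first_frame.copy()
    region = mask > 0             # region corresponding to the target
    out[region] = target[region]  # positions coincide: same pixel coordinates
    return out
```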
In another application scenario, when the background in the video to be processed is still, step S202 in the above embodiment may also superimpose the target and the background together into each video frame before the specific video frame, which is not limited in this application.
Specifically, in another embodiment, the number of the specific video frames is at least two, for example, the specific video frames include a first specific video frame and a second specific video frame, and the second specific video frame is located after the first specific video frame in the video frames of the video to be processed, that is, in the video to be processed, the second specific video frame occurs later than the first specific video frame, please refer to fig. 4, in which step S103 in the above embodiment includes:
s401: a first object in a first particular video frame is extracted.
Specifically, the method for obtaining the first target is the same as that in step S201 in the above embodiment, and is not repeated here.
S402: a second object in a second particular video frame is extracted.
Specifically, the method for obtaining the second target is the same as that in step S201 in the above embodiment, and is not repeated here. The first target and the second target correspond to different forms (postures, positions, etc.) of the same object.
S403: and correspondingly overlaying the first target and the second target to each video frame before the first specific video frame and the second specific video frame respectively.
In an application scenario, step S403 includes: (1) superimposing the first target into each video frame before the first specific video frame to generate the video frames of a first processed video, where the first processed video includes the video frames on which the first target is superimposed; (2) in the video frames of the first processed video, superimposing the second target into each video frame before the second specific video frame to generate the video frames of a second processed video, where the second processed video includes the video frames on which both the first target and the second target are superimposed and the video frames on which only the second target is superimposed. The manner of superimposing the first target and the second target onto a video frame is described in step S202 above and is not repeated here. Of course, in other application scenarios the second target may be superimposed first and the first target afterwards, which the present application does not limit.
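This first scenario of step S403 can be sketched as two passes over the frame list, reusing the superimpose helper above; the indices idx1 < idx2 of the two specific frames are assumptions:

```python
# Sketch of S403, scenario one: overlay the first target, then the second.
def overlay_two_targets(frames, idx1, tgt1, msk1, idx2, tgt2, msk2):
    # (1) first processed video: first target into every frame before idx1
    out = [superimpose(f, tgt1, msk1) if i < idx1 else f.copy()
           for i, f in enumerate(frames)]
    # (2) second processed video: second target into every frame before idx2
    return [superimpose(f, tgt2, msk2) if i < idx2 else f
            for i, f in enumerate(out)]
```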
In another application scenario, step S403 includes: (1) superimposing the first target and the second target to form a target image, where the target image contains all the information of the first target and the second target; (2) superimposing the target image into each video frame before the second specific video frame to generate the video frames of a first processed video, in the manner of step S202 above, which is not repeated here; (3) processing each video frame of the first processed video that lies between the first specific video frame and the second specific video frame so that the first target is not displayed. In one embodiment, when the target image lies directly above the video frame, the transparency of the area of the target image containing the first target can be set to 100% so that the first target is not displayed; in another embodiment, the information in the area of the target image corresponding to the first target can be replaced by the background information of the corresponding video frame in the video to be processed; in other embodiments, the same effect may be achieved in other ways, which the present application does not limit.
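The second scenario can be sketched as follows, using the background-replacement variant of "not displayed"; the helper functions and frame indices are the same assumptions as in the earlier sketches:

```python
# Sketch of S403, scenario two: one combined target image, with the first
# target hidden between the two specific frames by restoring the background.
import cv2
import numpy as np

def overlay_with_target_image(frames, idx1, tgt1, msk1, idx2, tgt2, msk2):
    # (1) the target image holds all information of both targets
    target_image = np.where(msk2[..., None] > 0, tgt2, tgt1)
    combined = cv2.bitwise_or(msk1, msk2)
    only_first = cv2.bitwise_and(msk1, cv2.bitwise_not(msk2))

    out = []
    for i, frame in enumerate(frames):
        if i >= idx2:                 # frames after the second specific frame
            out.append(frame.copy())
            continue
        # (2) superimpose the target image into the frame
        composed = superimpose(frame, target_image, combined)
        if idx1 <= i < idx2:
            # (3) do not display the first target: put the background
            # information from the unprocessed frame back into its region
            composed[only_first > 0] = frame[only_first > 0]
        out.append(composed)
    return out
```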
In another embodiment, superposition errors or other problems may inevitably occur after the video to be processed has been processed by the above video processing method, so the method provided in the present application further includes: receiving an instruction to reprocess the video to be processed, and reprocessing it; or receiving an instruction to finish processing, and outputting the processed video. In one application scenario, the instruction is issued by the user; that is, the user judges whether the processed visual effect is acceptable, and only a processed video that satisfies the user's visual expectations is output. In another application scenario, the instruction is issued by the device itself: in one embodiment, the processed video is uploaded to a database, the database later summarizes the current user's visual-effect preferences from a large amount of accumulated data, and the device automatically judges whether the visual effect is satisfactory according to those preferences.
The video processing method provided by the present application is further described below with a specific application scenario, and the method includes the following processes:
(1) recording a section of video to be processed by using equipment such as a mobile phone;
(2) selecting a specific video frame from a video to be processed, and supposing that three specific video frames are selected, wherein the three specific video frames are respectively a first specific video frame, a second specific video frame and a third specific video frame, the first specific video frame is positioned before the second specific video frame, and the second specific video frame is positioned before the third specific video frame;
(3) extracting a first object A from the first specific video frame, a second object B from the second specific video frame, and a third object C from the third specific video frame, wherein the first object A, the second object B, and the third object C correspond to different forms (e.g., postures, positions, etc.) of the same object (e.g., a person, an animal, etc.);
(4) and correspondingly overlaying the first target A, the second target B and the third target C to each video frame before the first specific video frame, the second specific video frame and the third specific video frame respectively.
Referring to fig. 5, in fig. 5 the first object A, the second object B, and the third object C all correspond to the same girl, and everything outside the girl is the background, which includes a static part (e.g., a teaching building, etc.) and a non-static part (e.g., the other children on the playground). Fig. 5 (a) is a schematic diagram of an embodiment of a video frame before the first specific video frame, which contains the first object A, the second object B, and the third object C. Fig. 5 (b) is a schematic diagram of an embodiment of a video frame between the first specific video frame and the second specific video frame, which contains the second object B and the third object C. Fig. 5 (c) is a schematic diagram of an embodiment of a video frame between the second specific video frame and the third specific video frame, which contains the third object C. Fig. 5 (d) is a schematic diagram of an embodiment of a video frame after the third specific video frame, in which the first object A, the second object B, and the third object C are all absent.
(5) Checking whether the visual effect of the processed video is acceptable; if not, editing again; if acceptable, outputting the processed video.
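Tying the scenario together, a consolidated sketch built from the helper functions in the earlier sketches; the file names, pause indices, and the choice of the last frame as an extraction reference are all assumptions:

```python
# End-to-end sketch of the application scenario, steps (1)-(5), reusing
# load_video_frames, extract_target, and superimpose from the sketches above.
import cv2

def make_clone_video(src="playground.mp4", dst="clones.mp4",
                     pauses=(120, 240, 360)):
    frames, fps = load_video_frames(src)          # (1) acquire the recording
    i1, i2, i3 = pauses                           # (2) three specific frames
    ref = frames[-1]  # assumed reference frame with the object elsewhere
    # (3) extract targets A, B, C in their different forms
    tgtA, mskA = extract_target(frames[i1], ref)
    tgtB, mskB = extract_target(frames[i2], ref)
    tgtC, mskC = extract_target(frames[i3], ref)
    # (4) overlay each target into every frame before its specific frame
    out = frames
    for idx, tgt, msk in ((i1, tgtA, mskA), (i2, tgtB, mskB), (i3, tgtC, mskC)):
        out = [superimpose(f, tgt, msk) if i < idx else f
               for i, f in enumerate(out)]
    # (5) once the visual effect is judged acceptable, output the result
    h, w = out[0].shape[:2]
    writer = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for f in out:
        writer.write(f)
    writer.release()
```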
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of a terminal according to the present application, where the terminal includes a processor 100, a memory 102, a communication circuit 104, and a display 106, the processor 100 is respectively coupled to the memory 102, the communication circuit 104, and the display 106, and the processor 100, the memory 102, the communication circuit 104, and the display 106 are capable of implementing steps of a video processing method in any of the embodiments when in operation. In one application scenario, the terminal may be a mobile phone, a tablet, a computer, or the like.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of an apparatus with storage function according to the present application, in which the apparatus 20 stores program data 200, and the program data 200 is executed by a processor to implement the steps in the video processing method according to any of the embodiments.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A video processing method, characterized in that the processing method comprises:
acquiring a video to be processed, wherein the video to be processed comprises a plurality of video frames;
extracting at least one specific video frame from a plurality of video frames in the video to be processed;
and synthesizing all or part of the information in the specific video frame into each video frame before the specific video frame.
2. The processing method according to claim 1, wherein information in the specific video frame that needs to be synthesized into each video frame before the specific video frame is defined as a target, information in the specific video frame that does not need to be synthesized into each video frame before the specific video frame is defined as a background, and the synthesizing all or part of the information in the specific video frame into each video frame before the specific video frame comprises:
extracting the target in the specific video frame;
superimposing the target into each video frame preceding the particular video frame.
3. The processing method according to claim 2, wherein the video frame is a color image, and the extracting the object in the specific video frame comprises:
converting the color image of the specific video frame into a gray-scale image;
obtaining a region corresponding to the target from the gray-scale image;
and extracting the target from the color image according to the region corresponding to the target.
4. The processing method according to claim 2,
the target is located at a first position in the particular video frame;
each video frame preceding the specific video frame is defined as a first video frame, and after the target is superimposed into the first video frame, the target is located at a second position in the first video frame or is orthographically projected at the second position in the first video frame;
wherein the first position coincides with the second position.
5. The processing method according to claim 2, wherein the specific video frame comprises a first specific video frame and a second specific video frame, and wherein the second specific video frame is located after the first specific video frame in the video frames, and the synthesizing all or part of the information in the specific video frame into each video frame located before the specific video frame comprises:
extracting a first target in the first specific video frame;
extracting a second target in the second specific video frame, wherein the first target and the second target correspond to different forms of the same object;
correspondingly overlaying the first target and the second target to each video frame before the first specific video frame and the second specific video frame respectively.
6. The processing method according to claim 5, wherein said superimposing the first object and the second object respectively into each video frame preceding the first specific video frame and the second specific video frame comprises:
superimposing the first target into each video frame preceding the first particular video frame to produce a video frame of a first processed video, wherein the first processed video comprises the video frame on which the first target is superimposed; and
in the video frames of the first processed video, the second object is superimposed into each video frame preceding the second specific video frame to generate video frames of a second processed video, wherein the second processed video includes the video frames on which the first object and the second object are superimposed and the video frames on which the second object is superimposed.
7. The processing method according to claim 5, wherein said superimposing the first object and the second object respectively into each video frame preceding the first specific video frame and the second specific video frame comprises:
superimposing the first target and the second target to form a target image;
superimposing the target image into each video frame preceding the second particular video frame to produce a video frame of a first processed video;
processing each video frame of the video frames of the first processed video that is located between the first particular video frame and the second particular video frame such that the first target is not displayed.
8. The method of claim 1, wherein after the synthesizing all or part of the information in the specific video frame into each video frame before the specific video frame, the method further comprises:
receiving an instruction for reprocessing the video to be processed, and reprocessing the video to be processed; or,
and receiving an instruction for finishing the processing of the video to be processed, and outputting the processed video to be processed.
9. A terminal, characterized in that the terminal comprises: a processor, a memory, a communication circuit, and a display, the processor being coupled to the memory, the communication circuit, and the display, respectively, the processor, the memory, the communication circuit, and the display being operable to implement the steps of the method of any of claims 1-8.
10. An apparatus having a storage function, on which program data are stored, characterized in that the program data realize the steps in the method of any of claims 1-8 when executed by a processor.
CN201810388139.6A 2018-04-26 2018-04-26 Video processing method, terminal, and device with storage function Withdrawn CN108737852A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810388139.6A CN108737852A (en) 2018-04-26 2018-04-26 Video processing method, terminal, and device with storage function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810388139.6A CN108737852A (en) 2018-04-26 2018-04-26 Video processing method, terminal, and device with storage function

Publications (1)

Publication Number Publication Date
CN108737852A (en) 2018-11-02

Family

ID=63939970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810388139.6A Withdrawn CN108737852A (en) Video processing method, terminal, and device with storage function

Country Status (1)

Country Link
CN (1) CN108737852A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110225241A (en) * 2019-04-29 2019-09-10 努比亚技术有限公司 A kind of video capture control method, terminal and computer readable storage medium
CN110807407A (en) * 2019-10-30 2020-02-18 东北大学 A Feature Extraction Method for Highly Approximate Dynamic Objects in Video
CN110807407B (en) * 2019-10-30 2023-04-18 东北大学 Feature extraction method for highly approximate dynamic target in video
CN111832539A (en) * 2020-07-28 2020-10-27 北京小米松果电子有限公司 Video processing method and device and storage medium
WO2023036160A1 (en) * 2021-09-07 2023-03-16 上海商汤智能科技有限公司 Video processing method and apparatus, computer-readable storage medium, and computer device

Similar Documents

Publication Publication Date Title
EP3545686B1 (en) Methods and apparatus for generating video content
JP7146662B2 (en) Image processing device, image processing method, and program
CN106973228B (en) Shooting method and electronic equipment
CN113228625A (en) Video conference supporting composite video streams
EP1703730A1 (en) Method and apparatus for composing images during video communications
CN113973190A (en) Video virtual background image processing method and device and computer equipment
CN108737852A (en) A kind of method for processing video frequency, terminal, the device with store function
CN109509146A (en) Image split-joint method and device, storage medium
CN105939497B (en) Media streaming system and media streaming method
KR102203109B1 (en) Method and apparatus of processing image based on artificial neural network
US11716539B2 (en) Image processing device and electronic device
US20230166157A1 (en) Electronic apparatus and control method therefor
US12217368B2 (en) Extended field of view generation for split-rendering for virtual reality streaming
CN113691737B (en) Video shooting method, equipment and storage medium
EP3196838A1 (en) An apparatus and associated methods
CN102542300A (en) Method for automatically recognizing human body positions in somatic game and display terminal
WO2020059327A1 (en) Information processing device, information processing method, and program
CN112887653B (en) Information processing method and information processing device
CN107580228B (en) Monitoring video processing method, device and equipment
CN105261041A (en) Information processing method and electronic device
CN105094614B (en) Method for displaying image and device
US10937174B2 (en) Image processing device, image processing program, and recording medium
CN112672057B (en) Shooting method and device
CN115801983A (en) Image superposition method and device and electronic equipment
CN115835035A (en) Image frame interpolation method, device and equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20181102)