
WO2022130514A1 - Video processing method, video processing device, and program - Google Patents


Info

Publication number: WO2022130514A1
Authority: WO (WIPO, PCT)
Application number: PCT/JP2020/046825
Other languages: French (fr), Japanese (ja)
Inventors: 裕 千明, 仁 山口
Original assignee: 日本電信電話株式会社 (Nippon Telegraph and Telephone Corporation)
Prior art keywords: video, display, processed, area, processing

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/115: Selection of the code volume for a coding unit prior to coding
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/167: Position within a video image, e.g. region of interest [ROI]
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/238: Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4728: End-user interface for requesting content, additional data or services; for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region

Definitions

  • The present disclosure relates to a video processing method, a video processing device, and a program for displaying a video on a display.
  • Display technology has advanced: displays are now highly portable, and displays of various shapes, including circles and polygons as well as rectangles, are known. It is also known that a single virtual display composed of a plurality of displays can show an entire video, each display showing a part of it. Further, as described in Non-Patent Document 1, a plurality of mobile displays, each mounted on a moving object, can likewise constitute one virtual display that shows the entire video, with each display showing the part of the video corresponding to its position.
  • An object of the present disclosure, made in view of these circumstances, is to provide a video processing method, a video processing device, and a program capable of displaying an appropriate video while reducing the amount of video data.
  • The video processing method includes: a step of acquiring a video; a step of generating a processed video by applying data amount reduction processing to a processing area of the video that differs from a non-processing area based on the display area shown on the display of a video display device, while leaving the non-processing area unprocessed; a step of encoding the processed video so that its data amount is reduced; and a step of transmitting to the video display device the encoded video together with setting information including the position of the display area in the video and the enlargement ratio of the display area when it is shown on the display.
  • The video processing device includes: an acquisition unit that acquires a video; a video processing unit that generates a processed video by applying data amount reduction processing to a processing area that differs from a non-processing area based on the display area shown on the display of a video display device, while leaving the non-processing area unprocessed; an encoder that encodes the processed video so that its data amount is reduced; and a transmission unit that transmits to the video display device the processed video encoded by the encoder together with setting information including the position of the display area in the video and the enlargement ratio of the display area when it is shown on the display.
  • The program according to the present disclosure causes a computer to function as the above-described video processing device.
  • According to the video processing method, the video processing device, and the program of the present disclosure, it is possible to display an appropriate video while reducing the amount of video data.
  • The drawings include: a flowchart for explaining the operation of the video processing device; a schematic diagram showing another example of a processed video; a functional block diagram of the video processing system according to the third embodiment of the present disclosure; a diagram for explaining another example of determining a non-processing area; a schematic diagram showing the non-processing area and the processing area determined by that example; a flowchart for explaining the operation of the video processing device; and a hardware block diagram of the video processing device and the video display device according to the first to third embodiments.
  • FIG. 1 is a functional block diagram of the video processing system 100 according to the first embodiment of the present invention.
  • The video processing system 100 includes a camera 1, a video processing device 2, and a video display device 3.
  • The camera 1 and the video processing device 2 communicate with each other via a communication cable or a communication network.
  • The video processing device 2 and the video display device 3 communicate with each other via a communication network.
  • The video processing system 100 need not include the camera 1; in such a configuration, the video processing device 2 may generate the video by any method, such as computer graphics. The video processing device 2 may also acquire a video generated by another information processing device.
  • "Acquiring a video" may therefore include the video processing device 2 generating a video as well as acquiring a video generated by the camera 1 or another information processing device.
  • In the following, an example of acquiring a video generated by the camera 1 is described.
  • The camera 1 captures the main subject OB and the background of the main subject OB included in its shooting area, thereby generating a video CV that includes an image OBI of the main subject OB and a background image.
  • The shooting area can be, for example, a rectangular area with a predetermined aspect ratio.
  • The camera 1 transmits the video CV to the video processing device 2.
  • The camera 1 may transmit the video CV to the video processing device 2 using the SDI (Serial Digital Interface) communication standard, the HDMI (registered trademark) (High-Definition Multimedia Interface) communication standard, or any other communication standard.
  • The video processing device 2 includes an acquisition unit 21, a display setting holding unit 22, a video processing unit 23, an encoder 24, and a transmission unit 25.
  • The acquisition unit 21 and the transmission unit 25 are configured by communication interfaces.
  • The display setting holding unit 22 is composed of a memory, for example a semiconductor memory, a magnetic memory, or an optical memory.
  • The video processing unit 23 and the encoder 24 form a control unit (controller).
  • The control unit may be configured by dedicated hardware such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field-Programmable Gate Array), by a processor, or by a combination of both.
  • The acquisition unit 21 acquires the video CV generated by the camera 1, shown in FIG. 2, which is transmitted from the camera 1.
  • The acquisition unit 21 can acquire the video CV using the communication standard corresponding to the standard used by the camera 1 to transmit it, as described above.
  • The display setting holding unit 22 holds the setting information used by the display 34 (described later) of the video display device 3 to display the video.
  • The setting information includes the position of the display area R1 in the video CV and the enlargement ratio of the display area R1 when the video display device 3 displays it, as shown in FIG.
  • The enlargement ratio of the display area R1 is determined by the real-space size of the display 34 included in the video display device 3.
  • The display area R1 is the area of the video CV that is displayed on the display 34.
  • The display area R1 is determined by its position in the video CV and its enlargement ratio when displayed on the display 34, both included in the setting information, together with the shape and size of the display 34.
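As an illustration of how the display area R1 can be derived from the setting information, the following sketch computes R1 in video coordinates from the position, the display's pixel size, and the enlargement ratio. The function name and the convention that the enlargement ratio maps one video pixel to that many display pixels are assumptions for illustration, not taken from the disclosure.

```python
def display_region(pos, display_size, enlargement):
    """Compute the display area R1 inside the video frame.

    pos: (x, y) upper-left corner of R1 in video coordinates (from the
         setting information).
    display_size: (width, height) of the display in its own pixels.
    enlargement: scale factor applied when R1 is shown on the display.

    Returns (x, y, w, h) of R1 in video coordinates.
    """
    x, y = pos
    dw, dh = display_size
    # At enlargement s, one video pixel covers s display pixels, so the
    # region read from the video is the display size divided by s.
    w = round(dw / enlargement)
    h = round(dh / enlargement)
    return (x, y, w, h)
```

For example, a 400x200-pixel display at enlargement ratio 2 reads a 200x100-pixel region of the video.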
  • The video processing unit 23 processes the video CV to generate a processed video PV based on the setting information held in the display setting holding unit 22. Specifically, the video processing unit 23 applies data amount reduction processing to a processing area R3 that differs from a non-processing area R2 based on the display area R1 in the video CV, while leaving the non-processing area R2 unprocessed.
  • The non-processing area R2 based on the display area R1 can be the same area as the display area R1.
  • The data amount reduction processing is processing that produces a processed video PV whose data amount, when encoded by the encoder 24 (described in detail below) and transmitted via the communication network, is reduced.
  • The data amount reduction processing can be processing that makes a pixel feature amount uniform.
  • The pixel feature amount may be, for example, pixel luminance.
  • The video processing unit 23 may, for example, set the luminance of the pixels constituting the processing area R3 to zero; that is, the pixels constituting the processing area R3 may be made black.
  • The data amount reduction processing may instead be processing that converts the processing area R3 in each of the frames constituting the video into the same still image.
  • The still image may be any image: for example, an image with a grid pattern, or the image of the processing area R3 in any one frame (a reference frame) among the frames constituting the video.
  • The data amount reduction processing may also be processing that deletes the processing area R3.
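The reduction options described above can be sketched as follows. This is a minimal illustration assuming frames are 2-D lists of luminance values; the function and parameter names are illustrative, not the disclosed implementation.

```python
def reduce_data_amount(frame, region, mode="black", still=None):
    """Apply data amount reduction to the processing area R3 of one frame.

    frame: 2-D list of pixel luminance values.
    region: (x, y, w, h) rectangle for R3.
    mode: "black" sets R3 pixels to zero luminance; "still" replaces R3
          with the corresponding pixels of a fixed still image.
    """
    x, y, w, h = region
    out = [row[:] for row in frame]  # copy; leave the area outside R3 untouched
    for j in range(y, y + h):
        for i in range(x, x + w):
            out[j][i] = 0 if mode == "black" else still[j][i]
    return out
```

Applying the same `still` image to R3 in every frame realizes the still-image variant, which matters later for inter-frame prediction.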
  • The encoder 24 encodes the processed video PV generated by the video processing unit 23 so that its data amount is reduced. Specifically, the encoder 24 encodes the processed video PV so that its data amount is smaller than the data amount that would be needed to transmit the video without the reduction processing, in accordance with the processing method applied by the video processing unit 23 when generating the processed video PV.
  • The encoder 24 may encode the processed video PV by an MPEG (Moving Picture Experts Group) video data compression method, or by any other method.
  • When the video processing unit 23 has applied data amount reduction processing that converts the processing area R3 in each frame of the video into the same still image, the encoder 24 may encode the processed video PV using inter-frame prediction.
  • The transmission unit 25 transmits the processed video PV encoded by the encoder 24 and the setting information held in the display setting holding unit 22 to the video display device 3.
  • The transmission unit 25 may transmit the encoded processed video PV to the video display device 3 using a communication protocol such as UDP (User Datagram Protocol). It may transmit the encoded processed video PV and the setting information using a wireless communication network such as LTE, any other wireless communication standard, or a wired communication standard.
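For illustration, the setting information could be serialized for transmission alongside the encoded video as follows. The wire format chosen here (two unsigned 32-bit integers for the R1 position and a 32-bit float for the enlargement ratio) is a hypothetical choice; the disclosure does not specify a serialization.

```python
import struct

def pack_setting_info(x, y, enlargement):
    """Serialize setting information (R1 position and enlargement ratio)
    for transmission alongside the encoded video, e.g. over UDP.
    Network byte order: uint32 x, uint32 y, float32 enlargement."""
    return struct.pack("!IIf", x, y, enlargement)

def unpack_setting_info(payload):
    """Inverse of pack_setting_info, used on the receiving side."""
    x, y, s = struct.unpack("!IIf", payload)
    return x, y, s
```

The 12-byte payload could be carried in its own datagram or prepended to the video stream; either way the receiver recovers the position and ratio before rendering.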
  • The video display device 3 includes a receiving unit 31, a decoder 32, a display control unit 33, and a display 34.
  • The receiving unit 31 receives the encoded processed video PV and the setting information transmitted from the transmission unit 25 of the video processing device 2, using the communication protocol used by the transmission unit 25. The receiving unit 31 may receive them using the same communication method as the transmission unit 25 or a different one.
  • The decoder 32 outputs the processed video PV by decoding the encoded processed video PV received by the receiving unit 31.
  • The display control unit 33 displays the processed video PV output by the decoder 32 on the display 34 based on the setting information received by the receiving unit 31. Specifically, the display control unit 33 causes the display 34 to show the display area R1, at the position indicated in the setting information within the processed video PV, at the enlargement ratio indicated in the setting information. As described above, the display area R1 in the processed video PV is the non-processing area R2, to which the video processing unit 23 has not applied data amount reduction processing. The display control unit 33 can therefore display on the display 34 the video corresponding to the display area R1 in the video CV generated by the camera 1.
  • The display 34 displays the display area R1 in the processed video PV, that is, the display area R1 in the video CV, under the control of the display control unit 33.
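The display-side behaviour can be sketched as a crop of the display area R1 followed by enlargement. Nearest-neighbour scaling and the function name are illustrative assumptions; a real device would typically scale in hardware.

```python
def render_display_area(frame, region, enlargement):
    """Crop the display area R1 out of a decoded processed-video frame
    and enlarge it by the ratio given in the setting information,
    using nearest-neighbour scaling.

    frame: 2-D list of pixel values.
    region: (x, y, w, h) rectangle for R1 in video coordinates.
    """
    x, y, w, h = region
    s = enlargement
    out_w, out_h = int(w * s), int(h * s)
    # Each output pixel samples the nearest source pixel inside R1.
    return [
        [frame[y + int(j / s)][x + int(i / s)] for i in range(out_w)]
        for j in range(out_h)
    ]
```

Because R1 coincides with the unprocessed area R2, the pixels sampled here are identical to those of the original video CV.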
  • FIG. 4 is a flowchart showing an example of the video processing operation of the video processing device 2 according to the first embodiment.
  • The operation described with reference to FIG. 4 corresponds to the video processing method according to the first embodiment.
  • In step S11, the acquisition unit 21 receives the video CV from the camera 1.
  • In step S12, the video processing unit 23 generates the processed video PV by processing the video CV based on the setting information held in the display setting holding unit 22.
  • The video processing unit 23 applies data amount reduction processing to the processing area R3, which differs from the non-processing area R2 based on the display area R1 in the video CV, while leaving the non-processing area R2 unprocessed.
  • The data amount reduction processing may make the feature amounts of the pixels constituting the processing area R3 uniform, or may convert the processing area R3 in each of the frames constituting the video CV into the same still image.
  • In step S13, the encoder 24 encodes the processed video PV so that the data amount of the processed video PV generated in step S12 is reduced.
  • When the video processing unit 23 has applied, in step S12, data amount reduction processing that converts the processing area R3 in each frame into the same still image, the encoder 24 may encode the processed video PV using inter-frame prediction in step S13.
  • In step S14, the transmission unit 25 transmits the processed video PV encoded by the encoder 24 and the setting information held by the display setting holding unit 22 to the video display device 3 via the communication network.
  • The setting information includes the position of the display area R1 in the video CV and the enlargement ratio of the display area R1 when displayed on the display 34.
  • FIG. 5 is a flowchart showing an example of the video display operation of the video display device 3 according to the first embodiment.
  • The operation described with reference to FIG. 5 corresponds to the video display method according to the first embodiment.
  • In step S21, the receiving unit 31 receives the encoded processed video PV and the setting information transmitted from the transmission unit 25 of the video processing device 2.
  • In step S22, the decoder 32 outputs the processed video PV by decoding the encoded processed video PV received by the receiving unit 31.
  • In step S23, the display control unit 33 causes the display 34 to display the processed video PV output by the decoder 32, based on the setting information received by the receiving unit 31.
  • As described above, the video processing device 2 of the first embodiment generates the processed video PV by applying data amount reduction processing to the processing area R3 while leaving the non-processing area R2 unprocessed, and encodes the processed video PV so that its data amount is reduced. This reduces the amount of video data transmitted to the video display device 3, and thus prevents problems such as delayed reception and delayed display of the video by the video display device 3.
  • When the video processing unit 23 applies data amount reduction processing that converts the processing area R3 in each frame of the video into the same still image, the encoder 24 may encode the processed video PV using inter-frame prediction.
  • Because the encoder 24 encodes the processed video PV using inter-frame prediction, the data amount of the processed video PV is greatly reduced, further suppressing delays in the reception and display of the video by the video display device 3.
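Why inter-frame prediction benefits from the still-image conversion can be shown with a toy difference count: an inter-frame coder spends bits mainly where consecutive frames differ, and a processing area R3 frozen to the same still image contributes no differences between frames. This is an illustration only, not an actual codec.

```python
def changed_pixels(prev, cur):
    """Count pixels that differ between two consecutive frames
    (2-D lists of pixel values).  Inter-frame prediction codes only
    such differences, so a processing area R3 converted into the same
    still image in every frame contributes zero to this count."""
    return sum(
        1
        for row_p, row_c in zip(prev, cur)
        for p, c in zip(row_p, row_c)
        if p != c
    )
```

Only the unprocessed display area R1 changes from frame to frame, so the residual the encoder must code shrinks to that area alone.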
  • FIG. 6 is a functional block diagram of the video processing system 101 according to the second embodiment of the present invention.
  • The video processing system 101 of the second embodiment includes a camera 4, a video processing device 5, and a plurality of video display devices 61, 62, ..., 6n (n is an integer).
  • Each of the plurality of video display devices 61, 62, ..., 6n may be referred to as a video display device 6k (k is an integer from 1 to n).
  • The camera 4 is the same as the camera 1 of the first embodiment.
  • The video processing system 101 need not include the camera 4; in such a configuration, the video processing device 5 may generate the video by any method, such as computer graphics.
  • The video processing device 5 may also acquire a video generated by another information processing device.
  • "Acquiring a video" may include the video processing device 5 generating a video as well as acquiring a video generated by the camera 4 or another information processing device.
  • In the following, an example of acquiring a video generated by the camera 4 is described.
  • The video processing device 5 includes an acquisition unit 51, a display setting holding unit 52, a video processing unit 53, an encoder 54, a transmission unit 55, and a placement information acquisition unit 56.
  • The acquisition unit 51 is the same as the acquisition unit 21 of the first embodiment.
  • The placement information acquisition unit 56 acquires placement information concerning the placement, such as the position, orientation, size, and shape, of the display 6k4 (described in detail later) of each video display device 6k.
  • The displays 6k4 of the plurality of video display devices 6k together constitute one virtual display VD.
  • In the illustrated example, displays 614 through 684 are shown (n = 8), but the number n of displays 6k4 is not limited to 8.
  • The placement information acquisition unit 56 may acquire placement information including a position measured from satellite signals received by a GPS (Global Positioning System) receiver attached to the display 6k4, or including an orientation indicated by a compass attached to the display 6k4. It may also acquire placement information including the position of the display 6k4 measured by an infrared camera from infrared light reflected by an infrared reflecting element attached to the display 6k4, or measured from an image taken by a visible-light camera. The placement information acquisition unit 56 is not limited to these methods and can acquire placement information by any method.
  • The display setting holding unit 52 holds the setting information used by each display 6k4 to display the video, as in the first embodiment.
  • The setting information includes the position of the display area R1-k in the video CV and the enlargement ratio when the display area R1-k is displayed on the display 6k4, as in the first embodiment.
  • The position of the display area in the present embodiment is the position of the display area R1-k to be displayed on each display 6k4. The position and enlargement ratio of the display area R1-k are determined from the placement information acquired by the placement information acquisition unit 56.
  • The video processing unit 53 processes the video CV to generate the processed video PV based on the setting information held in the display setting holding unit 52. Specifically, as shown in FIG. 8, the video processing unit 53 determines the display area R1-k for each display 6k4 in the video CV, based on the setting information determined from the placement information acquired by the placement information acquisition unit 56.
  • The video processing unit 53 determines, in the video CV, the display area R1-1 displayed on the display 614, the display area R1-2 displayed on the display 624, the display area R1-3 displayed on the display 634, and likewise the display areas R1-4 through R1-8 displayed on the displays 644 through 684.
  • The video processing unit 53 determines a non-processing area R2-k based on each display area R1-k. In one example, as shown in FIG. 8, the video processing unit 53 may determine the display area R1-k itself as the non-processing area R2-k.
  • The video processing unit 53 determines the area of the video CV different from every non-processing area R2-k as the processing area R3.
  • The video processing unit 53 generates the processed video PV by applying data amount reduction processing to the processing area R3 in the video CV while leaving the non-processing areas R2-k unprocessed.
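The determination of the processing area R3 as everything outside the union of the non-processing areas R2-k can be sketched as follows. The rectangle representation and function names are illustrative assumptions.

```python
def mask_outside_regions(frame, regions, fill=0):
    """Blank every pixel lying outside all non-processing areas R2-k,
    leaving the areas themselves untouched.

    frame: 2-D list of pixel values.
    regions: list of (x, y, w, h) rectangles, one per R2-k.
    fill: value written into the processing area R3 (e.g. black)."""
    def inside(i, j):
        # A pixel belongs to some R2-k if any rectangle contains it.
        return any(x <= i < x + w and y <= j < y + h
                   for (x, y, w, h) in regions)
    return [
        [px if inside(i, j) else fill for i, px in enumerate(row)]
        for j, row in enumerate(frame)
    ]
```

With one rectangle per display 6k4, a single processed video PV preserves all areas R1-k while everything between them is reduced.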
  • The encoder 54 is the same as the encoder 24 of the first embodiment.
  • The transmission unit 55 is the same as the transmission unit 25 of the first embodiment.
  • The video display device 6k includes a receiving unit 6k1, a decoder 6k2, a display control unit 6k3, and a display 6k4.
  • The receiving unit 6k1, the decoder 6k2, the display control unit 6k3, and the display 6k4 are the same as the receiving unit 31, the decoder 32, the display control unit 33, and the display 34 of the video display device 3 of the first embodiment, respectively.
  • FIG. 9 is a flowchart showing an example of the video processing operation of the video processing device 5 according to the second embodiment.
  • The operation described with reference to FIG. 9 corresponds to the video processing method according to the second embodiment.
  • In step S31, the acquisition unit 51 acquires the video CV from the camera 4.
  • In step S32, the placement information acquisition unit 56 acquires the placement information of the display 6k4 of each video display device 6k.
  • In step S33, the display setting holding unit 52 holds the setting information determined based on the placement information acquired by the placement information acquisition unit 56.
  • In step S34, the video processing unit 53 determines the non-processing areas R2-k and the processing area R3 based on the display areas R1-k corresponding to the setting information held in the display setting holding unit 52, and generates the processed video PV by processing the video CV.
  • The video processing unit 53 generates the processed video PV by applying data amount reduction processing to the processing area R3, which differs from the non-processing areas, while leaving the non-processing areas unprocessed.
  • In step S35, the encoder 54 encodes the processed video PV generated in step S34.
  • In step S36, the transmission unit 55 transmits the processed video PV encoded by the encoder 54 and the setting information held by the display setting holding unit 52 to each video display device 6k via the communication network.
  • As described above, the video processing device 5 acquires the placement information of the display 6k4 of each of the plurality of video display devices 6k and, based on that placement information, determines the display area R1-k in the video CV to be displayed on each display 6k4. Therefore, even when the plurality of video display devices 6k constituting one virtual display VD each display an area based on its display area R1-k in the video CV, the amount of video data transmitted to the video display devices 6k can be reduced.
  • The video processing unit 53 may also leave unprocessed only the non-processing area R2-k of one of the displays 6k4, from the display 614 to the display 6n4, and apply data amount reduction processing to the area different from that non-processing area as the processing area R3.
  • For example, the video processing unit 53 may generate a processed video PV for the display 614 by leaving the non-processing area R2-1 of the display 614 unprocessed and applying data amount reduction processing to the area different from the non-processing area R2-1 as the processing area R3.
  • The encoder 54 encodes the processed video PV for the display 614, and the transmission unit 55 transmits the encoded processed video PV to the video display device 61.
  • Likewise, the video processing unit 53 may generate a processed video PV for the display 624 by leaving the portion included in the non-processing area R2-2 of the display 624 unprocessed and applying data amount reduction processing to the processing area R3 different from the non-processing area R2-2.
  • The encoder 54 encodes the processed video PV for the display 624, and the transmission unit 55 transmits the encoded processed video PV to the video display device 62.
  • In the same manner, the video processing unit 53 may generate a processed video PV for each of the displays 634 through 6n4; the encoder 54 encodes each of these processed videos, and the transmission unit 55 transmits them to the video display devices 63 through 6n, respectively.
  • FIG. 11 is a functional block diagram of the video processing system 102 according to the third embodiment of the present invention.
  • Like the video processing system 101 of the second embodiment, the video processing system 102 of the third embodiment includes a camera 4, a video processing device 5, and a plurality of video display devices 61, 62, ..., 6n.
  • The camera 4 is the same as the camera 4 of the second embodiment.
  • The video processing device 5 includes an acquisition unit 51, a display setting holding unit 52, a video processing unit 53, an encoder 54, a transmission unit 55, and an arrangement information determination unit 57.
  • The acquisition unit 51, the encoder 54, and the transmission unit 55 are the same as the acquisition unit 51, the encoder 54, and the transmission unit 55 of the second embodiment, respectively.
  • The arrangement information determination unit 57 determines the target position TP of each of the plurality of displays 6k4 based on the video CV captured by the camera 4. Specifically, the arrangement information determination unit 57 first extracts, from the video CV acquired by the acquisition unit 51, the subject area R4 in which the image OBI of the main subject OB appears. For example, the arrangement information determination unit 57 can extract the subject area R4 by taking the difference between the video CV and a background image generated by imaging the background, that is, a subject other than the main subject OB, while the main subject OB is absent from the imaging range. The method is not limited to this; the arrangement information determination unit 57 may extract the subject area R4 by any method.
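The background-difference extraction mentioned above can be sketched, for illustration only, as follows. The thresholded per-pixel difference, the grayscale list-of-rows representation, and the function name are assumptions for this sketch, not the patent's prescribed implementation.

```python
# Illustrative sketch (hypothetical names): extract the subject area R4 as
# the set of pixels where the current frame differs from a pre-captured
# background image by more than a threshold.

def extract_subject_region(frame, background, threshold=10):
    """Return a binary mask (1 = subject pixel) with the same shape as frame."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[50, 50, 50],
              [50, 50, 50]]
frame = [[50, 200, 50],
         [50, 200, 52]]
mask = extract_subject_region(frame, background)
```

Pixels that differ only slightly from the background (such as the value 52 above) stay below the threshold and are treated as background.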
  • The subject area R4 may be an area in which the main part of the video CV is displayed.
  • The main part can be, for example, a part of the video CV that changes more than the other parts.
  • The arrangement information determination unit 57 calculates the center of gravity of the subject area R4.
  • The arrangement information determination unit 57 can calculate the center of gravity of the subject area R4 by using, for example, the arithmetic mean of the coordinates of the subject area R4. The method is not limited to this; the center of gravity of the subject area R4 may be calculated by any method.
  • The arrangement information determination unit 57 repeats the process of calculating the center of gravity of the subject area R4 every time the video CV is updated, and calculates the amount of change in the center of gravity between successive calculations.
  • The arrangement information determination unit 57 determines the target position TP of each of the displays 6k4 based on the subject area R4. Specifically, the arrangement information determination unit 57 determines the target position TP of each display 6k4 so that the one virtual display VD composed of the displays 614, 624, ..., 6n4 displays the subject area R4.
  • Further, the arrangement information determination unit 57 updates the target position TP of each of the displays 6k4 based on the amount of change in the center of gravity. Specifically, the arrangement information determination unit 57 updates the target position TP by adding the amount of change in the center of gravity to the current target position TP. Thereafter, every time the amount of change in the center of gravity of the subject area R4 is calculated for an updated video CV, the arrangement information determination unit 57 repeats updating the target position TP of each display 6k4 based on that amount of change.
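The centroid tracking and target-position update described above can be sketched as follows, for illustration only; the function names and the tuple representation of positions are assumptions, and a real implementation would operate on full-resolution masks.

```python
# Illustrative sketch (hypothetical names): compute the center of gravity of
# the subject area R4 as the arithmetic mean of its pixel coordinates, and
# shift a display's target position TP by the frame-to-frame change of that
# center of gravity.

def centroid(mask):
    """Arithmetic mean (x, y) of all pixels where mask == 1."""
    xs, ys = [], []
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                xs.append(x)
                ys.append(y)
    n = len(xs)
    return (sum(xs) / n, sum(ys) / n)

def update_target_position(tp, prev_centroid, new_centroid):
    """Shift the target position TP by the centroid's amount of change."""
    dx = new_centroid[0] - prev_centroid[0]
    dy = new_centroid[1] - prev_centroid[1]
    return (tp[0] + dx, tp[1] + dy)

c0 = centroid([[0, 1], [0, 1]])   # subject occupies column 1
c1 = centroid([[1, 0], [1, 0]])   # subject has moved to column 0
tp = update_target_position((5.0, 5.0), c0, c1)
```

Repeating this update each time the video CV is refreshed makes the virtual display follow the moving subject.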
  • The arrangement information determination unit 57 determines the display area R1 based on the target position TP of each of the displays 6k4. Specifically, the arrangement information determination unit 57 determines the display area R1-k, including at least a part of the subject area R4, to be displayed on each display 6k4 when that display is arranged at its target position TP. For example, as shown in FIG., the arrangement information determination unit 57 determines the display areas R1-1 to R1-8 so that each includes a part of the subject area R4 representing the image OBI of the subject. Further, the arrangement information determination unit 57 determines the enlargement ratio to be used when each display 6k4 displays its display area R1-k.
  • The display setting holding unit 52 holds the setting information for the displays 6k4 to display the video, as in the second embodiment.
  • The setting information includes the position, in the video CV, of the display area R1 corresponding to the target position TP of the display 6k4 determined by the arrangement information determination unit 57, and the enlargement ratio of the display area R1 when the video display device 6k displays the display area R1.
  • The video processing unit 53 generates a processed video PV obtained by processing the video CV, as in the second embodiment.
  • The video processing unit 53 determines the non-processed area R2-k based on each display area R1-k of the displays 6k4. In one example, as in the second embodiment, the video processing unit 53 may determine the display area R1-k itself as the non-processed area R2-k. In another example, the video processing unit 53 may determine the non-processed area R2-k as a region composed of the display area R1-k and an area adjacent to the display area R1-k.
  • FIG. 12 is an enlarged view showing one of the display areas R1-k shown in FIG.
  • The video processing unit 53 determines the display area R1-k to be displayed on the display 6k4 based on the arrangement information acquired by the arrangement information acquisition unit 56.
  • As will be described in detail later, an error may occur between the target position TP of the display 6k4 and the actual position of the display 6k4 driven toward that target position TP by the drive unit 6k5 of the video display device 6k.
  • That is, while the video display device 6k causes the display 6k4 to display the display area R1-k determined from the target position TP of the display 6k4, it may actually be appropriate, given the actual position of the display 6k4, to display the area a indicated by the dashed line. In such a case, for the video display device 6k to correct the area displayed on the display 6k4 from the display area R1-k to the area a based on the error, the area a must not have been subjected to the data amount reduction processing. Therefore, as shown in FIG. 12, the video processing unit 53 determines the non-processed area R2-k as a region that contains the area a, specifically, a region composed of the display area R1-k and an area adjacent to the display area R1-k.
  • In one example, as shown in FIG., the non-processed area R2-k can be the display area R1-k together with the adjacent area extending from the outer edge of the display area R1-k out to the distance d1.
  • The distance d1 is the distance in the video space corresponding to a distance d in the real space.
  • For example, the non-processed area R2-k is the area corresponding to a 70 cm × 70 cm square in the video CV.
  • In another example, an area included in the range of the display area R1-k when the display is rotated by ±Δθ° may be set as the non-processed area R2-k.
  • For example, when Δθ = 5, the non-processed area R2-k is the area of the video CV corresponding to the display 6k4 when the display 6k4 is rotated by up to ±5° from the angle indicated by the arrangement information.
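The margin-based determination of R2-k above can be sketched, for illustration only, as follows. The axis-aligned box representation, the function names, and the numeric scale factor are assumptions for this sketch; the disclosure itself does not prescribe a coordinate representation.

```python
# Illustrative sketch (hypothetical names): widen the display area R1-k by a
# margin d1 so that small drive errors still fall inside the non-processed
# area R2-k. The video-space margin d1 is derived from a real-space
# tolerance d via an assumed scale factor (pixels per metre).

def margin_in_video_space(real_margin_m, pixels_per_metre):
    """Convert a real-space tolerance (metres) to a video-space distance d1."""
    return real_margin_m * pixels_per_metre

def expand_display_area(r1_box, d1):
    """Return R2-k as R1-k (x0, y0, x1, y1) expanded by d1 on every side."""
    x0, y0, x1, y1 = r1_box
    return (x0 - d1, y0 - d1, x1 + d1, y1 + d1)

d1 = margin_in_video_space(0.5, 100)           # 0.5 m tolerance at 100 px/m
r2 = expand_display_area((100, 100, 150, 150), d1)
```

The rotation-based variant (±Δθ°) would instead take the union of the rotated copies of R1-k, which is bounded by a slightly larger box of the same kind.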
  • The video processing unit 53 does not apply the data amount reduction processing to the non-processed area R2 based on the display area R1 in the video CV, and applies the data amount reduction processing to the processing area R3 different from the non-processed area R2.
  • The transmission unit 55 transmits the processed video PV encoded by the encoder 54 and the setting information held in the display setting holding unit 52 to the video display devices 6k. Further, the transmission unit 55 transmits drive information indicating the target position TP of each of the plurality of displays 6k4 to the video display device 6k provided with that display 6k4.
  • The video display device 6k includes a receiving unit 6k1, a decoder 6k2, a display control unit 6k3, a display 6k4, and a drive unit 6k5.
  • The decoder 6k2 is the same as the decoder 32 of the video display device 3 of the first embodiment.
  • The receiving unit 6k1 receives the encoded processed video PV and the setting information in the same manner as the receiving unit 6k1 of the video display device 6k of the second embodiment. Further, the receiving unit 6k1 receives the drive information transmitted from the transmission unit 55 of the video processing device 5.
  • The display control unit 6k3 causes the display 6k4 to display the display area R1-k included in the non-processed area R2 of the processed video PV received by the receiving unit 6k1 and decoded by the decoder 6k2. At this time, the display control unit 6k3 may correct the display area R1-k to be displayed on the display 6k4 based on the target position TP included in the drive information and the actual position of the display 6k4.
  • The display 6k4 displays an area included in the non-processed area R2 of the processed video PV under the control of the display control unit 6k3.
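The display-side correction mentioned above can be sketched, for illustration only, as a simple shift of the displayed area by the drive error; the function name and box representation are assumptions introduced for this sketch.

```python
# Illustrative sketch (hypothetical names): shift the area to be shown by the
# error between the display's target position TP and its actual position, so
# that the display keeps showing the correct part of the processed video PV.
# Because R2-k was widened around R1-k, the shifted area stays inside the
# region that was left unprocessed.

def corrected_display_area(r1_box, target_pos, actual_pos):
    """Shift R1-k (x0, y0, x1, y1) by the drive error (actual - target)."""
    ex = actual_pos[0] - target_pos[0]
    ey = actual_pos[1] - target_pos[1]
    x0, y0, x1, y1 = r1_box
    return (x0 + ex, y0 + ey, x1 + ex, y1 + ey)

area_a = corrected_display_area((100, 100, 150, 150), (10, 10), (12, 9))
```

Here the display ended up 2 units right of and 1 unit above its target, so the displayed box is shifted accordingly (this corresponds to the area a in FIG. 12).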
  • The drive unit 6k5 drives the display 6k4. Specifically, the drive unit 6k5 drives the display 6k4 so as to arrange it at the target position TP indicated by the drive information received by the receiving unit 6k1.
  • FIG. 14 is a flowchart showing an example of the operation in the video processing of the video processing device 5 according to the third embodiment.
  • The operation in the video processing of the video processing device 5 described with reference to FIG. 14 corresponds to the video processing method according to the third embodiment.
  • In step S41, the acquisition unit 51 acquires the video CV from the camera 4.
  • In step S42, the video processing unit 53 determines the subject area R4 in the video CV.
  • In step S43, the video processing unit 53 determines the target position TP of each of the displays 6k4 based on the subject area R4.
  • In step S44, the video processing unit 53 determines, in the video CV, the position of the display area R1-k to be displayed on the display 6k4 and the enlargement ratio of the display area R1-k when it is displayed on the display 6k4.
  • The display setting holding unit 52 holds the setting information including the position and the enlargement ratio of the display area R1-k.
  • In step S45, the video processing unit 53 determines the non-processed area R2 and the processing area R3 based on the display area R1-k at the target position TP of the display 6k4.
  • As described above, the non-processed area R2 may be composed of the display area R1 and an area adjacent to the display area R1.
  • In step S46, the video processing unit 53 generates the processed video PV from the video CV by applying the data amount reduction processing to the processing area R3 without applying the data amount reduction processing to the non-processed area R2.
  • In step S47, the encoder 54 encodes the processed video PV generated in step S46.
  • In step S48, the transmission unit 55 transmits the drive information indicating the target position TP of each of the displays 6k4 to the video display devices 6k.
  • In step S49, the transmission unit 55 transmits the processed video PV encoded by the encoder 54 and the setting information held by the display setting holding unit 52 to the video display devices 6k.
  • The operation in the video display of each of the video display devices 6k of the third embodiment is the same as the operation in the video display of the video display device 6k according to the second embodiment.
  • In addition, the drive unit 6k5 drives the display 6k4 based on the drive information received by the receiving unit 6k1.
  • The display control unit 6k3 causes the display 6k4 to display the video in the state where the display 6k4 is at the position to which it has been driven by the drive unit 6k5, as in the second embodiment.
  • As described above, the video processing device 5 determines the display area R1-k to be displayed on each display 6k4 and the target position TP of each display 6k4 based on the area R4 of the main subject in the video CV. The video processing device 5 then transmits the drive information including the target position TP to the video display device 6k, so that the display 6k4 is driven to the target position TP. Therefore, even when the plurality of video display devices 6k constituting one virtual display VD each display an area based on the respective display area R1-k in the video CV, the amount of data of the video transmitted to the video display devices 6k can be reduced.
  • Further, the non-processed area R2 may be composed of the display area R1 and an area adjacent to the display area R1.
  • In that case, the video processing unit 53 does not apply the data amount reduction processing to the area that the display 6k4 should display at the position where the display 6k4 is actually arranged. Therefore, when the video display device 6k corrects the display area R1 to be displayed on the display 6k4 within the processed video PV received from the video processing device 5, it can appropriately display video that has not been subjected to the data amount reduction processing.
  • FIG. 15 is a block diagram showing a schematic configuration of a computer 103 that functions as the video processing device 2, the video display device 3, the video processing device 5, and the video display device 6k.
  • The computer 103 may be a general-purpose computer, a dedicated computer, a workstation, a PC (Personal Computer), an electronic notepad, or the like.
  • Program instructions may be program code, code segments, or the like for executing the necessary tasks.
  • The computer 103 includes a processor 110, a ROM (Read Only Memory) 120, a RAM (Random Access Memory) 130, a storage 140, an input unit 150, an output unit 160, and a communication interface (I/F) 170.
  • The processor 110 is a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a GPU (Graphics Processing Unit), a DSP (Digital Signal Processor), a SoC (System on a Chip), or the like, and may be composed of a plurality of processors of the same type or different types.
  • The processor 110 controls each component and executes various arithmetic processes. That is, the processor 110 reads a program from the ROM 120 or the storage 140 and executes it using the RAM 130 as a work area. The processor 110 controls each of the above components and performs various arithmetic processes according to the program stored in the ROM 120 or the storage 140. In the present embodiment, the program according to the present disclosure is stored in the ROM 120 or the storage 140.
  • The program may be recorded on a recording medium readable by the computer 103. By using such a recording medium, the program can be installed on the computer 103.
  • The recording medium on which the program is recorded may be a non-transitory recording medium.
  • The non-transitory recording medium is not particularly limited, but may be, for example, a CD-ROM, a DVD-ROM, a USB (Universal Serial Bus) memory, or the like. Further, the program may be downloaded from an external device via a network.
  • The ROM 120 stores various programs and various data.
  • The RAM 130 temporarily stores programs or data as a work area.
  • The storage 140 is composed of an HDD (Hard Disk Drive) or an SSD (Solid State Drive) and stores various programs, including an operating system, and various data.
  • The input unit 150 includes one or more input interfaces that accept user input operations and acquire information based on those operations.
  • The input unit 150 is, for example, a pointing device, a keyboard, a mouse, or the like, but is not limited thereto.
  • The output unit 160 includes one or more output interfaces that output information.
  • The output unit 160 is, for example, a display that outputs information as video, or a speaker that outputs information as audio, but is not limited thereto.
  • In the case of a touch-panel display, the output unit 160 also functions as the input unit 150.
  • The communication interface 170 is an interface for communicating with an external device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A video processing method according to the present disclosure comprises: a step (S11) for obtaining a video (CV) generated by a camera (1); a step (S12) for generating a processed video (PV) by, with respect to the video (CV), not performing data quantity reduction processing in a non-processing region (R2) based on a display region (R1) displayed on a display (34) of a video display device (3), and performing data quantity reduction processing in a processing region (R3) which is different from the non-processing region (R2); a step (S13) for encoding the processed video (PV) so that the data quantity of the processed video (PV) is reduced; and a step (S14) for transmitting, to the video display device (3), the encoded processed video (PV) and setting information which includes the position of the display region (R1) in the video (CV) and a magnification ratio of the display region (R1) when the display region (R1) is displayed on the display (34).

Description

Video processing method, video processing device, and program
 The present disclosure relates to a video processing method, a video processing device, and a program for displaying video on a display.
 Display technology for presenting video has advanced, and a variety of displays are known that are highly portable and that come not only in rectangular shapes but also in various other shapes such as circles and polygons. It is also known that, by having each of a plurality of displays show a part of a video, one virtual display formed by combining the plurality of displays can show the entire video. Further, as shown in Non-Patent Document 1, a plurality of mobile displays, each mounted on a moving body, can each display the part of a video corresponding to its own position, so that one virtual display composed of the plurality of mobile displays shows the entire video.
 With the technique described in Patent Document 1, it is conceivable that a plurality of mobile displays (hereinafter simply referred to as "displays") receive video captured by a camera in real time and display it using streaming technology. However, to display video properly by streaming, a sufficient wireless transmission band must be secured so that the video can be transmitted from the camera to the displays at high speed. It is conceivable to transmit the video from the camera to the displays via a communication network using a communication standard such as wireless LAN (Local Area Network) or LTE (registered trademark) (Long Term Evolution). However, since a large number of users are expected to send and receive information over communication networks using these standards, sufficient bandwidth may not be secured for transmitting the video from the camera to the displays, and good communication may not be possible. As a result, a delay may occur in the display of the video.
 An object of the present disclosure, made in view of such circumstances, is to provide a video processing method, a video processing device, and a program capable of displaying appropriate video while reducing the amount of video data.
 To solve the above problem, a video processing method according to the present disclosure includes: a step of acquiring a video; a step of generating a processed video by applying data amount reduction processing to a processing region of the video that differs from a non-processing region based on a display region to be displayed on a display of a video display device, without applying the data amount reduction processing to the non-processing region; a step of encoding the processed video so that the data amount of the processed video is reduced; and a step of transmitting, to the video display device, the encoded processed video and setting information including the position of the display region in the video and the enlargement ratio of the display region when the display region is displayed on the display.
 Further, to solve the above problem, a video processing device according to the present disclosure includes: an acquisition unit that acquires a video; a video processing unit that generates a processed video by applying data amount reduction processing to a processing region of the video that differs from a non-processing region based on a display region to be displayed on a display of a video display device, without applying the data amount reduction processing to the non-processing region; an encoder that encodes the processed video so that the transmission amount of the processed video is reduced; and a transmission unit that transmits, to the video display device, the processed video encoded by the encoder and setting information including the position of the display region in the video and the enlargement ratio of the display region when the display region is displayed on the display.
 Further, to solve the above problem, a program according to the present disclosure causes a computer to function as the above-described video processing device.
 According to the video processing method, the video processing device, and the program according to the present disclosure, appropriate video can be displayed while reducing the amount of video data.
FIG. 1 is a functional block diagram of a video processing system according to the first embodiment of the present disclosure.
FIG. 2 is a schematic diagram showing an example of the video generated by the camera shown in FIG. 1.
FIG. 3 is a schematic diagram for explaining the non-processing region and the processing region in the video shown in FIG. 2.
FIG. 4 is a flowchart for explaining the operation of the video processing device shown in FIG. 1.
FIG. 5 is a flowchart for explaining the operation of the video display device shown in FIG. 1.
FIG. 6 is a functional block diagram of a video processing system according to the second embodiment of the present disclosure.
FIG. 7 is a schematic diagram for explaining the arrangement relationship of the displays shown in FIG. 6.
FIG. 8 is a schematic diagram for explaining the display regions in the video based on the arrangement relationship of the displays shown in FIG. 6.
FIG. 9 is a flowchart for explaining the operation of the video processing device shown in FIG. 6.
FIG. 10 is a schematic diagram showing another example of the processed video.
FIG. 11 is a functional block diagram of a video processing system according to the third embodiment of the present disclosure.
FIG. 12 is a diagram for explaining another example of determining the non-processing region.
FIG. 13 is a schematic diagram showing the non-processing region and the processing region determined by the other example shown in FIG. 12.
FIG. 14 is a flowchart for explaining the operation of the video processing device shown in FIG. 11.
FIG. 15 is a hardware block diagram of the video processing devices and the video display devices according to the first to third embodiments.
 First, embodiments of the present disclosure will be described with reference to the drawings.
 <<First Embodiment>>
 The overall configuration of the first embodiment will be described with reference to FIG. 1. FIG. 1 is a functional block diagram of the video processing system 100 according to the first embodiment of the present invention.
 As shown in FIG. 1, the video processing system 100 according to the first embodiment includes a camera 1, a video processing device 2, and a video display device 3. The camera 1 and the video processing device 2 communicate with each other via a communication cable or a communication network. The video processing device 2 and the video display device 3 communicate with each other via a communication network.
 Note that, in the first embodiment, the video processing system 100 does not have to include the camera 1; in such a configuration, the video processing device 2 may generate the video by any method such as computer graphics. The video processing device 2 may also acquire video generated by another information processing device. Hereinafter, "acquiring a video" may include the video processing device 2 generating the video, as well as acquiring a video generated by the camera 1 or an information processing device. In the following, as an example, a case of acquiring the video generated by the camera 1 will be described.
 <Camera configuration>
 As shown in FIG. 2, the camera 1 generates a video CV containing an image OBI of a main subject OB and an image of the background by imaging the main subject OB and the subjects behind the main subject OB included in the imaging area. The imaging area can be, for example, a rectangular area with a predetermined aspect ratio. The camera 1 transmits the video CV to the video processing device 2. The camera 1 may transmit the video CV to the video processing device 2 using the SDI (Serial Digital Interface) communication standard, or using the HDMI (registered trademark) (High-Definition Multimedia Interface) communication standard. The camera 1 may also transmit the video CV to the video processing device 2 using any other communication standard.
 <Configuration of the video processing device>
 As shown in FIG. 1, the video processing device 2 includes an acquisition unit 21, a display setting holding unit 22, a video processing unit 23, an encoder 24, and a transmission unit 25. The acquisition unit 21 and the transmission unit 25 are configured by communication interfaces. The display setting holding unit 22 is configured by a memory such as a semiconductor memory, a magnetic memory, or an optical memory. The video processing unit 23 and the encoder 24 constitute a control unit (controller). The control unit may be configured by dedicated hardware such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field-Programmable Gate Array), by a processor, or by a combination of both.
 The acquisition unit 21 acquires the video CV generated by the camera 1, as shown in FIG. 2, and transmitted from the camera 1. The acquisition unit 21 can acquire the video CV using a communication standard corresponding to the standard that the camera 1 uses to transmit the video CV, as described above.
The display setting holding unit 22 holds setting information used by the display 34 (described later) of the video display device 3 to display video. As shown in FIG. 3, the setting information includes the position of the display area R1 in the video CV and the magnification at which the display area R1 is displayed on the video display device 3. The magnification of the display area R1 is determined by the size, in real space, of the display 34 of the video display device 3. The display area R1 in the video CV is the area of the video CV that is displayed on the display 34. The display area R1 is determined by the position of the display area R1 in the video CV and the magnification at which the display area R1 is displayed on the display 34, both included in the setting information, as well as by the shape and size of the display 34.
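For illustration, the relationship between the setting information and the display area R1 can be sketched as follows. The helper below is a hypothetical simplification, not part of the embodiment: it assumes a rectangular display and a uniform magnification, so that the display area R1 covers display-size divided by magnification pixels of the video CV starting from the stored position; the actual determination also depends on the shape of the display 34.

```python
def display_area(position, magnification, display_width, display_height):
    """Hypothetical sketch: the rectangle of the video CV shown on the display.

    position: (x, y) top-left corner of display area R1 in video coordinates.
    magnification: scale factor used when showing R1 on the display, so R1
    spans display_width / magnification by display_height / magnification
    pixels of the video CV.
    """
    x, y = position
    return (x, y, display_width / magnification, display_height / magnification)

# A 1920x1080 display showing the video at 2x magnification covers a
# 960x540 region of the video CV.
region = display_area((100, 50), 2.0, 1920, 1080)
```
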
The video processing unit 23 processes the video CV to generate a processed video PV, based on the setting information held in the display setting holding unit 22. Specifically, the video processing unit 23 applies no data amount reduction processing to the non-processed region R2, which is based on the display area R1 in the video CV, and applies data amount reduction processing to the processed region R3, which differs from the non-processed region R2. In the present embodiment, the non-processed region R2 based on the display area R1 can be the same region as the display area R1.
The data amount reduction processing is processing for generating a processed video PV whose data amount, when the processed video PV is encoded by the encoder 24 (described in detail next) and transmitted over the communication network, is reduced.
In one example, the data amount reduction processing can be processing that makes a pixel feature uniform. The pixel feature may be, for example, pixel luminance. In such a configuration, the video processing unit 23 may set the luminance of the pixels constituting the processed region R3 to, for example, zero, that is, make the pixels constituting the processed region R3 black.
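A minimal sketch of this form of data amount reduction processing, assuming for illustration only that a frame is a NumPy luminance array and that the non-processed region R2 is an axis-aligned rectangle:

```python
import numpy as np

def reduce_outside_region(frame, region):
    """Set every pixel outside the non-processed region R2 to luminance zero,
    i.e. make the processed region R3 uniformly black."""
    x, y, w, h = region
    out = np.zeros_like(frame)                        # R3: all-black pixels
    out[y:y + h, x:x + w] = frame[y:y + h, x:x + w]   # R2: kept unchanged
    return out

frame = np.arange(16).reshape(4, 4)
masked = reduce_outside_region(frame, (1, 1, 2, 2))
```
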
In another example, the data amount reduction processing may be processing that converts the processed region R3 in each of the plurality of frame images constituting the video into the same still image. The still image may be any image. The still image may be, for example, an image having a grid pattern. The still image may also be, for example, the image of the processed region R3 in any one frame (a reference frame) among the plurality of frame images constituting the video. In yet another example, the data amount reduction processing may be processing that deletes the processed region R3.
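A sketch of the still-image variant, again assuming NumPy luminance frames and a rectangular non-processed region R2, with the reference frame chosen as the first frame (an illustrative choice):

```python
import numpy as np

def freeze_region_r3(frames, region, ref_index=0):
    """Replace the processed region R3 (everything outside `region`) in every
    frame with the corresponding pixels of the reference frame, so that R3
    becomes the same still image in all frames."""
    x, y, w, h = region
    ref = frames[ref_index]
    out = []
    for f in frames:
        g = ref.copy()                             # start from the still image
        g[y:y + h, x:x + w] = f[y:y + h, x:x + w]  # keep live pixels in R2
        out.append(g)
    return out

frames = [np.full((4, 4), i) for i in range(3)]
frozen = freeze_region_r3(frames, (1, 1, 2, 2))
```
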
As shown in FIG. 1, the encoder 24 encodes the processed video PV generated by the video processing unit 23 so that the data amount of the processed video PV is reduced. Specifically, the encoder 24 encodes the processed video PV so that its data amount becomes smaller than the data amount required to transmit it unencoded, in accordance with the processing method applied by the video processing unit 23 when generating the processed video PV. The encoder 24 may encode the processed video PV using an MPEG (Moving Picture Experts Group) video data compression format, or using any other format. For example, when the video processing unit 23 has performed the data amount reduction processing that converts the processed region R3 in each of the plurality of frame images constituting the video into the same still image, the encoder 24 may encode the processed video PV using inter-frame prediction.
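The benefit of inter-frame prediction in this case can be illustrated with a simplified residual computation. This is not an actual MPEG encoder, only a sketch of why a frozen region R3 costs almost nothing: when R3 is the same still image in consecutive frames, the frame-to-frame difference is zero everywhere outside R2.

```python
import numpy as np

rng = np.random.default_rng(0)
still = rng.integers(0, 256, size=(4, 4))   # the frozen processed region R3

f1 = still.copy()
f2 = still.copy()
f1[1:3, 1:3] = 10     # live content in the non-processed region R2, frame 1
f2[1:3, 1:3] = 200    # live content in R2, frame 2

residual = f2 - f1          # simplified inter-frame prediction residual
outside_r2 = residual.copy()
outside_r2[1:3, 1:3] = 0    # ignore R2; what remains is the R3 residual
```
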
The transmission unit 25 transmits the processed video PV encoded by the encoder 24 and the setting information held in the display setting holding unit 22 to the video display device 3. The transmission unit 25 may transmit the encoded processed video PV to the video display device 3 using a communication protocol such as UDP (User Datagram Protocol). The transmission unit 25 may transmit the encoded processed video PV and the setting information to the video display device 3 using a wireless communication network such as LTE, using any other wireless communication standard, or using a wired communication standard.
The video display device 3 includes a receiving unit 31, a decoder 32, a display control unit 33, and a display 34.
The receiving unit 31 receives the encoded processed video PV and the setting information transmitted from the transmission unit 25 of the video processing device 2. In doing so, the receiving unit 31 can use the communication protocol used for transmission by the transmission unit 25. The receiving unit 31 may receive the encoded processed video PV and the setting information using the same communication method as that used by the transmission unit 25, or using a different communication method.
The decoder 32 outputs the processed video PV by decoding the encoded processed video PV received by the receiving unit 31.
The display control unit 33 causes the display 34 to display the processed video PV output by the decoder 32, based on the setting information received by the receiving unit 31. Specifically, the display control unit 33 causes the display 34 to display the display area R1 of the processed video PV, located at the position indicated in the setting information, at the magnification indicated in the setting information. As described above, the display area R1 in the processed video PV is the non-processed region R2, to which no data amount reduction processing has been applied by the video processing unit 23. Therefore, the display control unit 33 can cause the display 34 to display the video corresponding to the display area R1 of the video CV generated by the camera 1.
The display 34 displays the display area R1 of the processed video PV, that is, the display area R1 of the video CV, under the control of the display control unit 33.
<Operation of the video processing device>
Here, the video processing operation of the video processing device 2 according to the first embodiment will be described with reference to FIG. 4. FIG. 4 is a flowchart showing an example of the video processing operation of the video processing device 2 according to the first embodiment. The video processing operation of the video processing device 2 described with reference to FIG. 4 corresponds to the video processing method according to the first embodiment.
In step S11, the acquisition unit 21 receives the video CV from the camera 1.
In step S12, the video processing unit 23 generates the processed video PV by processing the video CV, based on the setting information held in the display setting holding unit 22. In doing so, the video processing unit 23 applies no data amount reduction processing to the non-processed region R2, which is based on the display area R1 in the video CV, and applies data amount reduction processing to the processed region R3, which differs from the non-processed region R2. The data amount reduction processing may be processing that makes a pixel feature of the processed region R3 uniform, or processing that converts the processed region R3 in each of the plurality of frame images constituting the video CV into the same still image.
In step S13, the encoder 24 encodes the processed video PV generated in step S12 so that the data amount of the processed video PV is reduced. If, in step S12, the video processing unit 23 performed the data amount reduction processing that converts the processed region R3 in each of the plurality of frame images constituting the video into the same still image, the encoder 24 may encode the processed video PV using inter-frame prediction in step S13.
In step S14, the transmission unit 25 transmits the processed video PV encoded by the encoder 24 and the setting information held by the display setting holding unit 22 to the video display device 3 via the communication network. As described above, the setting information includes the position of the display area R1 in the video CV and the magnification at which the display area R1 is displayed on the display 34.
<Operation of the video display device>
Next, the video display operation of the video display device 3 according to the first embodiment will be described with reference to FIG. 5. FIG. 5 is a flowchart showing an example of the video display operation of the video display device 3 according to the first embodiment. The video display operation of the video display device 3 described with reference to FIG. 5 corresponds to the video display method according to the first embodiment.
In step S21, the receiving unit 31 receives the encoded processed video PV and the setting information transmitted from the transmission unit 25 of the video processing device 2.
In step S22, the decoder 32 outputs the processed video PV by decoding the encoded processed video PV received by the receiving unit 31.
In step S23, the display control unit 33 causes the display 34 to display the processed video PV output by the decoder 32, based on the setting information received by the receiving unit 31.
As described above, the video processing device 2 of the first embodiment generates the processed video PV by applying data amount reduction processing to the processed region R3 while applying none to the non-processed region R2, and encodes the processed video PV so that its data amount is reduced. This reduces the amount of video data transmitted to the video display device 3. It is therefore possible to prevent delayed reception of the video by the video display device 3 from causing problems such as delays in the display of the video by the video display device 3.
Further, when the video processing unit 23 has performed the data amount reduction processing that converts the processed region R3 in each of the plurality of frame images constituting the video into the same still image, the encoder 24 of the video processing device 2 of the first embodiment may encode the processed video PV using inter-frame prediction. In a processed video PV in which the processed region R3 is the same still image in every frame image, there is no difference in the processed region R3 between consecutive frame images. Therefore, when the encoder 24 encodes such a processed video PV using inter-frame prediction, the data amount of the processed video PV is greatly reduced. This further suppresses delayed reception of the video by the video display device 3 and the resulting problems, such as delays in the display of the video by the video display device 3.
<<Second Embodiment>>
The overall configuration of the second embodiment will be described with reference to FIG. 6. FIG. 6 is a functional block diagram of the video processing system 101 according to the second embodiment of the present invention.
As shown in FIG. 6, the video processing system 101 of the second embodiment includes a camera 4, a video processing device 5, and a plurality of video display devices 61, 62, ..., 6n (n is an integer). Hereinafter, each of the plurality of video display devices 61, 62, ..., 6n may be referred to as a video display device 6k (k is an integer from 1 to n). The camera 4 is the same as the camera 1 of the first embodiment.
In the second embodiment and in the third embodiment described in detail later, the video processing system 101 need not include the camera 4; in such a configuration, the video processing device 5 may generate the video by any method, such as computer graphics. The video processing device 5 may also acquire a video generated by another information processing device. Hereinafter, "acquiring a video" may include the video processing device 5 generating a video, as well as acquiring a video generated by the camera 4 or by an information processing device. In the following, as an example, the case of acquiring a video generated by the camera 4 is described.
<Functional configuration of the video processing device>
The video processing device 5 includes an acquisition unit 51, a display setting holding unit 52, a video processing unit 53, an encoder 54, a transmission unit 55, and a placement information acquisition unit 56.
The acquisition unit 51 is the same as the acquisition unit 21 of the first embodiment.
The placement information acquisition unit 56 acquires placement information on the placement, such as the position, orientation, size, and shape, of the display 6k4 (described in detail later) of each video display device 6k. As shown in FIG. 7, the displays 6k4 of the plurality of video display devices 6k together constitute one virtual display VD. In the example shown in FIG. 7, n = 8 and displays 614 to 684 are shown, but the number n of displays 6k4 is not limited to 8.
For example, the placement information acquisition unit 56 may acquire placement information including a position measured based on satellite signals received by a GPS (Global Positioning System) receiver attached to the display 6k4. The placement information acquisition unit 56 may acquire placement information including the direction indicated by a compass attached to the display 6k4. The placement information acquisition unit 56 may acquire placement information including the position of the display 6k4 measured by an infrared camera that irradiates an infrared reflecting element attached to the display 6k4 with infrared light and measures the position based on the infrared light reflected from that element. The placement information acquisition unit 56 may also acquire placement information including the position of the display 6k4 measured based on an image captured by a camera that receives visible light. The placement information acquisition unit 56 is not limited to these methods and can acquire the placement information by any method.
As in the first embodiment, the display setting holding unit 52 holds setting information used by the displays 6k4 to display video. In the second embodiment, as in the first embodiment, the setting information includes the position of the display area R1-k in the video CV and the magnification at which the display area R1-k is displayed on the display 6k4. The position of the display area R1 in the present embodiment is the position of the display area R1-k to be displayed on each display 6k4. The position and magnification of the display area R1-k are determined from the placement information acquired by the placement information acquisition unit 56.
The video processing unit 53 processes the video CV to generate the processed video PV, based on the setting information held in the display setting holding unit 52. Specifically, as shown in FIG. 8, the video processing unit 53 determines, in the video space, the display area R1-k of each display 6k4 in the video CV, based on the setting information determined from the placement information acquired by the placement information acquisition unit 56.
In the example shown in FIG. 8, the video processing unit 53 determines, in the video CV, the display area R1-1 to be displayed on the display 614, the display area R1-2 to be displayed on the display 624, and the display area R1-3 to be displayed on the display 634. The video processing unit 53 also determines, in the video CV, the display area R1-4 to be displayed on the display 644, the display area R1-5 to be displayed on the display 654, and the display area R1-6 to be displayed on the display 664. The video processing unit 53 further determines, in the video CV, the display area R1-7 to be displayed on the display 674 and the display area R1-8 to be displayed on the display 684.
The video processing unit 53 also determines the non-processed regions R2-k based on the respective display areas R1-k of the displays 6k4. In one example, as shown in FIG. 8, the video processing unit 53 may determine each display area R1-k to be the non-processed region R2-k.
The video processing unit 53 also determines the region of the video CV that differs from all of the non-processed regions R2-k to be the processed region R3.
The video processing unit 53 generates the processed video PV by applying data amount reduction processing to the processed region R3 of the video CV while applying no data amount reduction processing to the non-processed regions R2-k of the video CV.
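For illustration, this per-display masking can be sketched by keeping the union of the non-processed regions R2-k and blacking out everything else (assuming, as before, rectangular regions and NumPy luminance frames; both are illustrative assumptions):

```python
import numpy as np

def reduce_outside_regions(frame, regions):
    """Keep pixels inside any non-processed region R2-k (one rectangle per
    display 6k4) and black out the processed region R3, i.e. everything
    that lies outside all of the R2-k rectangles."""
    keep = np.zeros(frame.shape, dtype=bool)
    for x, y, w, h in regions:
        keep[y:y + h, x:x + w] = True
    return np.where(keep, frame, 0)

frame = np.ones((4, 6), dtype=int)
pv = reduce_outside_regions(frame, [(0, 0, 2, 2), (4, 2, 2, 2)])
```
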
The encoder 54 is the same as the encoder 24 of the first embodiment.
The transmission unit 55 is the same as the transmission unit 25 of the first embodiment.
<Functional configuration of the video display device>
As shown in FIG. 6, the video display device 6k includes a receiving unit 6k1, a decoder 6k2, a display control unit 6k3, and a display 6k4. The receiving unit 6k1, the decoder 6k2, the display control unit 6k3, and the display 6k4 are the same as the receiving unit 31, the decoder 32, the display control unit 33, and the display 34, respectively, of the video display device 3 of the first embodiment.
<Operation of the video processing device>
Here, the video processing operation of the video processing device 5 according to the second embodiment will be described with reference to FIG. 9. FIG. 9 is a flowchart showing an example of the video processing operation of the video processing device 5 according to the second embodiment. The video processing operation of the video processing device 5 described with reference to FIG. 9 corresponds to the video processing method according to the second embodiment.
In step S31, the acquisition unit 51 acquires the video CV from the camera 4.
In step S32, the placement information acquisition unit 56 acquires the placement information of the display 6k4 of each video display device 6k.
In step S33, the display setting holding unit 52 holds the setting information determined based on the placement information acquired by the placement information acquisition unit 56.
In step S34, the video processing unit 53 determines the non-processed region R2 and the processed region R3 based on the display area R1 corresponding to the setting information held in the display setting holding unit 52, and generates the processed video PV by processing the video CV. In doing so, the video processing unit 53 applies no data amount reduction processing to the non-processed region R2 and applies data amount reduction processing to the processed region R3, which differs from the non-processed region R2.
In step S35, the encoder 54 encodes the processed video PV generated in step S34.
In step S36, the transmission unit 55 transmits the processed video PV encoded by the encoder 54 and the setting information held by the display setting holding unit 52 to each video display device 6k via the communication network.
<Operation of the video display device>
The video display operation of each video display device 6k of the second embodiment is the same as the video display operation of the video display device 3 according to the first embodiment.
As described above, according to the second embodiment, the video processing device 5 acquires the placement information of the displays 6k4 of the plurality of video display devices 6k and, based on that placement information, determines the display areas R1-k of the video CV to be displayed on the respective displays 6k4. Therefore, even when the plurality of video display devices 6k constituting one virtual display VD each display a region based on their respective display areas R1-k of the video CV, the amount of video data transmitted to the video display devices 6k can be reduced.
In the second embodiment, the video processing unit 53 may refrain from applying data amount reduction processing only to the non-processed region R2-k of one of the displays 614 to 6n4. In this case, the video processing unit 53 applies data amount reduction processing to the region that differs from that non-processed region R2-k, treating it as the processed region R3.
For example, as shown in FIG. 10, the video processing unit 53 may generate a processed video PV for the display 614 by applying no data amount reduction processing to the non-processed region R2-1 of the display 614 and applying data amount reduction processing to the region that differs from the non-processed region R2-1 of the display 614, treating it as the processed region R3. The encoder 54 then encodes the processed video PV for the display 614, and the transmission unit 55 transmits the encoded processed video PV to the video display device 61. Similarly, the video processing unit 53 may generate a processed video PV for the display 624 by applying no data amount reduction processing to the portion included in the non-processed region R2-2 of the display 624 and applying data amount reduction processing to the processed region R3 that differs from the non-processed region R2-2 of the display 624. The encoder 54 then encodes the processed video PV for the display 624, and the transmission unit 55 transmits the encoded processed video PV to the video display device 62. In the same manner, the video processing unit 53 may generate processed videos PV for each of the displays 634 to 6n4. The encoder 54 then encodes the processed videos PV for the displays 634 to 6n4, and the transmission unit 55 transmits the encoded processed videos PV to the video display devices 63 to 6n, respectively.
<<Third Embodiment>>
The overall configuration of the third embodiment will be described with reference to FIG. 11. FIG. 11 is a functional block diagram of the video processing system 102 according to the third embodiment of the present invention.
As shown in FIG. 11, the video processing system 102 of the third embodiment includes, like the video processing system 101 of the second embodiment, a camera 4, a video processing device 5, and a plurality of video display devices 61, 62, ..., 6n. The camera 4 is the same as the camera 4 of the second embodiment.
<Functional configuration of the video processing device>
The video processing device 5 includes an acquisition unit 51, a display setting holding unit 52, a video processing unit 53, an encoder 54, a transmission unit 55, and a placement information determination unit 57. The acquisition unit 51, the encoder 54, and the transmission unit 55 are the same as the acquisition unit 51, the encoder 54, and the transmission unit 55, respectively, of the second embodiment.
 The arrangement information determination unit 57 determines the target position TP of each of the plurality of displays 6k4 based on the video CV captured by the camera 4. Specifically, the arrangement information determination unit 57 first extracts, from the video CV acquired by the acquisition unit 51, the subject area R4 in which the image OBI of the main subject OB is captured. The arrangement information determination unit 57 can extract the subject area R4, for example, by taking the difference between the video CV and a background image generated by imaging the background, which is a subject different from the main subject OB, while the main subject OB is absent from the imaging range. The arrangement information determination unit 57 is not limited to this and may extract the subject area R4 by any method. When the video is not generated by a camera but is, for example, generated by computer graphics, the subject area R4 may be the area in which the main part of the video CV is displayed. The main part can be, for example, the part of the video CV that changes more than the other parts.
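As an illustrative sketch only (not part of the disclosure: the function name, the threshold value, and the representation of frames as NumPy arrays are assumptions), the background-difference extraction of the subject area R4 could look like:

```python
import numpy as np

def extract_subject_region(frame, background, threshold=30):
    """Extract a binary mask of the subject area R4 by background
    subtraction: pixels whose absolute difference from the background
    image exceeds the threshold are treated as part of the subject."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    if diff.ndim == 3:              # color frames: take the max over channels
        diff = diff.max(axis=2)
    return diff > threshold         # boolean mask, True inside R4
```

Any segmentation method could be substituted here, as the specification itself notes.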
 Next, the arrangement information determination unit 57 calculates the center of gravity of the subject area R4. The arrangement information determination unit 57 can calculate the center of gravity of the subject area R4, for example, by taking the arithmetic mean of the coordinates of the subject area R4. The method is not limited to this, and the center of gravity of the subject area R4 may be calculated by any method.
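The arithmetic-mean computation above can be sketched as follows (the mask representation and function name are assumptions, not part of the specification):

```python
import numpy as np

def region_centroid(mask):
    """Center of gravity of the subject area R4, computed as the
    arithmetic mean of the (row, column) coordinates of the pixels
    belonging to the area."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        raise ValueError("empty subject area")
    return float(rows.mean()), float(cols.mean())
```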
 The arrangement information determination unit 57 repeats the process of calculating the center of gravity of the subject area R4 every time the video CV is updated. The arrangement information determination unit 57 then calculates the amount of change in the center of gravity of the subject area R4 between successive updates of the video CV.
 The arrangement information determination unit 57 determines the target position TP of each display 6k4 based on the subject area R4. Specifically, the arrangement information determination unit 57 determines the target positions TP of the displays 6k4 such that the single virtual display VD formed by the displays 614, 624, ..., and 6n4 displays the subject area R4.
 Thereafter, when the video CV is updated and the amount of change in the center of gravity is calculated, the arrangement information determination unit 57 updates the target position TP of each display 6k4 based on that amount of change. Specifically, the arrangement information determination unit 57 updates the target position TP by adding the amount of change in the center of gravity to the current target position TP. From then on, every time the amount of change in the center of gravity of the subject area R4 is calculated for an updated video CV, the arrangement information determination unit 57 repeats updating the target position TP of each display 6k4 based on that amount of change.
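The update rule described above (add the centroid displacement to each current target position) can be sketched as follows; the tuple-based coordinate representation is an assumption:

```python
def update_target_positions(target_positions, centroid_delta):
    """Update each display's target position TP by adding the change in
    the center of gravity of the subject area R4 between two successive
    frames of the video CV to the current target position."""
    dy, dx = centroid_delta
    return [(ty + dy, tx + dx) for (ty, tx) in target_positions]
```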
 The arrangement information determination unit 57 determines the display areas R1 based on the target positions TP of the displays 6k4. Specifically, the arrangement information determination unit 57 determines the display area R1-k, including at least a part of the subject area R4, to be displayed on each display 6k4 when that display 6k4 is placed at its target position TP. For example, as shown in FIG. 8, the arrangement information determination unit 57 determines the display areas R1-1 to R1-8 so that each includes a part of the subject area R4 representing the image OBI of the subject. The arrangement information determination unit 57 also determines the enlargement ratio at which each display 6k4 displays its display area R1-k.
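One simple way to realize the assignment above, assuming (as a sketch, not as the claimed method) that the displays tile the subject area in a regular grid, is to split the bounding box of R4 into one display area R1-k per display:

```python
def split_display_areas(subject_bbox, rows, cols):
    """Divide the bounding box of the subject area R4 into a rows x cols
    grid of display areas R1-k, one per display of the virtual display
    VD.  Each area is returned as (x, y, width, height) in video-space
    coordinates."""
    x, y, w, h = subject_bbox
    tile_w, tile_h = w / cols, h / rows
    return [(x + c * tile_w, y + r * tile_h, tile_w, tile_h)
            for r in range(rows) for c in range(cols)]
```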
 As in the second embodiment, the display setting holding unit 52 holds the setting information used by the displays 6k4 to display the video. In the third embodiment, the setting information includes the position, in the video CV, of the display area R1 corresponding to the target position TP of the display 6k4 determined by the arrangement information determination unit 57, and the enlargement ratio of the display area R1 when the video display device 6k displays the display area R1.
 As in the second embodiment, the video processing unit 53 generates a processed video PV by processing the video CV.
 Specifically, the video processing unit 53 determines the non-processed area R2-k based on the display area R1-k of each display 6k4. In one example, as in the second embodiment, the video processing unit 53 may determine the display area R1-k itself to be the non-processed area R2-k. In another example, the video processing unit 53 may determine the non-processed area R2-k to be an area composed of the display area R1-k and an area adjacent to the display area R1-k.
 Here, another example in which the video processing unit 53 determines the non-processed area R2 will be described in detail with reference to FIG. 12. FIG. 12 is an enlarged view of one of the display areas R1-k shown in FIG. 8. As shown in FIG. 12, the video processing unit 53 determines the display area R1-k to be displayed on the display 6k4 based on the arrangement information acquired by the arrangement information acquisition unit 56. Here, an error may occur between the target position TP of the display 6k4 and the actual position of the display 6k4 after it is driven based on the target position TP by the drive unit 6k5 of the video display device 6k, which will be described in detail later. In this case, while the video display device 6k would cause the display 6k4 to display the display area R1-k based on the target position TP, the actual position of the display 6k4 may make it appropriate to display the area a indicated by the dash-dot line. In such a case, for the video display device 6k to correct the area displayed on the display 6k4 from the display area R1-k to the area a based on the error, the area a must not have been subjected to the data amount reduction processing. Therefore, as shown in FIG. 12, the video processing unit 53 determines a non-processed area R2-k that also includes the area a, specifically, an area composed of the display area R1-k and an area adjacent to the display area R1-k.
 For example, when the positional error of the display 6k4 is expected to be the distance d, the non-processed area R2-k can include, as the area adjacent to the display area R1-k, the area extending from the outer edge of the display area R1-k out to the distance d1, as shown in FIG. 12. The distance d1 is the distance in the video space corresponding to the distance d in the real space. For example, when the display is a 50 cm × 50 cm square and an error of 10 cm is expected in the position of the display, the non-processed area R2-k is the area of the video CV corresponding to a 70 cm × 70 cm square. Further, when the accuracy of the display angle indicated by the arrangement information is θ°, the area swept by the display area R1-k when the display is rotated by ±θ° can be taken as the non-processed area R2-k. For example, when an error of 5° is expected in the display angle, the non-processed area R2-k is the area of the video CV covered by the display 6k4 as it is rotated up to 5° away from the angle indicated by the arrangement information.
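For the translational case, the margin computation can be sketched as follows (the rectangle representation and the centimeter-to-pixel conversion factor are assumptions); with a 50 × 50 area and a 10 cm error at a 1 px/cm scale, this reproduces the 70 × 70 example above:

```python
def non_processed_area(display_area, position_error, px_per_cm):
    """Expand the display area R1-k by the expected positional error of
    the display to obtain the non-processed area R2-k.  `display_area`
    is (x, y, width, height) in video-space pixels; `position_error` is
    the real-space error d in centimeters; `px_per_cm` converts it to
    the video-space distance d1."""
    x, y, w, h = display_area
    d1 = position_error * px_per_cm
    return (x - d1, y - d1, w + 2 * d1, h + 2 * d1)
```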
 Then, as shown in FIG. 13, the video processing unit 53 leaves the non-processed area R2, based on the display area R1 in the video CV, free of the data amount reduction processing and applies the data amount reduction processing to the processed area R3 different from the non-processed area R2.
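One concrete form of this step, assuming (as a sketch) that the data amount reduction makes the processed area uniform as in claim 2 and that frames are NumPy arrays, is:

```python
import numpy as np

def make_processed_frame(frame, non_processed_mask, fill_value=128):
    """Generate one frame of the processed video PV: pixels inside the
    non-processed area R2 are copied unchanged, while the processed
    area R3 (everything else) is flattened to a single uniform value so
    that the subsequent encoding can compress it heavily."""
    pv = np.full_like(frame, fill_value)
    pv[non_processed_mask] = frame[non_processed_mask]
    return pv
```

Replacing R3 in every frame with the same still image, as in claim 3, would be an alternative reduction with the same interface.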
 The transmission unit 55 transmits the processed video PV encoded by the encoder 54 and the setting information held in the display setting holding unit 52 to the video display device 6k. The transmission unit 55 also transmits drive information indicating the target position TP of each of the plurality of displays 6k4 to the video display device 6k that includes that display 6k4.
 <Functional configuration of the video display device>
 The video display device 6k includes a reception unit 6k1, a decoder 6k2, a display control unit 6k3, a display 6k4, and a drive unit 6k5. The decoder 6k2 is the same as the decoder 32 of the video display device 3 of the first embodiment.
 Like the reception unit 6k1 of the video display device 6k of the second embodiment, the reception unit 6k1 receives the encoded processed video PV and the setting information. The reception unit 6k1 also receives the drive information transmitted from the transmission unit 55 of the video processing device 5.
 The display control unit 6k3 causes the display 6k4 to display the display area R1-k included in the non-processed area R2 of the processed video PV received by the reception unit 6k1 and decoded by the decoder 6k2. At this time, the display control unit 6k3 may correct the display area R1-k to be displayed on the display 6k4 based on the target position TP included in the drive information and the actual position of the display 6k4.
 The display 6k4 displays the area included in the non-processed area R2 of the processed video PV under the control of the display control unit 6k3.
 The drive unit 6k5 drives the display 6k4. Specifically, the drive unit 6k5 drives the display 6k4 so as to place the display 6k4 at the target position TP indicated by the drive information received by the reception unit 6k1.
 <Operation of the video processing device>
 Here, the video processing operation of the video processing device 5 according to the third embodiment will be described with reference to FIG. 14. FIG. 14 is a flowchart showing an example of the video processing operation of the video processing device 5 according to the third embodiment. The video processing operation of the video processing device 5 described with reference to FIG. 14 corresponds to the video processing method according to the third embodiment.
 In step S41, the acquisition unit 51 acquires the video CV from the camera 4.
 In step S42, the video processing unit 53 determines the subject area R4 in the video CV.
 In step S43, the video processing unit 53 determines the target position TP of each display 6k4 based on the subject area R4.
 In step S44, the video processing unit 53 determines the position, in the video CV, of the display area R1-k to be displayed on the display 6k4 and the enlargement ratio of the display area R1-k when it is displayed on the display 6k4, and the display setting holding unit 52 holds the setting information including the position and the enlargement ratio of the display area R1-k.
 In step S45, the video processing unit 53 determines the non-processed area R2 and the processed area R3 based on the display area R1-k at the target position TP of the display 6k4. At this time, the non-processed area R2 may be composed of the display area R1 and an area adjacent to the display area R1.
 In step S46, the video processing unit 53 generates the processed video PV by leaving the non-processed area R2 free of the data amount reduction processing and applying the data amount reduction processing to the processed area R3, which is different from the non-processed area R2.
 In step S47, the encoder 54 encodes the processed video PV generated in step S46.
 In step S48, the transmission unit 55 transmits the drive information indicating the target position TP of each display 6k4 to the video display device 6k.
 In step S49, the transmission unit 55 transmits the processed video PV encoded by the encoder 54 and the setting information held by the display setting holding unit 52 to the video display device 6k.
 <Operation of the video display device>
 The video display operation of each video display device 6k of the third embodiment is the same as the video display operation of the video display device 6k according to the second embodiment. In addition, the drive unit 6k5 drives the display 6k4 based on the drive information received by the reception unit 6k1. Then, with the display 6k4 at the position to which it has been driven by the drive unit 6k5, the display control unit 6k3 causes the display 6k4 to display the video, as in the second embodiment.
 As described above, according to the third embodiment, the video processing device 5 determines the display area R1-k to be displayed on the display 6k4 and the target position TP of the display 6k4 based on the subject area R4 in the video CV. The video processing device 5 then transmits the drive information including the target position TP to the video display device 6k, whereby the display 6k4 is driven to the target position TP. Therefore, even when each of the plurality of video display devices 6k forming the single virtual display VD displays an area based on its own display area R1-k in the video CV, the amount of video data transmitted to the video display devices 6k can be reduced.
 Further, according to the third embodiment, the non-processed area R2 may be composed of the display area R1 and an area adjacent to the display area R1. As a result, even if the actual position of the display 6k4 deviates from the target position TP, the video processing unit 53 does not apply the data amount reduction processing to the area displayed by the display 6k4 at its actual position. Therefore, when the video display device 6k corrects the display area R1 to be displayed on the display 6k4 within the processed video PV received from the video processing device 5, it can still display video that has not been subjected to the data amount reduction processing.
 <Program>
 A computer capable of executing program instructions can also be used to function as each of the video processing device 2, the video display device 3, the video processing device 5, and the video display device 6 described above. FIG. 15 is a block diagram showing the schematic configuration of a computer 103 functioning as the video processing device 2, the video display device 3, the video processing device 5, or the video display device 6. Here, the computer 103 may be a general-purpose computer, a dedicated computer, a workstation, a PC (Personal Computer), an electronic notepad, or the like. The program instructions may be program code, code segments, or the like for executing the necessary tasks.
 As shown in FIG. 15, the computer 103 includes a processor 110, a ROM (Read Only Memory) 120, a RAM (Random Access Memory) 130, a storage 140, an input unit 150, an output unit 160, and a communication interface (I/F) 170. These components are communicably connected to one another via a bus 180. The processor 110 is specifically a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a GPU (Graphics Processing Unit), a DSP (Digital Signal Processor), a SoC (System on a Chip), or the like, and may be composed of a plurality of processors of the same or different types.
 The processor 110 controls each of the above components and executes various arithmetic processes. That is, the processor 110 reads a program from the ROM 120 or the storage 140 and executes the program using the RAM 130 as a work area. The processor 110 controls each of the above components and performs various arithmetic processes according to the program stored in the ROM 120 or the storage 140. In the present embodiment, the program according to the present disclosure is stored in the ROM 120 or the storage 140.
 The program may be recorded on a recording medium readable by the computer 103. Using such a recording medium, the program can be installed on the computer 103. Here, the recording medium on which the program is recorded may be a non-transitory recording medium. The non-transitory recording medium is not particularly limited, but may be, for example, a CD-ROM, a DVD-ROM, a USB (Universal Serial Bus) memory, or the like. The program may also be downloaded from an external device via a network.
 The ROM 120 stores various programs and various data. The RAM 130 temporarily stores programs or data as a work area. The storage 140 is composed of an HDD (Hard Disk Drive) or an SSD (Solid State Drive) and stores various programs, including the operating system, and various data.
 The input unit 150 includes one or more input interfaces that accept input operations from the user and acquire information based on the user's operations. For example, the input unit 150 is a pointing device, a keyboard, a mouse, or the like, but is not limited to these.
 The output unit 160 includes one or more output interfaces that output information. For example, the output unit 160 is a display that outputs information as video or a speaker that outputs information as audio, but is not limited to these. When the output unit 160 is a touch-panel display, it also functions as the input unit 150.
 The communication interface 170 is an interface for communicating with external devices.
 Although the above embodiments have been described as representative examples, it will be apparent to those skilled in the art that many changes and substitutions can be made within the spirit and scope of the present disclosure. Therefore, the present invention should not be construed as being limited by the above embodiments, and various modifications and changes are possible without departing from the scope of the claims. For example, a plurality of the configuration blocks shown in the configuration diagrams of the embodiments may be combined into one, or one configuration block may be divided.
1, 4          Camera
2, 5          Video processing device
3, 6k         Video display device
21, 51        Acquisition unit
22, 52        Display setting holding unit
23, 53        Video processing unit
24, 54        Encoder
25, 55        Transmission unit
31, 6k1       Reception unit
32, 6k2       Decoder
33, 6k3       Control unit
34, 6k4       Display
56            Arrangement information acquisition unit
57            Arrangement information determination unit
6k5           Drive unit
100, 101, 102 Video processing system
103           Computer
120           ROM
130           RAM
140           Storage
150           Input unit
160           Output unit
170           Communication interface
180           Bus

Claims (8)

  1.  A video processing method comprising:
     acquiring a video;
     generating a processed video by leaving a non-processed area of the video, which is based on a display area to be displayed on a display of a video display device, free of data amount reduction processing and applying the data amount reduction processing to a processed area different from the non-processed area;
     encoding the processed video so that the data amount of the processed video is reduced; and
     transmitting, to the video display device, the encoded processed video and setting information including the position of the display area in the video and the enlargement ratio of the display area when the display area is displayed on the display.
  2.  The video processing method according to claim 1, wherein the data amount reduction processing is processing that makes the feature amounts of the pixels constituting the processed area uniform.
  3.  The video processing method according to claim 1, wherein the data amount reduction processing is processing that converts the processed area in each of the plurality of frame images constituting the video into the same still image.
  4.  The video processing method according to any one of claims 1 to 3, further comprising acquiring arrangement information of a plurality of the video display devices,
     wherein generating the processed video includes:
      holding the setting information based on the arrangement information of the plurality of video display devices; and
      determining the processed video and the non-processed video based on the display area according to the setting information.
  5.  The video processing method according to any one of claims 1 to 3, further comprising:
     determining a target position of each of a plurality of the displays based on the video; and
     transmitting drive information indicating the target position of each of the plurality of displays to the video display device including that display.
  6.  The video processing method according to claim 5, wherein the non-processed area is composed of the display area and an area adjacent to the display area.
  7.  A video processing device comprising:
     an acquisition unit that acquires a video;
     a video processing unit that generates a processed video by leaving a non-processed area of the video, which is based on a display area to be displayed on a display of a video display device, free of data amount reduction processing and applying the data amount reduction processing to a processed area different from the non-processed area;
     an encoder that encodes the processed video so that the transmission amount of the processed video is reduced; and
     a transmission unit that transmits, to the video display device, the processed video encoded by the encoder and setting information including the position of the display area in the video and the enlargement ratio of the display area when the display area is displayed on the display.
  8.  A program for causing a computer to function as the video processing device according to claim 7.
PCT/JP2020/046825 2020-12-15 2020-12-15 Video processing method, video processing device, and program WO2022130514A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/046825 WO2022130514A1 (en) 2020-12-15 2020-12-15 Video processing method, video processing device, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/046825 WO2022130514A1 (en) 2020-12-15 2020-12-15 Video processing method, video processing device, and program

Publications (1)

Publication Number Publication Date
WO2022130514A1 true WO2022130514A1 (en) 2022-06-23

Family

ID=82059218

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/046825 WO2022130514A1 (en) 2020-12-15 2020-12-15 Video processing method, video processing device, and program

Country Status (1)

Country Link
WO (1) WO2022130514A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025074591A1 (en) * 2023-10-05 2025-04-10 日本電信電話株式会社 Information presentation system and information presentation method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1132296A (en) * 1997-07-11 1999-02-02 Fuji Photo Film Co Ltd Image information recording medium
JP2000165876A (en) * 1998-11-30 2000-06-16 Sharp Corp Image processor
JP2013232724A (en) * 2012-04-27 2013-11-14 Fujitsu Ltd Moving image processing device, moving image processing method, and moving image processing program
JP2015114424A (en) * 2013-12-10 2015-06-22 株式会社東芝 Electronic equipment, display device, method, and program

Similar Documents

Publication Publication Date Title
US10869059B2 (en) Point cloud geometry compression
US11538196B2 (en) Predictive coding for point cloud compression
US10437545B2 (en) Apparatus, system, and method for controlling display, and recording medium
JP2019534606A (en) Method and apparatus for reconstructing a point cloud representing a scene using light field data
JP2019530296A (en) Method and apparatus with video encoding function with syntax element signaling of rotation information and method and apparatus with associated video decoding function
US20190289203A1 (en) Image processing apparatus, image capturing system, image processing method, and recording medium
US8982135B2 (en) Information processing apparatus and image display method
WO2019124248A1 (en) Image processing device, content processing device, content processing system, and image processing method
US10531082B2 (en) Predictive light-field compression
WO2020063246A1 (en) Point cloud encoding method, point cloud decoding method, encoder, and decoder
US11102448B2 (en) Image capturing apparatus, image processing system, image processing method, and recording medium
WO2021035756A1 (en) Aircraft-based patrol inspection method and device, and storage medium
US9269281B2 (en) Remote screen control device, remote screen control method, and recording medium
JP6640876B2 (en) Work support device, work support method, work support program, and recording medium
US20250131630A1 (en) Prop display method, apparatus, device, and storage medium
JP6022123B1 (en) Image generation system and image generation method
JP6521352B2 (en) Information presentation system and terminal
WO2022130514A1 (en) Video processing method, video processing device, and program
CN116636219A (en) Compressing time data using geometry-based point cloud compression
JP2018033127A (en) Method and apparatus for encoding a signal representing light field content
CN115278189A (en) Image tone mapping method and apparatus, computer readable medium and electronic device
US20230085590A1 (en) Image processing apparatus, image processing method, and program
CN112153384B (en) Image coding and decoding method and device
WO2024257784A1 (en) Decoding device, decoding method, and encoding device
WO2024188090A1 (en) Video compression method and apparatus, and device and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20965902

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20965902

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP