TW201840200A - Interactive method for 3d image objects, a system, and method for post-production of 3d interactive video - Google Patents
- Publication number: TW201840200A
- Application number: TW106113465A
- Authority
- TW
- Taiwan
- Prior art keywords
- interactive
- post
- production
- movie
- film
- Prior art date
Landscapes
- Processing Or Creating Images (AREA)
Abstract
Description
本發明關於一種與影片中物件互動的方法與系統，特別是一種影片經過後製而置入立體模組物件後，可讓使用者與影片中立體物件互動的立體影像物件互動方法與系統。 The invention relates to a method and system for interacting with objects in a video, and in particular to a stereoscopic image object interaction method and system that, after three-dimensional modular objects have been inserted into a video through post-production, allows the user to interact with those three-dimensional objects in the video.
現行技術中可以提供使用者與影片中影像互動的方式例如電腦遊戲，由遊戲製作者透過邏輯設計決定影片中各種影像與遊戲者互動的規則，當遊戲互動符合特定規則時，即依照事先設定好的方式產生互動內容。而這些遊戲畫面並非真實的影像，或是僅採用部分無法執行遊戲互動的真實影像。 Existing technology offers ways for a user to interact with images in a video, such as computer games: the game designer defines, through logic design, the rules by which the various images in the game interact with the player, and when an interaction matches a given rule, interactive content is produced in a predetermined way. These game frames, however, are not real footage, or use only some real images with which game interaction cannot be performed.
其他提供影像互動的技術例如虛擬實境(Virtual Reality,VR)、擴增實境(Augmented Reality,AR)或一種混合VR與AR技術的混合實境(Mixed Reality,MR)，其中需要在一開始時製作或拍攝符合VR或AR等規格的影像，並以軟體手段設計出使用者可與其中影像互動的介面，讓使用者可以操作行動裝置、穿戴式裝置，或一般電腦裝置進行互動。 Other technologies providing image interaction include Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), which combines VR and AR. These require that images conforming to VR or AR specifications be produced or filmed from the outset, and that an interface be designed in software so that users can interact with the images by operating mobile devices, wearable devices, or ordinary computers.
現行提供使用者可以互動的影像內容皆是事先製作符合特定格式的影片，不可更動影片中的影像內容，如VR，或透過螢幕與攝影機將虛擬物件即時置入現實的AR，與事先設計(繪製)好有特定邏輯規則的虛擬空間環境，如遊戲，並沒有可以在已經錄製好的真實影片中產生互動的技術。 Currently, interactive video content is either pre-produced in a specific format whose image content cannot be altered, as in VR; or virtual objects are placed into reality in real time through a screen and camera, as in AR; or it is a pre-designed (pre-drawn) virtual environment with specific logic rules, as in games. No technology exists for creating interaction inside an already-recorded real video.
本揭露書提出一種立體影像物件互動方法與系統，以及其中立體互動影片後製方法，其目的之一能在一已經存在的原始影片中，透過後製程序置入一些提供互動效果的元素，例如一些影像物件，讓使用者可以藉由一些輸入手段，在影像播放過程中，同時產生與這些物件互動的效果，經由軟體的方式運算，產生互動畫面，達到影像物件互動，而產生即時互動影片的目的。 The present disclosure proposes a stereoscopic image object interaction method and system, together with a post-production method for stereoscopic interactive video. One of its objectives is to insert elements that provide interactive effects, such as image objects, into an existing original video through a post-production process, so that during playback the user can interact with these objects by means of various inputs; the software computes the interaction and generates interactive frames, achieving image-object interaction and producing a video with real-time interaction.
根據實施例，提供一後製影片，此後製影片係藉由後製程序將一或多個模組化物件置入於一個原始影片中，原始影片可以為一般平面影像，影片會有運鏡的變化，因此為了讓其中特定影像的位置不致因為畫面變動而無法定位，原始影片可以設有多個錨點(anchor)，可以多個錨點描述至少一實體物件，用以定位影像物件的位置，各模組化物件與原始影片中的多個錨點則具有一參考位置關係。 According to an embodiment, a post-produced video is provided, in which one or more modular objects are inserted into an original video through a post-production process. The original video may be ordinary two-dimensional footage subject to camera movement; so that the position of a specific image does not become impossible to locate as the frame changes, the original video may be provided with multiple anchors, several of which describe at least one physical object and are used to locate the positions of image objects. Each modular object has a reference positional relationship with the multiple anchors in the original video.
在立體影像物件互動方法的實施例中，一個電腦裝置接收一互動元件在一後製影片中動作的資訊，運算互動元件在後製影片中的動作，以偵測互動元件與後製影片中各模組化物件的一互動位置關係，之後，當對應模組化物件的互動位置關係進入一互動範圍時，產生一互動訊號，可以觸發互動的畫面，使得後製影片結合互動元件與對應互動訊號的一影像，形成一互動影片。 In an embodiment of the stereoscopic image object interaction method, a computer device receives information about the motion of an interactive element in a post-produced video and computes that motion in order to detect an interactive positional relationship between the interactive element and each modular object in the video. When the interactive positional relationship with a modular object enters an interaction range, an interaction signal is generated, which can trigger an interactive frame, so that the post-produced video combines the interactive element with an image corresponding to the interaction signal to form an interactive video.
以上電腦裝置可以為使用者操作的終端裝置，或是雲端系統，當使用者在終端裝置上操作時，可以在終端裝置上運算而即時產生互動，形成互動影片；或是將互動產生的訊號傳送到雲端系統，由雲端系統運算產生顯示於終端裝置的互動影像。所述互動元件為顯示於終端裝置的顯示螢幕上，可為一接受觸控、聲控、體感控制或滑鼠控制而移動位置的圖形。 The computer device may be a terminal device operated by the user, or a cloud system. When the user operates on the terminal device, the interaction may be computed on the terminal device itself and generated in real time to form an interactive video; alternatively, the signals generated by the interaction may be transmitted to a cloud system, which computes the interactive images displayed on the terminal device. The interactive element is a graphic displayed on the terminal device's screen whose position can be moved by touch, voice control, motion-sensing, or mouse control.
特別的是，在影片後製過程中，通過軟體方法識別出原始影片的運鏡變化，使得模組化物件顯示時與原始影片有一致的空間角度，可以適當地置入在其中影像的同一空間關係上。 In particular, during post-production, the camera-movement changes of the original video are identified by software, so that the modular objects are displayed at a spatial angle consistent with the original video and can be placed appropriately into the same spatial relationship as the images within it.
在互動影片製作方法的實施例中，先取得原始影片，根據原始影片的運鏡判斷其中以多個錨點描述的一實體物件的角度變化，之後根據實體物件的設置與角度變化，於實體物件上置入一或多個模組化物件，各模組化物件與原始影片中的多個錨點具有一參考位置關係，根據原始影片的運鏡判斷攝影機拍攝時移動軌跡，因此可以據以設定一或多個模組化物件的移動軌跡，並配合虛擬攝影機套用拍攝時的移動軌跡，以形成可即時互動的後製影片。 In an embodiment of the interactive video production method, the original video is first obtained, and the angular change of a physical object described by multiple anchors is determined from the camera movement of the original video. Then, according to the placement and angular change of the physical object, one or more modular objects are placed on it, each having a reference positional relationship with the multiple anchors in the original video. The camera's movement trajectory during filming is determined from the camera movement of the original video, so the movement trajectories of the modular objects can be set accordingly; a virtual camera then applies the filming trajectory, forming a post-produced video that supports real-time interaction.
實現揭露書所提出的立體影像物件互動方法，提出一種立體影像物件互動系統，系統主要有一雲端伺服器，設有一資料庫，資料庫包括原始影片庫、模組化物件庫以及後製影片庫，用以提供終端裝置顯示後製影片，透過操作一使用者介面於後製影片上進行一互動。 To implement the stereoscopic image object interaction method proposed in this disclosure, a stereoscopic image object interaction system is proposed. The system mainly comprises a cloud server with a database that includes an original video library, a modular object library, and a post-produced video library, which provide post-produced videos for display on a terminal device, where the user performs interaction on the post-produced video by operating a user interface.
為了能更進一步瞭解本發明為達成既定目的所採取之技術、方法及功效，請參閱以下有關本發明之詳細說明、圖式，相信本發明之目的、特徵與特點，當可由此得以深入且具體之瞭解，然而所附圖式僅提供參考與說明用，並非用來對本發明加以限制者。 In order to further understand the techniques, methods, and effects adopted by the present invention to achieve its intended objectives, reference is made to the following detailed description and drawings, from which the objectives, features, and characteristics of the invention may be understood in depth and in detail; the accompanying drawings, however, are provided for reference and illustration only and are not intended to limit the invention.
10‧‧‧實體物件 10‧‧‧ physical objects
12‧‧‧互動元件 12‧‧‧Interactive components
101,102,103,104,105,106‧‧‧模組化物件 101,102,103,104,105,106‧‧‧Modularized parts
111,112,113,114,115,116,117,118,119,120,121‧‧‧錨點 111,112,113,114,115,116,117,118,119,120,121‧‧‧ anchor
21‧‧‧雲端伺服器 21‧‧‧Cloud Server
23‧‧‧資料庫 23‧‧‧Database
25‧‧‧終端裝置 25‧‧‧ Terminal devices
20‧‧‧網路 20‧‧‧Network
251‧‧‧後製影片 251‧‧‧post film
30‧‧‧雲端伺服器 30‧‧‧Cloud Server
301‧‧‧運算單元 301‧‧‧ arithmetic unit
303‧‧‧記憶單元 303‧‧‧ memory unit
305‧‧‧通訊單元 305‧‧‧Communication unit
32‧‧‧資料庫 32‧‧‧Database
321‧‧‧原始影片庫 321‧‧‧ original film library
323‧‧‧模組化物件庫 323‧‧‧Modularized Parts Library
325‧‧‧後製影片庫 325‧‧‧post film library
34‧‧‧終端裝置 34‧‧‧ Terminal devices
341‧‧‧處理單元 341‧‧‧Processing unit
342‧‧‧記憶單元 342‧‧‧ memory unit
343‧‧‧通訊單元 343‧‧‧Communication unit
344‧‧‧輸入單元 344‧‧‧Input unit
345‧‧‧顯示單元 345‧‧‧ display unit
步驟S401~S411‧‧‧互動影片製作流程 Step S401~S411‧‧‧Interactive film production process
步驟S501~S517‧‧‧影像物件互動流程 Step S501~S517‧‧‧Image Object Interaction Process
圖1A顯示以本發明立體影像物件互動方法實現的互動影片實施例示意圖之一；圖1B顯示以本發明立體影像物件互動方法實現的互動影片實施例示意圖之二；圖2顯示本發明立體影像物件互動系統的架構實施例圖；圖3顯示本發明立體影像物件互動系統的各端功能與電路單元的實施例圖；圖4顯示本發明立體互動影片後製方法的實施例流程；圖5顯示本發明立體影像物件互動方法的實施例流程。 FIG. 1A is a first schematic diagram of an embodiment of an interactive video implemented by the stereoscopic image object interaction method of the present invention; FIG. 1B is a second such schematic diagram; FIG. 2 shows an embodiment of the architecture of the stereoscopic image object interaction system of the present invention; FIG. 3 shows an embodiment of the functions and circuit units at each end of the stereoscopic image object interaction system of the present invention; FIG. 4 shows the flow of an embodiment of the stereoscopic interactive video post-production method of the present invention; FIG. 5 shows the flow of an embodiment of the stereoscopic image object interaction method of the present invention.
為了針對一般平面影片提供使用者產生互動影像的功能，揭露書描述一種立體影像物件互動方法與系統，以及其立體互動影片後製方法，讓使用者可以透過電腦裝置與所顯示的影片進行互動，其中提出一個經過後製(post-production)的影片，並在適當軟體手段播放時，可以讓使用者操作其中元件以產生互動。 To provide users with interactive imagery for ordinary two-dimensional videos, this disclosure describes a stereoscopic image object interaction method and system, together with its stereoscopic interactive video post-production method, allowing the user to interact with a displayed video through a computer device. A post-production video is proposed which, when played with appropriate software, lets the user manipulate elements within it to produce interaction.
根據實施例之一，實現立體影像物件互動方法的電腦裝置中，先預備一原始影片，較佳為拍攝真實景色的平面影片，接著在此原始影片中經由後製程序置入一或多個模組化物件形成的後製影片，其中各模組化物件為獨立可變化的物件，可以為一些擬真圖案或是繪製得到的圖案，與原始影片組合，達到虛實整合的效果。後製影片由使用者以一電腦裝置進行互動，使得帶有運鏡的影片，都同樣能夠產生互動的效果。 According to one embodiment, in a computer device implementing the stereoscopic image object interaction method, an original video is first prepared, preferably a two-dimensional video of a real scene; one or more modular objects are then inserted into this original video through a post-production process to form a post-produced video. Each modular object is an independent, changeable object, which may be a photorealistic or a drawn graphic, combined with the original video to achieve a blend of the virtual and the real. The user interacts with the post-produced video through a computer device, so that even videos with camera movement can produce interactive effects.
其中方法特別的是，考慮了原始影片中影像物件因為運鏡(camera movement)產生的變化，使得當中後製置入的模組化元件也能合理地存在在一個真實影片中。其中運鏡是指拍攝原始影片時拍攝者操作攝影機鏡頭在位置上的改變，其中有幾種運鏡方式，如執行放大縮小(zoom)的促鏡，會連同其中的影像物件有放大或縮小的變化，在本發明中，後製置入的模組化物件也同步有放大或縮小的變化；運鏡包括有縱向變化的推鏡(dolly)，同樣也會造成被拍攝的影像物件有放大或縮小的變化，在本發明中，後製置入的模組化物件也會有同步變化的效果；運鏡有一種橫向變化的橫推(truck)與搖鏡(pan)的手法，會使得被拍攝影像物件平移變化，同樣地，後製置入的模組化物件也會有同步平移的變化；運鏡有一種抬鏡(tilt)的技巧，主要是攝影機機身不變，而以上下移動鏡頭拍攝影像，當影像物件上下變化時，當中的模組化物件也會同步變化；運鏡中的升鏡(pedestal)會連同攝影機一同在平台上下移動，使得被拍攝影像物件也產生變化，同理，後製置入的模組化物件也會有同步變化的效果；運鏡還有一種弧鏡(arc)的弧形變化方式，後製置入的模組化物件也會有同步變化的效果。 What is special about the method is that it accounts for the changes that image objects in the original video undergo due to camera movement, so that modular elements inserted in post-production can plausibly exist within a real video. Camera movement refers to changes in the position of the camera lens operated by the filmmaker when the original video was shot, and there are several kinds. A zoom enlarges or shrinks the image objects, and in the present invention the inserted modular objects are enlarged or shrunk in step. A dolly moves the camera longitudinally, likewise enlarging or shrinking the filmed objects, and the inserted modular objects change synchronously. A truck and a pan are laterally varying techniques that translate the filmed objects, and the inserted modular objects likewise translate synchronously. A tilt keeps the camera body fixed while the lens moves up and down to capture the image; as the image objects move vertically, the modular objects within them change synchronously. A pedestal moves the whole camera up and down on a platform, changing the filmed objects, and again the inserted modular objects change synchronously. Finally, an arc moves the camera along a curved path, and the inserted modular objects change synchronously as well.
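The camera-movement types enumerated above can be illustrated as simple geometric transforms applied per frame to an inserted object's vertices. This is a hedged sketch, not the patented implementation; the function name, the motion labels, and the single `amount` parameter are all illustrative assumptions.

```python
import numpy as np

def apply_camera_motion(points, motion, amount):
    """Apply an illustrative per-frame camera-motion transform to the
    vertices (Nx3 array) of an inserted modular object so it changes
    in step with the footage. 'motion' names and 'amount' semantics
    are hypothetical simplifications of the text's description."""
    pts = np.asarray(points, dtype=float)
    if motion in ("zoom", "dolly"):      # objects enlarge or shrink in step
        return pts * amount
    if motion in ("truck", "pan"):       # lateral translation of objects
        return pts + np.array([amount, 0.0, 0.0])
    if motion in ("tilt", "pedestal"):   # vertical translation of objects
        return pts + np.array([0.0, amount, 0.0])
    if motion == "arc":                  # curved path: rotate about the vertical axis
        c, s = np.cos(amount), np.sin(amount)
        rot = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
        return pts @ rot.T
    raise ValueError(f"unknown camera motion: {motion}")
```

In a real compositor the transform would be recovered from the tracked footage rather than supplied by hand; the point here is only that each named movement maps to a synchronous change of the inserted objects.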
在此一提的是，所述互動式立體影片的後製技術是以視角重建透視原理為基礎，以此發展出的影片圖像視覺實體互動化的技術，其中視角重建是以螢幕上的虛擬影像，或使此虛擬影像能被投射的立體實體模型，經判斷觀者的空間及深度位置改變之後，可使虛擬影像根據觀者的空間位置的改變，而在實體模表面產生上下、左右或深度的視角與空間變換，以符合實際的空間認知。 It should be noted that the post-production technique for the interactive stereoscopic video is based on the principle of perspective reconstruction, from which a technique for making the visual entities of a video image interactive is developed. In perspective reconstruction, after changes in the viewer's spatial and depth position are determined, a virtual image on a screen, or a solid three-dimensional model onto which that virtual image can be projected, can undergo vertical, horizontal, or depth changes of viewing angle and space on the model's surface according to the change in the viewer's spatial position, so as to match actual spatial perception.
在本發明採用之互動式立體影片後製技術上，在一實施方式中，結合了傳統實景影片拍攝並承襲運鏡、剪接與鏡頭語言安排等電影手法，於視角重建技術中，對於觀者的空間及深度位置改變的判讀機制，在此技術上轉變為對拍攝影片之攝影機運鏡軌跡的追蹤。透過由影片實景，產生對於立體物件造型的客觀認知與判斷，使重塑的立體模型與每張移動影格中的立體物件，進行低誤差的疊合。在後製過程中，可以實拍影片為基底，經由軟體手段，以進行實際影片中場景空間的追蹤，並反算出攝影機移動的軌跡(camera tracking)。 In the interactive stereoscopic post-production technique adopted by the present invention, one implementation combines traditional live-action filming and inherits cinematic techniques such as camera movement, editing, and shot language. Within the perspective reconstruction technique, the mechanism for interpreting changes in the viewer's spatial and depth position is here transformed into tracking the camera-movement trajectory of the filmed video. From the live footage, objective recognition and judgment of the shapes of three-dimensional objects is produced, so that the reconstructed three-dimensional model can be superimposed with low error onto the three-dimensional object in each moving frame. During post-production, the live-action video serves as the base, and software is used to track the scene space in the actual video and to back-calculate the camera's movement trajectory (camera tracking).
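The back-calculation of the camera trajectory described above can be sketched, under heavy simplifying assumptions, as recovering a per-frame similarity (scale plus translation) from tracked anchor positions by least squares. Real camera tracking solves a full projective pose; the 2D model, function names, and solver below are illustrative stand-ins only.

```python
import numpy as np

def solve_camera_motion(prev_anchors, curr_anchors):
    """Back-calculate a per-frame similarity (uniform scale + translation)
    from tracked 2D anchor positions: a toy stand-in for camera tracking."""
    p = np.asarray(prev_anchors, dtype=float)
    c = np.asarray(curr_anchors, dtype=float)
    pc, cc = p.mean(axis=0), c.mean(axis=0)
    # least-squares scale relating the centred anchor clouds
    scale = ((p - pc) * (c - cc)).sum() / ((p - pc) ** 2).sum()
    translation = cc - scale * pc
    return scale, translation

def apply_motion(points, scale, translation):
    """Move inserted objects with the recovered camera motion."""
    return np.asarray(points, dtype=float) * scale + translation
```

Given anchors tracked in two consecutive frames, the recovered motion can then be applied to the inserted modular objects so they stay superimposed on the footage with low error.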
本發明立體影像物件互動方法實現的互動影片實施例情境示意圖可以參考圖1A與圖1B。 Schematic diagrams of interactive video embodiments implemented by the stereoscopic image object interaction method of the present invention are shown in FIG. 1A and FIG. 1B.
圖1A顯示為一個顯示在電腦裝置上影片中的影像物件，包括一實體物件10，此例顯示為一個建築物，此實體物件10所描述的建築物拍攝時呈現一個視覺上的角度，而可能因為拍攝此實體物件10時運鏡的關係而顯示有如圖1B的另一個視覺角度。 FIG. 1A shows image objects displayed in a video on a computer device, including a physical object 10, here a building. The building described by physical object 10 is presented at one visual angle when filmed, and may be displayed at another visual angle, as in FIG. 1B, because of the camera movement used when filming it.
因為影片中影像物件，如此例的建築物，會因為影片變化而改變在影片中的位置，以及視覺角度，因此可以透過設計有多個錨點(anchor)而定位此影像物件。如圖中顯示的錨點111,112,113,114,115,116,117,118,119,120與121等。 Because an image object in the video, such as the building in this example, changes its position in the frame and its visual angle as the video changes, the image object can be located by designing multiple anchors, such as anchors 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, and 121 shown in the figure.
此例顯示在建築物(實體物件10)原本拍攝於一原始影片中，經過後製程序，建築物的各樓層中置入多個模組化物件101,102與103，以及經過互動設計後產生變動(如掉落)的模組化物件104,105,106，每個物件都為獨立的影像物件。而造成這些模組化物件104,105,106的變動可以是使用者操作一個互動元件12在此後製影片中移動時，根據設定的規則，當符合此規則時，有些模組化物件104,105,106即產生變動。 This example shows that the building (physical object 10) was originally filmed in an original video; through post-production, multiple modular objects 101, 102, and 103 are placed on the building's floors, along with modular objects 104, 105, and 106 that change (for example, fall) as a result of the interaction design. Each is an independent image object. The changes to modular objects 104, 105, and 106 may occur when the user moves an interactive element 12 within the post-produced video: according to the defined rules, when a rule is satisfied, certain modular objects 104, 105, 106 change.
此例所述錨點111,112,113,114,115,116,117,118,119,120,121為描述實體物件10(一或多個)的一些識別位置，這些錨點也是作為後製置入模組化物件101,102,103,104,105,106時的參考點。根據實施例之一，在一後製軟體中，先引入原始影片，利用軟體工具在動態變化的影片中設計出定位一或多個實體物件10的錨點111,112,113,114,115,116,117,118,119,120,121，之後再以軟體工具以這些錨點為參考點進行建模，在特定位置置入模組化物件，與原始影片中的多個錨點具有一參考位置關係。因此，當影片有變化時，會使得實體物件10也隨著變動，根據此參考位置關係置入的模組化物件101,102,103,104,105,106也因此隨著變動。 The anchors 111 through 121 in this example are identification positions describing the physical object(s) 10, and also serve as reference points when the modular objects 101 through 106 are inserted in post-production. According to one embodiment, in post-production software the original video is first imported, and software tools are used to design, within the dynamically changing video, the anchors 111 through 121 that locate one or more physical objects 10; modeling is then performed with these anchors as reference points, and modular objects are placed at specific positions, each having a reference positional relationship with the multiple anchors in the original video. Therefore, when the video changes, physical object 10 changes with it, and the modular objects 101 through 106 placed according to this reference positional relationship change accordingly.
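One plausible reading of the "reference positional relationship" between a modular object and the anchors is a stored offset expressed relative to the anchor set, so the object can be relocated in every frame from the tracked anchor positions alone. The following is a minimal sketch under that assumption; the normalization scheme and function names are hypothetical, not taken from the patent.

```python
import numpy as np

def register_offset(anchors, object_pos):
    """Store a modular object's position relative to the anchors: its
    offset from the anchor centroid, normalized by the anchor spread
    so the relationship survives zooms as well as translations."""
    a = np.asarray(anchors, dtype=float)
    centroid = a.mean(axis=0)
    scale = np.linalg.norm(a - centroid, axis=1).mean()
    return (np.asarray(object_pos, dtype=float) - centroid) / scale

def locate_object(anchors, offset):
    """Recover the object's position in a new frame from the tracked
    anchor positions and the stored normalized offset."""
    a = np.asarray(anchors, dtype=float)
    centroid = a.mean(axis=0)
    scale = np.linalg.norm(a - centroid, axis=1).mean()
    return centroid + np.asarray(offset, dtype=float) * scale
```

If the frame later shifts and zooms, the same stored offset reproduces the object at the correct new position relative to the anchors, which is the behavior the text describes for modular objects following the physical object.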
互動元件12為提供使用者互動的介面之一，互動元件12可以為看得到的圖案，供使用者以觸控、聲控、體感或滑鼠(或鍵盤)等輸入手段操作，並可包括任何可以控制互動元件12的手段，透過互動元件12與後製影片中的模組化物件101,102,103,104,105,106互動，如此例表示模組化物件104,105與106因為符合某個規則時，如遭到互動元件12碰撞，即產生後續互動變化，如掉落、彈開或被移動到其他位置。在另一實施例中，互動元件12可指使用者觸碰或是利用滑鼠產生的操作訊號的範圍，當此範圍與模組化物件101,102,103,104,105,106互動時符合特定規則，也能產生互動變化。 The interactive element 12 is one of the interfaces providing user interaction. It may be a visible graphic that the user operates by touch, voice control, motion sensing, or mouse (or keyboard) input, and any means of controlling the interactive element 12 may be included. Through the interactive element 12 the user interacts with the modular objects 101 through 106 in the post-produced video; in this example, when modular objects 104, 105, and 106 satisfy a certain rule, such as being struck by the interactive element 12, a subsequent interactive change occurs, such as falling, bouncing away, or being moved elsewhere. In another embodiment, the interactive element 12 may refer to the region of the operation signal produced by the user's touch or mouse; when this region interacts with modular objects 101 through 106 in accordance with a specific rule, an interactive change likewise occurs.
在背後的軟體程序就是一種影像處理程序，運算影片中每個被設定的影像的變化，如運算互動元件12在後製影片中的動作，能偵測互動元件12與後製影片中各模組化物件101,102,103,104,105,106的一互動位置關係，當對應一或多個模組化物件的互動位置關係進入系統所設定的一互動範圍，系統將產生一互動訊號，再以一軟體工具使得後製影片結合互動元件12，以及對應此互動訊號的一影像(如模組化物件的變化)，形成一互動影片。 The software behind this is an image-processing program that computes the change of every designated image in the video. For example, by computing the motion of the interactive element 12 in the post-produced video, it can detect an interactive positional relationship between the interactive element 12 and each modular object 101 through 106. When the interactive positional relationship with one or more modular objects enters an interaction range set by the system, the system generates an interaction signal; a software tool then combines the post-produced video with the interactive element 12 and an image corresponding to the interaction signal (such as the change of the modular object), forming an interactive video.
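The detection step above amounts to a proximity test between the interactive element and each modular object's interaction range. Here is a minimal sketch under that assumption; the class, its fields, and the circular range model are illustrative inventions, not the patented data structures.

```python
import math

class ModularObject:
    """Hypothetical stand-in for a modular object placed in post-production."""
    def __init__(self, name, x, y, interact_radius):
        self.name = name
        self.x, self.y = x, y
        self.interact_radius = interact_radius  # the object's interaction range

def detect_interactions(element_pos, objects):
    """Return an interaction signal (here, just the names of the objects)
    for every modular object whose interaction range the interactive
    element has entered; each signal would trigger e.g. a fall or bounce."""
    ex, ey = element_pos
    signals = []
    for obj in objects:
        if math.hypot(ex - obj.x, ey - obj.y) <= obj.interact_radius:
            signals.append(obj.name)
    return signals
```

In the running system this check would be evaluated every frame against the element's computed motion, and the returned signals would drive the composited change of the corresponding modular objects.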
圖1B接著示意顯示在相同影片中前述實體物件10因為錄製時運鏡的變化呈現了另一個視覺角度，此圖例顯示同樣如圖1A顯示的建築物，因為影片製作時運鏡產生的不同的視覺角度，也顯示使用者操作互動元件12的情況，使用者可以透過觸控或聲控輸入，或是滑鼠、鍵盤等方式控制互動元件12在後製影片上的動作，另可以行動裝置提供的體感操作手段(如利用加速度器)控制互動元件12的變化。 FIG. 1B then schematically shows the same video with the aforementioned physical object 10 at another visual angle resulting from camera movement during recording. The illustration shows the same building as FIG. 1A, seen from a different visual angle produced by the camera movement during production, and also shows the user operating the interactive element 12: the user can control the element's motion over the post-produced video through touch or voice input, or with a mouse or keyboard, or through motion-sensing means provided by a mobile device (for example, using an accelerometer).
圖例顯示模組化物件的位置因為設定了與實體物件10上的各錨點之間的互動位置關係，能夠在影片變化時仍位於原先設定的相對位置上，不論運鏡如何，模組化物件也總是在原先設定與實體物件10的相對位置上。同時，在互動模式下，也因為互動元件12的移動而產生對應的變化，如因為碰撞而彈開或掉落。 The illustration shows that, because the positional relationship between each modular object and the anchors on the physical object 10 has been set, the modular objects remain at their originally set relative positions as the video changes; regardless of the camera movement, a modular object always stays at its originally set position relative to the physical object 10. Meanwhile, in interactive mode, corresponding changes also occur due to the movement of the interactive element 12, such as bouncing away or falling after a collision.
值得一提的是，在準備原始影片時，原始影片本身可以已經載有立體空間資訊，或是其中物件已經是一個立體物件，因而取得當中的空間資訊；或者，原始影片並無載有立體空間資訊，卻可自其中運鏡取得空間資訊，包括以一虛擬攝影機套用拍攝時的移動軌跡，藉此運算得出空間資訊，也就是影片中各種物件之間的空間關係，使得後製時置入互動元件時，可以更為符合視覺上的空間感。最後，後製影片成為可以提供使用者進行即時互動的影片。 It is worth mentioning that, when the original video is prepared, it may already carry three-dimensional spatial information, or an object within it may already be a three-dimensional object from which spatial information is obtained. Alternatively, an original video carrying no three-dimensional spatial information can still yield spatial information from its camera movement, including by having a virtual camera apply the movement trajectory used during filming; the spatial information thereby computed, that is, the spatial relationships between the various objects in the video, allows interactive elements inserted in post-production to better match the visual sense of space. The post-produced video ultimately becomes one that offers the user real-time interaction.
根據發明實施例，主要以電腦裝置中的軟體手段實現以上立體影像物件互動方法，其中可以在使用者端的電腦裝置中直接以特定軟體程序運行如上述的影像物件互動，讓使用者操作電腦裝置，在後製影片中操作互動元件，產生互動畫面，最後可再錄製為一新的互動影片。另有實施例如圖2所示本發明立體影像物件互動系統的架構實施例圖。 According to embodiments of the invention, the above stereoscopic image object interaction method is implemented mainly in software on a computer device. The image object interaction described above can run directly as a specific software program on the user's computer device, letting the user operate interactive elements in a post-produced video to generate interactive frames, which can finally be recorded as a new interactive video. Another embodiment is shown in FIG. 2, an architecture diagram of an embodiment of the stereoscopic image object interaction system of the present invention.
圖中顯示立體影像物件互動系統關於伺服器端的硬體設施(雲端伺服器21、資料庫23)與安裝於使用者端的終端裝置25的軟體程序，終端裝置25藉由網路20與雲端伺服器21連線，其中顯示由雲端伺服器21提供的後製影片251，此例顯示使用者可在終端裝置25上觀看後製影片251，並以觸控方式與後製影片251中的影像互動，特別是經由後製程序置入可以變化的模組化物件。 The figure shows the server-side hardware of the stereoscopic image object interaction system (cloud server 21, database 23) and the software program installed on the user's terminal device 25. The terminal device 25 connects to the cloud server 21 via the network 20 and displays the post-produced video 251 provided by the cloud server 21. In this example the user can watch the post-produced video 251 on the terminal device 25 and interact by touch with the images in it, in particular with the changeable modular objects inserted through the post-production process.
雲端伺服器21可提供多人的互動服務，讓終端裝置25經網路20取得系統已經製作好的後製影片251，執行上述實施例所示以互動元件與後製影片251中的模組化物件互動的動作。更可於執行互動過程中由終端裝置25中的軟體程序錄製互動過程，形成最後的互動影片。雲端伺服器21也可因此提供儲存使用者端完成的互動影片的服務，或是提供不同使用者之間分享影片的平台服務。 The cloud server 21 can provide a multi-user interactive service: the terminal device 25 obtains, via the network 20, the post-produced video 251 already created by the system and performs the interaction between the interactive element and the modular objects in the post-produced video 251 as shown in the above embodiments. During the interaction, the software program on the terminal device 25 can record the process to form a final interactive video. The cloud server 21 may accordingly also offer a service for storing interactive videos completed on the user side, or a platform service for sharing videos between different users.
雲端伺服器21設有資料庫23，當中可儲存一或多個原始影片，或包括提供後製的素材，如前述提供後製置入影片的模組化物件，除了形成可即時互動的影片外，並可儲存一或多個已經後製完成的影片。相關功能模組可參考圖3所示本發明立體影像物件互動系統的各端功能與電路單元的實施例圖。 The cloud server 21 is provided with a database 23, which can store one or more original videos as well as post-production material, such as the aforementioned modular objects inserted into videos during post-production; besides forming videos that support real-time interaction, it can also store one or more videos whose post-production is complete. For the related functional modules, refer to FIG. 3, which shows an embodiment of the functions and circuit units at each end of the stereoscopic image object interaction system of the present invention.
此例顯示一雲端伺服器30，運作如一運算主機，其中設有運算單元301，如運算主機中的處理器或軟硬體合作執行特定運算工作的功能模組；設有記憶單元303，此為運算主機中的記憶體或儲存媒體，用以暫存運算過程需要的數據，以及運算結果；運算主機設有通訊單元305，用以與外部裝置通訊。雲端伺服器30設有資料庫32，資料庫32中提供影片相關資料，其中有原始影片庫321、模組化物件庫323與後製影片庫325。 This example shows a cloud server 30 operating as a computing host. It includes a computing unit 301, such as the host's processor or a functional module in which software and hardware cooperate to perform specific computation; a memory unit 303, the host's memory or storage medium, for temporarily storing the data needed during computation as well as the results; and a communication unit 305 for communicating with external devices. The cloud server 30 has a database 32 that provides video-related data, comprising an original video library 321, a modular object library 323, and a post-produced video library 325.
原始影片庫321儲存的影片包括為錄製真實影像的影片，但也不排除為人工完成的動畫。模組化物件庫323儲存提供後製置入原始影片的素材，特別是立體形式的圖形或擬真的圖案，可以適當地嵌入於真實環境所錄製的影片。後製影片庫325則是儲存了已經完成後製的影片。 The videos stored in the original video library 321 include recordings of real footage, though manually produced animation is not excluded. The modular object library 323 stores material for insertion into original videos during post-production, in particular three-dimensional graphics or photorealistic patterns that can be suitably embedded in videos recorded in real environments. The post-produced video library 325 stores videos whose post-production is complete.
終端裝置34為使用者端的電腦裝置，設有處理單元341，可為電腦裝置中的處理器；設有記憶單元342，可為電腦裝置中的記憶體或儲存媒體；設有通訊單元343，可以網路等方式與外部裝置雙向通訊，包括傳遞資料；設有輸入單元344，此為提供使用者在終端裝置34上以特定手段執行輸入，可以軟體搭配硬體實現，如執行本發明提出的影像互動，包括可以觸控、聲控、體感或利用滑鼠、鍵盤等(並不限於在此所述的方式)的輸入手段控制顯示於後製影片上的互動元件；設有顯示單元345，此為終端裝置34上的顯示屏幕，目的之一是用以顯示影片。 The terminal device 34 is the user's computer device. It includes a processing unit 341, which may be the device's processor; a memory unit 342, which may be the device's memory or storage medium; a communication unit 343 for two-way communication with external devices, including data transfer, over a network or otherwise; an input unit 344, implemented in software together with hardware, through which the user performs input on the terminal device 34 by specific means, such as the image interaction proposed by the present invention, including controlling the interactive elements displayed on the post-produced video by touch, voice control, motion sensing, or a mouse or keyboard (not limited to the means described here); and a display unit 345, the display screen of the terminal device 34, one purpose of which is to display videos.
圖4接著顯示本發明立體互動影片後製方法的實施例流程，在此實施例中描述製作後製影片的方式之一。 FIG. 4 then shows the flow of an embodiment of the stereoscopic interactive video post-production method of the present invention; this embodiment describes one way of producing a post-produced video.
開始如步驟S401，使用者可以經由系統提供取得一原始影片，或是由他人取得，或是自行拍攝得到的原始影片，特別是真實世界的影像。為了要達到虛實整合的互動目的，這是本發明的目的之一，讓使用者可以在傳統無法實施互動的一般影片上執行互動。 Beginning at step S401, the user obtains an original video, whether provided through the system, obtained from others, or filmed by the user, in particular real-world footage. Achieving the interactive goal of blending the virtual and the real is one of the objectives of the present invention: to let users perform interaction on ordinary videos in which interaction has traditionally been impossible.
接著如步驟S403，使用執行於電腦裝置(包括雲端伺服器)的特定軟體程序開啟原始影片，並能透過軟體工具設定多個描述影片中一或多個實體物件的錨點。當設定好描點時，描點的功能就是讓軟體工具經過影像處理後確認所依附的位置與影像的關係，使得即便影片因為各種運鏡手段產生變動時，都能定位到描點所描述的實體物件，也就能根據原始影片的運鏡判斷其中以多個錨點描述的一實體物件的角度變化。 Next, at step S403, the original video is opened using a specific software program running on a computer device (including a cloud server), and multiple anchors describing one or more physical objects in the video can be set through software tools. Once the anchors are set, their function is to let the software tool, after image processing, confirm the relationship between the positions they attach to and the image, so that even when the video changes due to various camera movements, the physical object described by the anchors can still be located, and the angular change of a physical object described by multiple anchors can be determined from the camera movement of the original video.
Then, in step S405, the user uses the software tool to determine the positions of the modular objects. The software tool takes one or more anchor points set in the original film as reference points — that is, each modular object has a reference positional relationship with a plurality of anchor points in the original film — and positions each modular object accordingly.
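The anchor-referenced positioning of step S405 can be illustrated with a minimal 2D sketch. This is not the patent's implementation — the function name, the centroid-plus-spread scheme, and the restriction to image coordinates are all simplifying assumptions — but it shows the idea of keeping a virtual object in a fixed reference relationship to tracked anchors, so that the object follows the anchors as the camera moves:

```python
def place_modular_object(anchors, rel_offset):
    """Hypothetical sketch: position a modular object from tracked anchor points.

    anchors:    list of (x, y) anchor-point coordinates in the current frame.
    rel_offset: the object's offset from the anchors' centroid, in units of
                the anchor spread, so it scales with apparent camera distance.
    Returns the object's (x, y) position in frame coordinates.
    """
    cx = sum(x for x, y in anchors) / len(anchors)
    cy = sum(y for x, y in anchors) / len(anchors)
    # The anchor spread approximates apparent scale; it grows as the
    # camera moves closer and shrinks as it moves away.
    spread = max(max(abs(x - cx), abs(y - cy)) for x, y in anchors) or 1.0
    return (cx + rel_offset[0] * spread, cy + rel_offset[1] * spread)
```

Re-evaluating this per frame, with anchors updated by image-processing-based tracking, keeps the modular object locked to the physical object even as the framing changes.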
According to one embodiment, each modular object may be a stereoscopic (3D) image object. With the aid of the software tool, changes in a specific object in the original film reveal how the object's viewing angle varies with the camera movement, from which 3D information can be derived. Based on the placement and angular changes of the physical object, the 3D image object can be inserted into the original film according to the camera angle at which the original film was recorded, so that the 3D image object is displayed at a spatial angle consistent with the original film and is placed appropriately in the same spatial relationship as the footage.
Then, in step S407, each modular object whose reference positional relationship has been determined can vary with changes in the film. 3D software processing places the additional 3D object in the same spatial relationship as the structural objects in the original film; the camera's movement trajectory during shooting is determined from the camera movement of the original film and reflected in the movement trajectories of the physical objects therein. Accordingly, the movement trajectories of one or more modular objects — or other simulated or physical changes — can be set to occur during real-time operation. In step S409, a virtual camera is applied with the movement trajectory of the original shoot, and the interactive modular objects are established in the original film, forming an interactive post-production film that combines the original film with the modular objects (step S411); this is an interactive film the user can operate in real time.
After the real-time interactive post-production film is completed, besides being used on ordinary terminal devices, it can also be stored in a cloud database, from which a cloud server provides it for download to terminal devices so that users can operate a terminal device to perform the interaction.
FIG. 5 shows the flow of an embodiment of the 3D image object interaction method of the present invention.
First, in step S501, a software program running on the terminal device obtains the post-production film. After the software program parses it (step S503), the position information of the modular objects in the post-production film is obtained. As the film plays, besides tracking each modular object directly with image-processing methods in the software program, the anchor points set in the post-production film can also be used to track the relative positions of the modular objects. Furthermore, once the post-production film is completed, the software program can set interaction conditions, such as parameters for an interactive response and an interaction range; the interaction range is the preset distance between the interactive element and a modular object in the post-production film within which an interactive signal is triggered.
At this point, in step S505, an interactive element is provided in the film for the user to operate by touch, voice, body motion, mouse, keyboard, or other means. The interactive element may be a graphic displayed on the display screen of the terminal device that moves under the user's control. In another embodiment, the interactive element may be an action produced directly by the user through touch, voice, body motion, mouse, or keyboard that affects the modular objects in the post-production film, thereby producing interactive images. A user interface provided by the system (such as a touch panel) receives the motion information the user produces on the terminal device by operating the interactive element in the post-production film.
In step S507, the motion information produced by the above interaction is received by the terminal device or transmitted to the cloud server. When the cloud server receives the motion information of the user operating the interactive element in the post-production film, it computes the action of the interactive element in the post-production film in real time (step S509). The software program running on the terminal device or the cloud server receives the information about the interactive element's action in the post-production film, then detects the interactive positional relationship between the interactive element and the modular objects (step S511) and determines whether this interactive positional relationship falls within the preset interaction range (step S513).
If not, the flow returns to step S507, and the software program running on the terminal device or the cloud server continues, based on the received interaction information, the steps of determining whether an interaction is triggered.
If so — that is, the interactive element is determined, from the interactive positional relationship and its change relative to the interaction range, to have entered the interaction range — then an interactive signal is triggered when the interactive positional relationship between the interactive element and a modular object in the post-production film satisfies the interaction range. The terminal device or the cloud server then provides an interactive image according to this interactive signal, including an animation linked to the modular object, producing the interactive image (step S515).
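The decision in steps S511–S513 amounts to a per-frame proximity test. The following is a minimal sketch under simplifying assumptions (2D positions, a Euclidean distance metric, a hypothetical function name); the patent does not prescribe this particular formulation:

```python
import math

def check_interaction(element_pos, object_pos, interaction_range):
    """Return True — i.e. emit the interactive signal — when the interactive
    element enters the preset interaction range of a modular object.

    element_pos, object_pos: (x, y) positions in the current frame.
    interaction_range:       the preset trigger distance from steps S503/S513.
    """
    dx = element_pos[0] - object_pos[0]
    dy = element_pos[1] - object_pos[1]
    return math.hypot(dx, dy) <= interaction_range
```

Run each frame, this reproduces the loop of the flowchart: while the test is false the flow returns to step S507 and keeps polling; once it is true, the linked animation of step S515 is played.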
Finally, in step S517, the images displayed at this time (the original film, the modular objects, the interactive element, etc.) can be recorded to form an interactive film on the terminal device.
It is worth noting that when the above procedure is run by the cloud server — that is, the cloud server uses its software program to compute the action of the interactive element in the post-production film — an interactive signal is generated once the conditions for triggering the interactive image are met. The interactive signal is then transmitted to the terminal device, which triggers the action of the modular object (e.g., the falling modular object shown in FIG. 1A and FIG. 1B), forming the interactive film.
According to one embodiment, when the user operates the interactive element, the software program running on the terminal device or the cloud server continuously tracks the movement of the interactive element in the subsequent film, and this movement trajectory covers a moving region. Taking collision with a modular object as an example, when the image of the modular object is determined to touch this moving region, the collision condition is met and the above interactive signal is triggered.
In another embodiment, the condition for triggering the interactive signal need not be a collision (an interaction range of 0); a range affected by the interactive element can instead be set (an interaction range equal to some distance), so that once the distance between the modular object and the interactive element falls within this interaction range, the interactive signal is triggered.
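Both embodiments — strict collision and a distance-based range — can be unified in one region test: inflating the interactive element's moving region by the interaction range makes range 0 reduce to a pure overlap (collision) check. The sketch below uses axis-aligned bounding boxes and a hypothetical function name as simplifying assumptions, not the patent's actual geometry:

```python
def trigger_signal(element_box, object_box, interaction_range=0.0):
    """Hypothetical sketch: decide whether the interactive signal fires.

    element_box: (xmin, ymin, xmax, ymax) bounding the element's moving region.
    object_box:  (xmin, ymin, xmax, ymax) bounding the modular object's image.
    interaction_range == 0 models the collision embodiment; a positive value
    inflates the moving region so proximity alone triggers the signal.
    """
    ex0, ey0, ex1, ey1 = element_box
    r = interaction_range
    # Inflate the element's region by the interaction range on every side.
    ix0, iy0, ix1, iy1 = ex0 - r, ey0 - r, ex1 + r, ey1 + r
    ox0, oy0, ox1, oy1 = object_box
    # Standard axis-aligned overlap test on the inflated region.
    return not (ix1 < ox0 or ox1 < ix0 or iy1 < oy0 or oy1 < iy0)
```

With range 0, the two boxes must actually touch; with a positive range, the signal fires as soon as the modular object comes within that distance of the element's trajectory.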
Thus, in the 3D image object interaction method disclosed herein, a film recording the real world is prepared, and one or more modular objects are inserted into it through a post-production procedure. The user operates an interactive element in this post-production film, and the computer computes the interactive element's action in the post-production film, such as its movement trajectory. When the interactive positional relationship between the interactive element and a modular object in the post-production film is detected to enter an interaction range, a corresponding interactive image is produced, thereby achieving interaction within an ordinary flat film while preserving realism after virtual-real integration.
The above description is merely a preferred feasible embodiment of the present invention and does not thereby limit the patent scope of the present invention; all equivalent structural changes made using the specification and drawings of the present invention are likewise included within the scope of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW106113465A TWI610565B (en) | 2017-04-21 | 2017-04-21 | Interactive method for 3d image objects, a system, and method for post-production of 3d interactive video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW106113465A TWI610565B (en) | 2017-04-21 | 2017-04-21 | Interactive method for 3d image objects, a system, and method for post-production of 3d interactive video |
Publications (2)
Publication Number | Publication Date |
---|---|
TWI610565B TWI610565B (en) | 2018-01-01 |
TW201840200A true TW201840200A (en) | 2018-11-01 |
Family
ID=61728274
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW106113465A TWI610565B (en) | 2017-04-21 | 2017-04-21 | Interactive method for 3d image objects, a system, and method for post-production of 3d interactive video |
Country Status (1)
Country | Link |
---|---|
TW (1) | TWI610565B (en) |