
CA2517842A1 - Node structure for representing 3-dimensional objects using depth image - Google Patents


Info

Publication number
CA2517842A1
Authority
CA
Canada
Prior art keywords
node
field
image
depth
octree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002517842A
Other languages
French (fr)
Inventor
Alexander Olegovich Zhirkov
Leonid Ivanovich Levkovich-Maslyuk
In-Kyu Park
Alexey Victorovich Ignatenko
Mahn-Jin Han
Yuri Matveevich Bayakovski
Anton Konouchine
Dmitri Alexandrovich Timasov
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR10-2002-0067971A external-priority patent/KR100450823B1/en
Application filed by Individual filed Critical Individual
Priority claimed from CA2413058A external-priority patent/CA2413058C/en
Publication of CA2517842A1 publication Critical patent/CA2517842A1/en
Abandoned legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

A family of node structures for representing 3-dimensional objects using depth images is provided. These node structures can be adopted into MPEG-4 AFX as alternatives to conventional polygonal 3D representations. The main formats of the family are DepthImage, PointTexture and OctreeImage. DepthImage represents an object by a union of its reference images and corresponding depth maps. PointTexture represents the object as a set of colored points parameterized by projection onto a regular 2D grid. OctreeImage converts the same data into a hierarchical octree-structured voxel model, a set of compact reference images and a tree of voxel-image correspondence indices. DepthImage and OctreeImage have animated versions, where reference images are replaced by videostreams. DIBR formats are very convenient for 3D model construction from 3D range-scanning and multiple-source video data. The MPEG-4 framework allows construction of a wide variety of representations from the main DIBR formats, providing flexible tools for effective work with 3D models. Compression of the DIBR formats is achieved by application of image (video) compression techniques to depth maps and reference images (videostreams).

Description

NODE STRUCTURE FOR REPRESENTING 3-DIMENSIONAL OBJECTS USING DEPTH IMAGE
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a node structure for representing depth image-based 3-dimensional (3D) objects, and more particularly, to a node structure for representing objects by images having depth information.
2. Description of the Related Art
Since the beginning of research on 3-dimensional (3D) graphics, the ultimate goal of researchers has been to synthesize a realistic graphic scene like a real image. Therefore, research on traditional rendering technologies using polygonal models has been carried out, and as a result, modeling and rendering technologies have been developed enough to provide very realistic 3D environments. However, the process of generating a complicated model requires a lot of effort by experts and takes much time. Also, a realistic and complicated environment requires a huge amount of information and lowers efficiency in storage and transmission.
Currently, polygonal models are typically used for 3D object representation in computer graphics. An arbitrary shape can be substantially represented by sets of color polygons, that is, triangles. Greatly advanced software algorithms and the development of graphic hardware make it possible to visualize complex objects and scenes as considerably realistic still and moving image polygonal models. However, the search for alternative 3D representations has been very active during the last decade. The main reasons for this include the difficulty of constructing polygonal models for real-world objects as well as the rendering complexity and unsatisfactory quality for producing a truly photo-realistic scene.
Demanding applications require an enormous number of polygons; for example, a detailed model of a human body contains several million triangles, which are not easy to handle. Although recent progress in range-finding techniques, such as the laser range scanner, allows us to acquire dense range data with tolerable error, it is still very expensive and also very difficult to obtain a seamlessly complete polygonal model of the whole object. On the other hand, rendering algorithms to obtain photo-realistic quality are computationally complex and thus far from real-time rendering.
SUMMARY OF THE INVENTION
It is an aspect of this invention to provide a node structure for representing 3-dimensional (3D) objects using depth images, for computer graphics and animation, called depth image-based representations (DIBR), that has been adopted into MPEG-4 Animation Framework eXtension (AFX).
In an aspect, a depth image-based node structure includes a texture field in which a color image containing the color for each pixel is recorded, and a depth field in which a depth value for each pixel is recorded.
In another aspect, a depth image-based node structure includes a size field in which size information of an image plane is recorded, a resolution field in which the resolution of the depth for each pixel is recorded, a depth field in which multiple pieces of depth information on each pixel are recorded, and a color field in which color information on each pixel is recorded.
In still another aspect, a depth image-based node structure includes a viewpoint field in which a viewpoint of an image plane is recorded, a visibility field in which a visibility area from the viewpoint to the image plane is recorded, a projection method field in which a projection method from the viewpoint to the image plane is recorded, a distance field in which a distance from a near plane to a far plane is recorded, and a texture field in which a color image is recorded.
In yet another aspect, a depth image-based node structure includes a resolution field in which the maximum value of octree leaves along the side of an enclosing cube containing an object is recorded, an octree field in which a structure of the internal node of the octree is recorded, an index field in which an index of the reference image corresponding to the internal node is recorded, and an image field in which the reference image is recorded.

According to the present invention, rendering time for image-based models is proportional to the number of pixels in the reference and output images, but in general not to the geometric complexity, as in the polygonal case.
In addition, when the image-based representation is applied to real-world objects and scenes, photo-realistic rendering of natural scenes becomes possible without the use of millions of polygons and expensive computation.
BRIEF DESCRIPTION OF THE DRAWINGS
The above objects and advantages of the present invention will become more apparent by describing in detail preferred embodiments thereof with reference to the attached drawings in which:
FIG. 1 is a diagram of examples of IBR integrated in the current reference software;
FIG. 2 is a diagram of a structure of octree and the order of the children;
FIG. 3 is a graph showing octree compression ratio;
FIG. 4 is a diagram of examples of Layered Depth Image (LDI): (a) shows projection of the object, where dark cells (voxels) correspond to 1's and white cells to 0's, and (b) shows a 2D section in (x, depth);
FIG. 5 is a diagram showing the color component of the "Angel" model after rearranging its color data;
FIG. 6 is a diagram showing the orthogonal invariance of node occurrence probability: (a) shows the original current and parent node, and (b) shows the current and parent node, rotated around the y axis by 90 degrees;
FIGs. 7, 8, and 9 are geometry compression figures for the best PPM-based method;
FIG. 10 is a diagram showing two ways of rearranging the color field of the "Angel" PointTexture model into a 2D image;
FIG. 11 is a diagram of examples of lossless geometry and lossy color compression: (a) and (b) are the original and compressed versions of the "Angel" model respectively, and (c) and (d) are the original and compressed versions of the "Morton256" model respectively;
FIG. 12 is a diagram showing a BVO model and a TBVO model of "Angel";
FIG. 13 is a diagram showing additional images taken by additional cameras in TBVO: (a) is a camera index image, (b) is a first additional image, and (c) is a second additional image;
FIG. 14 is a diagram showing an example of writing a TBVO stream: (a) shows a TBVO tree structure, in which gray color is the "undefined" texture symbol and each color denotes a camera index, (b) shows the octree traversal order in a BVO node and camera indices, and (c) shows the resultant TBVO stream, in which filled cubes and octree cubes denote the texture-bytes and BVO-bytes, respectively;
FIGs. 15, 17, 18, and 19 are diagrams showing the results of TBVO compression of "Angel", "Morton", "Palm512", and "Robots512", respectively;
FIG. 16 is a diagram showing peeled images of the "Angel" and "Morton" models;
FIG. 20 is a diagram of an example of the relief texture image and depth map;
FIG. 21 is a diagram of an example of Layered Depth Image (LDI): (a) shows projection of the object, and (b) shows layered pixels;
FIG. 22 is a diagram of an example of Box Texture (BT), in which six SimpleTextures (pairs of image and depth map) are used to render the model shown in the center;
FIG. 23 is a diagram of an example of Generalized Box Texture (GBT): (a) shows camera locations for the "Palm" model, and (b) shows reference image planes for the same model (21 SimpleTextures are used);
FIG. 24 is a diagram of an example showing octree representation illustrated in 2D: (a) shows a "point cloud", and (b) shows the corresponding mid-maps;
FIG. 25 is pseudo-code for writing the TBVO bitstream;
FIG. 26 is a diagram showing the specification of the DIBR nodes;
FIG. 27 is a diagram of the view volume model for DepthImage: (a) is in perspective view, and (b) is in orthographic view;
FIG. 28 is pseudo-code of OpenGL-based rendering of SimpleTexture;
FIG. 29 is a diagram of an example showing the compression of a reference image in SimpleTexture: (a) shows the original reference image, and (b) shows the modified reference image in a JPEG format;
FIG. 30 is a diagram of an example showing the rendering result of the "Morton" model in different formats: (a) is in an original polygonal format, (b) is in a DepthImage format, and (c) is in an OctreeImage format;
FIG. 31 is a diagram of rendering examples: (a) shows the scanned "Tower" model in a DepthImage format, and (b) shows the same model in an OctreeImage format (scanner data were used without noise removal, hence the black dots in the upper part of the model);
FIG. 32 is a diagram of rendering examples of the "Palm" model: (a) shows an original polygonal format, and (b) shows the same model, but in a DepthImage format;
FIG. 33 is a diagram of a rendering example, showing a frame from the "Dragon512" animation in OctreeImage;
FIG. 34 is a diagram of a rendering example of the "Angel512" model in a PointTexture format;
FIGS. 35A and 35B are diagrams showing the relationships of the respective nodes when representing an object in a DepthImage format having SimpleTexture nodes and PointTexture nodes, respectively; and
FIG. 36 is a diagram showing the structure of the corresponding OctreeImage node when representing an object by OctreeImage nodes.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
I. ISO/IEC JTC 1/SC 29/WG 11 CODING OF MOVING PICTURES AND AUDIO
1. Introduction
In this document, the result of the core experiment on Image-Based Rendering, AFX A8.3, is reported. This core experiment is for image-based rendering technology that uses textures with depth information. Also, based on the experiments after the 57th MPEG meeting and discussions during the AFX AdHoc Group meeting in October, a few changes made to the node specification are presented.
2. Experimental Results
2.1. Test Models
• For still objects
  - DepthImage node with SimpleTexture
    Dog
    Tirannosaurus Rex (DepthImage, using about 20 cameras)
    Terrasque (a monster) (DepthImage, about 20 cameras)
    ChumSungDae (DepthImage, scanned data)
    Palmtree (DepthImage, 20 cameras)
  - DepthImage node with LayeredTexture
    Angel
  - DepthImage node with PointTexture
    Angel
  - OctreeImage node
    Creature
• For animated objects
  - DepthImage node with SimpleTexture
    Dragon
    Dragon in scene environment
  - DepthImage node with LayeredTexture
    Not provided
  - OctreeImage node
    Robot
    Dragon in scene environment
• More data (scanned or modeled) shall be provided in the future.
2.2. Test Results
• All the nodes proposed in Sydney are integrated into the blaxxun contact 4.3 reference software. However, the sources are not uploaded to the CVS server yet.
• The animated formats of the IBR need synchronization between multiple movie files, in such a way that images in the same key frame from each movie file must be given at the same time. However, the current reference software does not support this synchronization capability, which is possible in MPEG Systems. Therefore, currently, the animated formats can be visualized assuming all animation data are already in the file. Temporarily, movie files in an AVI format are used for each animated texture.
• After some experiments with layered textures, we were convinced that the LayeredTexture node is not efficient. This node was proposed for Layered Depth Image. However, there is also the PointTexture node that can support it. Therefore, we propose to remove the LayeredTexture node from the node specification. FIG. 1 shows examples of IBR integrated in the current reference software.

3. Updates on IBR Node Specification
The conclusion from the Sydney meeting on the IBR proposal was to have an IBR stream that contains images and camera information, and the IBR node shall only have a link (url) to it. However, during the AhG meeting in Rennes, the result of the discussion on IBR was to have images and camera information both in IBR nodes and in the stream. Thus, the following is the updated node specification for IBR nodes. The requirements for the IBR stream are given in the section that explains the url field.
Decoder (Bitstreams) - Node specification

DepthImage {
  field SFVec3f    position      0 0 10
  field SFRotation orientation   0 0 1 0
  field SFVec2f    fieldOfView   0.785398 0.785398
  field SFFloat    nearPlane     10
  field SFFloat    farPlane      100
  field SFBool     orthogonal    FALSE
  field SFNode     diTexture     NULL
  field SFString   depthImageUrl ""
}

The DepthImage node defines a single IBR texture. When multiple DepthImage nodes are related to each other, they are processed as a group, and thus should be placed under the same Transform node.
The diTexture field specifies the texture with depth, which shall be mapped into the region defined in the DepthImage node. It shall be one of the various types of depth image texture (SimpleTexture or PointTexture).
The position and orientation fields specify the relative location of the viewpoint of the IBR texture in the local coordinate system. Position is relative to the coordinate system's origin (0, 0, 0), while orientation specifies a rotation relative to the default orientation. In the default position and orientation, the viewer is on the Z-axis looking down the -Z-axis toward the origin with +X to the right and +Y straight up. However, the transformation hierarchy affects the final position and orientation of the viewpoint.
The fieldOfView field specifies a viewing angle from the camera viewpoint defined by the position and orientation fields. The first value denotes the angle to the horizontal side and the second value denotes the angle to the vertical side. The default values are 45 degrees, expressed in radians. However, when the orthogonal field is set to TRUE, the fieldOfView field denotes the width and height of the near plane and far plane.
The nearPlane and farPlane fields specify the distances from the viewpoint to the near plane and far plane of the visibility area. The texture and depth data show the area enclosed by the near plane, far plane, and the fieldOfView. The depth data are normalized to the distance from nearPlane to farPlane.
The orthogonal field specifies the view type of the IBR texture. When set to TRUE, the IBR texture is based on an orthogonal view. Otherwise, the IBR texture is based on a perspective view.
The depthImageUrl field specifies the address of the depth image stream, which may optionally contain the following contents.
• position
• orientation
• fieldOfView
• nearPlane
• farPlane
• orthogonal
• diTexture (SimpleTexture or PointTexture)
• 1 byte header for the on/off flags of the above fields

SimpleTexture {
  field SFNode texture NULL
  field SFNode depth   NULL
}
The SimpleTexture node defines a single layer of IBR texture.
The texture field specifies the flat image that contains color for each pixel. It shall be one of the various types of texture nodes (ImageTexture, MovieTexture or PixelTexture).
The depth field specifies the depth for each pixel in the texture field. The size of the depth map shall be the same as the size of the image or movie in the texture field. It shall be one of the various types of texture nodes (ImageTexture, MovieTexture or PixelTexture). If the depth node is NULL or the depth field is unspecified, the alpha channel in the texture field shall be used as the depth map.
PointTexture {
  field SFInt32 width  256
  field SFInt32 height 256
  field MFInt32 depth  []
  field MFColor color  []
}

The PointTexture node defines multiple layers of IBR points.
The width and height fields specify the width and height of the texture.
The depth field specifies multiple depths of each point (in normalized coordinates) in the projected plane, in the order of traversal, which starts from the point in the lower left corner and traverses to the right to finish the horizontal line before moving to the upper line. For each point, the number of depths (pixels) is first stored, and that number of depth values shall follow.
The color field specifies the color of the current pixel. The order shall be the same as the depth field, except that the number of depths (pixels) for each point is not included.
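To make the traversal concrete, the following is a minimal Python sketch (not part of the node specification) of reading the depth and color fields in the order just described; the function name and the output layout are illustrative assumptions.

  # A sketch of decoding PointTexture depth/color arrays; the function name
  # and output layout are illustrative, not part of the node specification.
  def parse_point_texture(width, height, depth, color):
      points = {}                        # (row, col) -> list of (depth, color)
      di = ci = 0
      for row in range(height):          # lower line first, moving upward
          for col in range(width):       # left to right along the line
              n = depth[di]; di += 1     # per-point count of depth values
              layers = []
              for _ in range(n):
                  layers.append((depth[di], color[ci]))
                  di += 1                # depth stores counts and values
                  ci += 1                # color stores values only
              points[(row, col)] = layers
      return points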
OctreeImage {
  field SFInt32  octreeresolution 256
  field SFString octree           ""
  field MFNode   octreeimages    []
  field SFString octreeUrl        ""
}

The OctreeImage node defines an octree structure and its projected textures. The size of the enclosing cube of the total octree is 1 x 1 x 1, and the center of the octree cube shall be the origin (0, 0, 0) of the local coordinate system.
The octreeresolution field specifies the maximum number of octree leaves along a side of the enclosing cube. The level of the octree can be determined from octreeresolution using the following equation: octreelevel = int(log2(octreeresolution - 1)) + 1.
The octree field specifies a set of octree internal nodes. Each internal node is represented by a byte. A 1 in the i-th bit of this byte means that children nodes exist for the i-th child of that internal node, while 0 means that they do not. The order of the octree internal nodes shall be the order of breadth-first traversal of the octree. The order of the eight children of an internal node is shown in FIG. 2.
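A minimal sketch of both facts above, assuming the octree field is available as a raw byte string; the dictionary-based node layout is an illustrative choice, not part of the specification.

  import math
  from collections import deque

  def octree_level(octreeresolution):
      # e.g., octreeresolution = 256 gives int(log2(255)) + 1 = 8
      return int(math.log2(octreeresolution - 1)) + 1

  def parse_octree(octree_bytes):
      # One byte per internal node in breadth-first order; bit i set means
      # the i-th child exists. Children at the last level are leaves and
      # have no bytes of their own, so the stream simply ends.
      root = {'children': [None] * 8}
      queue, pos = deque([root]), 0
      while queue and pos < len(octree_bytes):
          node, byte = queue.popleft(), octree_bytes[pos]
          pos += 1
          for i in range(8):
              if byte & (1 << i):
                  child = {'children': [None] * 8}
                  node['children'][i] = child
                  queue.append(child)
      return root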
The octreeimages field specifies a set of DepthImage nodes with SimpleTexture for the diTexture field. However, the nearPlane and farPlane fields of the DepthImage node and the depth field in the SimpleTexture node are not used.
The octreeUrl field specifies the address of the OctreeImage stream with the following contents.
• header for flags
• octreeresolution
• octree
• octreeimages (multiple DepthImage nodes)
  - nearPlane not used
  - farPlane not used
  - diTexture: SimpleTexture without depth

II. ISO/IEC JTC 1/SC 29/WG 11 CODING OF MOVING PICTURES AND AUDIO
1. Introduction
In this document, the result of the core experiment on Depth Image-Based Rendering (DIBR), AFX A8.3, is reported. This core experiment is for the depth image-based representation nodes that use textures with depth information. The nodes have been accepted and included in a proposal for Committee Draft during the Pattaya meeting. However, the streaming of this information through the octreeUrl field of the OctreeImage node and the depthImageUrl field of the DepthImage node still remained on-going. This document describes the streaming format to be linked by these url fields. The streaming format includes the compression of the octree field of the OctreeImage node and the depth/color fields of the PointTexture node.
2. Streaming Format for octreeUrl
2.1. Stream Format
The OctreeImage node includes the octreeUrl field, which specifies the address of the OctreeImage stream. This stream may optionally contain the following contents.

• header for flags
• octreeresolution
• octree
• octreeimages (multiple DepthImage nodes)
  - nearPlane not used
  - farPlane not used
  - diTexture: SimpleTexture without depth

The octree field specifies a set of octree internal nodes. Each internal node is represented by a byte. A 1 in the i-th bit of this byte means that children nodes exist for the i-th child of that internal node, while 0 means that they do not. The order of the octree internal nodes shall be the order of breadth-first traversal of the octree. The order of the eight children of an internal node is shown in FIG. 2.
The octree field of the OctreeImage node is in a compact format. However, this field may be further compressed in order to have efficient streaming. The following section describes the compression scheme for the octree field of the OctreeImage node.

2.2. Compression scheme for octree field
In the octree representation of DIBR, the data consists of the octree field, which represents the geometry component. Octree is a set of points in the enclosing cube, completely representing the object surface.
Non-identical reconstruction of the geometry from a compressed representation leads to highly noticeable artifacts. Hence, geometry must be compressed without loss of information.

2.2.1. Octree Compression
For the compression of the octree field represented in the depth-first traversal octree form, we developed a lossless compression method using some ideas of the PPM (Prediction by Partial Matching) approach. The main idea we use is "prediction" (i.e., probability estimation) of the next symbol by several previous symbols that are called the 'context'. For each context, there exists a probability table containing the estimated probability of occurrence of each symbol in this context. This is used in combination with an arithmetic coder called a range coder.
The two main features of the method are:
1. using the parent node as a context for the child node;
2. using the 'orthogonal invariance' assumption to reduce the number of contexts.
The second idea is based on the observation that the 'transition probability' for pairs of 'parent-child' nodes is typically invariant under orthogonal transforms (rotation and symmetry). This assumption is illustrated in Annex 1. It allows us to use more complex contexts without having too many probability tables. This, in turn, allowed us to achieve quite good results in terms of volume and speed, because the more contexts are used, the sharper is the probability estimate, and thus the more compact is the code.
Coding is the process of constructing and updating the probabilistic table according to the context model. In the proposed method, the context is modeled as the parent-child hierarchy in the octree structure. First, we define Symbol as a byte node whose bits indicate the occupancy of subcubes after internal subdivision. Therefore, each node in the octree can be a symbol and its numeric value will be 0-255. The probabilistic table (PT) contains 256 integer values. The value of the i-th variable (0 <= i <= 255), divided by the sum of all the variables, equals the frequency (estimate of probability) of the i-th symbol occurrence.
The Probabilistic Context Table (PCT) is a set of PTs. The probability of a symbol is determined from one and only one of the PTs; which particular PT is used depends on the context. An example of a PCT is shown in Table 1.

Table 1. Components of a Probabilistic Context Table (PCT)

  ID of PTs        Entries                   Context description
  0                P0,0 P0,1 ... P0,255      0-Context: context independent
  1..27 (27)       Pi,0 Pi,1 ... Pi,255      1-Context: parent symbol
  28..243 (27*8)   Pj,0 Pj,1 ... Pj,255      2-Context: parent symbol and node symbol

The coder works as follows. It first uses the 0-context model (i.e., a single PT for all the symbols, starting from the uniform distribution, and updating the PT after each new coded symbol). The tree is traversed in depth-first order. When enough statistics are gathered (an empirically found value is 512 coded symbols), the coder switches to the 1-context model. It has 27 contexts, which are specified as follows.
Consider a set of 32 fixed orthogonal transforms, which include symmetries and rotations by 90 degrees about the coordinate axes (see Annex 2). Then, we can categorize the symbols according to the filling pattern of their subcubes. In our method, there will be 27 sets of symbols, called groups here, with the following property: 2 symbols are connected by one of these fixed transforms if and only if they belong to the same group.
In the byte notation the groups are represented by 27 sets of numbers (see Annex 2). We assume that the probability table depends not on the parent node itself (in which case there would have been 256 tables), but only on the group (denoted ParentSymbol in FIG. 2) to which the parent node belongs (hence 27 tables).
At the switching moment, PTs for all the contexts are set to copies of the 0-context PT. Then, each of the 27 PTs is updated when it is used for coding.
After 208 (another heuristic value) symbols are coded in the 1-context model, we switch to the 2-context model, which uses the pairs (ParentSymbol, NodeSymbol) as contexts. NodeSymbol is simply the position of the current node in the parent node. So, we have 27*8 contexts for the 2-context model. At the moment of switching to that model, the PTs obtained for each context are used for each node 'inside' this context, and from this time are updated independently.
In some more technical detail, the encoding for the 1-context and 2-context models proceeds as follows. For the context of the current symbol (i.e., the parent node), its group is determined. This is done by table lookup (the geometric analysis was performed at the stage of the program development). Then, we apply an orthogonal transform that takes our context into a "standard" (arbitrarily selected once and for all) element of the group it belongs to. The same transform is applied to the symbol itself (these operations are also implemented as table lookup, of course - all the computations for all the possible combinations were done in advance). Effectively, this is computation of the correct position of the current symbol in the probability table for the group containing its context. Then the corresponding probability is fed to the RangeCoder.
In short, given a parent symbol and subnode position, a ContextID is determined, which identifies the group ID and the position of the PT in the PCT. The probability distribution in the PT and the ContextID are fed into a range coder. After encoding, the PCT is updated to be used in the next encoding. Note that the range coder is a variation of arithmetic coding which does renormalization in bytes instead of bits, thus running twice as fast, and with 0.01% worse compression than a standard implementation of arithmetic coding.
The decoding process is essentially an inverse of the encoding process. This is an absolutely standard procedure which need not be described, since it uses exactly the same methods of determining the contexts, updating probabilities, etc.
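As an illustration of the table layout in Table 1, the following sketch maps a context to its PT index; the function and the stage encoding are assumptions, but the index arithmetic follows the table directly.

  # Maps a context to its PT index per Table 1: table 0 for the 0-context
  # model, tables 1..27 for the 1-context model (one per parent group), and
  # tables 28..243 for the 2-context model (27 groups x 8 child positions).
  def context_id(stage, parent_group=0, node_pos=0):
      if stage == 0:
          return 0
      if stage == 1:
          return 1 + parent_group
      return 28 + parent_group * 8 + node_pos   # 28..243 for stage 2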
2.3. Test Results
FIG. 3 is a table for comparison of our approach, for both still and animated models (ordinates denote compression ratio). The octree compression ratio varies around 1.5-2 times compared to the original octree size, and outperforms general-purpose lossless compression (Lempel-Ziv based, like the RAR program) by as much as 30%.
3. Streaming Format for depthImageUrl
3.1. Stream Format
The DepthImage node includes the depthImageUrl field, which specifies the address of the depth image stream. This stream may optionally contain the following contents.

• 1 byte header for the on/off flags of the fields below
• position
• orientation
• fieldOfView
• nearPlane
• farPlane
• orthogonal
• diTexture (SimpleTexture or PointTexture)

The definition of the PointTexture node, which can be used in the diTexture field of the DepthImage node, is as follows.
PointTexture {
  field SFInt32 width  256
  field SFInt32 height 256
  field MFInt32 depth  []
  field MFColor color  []
}

The PointTexture node defines multiple layers of IBR points. The width and height fields specify the width and height of the texture. The depth field specifies multiple depths of each point (in normalized coordinates) in the projected plane, in the order of traversal, which starts from the point in the lower left corner and traverses to the right to finish the horizontal line before moving to the upper line. For each point, the number of depths (pixels) is first stored, and that number of depth values shall follow. The color field specifies the color of the current pixel. The order shall be the same as the depth field, except that the number of depths (pixels) for each point is not included.
The depth and color fields of PointTexture are in a raw format, and the size of these fields will most likely be very large. Therefore, these fields need to be compressed in order to have efficient streaming. The following section describes the compression scheme for the fields of the PointTexture node.
3.2. Compression Scheme for PointTexture
3.2.1. Compression of depth field
The depth field of the PointTexture node is simply a set of points in a 'discretized enclosing cube'. We assume the bottom plane to be the plane of projection. Given the m*n*l dimension grid for a model, points being the centers of the cells (in the octree case, we call them voxels) of this grid, we can consider occupied voxels as 1's and empty voxels as 0's. The resulting set of bits (m*n*l bits) is then organized in a stream of bytes. This is done by traversing voxels in the depth (orthogonal to projection plane) direction by layers of depth 8, and in the usual ("column-wise") order in the projection plane (padding, if necessary, the last layer of bytes with zeros in case the depth dimension is not a multiple of 8). Thus, we can think of our set of points as of a stack of 8-bit gray scale images (variant - 16-bit images). Correspondence of voxels and bits is illustrated in FIG. 4(a).
For example, in FIG. 4(b), black squares correspond to points on the object. The horizontal plane is the projection plane. Consider the 'slice' of height 16 (its upper boundary is shown by a thick line). Let us interpret the 'columns' as bytes. That is, a column above the point marked in the figure represents the stack of 2 bytes with values 18 and 1 (or a 16-bit unsigned integer 274). If we apply the best available PPM-based compression methods to the union of bytes obtained this way, quite good results are obtained. However, if a simple 1-context method is directly applied here (no orthogonal invariance or hierarchical contexts can be used here, of course), this results in a slightly lower degree of compression. A sketch of this byte packing is given below.
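A minimal sketch of the packing just described, assuming the geometry is already available as a binary voxel array with depth as the last axis; the names are illustrative.

  import numpy as np

  def pack_depth_bytes(vox):
      # vox: (m, n, l) array of 0/1 voxels; depth is the last axis.
      m, n, l = vox.shape
      pad = (-l) % 8                       # pad depth to a multiple of 8
      v = np.pad(vox, ((0, 0), (0, 0), (0, pad)))
      out = bytearray()
      for c in range(n):                   # usual 'column-wise' order in the
          for r in range(m):               # projection plane
              col = v[r, c]
              for k in range(0, col.size, 8):
                  byte = 0
                  for b in range(8):       # one layer of depth 8 -> one byte
                      byte |= int(col[k + b]) << b
                  out.append(byte)
      return bytes(out)

  # In the example from FIG. 4(b), a column packs to the bytes 18 and 1,
  # which read together as a 16-bit unsigned integer 1*256 + 18 = 274.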
Below we give a table of the volumes required for different types of LDI geometry representations: BVOC, the above byte array compressed by the best PPM compressor, and the same array compressed by our currently used compressor (figures in Kbytes).

  Model           BVOC representation   Best PPM compression   Simple 1-context compression
                  of geometry           of byte array          of byte array
  "Angel"         31.4                  27.5                   32
  "Morton"        23.4                  23.3                   30.5
  "Grasshopper"   16.8                  17.0                   19.7

3.2.2. Compression of color field
The color field of the PointTexture node is a set of colors attributed to points of the object. Unlike the octree case, the color field is in one-to-one correspondence with the depth field. The idea is to represent the color information as a single image, which could be compressed by one of the known lossy techniques. The cardinality of this image is much smaller than that of the reference images in the octree or DepthImage case, and this is a substantial motivation for such an approach. The image can be obtained by scanning depth points in this or that natural order.
Consider first the scanning order dictated by our original storage format for LDI (PointTexture) - 'depth-first' scanning of the geometry. Multipixels are scanned in the natural order across the projection plane, as if they were simple pixels, and points inside the same multipixel are scanned in the depth direction. This order of scanning produces a 1D array of colors (1st nonzero multipixel, 2nd nonzero multipixel, etc.). As soon as the depth is known, colors of points can be successively reconstructed from this array. To make image compression methods applicable, we must map this long string one-to-one onto a 2D array. This can be done in many ways.
The approach used in the tests below is the so-called "blocky scan", when the color string is arranged in 8*8 blocks, and those blocks are arranged in column-wise order ('blocky scan'). The resulting image is shown in FIG. 5. A sketch of this rearrangement follows.
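A sketch of the blocky scan, under the assumption that the colors arrive as an (L, 3) array and that the image height is a fixed multiple of 8; the helper name and defaults are illustrative.

  import numpy as np

  def blocky_scan(colors, height=256):
      # colors: (L, 3) uint8 array; height is assumed a multiple of 8.
      L = len(colors)
      nblocks = -(-L // 64)                        # ceil(L / 64)
      padded = np.zeros((nblocks * 64, 3), np.uint8)
      padded[:L] = colors
      bpc = height // 8                            # blocks per image column
      ncols = -(-nblocks // bpc)
      img = np.zeros((height, ncols * 8, 3), np.uint8)
      for i in range(nblocks):
          chunk = padded[i * 64:(i + 1) * 64]
          block = chunk.reshape(8, 8, 3).transpose(1, 0, 2)  # column-wise fill
          r, c = (i % bpc) * 8, (i // bpc) * 8     # blocks in column-wise order
          img[r:r + 8, c:c + 8] = block
      return img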
Compression of this image was performed by several methods, including standard JPEG. It turns out that, at least for this type of color scan, far better results are obtained when using a texture compression method. This method is based on adaptive local palletizing of each 8*8 block. It has two modes: 8- and 12-times compression (as compared to the 'raw' true-color 24-bit per pixel BMP format). Success of this method on this type of images can be explained exactly by its palette character, which allows us to account for sharp (even non edge-like!) local color variations arising from 'mixing' the points from front and back surfaces (which can differ greatly, as in the case of "Angel"). The aim of searching for an optimal scan is to reduce these variations as much as possible.
3.3. Test Results
Examples of models in the original and compressed formats are shown in Annex 3. Quality of some models (e.g., "Angel") is still not quite satisfactory after compression, while others are very good ("Grasshopper"). However, we feel that this problem can be solved with the aid of proper scanning. Potentially, even the 12-times compression mode could be used, so the overall compression increases still more. Finally, the lossless compression will be improved so as to approach the best PPM-based results in geometry compression.
Here, we give a table of compression ratios.

  Model           Ratio for the best PPM method   Ratio for the simple 1-context method
  "Angel"         7.1                             6.7
  "Morton"        7.8                             6.7
  "Grasshopper"   7.8                             7.4

4. Conclusion
In this document, the result of the core experiment on Depth Image-based Representation, AFX A8.3, is reported. The DIBR streams have been introduced, which are linked through the url fields of the DIBR nodes. These streams consist of all the items in the DIBR node together with a flag for each item to make it optional. Also, the compression of octree and PointTexture data is investigated.
Annex 1. Geometric meaning of the context orthogonal invariance in the BVO compression algorithm.
The assumption of orthogonal invariance is illustrated in FIG. 6. Consider a rotation about the vertical axis by 90 degrees clockwise. Consider the arbitrary filling patterns of the node and its parent before (top picture) and after rotation (bottom picture). Then, the two different patterns can be treated as the same pattern.
Annex 2. Groups and Transforms.
1. 32 fixed orthogonal transforms.
Each transform is specified by a 5-bit word. A combination of bits is a composition of the following basic transforms (i.e., if the k-th bit is 1, the corresponding transform is performed):
• 1st bit - swap x and y coordinates;
• 2nd bit - swap y and z coordinates;
• 3rd bit - symmetry in the (y-z) plane;
• 4th bit - symmetry in the (x-z) plane;
• 5th bit - symmetry in the (x-y) plane.
2. 27 groups.
For each group, here are the order of the group and the number of nonzero bits in its elements: NumberOfGroup, QuantityOfGroup and NumberOfFillBits (SetVoxels).
[Table: for each of the 27 groups, the group order and the number of nonzero bits in the elements of the group; the table is not legible in this copy.]

3. Symbols and transforms.
For each symbol (s), here are the index of the group (g) it belongs to and the value of the transform (t) taking it into the 'standard' element of the group. The binary number of a symbol maps to the voxel binary coordinates as follows: the i-th bit of the number has binary coordinates x = i&1, y = i&(1<<1), z = i&(1<<2).
[Table: for each symbol s = 0..255, its group index g and transform t; only fragments are legible in this copy.]

Annex 3. PointTexture compression screenshots.
In FIGS. 7, 8, and 9, geometry compression figures are given for the best PPM-based method.
III. Result of Core Experiment on Depth Image-based Representation (AFX A8.3)
1. Introduction
In this document, the result of the core experiment on Depth Image-based Representation (DIBR), AFX A8.3, is reported. This core experiment is for the depth image-based representation nodes that use textures with depth information. The nodes have been accepted and included in a proposal for Committee Draft during the Pattaya meeting. However, the streaming of this information through the OctreeImage node and the DepthImage node still remained ongoing. This document describes the streaming format to be linked by these nodes. The streaming format includes the compression of the octree field of the OctreeImage node and the depth/color fields of the PointTexture node.
2. Compression of DIBR formats
We describe here a novel technique for efficient lossless compression of the linkless octree data structure, allowing a reduction in the volume of this already compact representation of about 1.5 to 2 times in our experiments. We also suggest several techniques for lossless and lossy compression of the PointTexture format, using an intermediate voxel representation in combination with entropy coding and a specialized block-based texture compression method.
2.1. OctreeImage compression
The octreeimages and octree fields in OctreeImage are compressed separately. The described methods have been developed based on the notion that the octree field must be compressed losslessly, while some degree of visually acceptable distortion is allowed for octreeimages. The octreeimages field is compressed by means of MPEG-4 image compression (for static models), or video compression tools (for animated models).
2.1.1. Octree field compression
Octree compression is the most important part of the OctreeImage compression, since it deals with compression of an already very compact linkless binary tree representation. However, in our experiments, the method explained below reduced the volume of this structure to about half of the original. In the animated OctreeImage version, the octree field is compressed separately for each 3D frame.
2.1.1.1. Context model
Compression is performed by a variant of adaptive arithmetic coding (implemented as a 'range encoder') that makes explicit use of the geometric nature of the data. The octree is a stream of bytes. Each byte represents a node (i.e., subcube) of the tree, in which its bits indicate the occupancy of the subcube after internal subdivision. The bit pattern is called the filling pattern of the node. The described compression algorithm processes bytes one by one, in the following manner.
• A context for the current byte is determined.
• The 'probability' (normalized frequency) of occurrence of the current byte in this context is retrieved from the 'probability table' (PT) corresponding to the context.
• The probability value is fed to the range encoder.
• The current PT is updated by adding 1 to the frequency of the current byte occurrence in the current context (and, if necessary, renormalized afterwards, see details below).
Thus, coding is the process of constructing and updating the PTs according to the context model. In context-based adaptive arithmetic coding schemes (such as 'Prediction with Partial Matching'), the context of a symbol is usually a string of several preceding symbols. However, in our case, compression efficiency is increased by exploiting the octree structure and geometric nature of the data. The described approach is based on two ideas that are apparently new in the problem of octree compression.
A. For the current node, the context is either its parent node, or the pair {parent node, current node position in the parent node};
B. It is assumed that the 'probability' of the given node occurrence at the particular geometric location in the particular parent node is invariant with respect to a certain set of orthogonal (such as rotations or symmetries) transforms.
Assumption 'B' is illustrated in FIG. 6, for the transform R, which is the rotation by -90° on the x-z plane. The basic notion behind 'B' is the observation that the probability of occurrence of a particular type of child node in a particular type of parent node should depend only on their relative position. This assumption is confirmed in our experiments by analysis of probability tables. It allows us to use more complex contexts without having too many probability tables. This, in turn, helps to achieve quite good results in terms of data size and speed. Note that the more contexts are used, the sharper is the estimated probability, and thus the more compact is the code.
Let us introduce the set of transforms for which we will assume the invariance of probability distributions. In order to apply in our situation, such transforms should preserve the enclosing cube. Consider a set G of the orthogonal transforms in Euclidean space, which are obtained by all compositions in any number and order of the 3 basis transforms (generators) m1, m2, and m3, given by

       | 0 1 0 |        | 1 0 0 |        | -1 0 0 |
  m1 = | 1 0 0 |,  m2 = | 0 0 1 |,  m3 = |  0 1 0 |
       | 0 0 1 |        | 0 1 0 |        |  0 0 1 |

where m1 and m2 are reflections to the planes x=y and y=z, respectively, and m3 is the reflection to the plane x=0. One of the classical results of the theory of groups generated by reflections states that G contains 48 distinct orthogonal transforms, and is, in a sense, the maximal group of orthogonal transforms that take the cube into itself (the so-called Coxeter group). For example, rotation R in FIG. 6 is expressed through the generators as R = m3·m2·m1·m2, where '·' is matrix multiplication.
A transform from G, applied to an octree node, produces a node with a different filling pattern of subcubes. This allows us to categorize the nodes according to the filling patterns of their subcubes. Using the group theory language, we say that G acts on the set of all filling patterns of the octree nodes. Computations show that there exist 22 distinct classes (also called orbits in group theory), in which, by definition, two nodes belong to the same class if and only if they are connected by a transform from G. The number of elements in a class varies from 1 to 24, and is always a divisor of 48.
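Both counts can be checked directly. The following sketch closes the three generators under multiplication to obtain G, then counts the orbits of its action on the 256 filling patterns, using the bit-to-coordinate convention from Annex 2; all names are illustrative.

  import numpy as np

  m1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])   # swap x and y
  m2 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])   # swap y and z
  m3 = np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 1]])  # reflect x -> -x

  # Close the generators under matrix multiplication to obtain G.
  G = {m.tobytes(): m for m in (np.eye(3, dtype=int), m1, m2, m3)}
  grew = True
  while grew:
      grew = False
      for a in list(G.values()):
          for b in list(G.values()):
              c = a @ b
              if c.tobytes() not in G:
                  G[c.tobytes()] = c
                  grew = True
  print(len(G))        # 48 distinct orthogonal transforms

  def apply(m, sym):
      # A transform permutes the 8 subcubes; bit i of the filling pattern
      # has coordinates x = i&1, y = (i>>1)&1, z = (i>>2)&1 (Annex 2).
      out = 0
      for i in range(8):
          if sym & (1 << i):
              v = np.array([i & 1, (i >> 1) & 1, (i >> 2) & 1]) * 2 - 1
              x, y, z = (m @ v + 1) // 2   # center the cube at the origin
              out |= 1 << int(x + 2 * y + 4 * z)
      return out

  # Orbits of G acting on the 256 filling patterns: the node classes.
  seen, classes = set(), 0
  for s in range(256):
      if s not in seen:
          classes += 1
          seen |= {apply(m, s) for m in G.values()}
  print(classes)       # 22 classes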
The practical consequence of 'B' is that the probability table depends not on the parent node itself, but only on the class to which the parent node belongs. Note that there would be 256 tables for a parent-based context and an additional 256x8 = 2048 tables for a parent-and-child position-based context in the former case, while we need only 22 tables for the parent-class-based context plus 22x8 = 176 tables in the latter case. Therefore, it is possible to use an equivalently complex context with a relatively small number of probability tables. The constructed PT would have the form shown in Table 2.

Table 2. Enumeration of probability tables.

  ID of PTs       Entries                   Context description
  0               P0,0 P0,1 ... P0,255      0-Context: context independent
  1..22 (22)      Pi,0 Pi,1 ... Pi,255      1-Context: {parent node class}
  23..198 (176)   Pj,0 Pj,1 ... Pj,255      2-Context: {parent node class, current node position}

2.1.1.2. Encoding process
To make the statistics for the probability tables more accurate, they are collected in different ways at three stages of the encoding process.
• At the first stage we do not use contexts at all, accepting the '0-context model', and keep a single probability table with 256 entries, starting from the uniform distribution;
• As soon as the first 592 nodes (it is an empirically found number) are encoded, we switch to the '1-context model' using the parent node as a context. At the switching moment, the 0-context PT is copied to the PTs for all 22 contexts.
• After 2048 nodes (another heuristic value) are encoded, we switch to the '2-context model'. At this moment, the 1-context PTs of the parent patterns are copied to the PTs for each position in the same parent pattern.
The key point of the algorithm is the determination of the context and probability for the current byte. This is implemented as follows. In each class we fix a single element, which is called the 'standard element'. We store a class map table (CMT) indicating the class to which each of the possible 256 nodes belongs, and the precomputed transform from G that takes this particular node into the standard element of its class. Thus, in order to determine the probability of the current node N, we perform the following steps:
• Look at the parent P of the current node;
• Retrieve the class from the CMT to which P belongs, and the transform T that takes P into the standard node of the class. Let the class number be c;
• Apply T to P, and find the child position p in the standard node to which the current node N is mapped;
• Apply T to N. Then, the newly obtained filling pattern TN is at the position p in the standard node of the class c. Retrieve the required probability from the entry TN of the probability table corresponding to the class-position combination (c, p).
For the 1-context model, the above steps are modified in an obvious way. Needless to say, all the transforms are precomputed and implemented in a lookup table.
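Continuing the enumeration sketch above, the class map table can be precomputed once; fixing the smallest byte value in each orbit as the 'standard element' is an arbitrary fixed choice, as the text permits.

  # Builds the CMT: for every possible node byte, its class id and one
  # transform taking it to the class's standard element. Reuses G and
  # apply() from the previous sketch.
  def build_cmt(G):
      transforms = list(G.values())
      cmt = {}                       # node -> (class id, transform to standard)
      n_classes = 0
      for s in range(256):
          if s in cmt:
              continue
          orbit = sorted({apply(m, s) for m in transforms})
          std = orbit[0]             # fixed 'standard element' of this class
          for t in orbit:
              # any transform mapping t onto the standard element will do
              g = next(m for m in transforms if apply(m, t) == std)
              cmt[t] = (n_classes, g)
          n_classes += 1             # ends at 22
      return cmt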
Note that at the stage of decoding of the node N, its parent P is already decoded, and hence the transform T is known. All the steps at the stage of decoding are absolutely similar to the corresponding encoding steps.
Finally, let us outline the probability update process. Let P be a probability table for some context. Denote P(N) the entry of P corresponding to the probability of occurrence of the node N in this context. In our implementation, P(N) is an integer, and after each occurrence of N, P(N) is updated as:

  P(N) = P(N) + A,

where A is an integer increment parameter varying typically from 1 to 4 for different context models. Let S(P) be the sum of all entries in P. Then the 'probability' of N that is fed to the arithmetic coder (range coder in our case) is computed as P(N)/S(P). As soon as S(P) reaches a threshold value 2^16, all the entries are renormalized; in order to avoid occurrence of zero values in P, entries equal to 1 are left intact, while the others are divided by 2.
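A sketch of this update rule, assuming a PT is a plain list of 256 integer frequencies; the parameter names are illustrative.

  def update_pt(P, N, A=2, threshold=1 << 16):
      # P: list of 256 integer frequencies; A: increment parameter (1..4).
      P[N] += A
      if sum(P) >= threshold:        # renormalize to keep the sum bounded
          for i in range(256):
              if P[i] > 1:           # entries equal to 1 are left intact
                  P[i] //= 2

  def probability(P, N):
      return P[N] / sum(P)           # the value fed to the range coder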
2.2. PointTexture compression
The PointTexture node contains two fields to be compressed, that is, depth and color. The main difficulties with PointTexture data compression are due to the following requirements:
• Geometry must be compressed in a lossless fashion, since distortions in this type of geometry representation are often highly noticeable.
• Color information has no natural 2D structure, and thus image compression techniques are not immediately applicable.
In this section we suggest three methods for PointTexture model compression:
• Lossless method for the standard node representation.
• Lossless method for the lower resolution node representation.
• Lossless geometry and lossy color compression for the lower resolution node representation.
The methods correspond to three levels of 'fidelity' of the object description. The first method assumes that we must store the depth information up to its original 32-bit precision. However, in practice, the depth information can often be quantized by a much smaller number of bits without loss of quality. In particular, when the PointTexture model is converted from a polygonal model, the quantization resolution is chosen according to the actual size of visible details the original model possesses, as well as to the desirable output screen resolution.
In this case 8-11 bits may well satisfy the requirements, and depth values are initially stored in this lower resolution format. Now, our second method deals with lossless compression of this 'lower resolution' representation. The key observation here is that for such a relatively small (compared to the standard 32) number of bits, an intermediate voxel representation of the model can be used, and allows us to compress the depth field substantially without loss of information. Color information in both cases is losslessly compressed and stored in a PNG format, after arranging the color data as an auxiliary 2D image.
Finally, the third method allows us to achieve much higher compression, combining lossless compression of the geometry with lossy compression of the color data. The latter is performed by a specialized block-based texture compression technique. In the following three subsections the methods are described in full detail.
2.2.1. Lossless PointTexture compression for the standard node representation
This is a simple lossless coding method, which works as follows.
• The depth field is compressed by the adaptive range coder, similar to the one used in octree field compression. For this format, we use a version in which the probability table is kept for each of the 1-symbol contexts, and the context is simply the previous byte. Therefore, 256 PTs are used. The depth field is considered as a stream of bytes, and the geometrical structure is not used explicitly.
• The color field is compressed after conversion to a planar true-color image. Colors of the points in the PointTexture model are first written in a temporary 1D array, in the same order as the depth values in the depth field. If the total number of points in the model is L, then we compute the smallest integer l such that l*l >= L, and 'wrap' this long 'string' of color values into a square image with side l (if necessary, padding by black pixels). This image is then compressed by one of the MPEG-4 lossless image compression tools. In our approach, we used the Portable Network Graphics (PNG) format. The image obtained in this way from the 'Angel' model is shown in FIG. 10(a).
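A sketch of the wrapping step, assuming the colors are collected as an (L, 3) array; the use of math.isqrt to find the smallest suitable side is an implementation choice.

  import math
  import numpy as np

  def wrap_square(colors):
      # colors: (L, 3) uint8 array in depth-field order.
      L = len(colors)
      side = math.isqrt(L - 1) + 1 if L else 1   # smallest l with l*l >= L
      img = np.zeros((side * side, 3), np.uint8) # pad the tail with black
      img[:L] = colors
      return img.reshape(side, side, 3)          # then compress, e.g., as PNG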
2.2.2. Lossless PointTexture compression for the lower resolution node representation
In many cases 16-bit resolution for depth information is exceedingly fine. In fact, the resolution in depth should correspond to the resolution of the screen on which the model is to be visualized. In situations where small variations in model depth at different points lead to displacement in the screen plane much smaller than pixel size, it is reasonable to use lower resolution in depth, and models are often represented in the format where depth values occupy 8-11 bits.
Such models are usually obtained from other formats, e.g., a polygonal model, by discretizing the depth and color values on the proper spatial grid.
Such a reduced resolution representation can itself be considered as a compressed form of the standard model with 32-bit depth. However, there exists a more compact representation for such models, using the intermediate voxel space. Indeed, points of the model can be assumed to belong to nodes of a uniform spatial grid with spacing determined by the discretization step. We can always assume that the grid is uniform and orthogonal, since in the case of a perspective model we can work in parametric space. Using this observation, the depth and color fields of lower resolution PointTexture are compressed as follows.
• The color field is compressed by a lossless image compression technique, as in the previous method;
• The depth field is first transformed into voxel representation, and then compressed by the variant of range coder described in the previous subsection.
The intermediate voxel model is constructed as follows. According to the depth resolution s of the model, consider the discrete voxel space of the size width x height x 2^s (the 'width' and 'height' parameters are explained in the PointTexture specification). For our purposes, we don't need to work with a potentially huge voxel space as a whole, but only with its 'thin' cross-sections. Denote (r, c) the row-column coordinates in the projection plane, and let d be the depth coordinate.
We transform 'slices' {c = const}, i.e., cross-sections of the model by 'vertical planes', into the voxel representation. Scanning the slice along the 'columns' parallel to the projection plane, we set voxel (r, c, d) to 'black' if and only if there exists a point of the model with depth value d that projects into (r, c). The process is illustrated in FIG. 4.
As soon as a slice is constructed, it is compressed by the 1-context range coder, and compression of the next slice begins. In this way, we avoid working with very large arrays. Probability tables are not initialized for each new slice. For a wide range of models only a tiny fraction of voxels are black, and this allows us to achieve a rather high compression ratio. Decompression is performed by obvious inversion of the described operations.
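A sketch of the slice-wise construction, assuming the model is available as a mapping from (row, column) to its list of quantized depth values; the names and layout are illustrative.

  import numpy as np

  def voxel_slices(points, width, height, s):
      # points: dict (r, c) -> list of depth values quantized to s bits.
      for c in range(width):                 # one 'vertical plane' at a time
          vslice = np.zeros((height, 1 << s), np.uint8)
          for r in range(height):
              for d in points.get((r, c), []):
                  vslice[r, d] = 1           # black voxel at (r, c, d)
          yield vslice                       # compress, then move on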
A comparison of the depth field compression by this method and by the octree representation will be described. The overall compression ratio of the model is determined, however, by the color field, since such an irregular image cannot be strongly compressed without distortions. In the next subsection we consider a combination of lossless geometry and lossy color compression techniques.

2.2.3. Lossless geometry and lossy color compression for the lower resolution PointTexture representation
Like the previous one, this method transforms the depth field into the voxel representation, which is then compressed by the adaptive 1-context range coder. The color field is also wrapped onto a 2D image. However, we make an attempt to organize the mapping so that points that are close in 3D space map into nearby points in the 2D image plane. Then a specialized texture compression method (adaptive block partitions, ABP) is applied to the resulting image.
The main steps of the algorithm are as follows.
1. Transform a 'slice' of four successive 'vertical planes' of the PointTexture model into voxel representation.
2. Scan the obtained width x 4 x 2^s voxel array by:
   - Traversing the vertical 'plane' of 4x4x4 voxel subcubes along the 'columns' parallel to the projection plane: first the column closest to the projection plane, then the next closest column, etc. (i.e., in the usual 2D array traversal order).
   - Traversing the voxels inside each 4x4x4 subcube in the order analogous to the one used in the OctreeImage node subcube traversal.
3. Write the colors of points of the model encountered in this traversal order into an auxiliary 1D array.
4. Rearrange the obtained array of colors into a 2D image, so that:
5. Consecutive 64 color samples are arranged, column-wise, into an 8-by-8 pixel block, the next 64 samples are arranged into the adjacent 8-by-8 pixel block, and so on.
6. Compress the obtained image by the ABP technique.
This method of scanning the 3D array and mapping the result onto the 2D image was chosen from the following considerations. Note that 4x4x4 subcubes and 8x8 image blocks contain the same number of samples. If several successively scanned subcubes contain enough color samples to fill an 8x8 block, it is highly probable that this block will be rather uniform, and thus distortion will be hardly noticeable on the 3D model after decompression. The ABP algorithm compresses 8x8 blocks independently of one another, with the aid of local palletizing. In our tests, the distortion introduced by ABP compression in the final 3D model was drastically smaller than that of JPEG. Another reason for choosing this algorithm was the great speed of decompression (for which it was originally designed). The compression ratio can take two values, 8 and 12. In the PointTexture compression algorithm we fix compression ratio 8.
Unfortunately, this algorithm is not universally applicable. Although the image obtained this way from the color field, shown in FIG. 10 (b), is much more uniform than for the 'natural' scanning order, sometimes 8x8 blocks may contain color samples corresponding to distant points in 3D space. In this case the lossy ABP method may 'mix' colors from distant parts of the model, which leads to local but noticeable distortion after decompression.
However, for many models the algorithm works fine. In FIG. 11, we show the 'bad' case ('Angel' model) and the 'good' case ('Morton256' model). Reduction of the model volume in both cases is about 7 times.
3. Test Results
In this section we compare the results of compression of two models, 'Angel' and 'Morton256', in two different formats: OctreeImage and PointTexture. Dimensions of reference images for each model were 256x256 pixels.

3.1. PointTexture compression
In Tables 3 to 5, the results of different compression methods are given. Models for this experiment were obtained from models with an 8-bit depth field. Depth values were expanded over the full 32-bit range by using a uniform quantization step, so as to make the bit distribution in 32-bit depth values more uniform, imitating to some extent 'true' 32-bit values.
High compression ratios are not to be expected from this method. Volume reduction is of the same order as for typical lossless compression of true-color images. Compressed depth and color fields are of quite comparable size, since the geometric nature of the data is not captured by this approach.

Now let us look at how much the same models can be losslessly compressed when taken at their 'true' depth resolution. Unlike the previous case, the depth field is losslessly compressed about 5-6 times. This is due to the intermediate voxel representation that makes the geometric data redundancy much more pronounced; indeed, only a small fraction of voxels are black. However, since the uncompressed size of the models is smaller than for the 32-bit case, the color field compression ratio now determines the overall compression ratio, which is even smaller than for the 32-bit case (although the output files are also smaller). So, it is desirable to be able to compress the color field at least as well as the depth field.
Our third method uses a lossy compression technique called ABP [6] for this purpose. This method gives much higher compression. However, like all lossy compression techniques, it may lead to unpleasant artifacts in some cases. An example of an object for which this happens is the 'Angel' model. In the process of scanning the points of the model, spatially distant points do sometimes drop into the same 2D image block. Colors at distant points of this model can differ very much, and local palettizing cannot provide an accurate approximation if there are too many different colors in a block. On the other hand, it is local palettizing that allows us to accurately compress a vast majority of the blocks, for which the distortion introduced by, say, standard JPEG becomes absolutely unbearable after the reconstructed colors are put back at their 3D locations. However, the visual quality of the 'Morton256' model compressed by the same method is excellent, and this was the case for most of the models in our experiments.
Table 3. Lossless PointTexture compression for the 32-bit depth field (in bytes).

Model         |             | Depth field | Color field | Total size
"Morton256"   | Original    |     691,032 |     321,666 |  1,012,698
              | Compressed  |     226,385 |     270,597 |    496,982
              | Ratio       |         3.1 |         1.2 |        2.0
"Angel"       | Original    |     665,488 |     302,508 |    967,996
              | Compressed  |     204,364 |     262,209 |    466,604
              | Ratio       |         3.3 |         1.2 |        2.1

Table 4. Lossless PointTexture compression for the lower resolution PointTexture representation (in bytes).

Model         |             | Depth field | Color field | Total size
"Morton256"   | Original    |     172,758 |     321,666 |    494,424
              | Compressed  |      31,979 |     270,597 |    302,576
              | Ratio       |         5.4 |         1.2 |       1.63
"Angel"       | Original    |     166,372 |     302,508 |    468,880
              | Compressed  |      32,047 |     262,209 |    294,256
              | Ratio       |         5.2 |         1.2 |        1.6

Table 5. Lossless geometry and lossy color compression for lower resolution PointTexture (in bytes).

Model         |             | Depth field | Color field | Total size
"Morton256"   | Original    |     172,758 |     321,666 |    494,424
              | Compressed  |      31,979 |      40,352 |     72,331
              | Ratio       |         5.4 |         8.0 |        6.8
"Angel"       | Original    |     166,372 |     302,508 |    468,880
              | Compressed  |      32,047 |      38,408 |     70,455
              | Ratio       |         5.2 |         7.9 |        6.7

3.2. OctreeImage compression
Table 6 presents sizes of compressed and uncompressed octree components for our two test models. We see that reduction of this field is about 1.6-1.9 times.
However, compared to uncompressed PointTexture models, even with the 8-bit depth field, OctreeImage is much more compact. Table 7 shows compression ratios 7.2 and 11.2. This is more than PointTextures can be compressed without converting to OctreeImage (6.7 and 6.8 times, respectively).
However, as we already mentioned, OctreeImage may contain incomplete color information, which is the case with the 'Angel' model. In such cases, 3D interpolation of colors is used. To sum up, we can conclude that the experiments presented above prove the efficiency of the developed compression tools. The choice of the best tool for a given model depends on its geometrical complexity, the character of its color distribution, the required speed of rendering and other factors.

Table 6. Compression ratios given by the method described in 4.1.2, for OctreeImage models and their components (file sizes rounded to Kbytes).

Model         | Octree size | Compressed octree size | Compression ratio
"Angel"       |          59 |                     31 |               1.6
"Morton256"   |          41 |                     22 |               1.9

Table 7. Noncompressed PointTexture (8-bit depth field) and compressed OctreeImage representations for the same models (file sizes rounded to Kbytes).

Model         | PointTexture | Compressed OctreeImage | Compression ratio
"Angel"       |          469 |                     65 |               7.2
"Morton256"   |          494 |                     44 |              11.2

5. Comments on Study of ISO/IEC 14496-1/PDAM4
After applying the following revisions to the Study of ISO/IEC 14496-1/PDAM4 (N4627), the revised Study of ISO/IEC 14496-1/PDAM4 should be incorporated into ISO/IEC 14496-1/FPDAM4.

Clause 6.5.3.1.1, Technical
Problem: The default value of orthographic should be the most generally used value.
Solution: Replace the default value of the orthographic field from "FALSE" to "TRUE" as follows.
Proposed revision:
field SFBool orthographic TRUE

Clause 6.5.3.1.1, Technical
Problem: The streaming of DIBR shall be done with the uniform streaming method for AFX.
Solution: Remove the depthImageUrl field from the DepthImage node.
Proposed revision:

DepthImage {
  field SFVec3f    position     0 0 10
  field SFRotation orientation  0 0 1 0
  field SFVec2f    fieldOfView  0.785398 0.785398
  field SFFloat    nearPlane    10
  field SFFloat    farPlane     100
  field SFBool     orthographic TRUE
  field SFNode     diTexture    NULL
}

Clause 6.5.3.1.2, Editorial
Problem: The term 'normalized' is misleading as applied to the depth field in the current context.
Solution: In the 5th paragraph, change 'normalized' to 'scaled'.
Proposed revision:
The nearPlane and farPlane fields specify the distances from the viewpoint to the near plane and far plane of the visibility area. The texture and depth data show the area enclosed by the near plane, far plane and the fieldOfView. The depth data are scaled to the distance from nearPlane to farPlane.
Clause 6.5.3.1.2, Technical
Problem: The streaming of DIBR shall be done with the uniform streaming method for AFX.
Solution: Remove the explanation of the depthImageUrl field (the 7th paragraph and below).
Proposed revision:

Clause 6.5.3.2.2, Editorial
Problem: The semantics of the depth field is incompletely specified.
Solution: Change the depth field specification in the 3rd paragraph as follows.
Proposed revision:
The depth field specifies the depth for each pixel in the texture field. The size of the depth map shall be the same as the size of the image or movie in the texture field. The depth field shall be one of the various types of texture nodes (ImageTexture, MovieTexture or PixelTexture), where only the nodes representing gray scale images are allowed. If the depth field is unspecified, the alpha channel in the texture field shall be used as the depth map. If the depth map is not specified through the depth field or alpha channel, the result is undefined.
The depth field allows us to compute the actual distance of the 3D points of the model to the plane which passes through the viewpoint parallel to the near plane and far plane:

    dist = nearPlane + (1 - d/d_max)(farPlane - nearPlane),

where d is the depth value and d_max is the maximum allowable depth value. It is assumed that for the points of the model, d > 0, where d = 1 corresponds to the far plane and d = d_max corresponds to the near plane.
This formula is valid for both the perspective and the orthographic case, since d is the distance between the point and the plane. d_max is the largest d value that can be represented by the bits used for each pixel:
(1) If the depth is specified through the depth field, then the depth value d equals the gray scale value.
(2) If the depth is specified through the alpha channel in the image defined via the texture field, then the depth value d equals the alpha channel value.
The depth value is also used to indicate which points belong to the model: only the points for which d is nonzero belong to the model.
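As a worked illustration of this formula, a small helper (hypothetical, not part of the specification) that converts a stored depth sample to a distance:

// Convert a stored depth sample d (1..dMax, 0 = no point) to the distance
// from the plane through the viewpoint parallel to the near and far planes.
// Returning a negative value for empty samples is an assumed convention here.
double depthToDistance(unsigned d, unsigned dMax, double nearPlane, double farPlane)
{
    if (d == 0) return -1.0;                            // sample carries no model point
    const double t = 1.0 - static_cast<double>(d) / dMax; // d = dMax maps to the near plane
    return nearPlane + t * (farPlane - nearPlane);
}

For an 8-bit map, d_max = 255: d = 255 yields nearPlane, while d = 1 yields a distance just short of farPlane.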

For an animated DepthImage-based model, only DepthImage nodes with SimpleTextures as diTextures are used. Each of the SimpleTextures can be animated in one of the following ways:
(1) the depth field is a still image satisfying the above condition, and the texture field is an arbitrary MovieTexture;
(2) the depth field is an arbitrary MovieTexture satisfying the above condition on the depth field, and the texture field is a still image;
(3) both depth and texture are MovieTextures, and the depth field satisfies the above condition;
(4) the depth field is not used, and the depth information is retrieved from the alpha channel of the MovieTexture that animates the texture field.

Clause 6.5.3.3.2, Editorial
Problem: The semantics of the depth field is incompletely specified.
Solution: Replace the depth field specification (3rd paragraph) with the proposed revision.
Proposed revision:
The geometrical meaning of the depth values, and all the conventions on their interpretation adopted for the SimpleTexture, apply here as well.
The depth field specifies multiple depths for each point in the projection plane, which is assumed to be farPlane (see above), in the order of traversal, which starts from the point in the lower left corner and traverses to the right to finish the horizontal line before moving to the upper line. For each point, the number of depths (pixels) is first stored, and that number of depth values shall follow.
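A minimal sketch of this traversal, reading the multivalued depth field into per-pixel lists; the flat stream layout and all names are illustrative assumptions.

#include <vector>

// depthStream holds, for each projection-plane point in traversal order
// (lower left corner, left to right, then upward line by line), a count
// followed by that many depth values.
std::vector<std::vector<int>> readPointTextureDepths(const std::vector<int>& depthStream,
                                                     int width, int height)
{
    std::vector<std::vector<int>> depths(width * height);
    std::size_t pos = 0;
    for (int y = 0; y < height; ++y)            // lower line first
        for (int x = 0; x < width; ++x) {
            int count = depthStream[pos++];     // number of depths at this point
            for (int k = 0; k < count; ++k)
                depths[y * width + x].push_back(depthStream[pos++]);
        }
    return depths;
}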
Clause 6.5.3.4.1, H.1, Technical
Problem: The field type SFString used for the octree field might lead to inconsistent octree values.
Solution: Change the field type for the octree field to MFInt32.
Proposed revision:
In clause 6.5.3.4.1:
field MFInt32 octree []
In clause H.1, table for Octree, change the octree row as follows:

Field name        | DEF id | IN id | OUT id | DYN id | [m,M]   | Q    | A
octree (MFInt32)  | 01     |       |        |        | [0,255] | 13,8 |

Clause 6.5.3.4.1, Technical
Problem: The streaming of DIBR shall be done with the uniform streaming method for AFX.
Solution: Remove the octreeUrl field from the OctreeImage node.
Proposed revision:

OctreeImage {
  field SFInt32 octreeresolution 256
  field MFInt32 octree           []
  field MFNode  octreeimages     []
}
Clause 6.5.3.4.2, Editorial
Problem: The octreeresolution field definition (2nd paragraph) allows misinterpretation.
Solution: Revise the description by adding the word 'allowable'.
Proposed revision:
The octreeresolution field specifies the maximum allowable number of octree leaves along a side of the enclosing cube. The level of the octree can be determined from octreeresolution using the following equation: octreelevel = int(log2(octreeresolution - 1)) + 1 (e.g., for octreeresolution = 256, octreelevel = int(log2(255)) + 1 = 8).
int(Iog2(octreeresolution-1 ))+1 ) Clause 8.5.3.4.2, Technical Problem: The streaming of DISR shall ba done with the uniform streaming method for AFX, Solution: Remove the explanation of octreeUrl field (the 5th paragraph and below).
Proposed revision:
Clause 6.5.3.4.2, Editorial
Problem: Animation of the OctreeImage was described incompletely.
Solution: Add a paragraph at the end of clause 6.5.3.4.2 describing the OctreeImage animation.
Proposed revision:
Animation of the OctreeImage can be performed by the same approach as the first three ways of DepthImage-based animation described above, with the only difference of using the octree field instead of the depth field.
Clause H.1, Technical
Problem: The range of depth data in the PointTexture node may be too small for future applications. Many graphics tools allow 24 bits or 32 bits of depth for their z-buffers. However, the depth field in PointTexture has the range [0, 65535], which is 16 bits.
Solution: In clause H.1, table for PointTexture, change the range of the depth row as proposed.
Proposed revision:

Field name       | DEF id | IN id | OUT id | DYN id | [m,M]  | Q
depth (MFInt32)  | 10     |       |        |        | [0, I] |

N. ISO/IEC JTC 1/SC 29/WG 11 CODING OF MOVING PICTURES AND AUDIO

1. Introduction
In this document, an improvement of OctreeImage in Depth Image-Based Representation (DIBR), AFX A8.3, is described. The OctreeImage node has been accepted and included in a proposal for Committee Draft during the Pattaya meeting. However, it has been observed that the rendering quality would be unsatisfactory in some special cases, due to the occlusion of object geometry. This document describes the improved version of the OctreeImage node, i.e., Textured Binary Volumetric Octree (TBVO), as well as its compression method for streaming.
2. Textured Binary Volumetric Octree (TBVO)
2.1. TBVO overview
The objective of TBVO is to contrive a more flexible representation/compression format with fast visualization, as an improvement of the Binary Volumetric Octree (BVO). This is achieved by storing some additional information on the basis of BVO. The BVO-based representation consists of (octree structure + set of reference images), while the TBVO-based representation consists of (BVO octree structure + set of reference images + camera indices).
The main BVO visualization problem is that we must determine the corresponding camera index of each voxel during rendering. To this end, we need not only to project to the cameras, but also to make a reverse ray casting procedure. We must at least determine the existence of a camera from which the voxel is visible. Therefore, we must find all the voxels that are projected to a particular camera. But this procedure is very slow if we use a brute-force approach. We have developed an algorithm that performs it fast and accurately for the majority of object shapes. However, there are still some troubles for voxels that are not visible from any cameras.
A possible solution could be storing an explicit color for each voxel. However, in this case, we have experienced some problems in compressing color information. That is, if we group voxel colors as an image format and compress it, the color correlation of neighboring voxels is destroyed such that the compression ratio would be unsatisfactory.
In TBVO, the problem is solved by storing a camera (image) index for every voxel. The index is usually the same for large groups of voxels, and this allows the use of the octree structure for economic storage of the additional information. Note that, on the average, only a 15% volume increase was observed in the experiments with our models. Its modeling is a little bit more complex, but allows a more flexible way of representing objects of any geometry.
The advantages of TBVO over BVO are that its rendering is simpler and much faster than BVO's, and virtually no restrictions on the object geometry are imposed.

2.2. TBVO example
In this section, we show a typical example, which illustrates the efficacy and key ingredients of the TBVO representation. In FIG. 12 (a), a BVO model of "Angel" is shown. Using the usual 6 textures of BVO, a few parts of the body and wing are not observed from any camera, yielding a rendered image with a lot of visible 'cracks'. In the TBVO representation of the same model, a total of 8 cameras are used (6 faces of a box + 2 additional cameras). In FIG. 13, (a) is the image of camera indices; a different color denotes a different camera index. Additional cameras are placed inside the cube, watching the front and back faces orthographically. In FIG. 13, (b) and (c) are additional images taken by the additional cameras. As a result, we have obtained a seamless and clear rendering result of the model, as shown in FIG. 12 (b).
2.3. Uncompressed TBVO stream description
We suppose that 255 cameras are enough, and assign up to 1 byte for the index. The TBVO stream is a stream of symbols. Every TBVO-symbol is a BVO-symbol or a Texture-symbol. A Texture-symbol denotes a camera index, which could be a specific number or a code of "undefined". Let the "undefined" code be '?' for further description.
The TBVO stream is traversed in breadth first order. Let us describe how to write the TBVO stream if we have a BVO and every leaf voxel has a camera number. This must be done in the modeling stage. It will traverse all BVO nodes including leaf nodes (which do not have a BVO-symbol) in breadth first order. The following pseudo-code, applied to each node of the traversal, will complete writing the stream (a C++ rendition follows the pseudo-code):

If CurNode is not leaf node
{
    Write current BVO-symbol corresponding to this node
}
if all the children have identical camera index (texture-symbol)
{
    if parent of CurNode has '?' camera index
        Write camera index equal for sub-nodes
}
else
{
    Write '?' symbol
}
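Below is a compilable C++ rendition of the pseudo-code, under assumed conventions: a simple in-memory node type and 255 as the '?' code. Both are illustrative choices, not part of the specification.

#include <cstdint>
#include <vector>

constexpr std::uint8_t UNDEFINED = 255;   // the '?' texture-symbol

struct TBVONode {
    std::uint8_t bvoSymbol = 0;           // filling pattern; unused for leaves
    std::uint8_t camera = UNDEFINED;      // camera index, if common to the subtree
    bool isLeaf = false;
    std::vector<TBVONode*> children;      // up to 8
};

// Returns true if all children carry the same defined camera index
// (for a leaf, the test degenerates to its own camera being defined).
static bool childrenShareCamera(const TBVONode& n)
{
    if (n.children.empty()) return n.camera != UNDEFINED;
    for (const TBVONode* c : n.children)
        if (c->camera == UNDEFINED || c->camera != n.children[0]->camera)
            return false;
    return true;
}

// Emit the symbols for one node; the caller visits nodes in breadth-first order.
void writeNode(const TBVONode& cur, const TBVONode* parent, std::vector<std::uint8_t>& out)
{
    if (!cur.isLeaf)
        out.push_back(cur.bvoSymbol);                 // BVO-symbol
    if (childrenShareCamera(cur)) {
        if (parent == nullptr || parent->camera == UNDEFINED)
            out.push_back(cur.children.empty() ? cur.camera
                                               : cur.children[0]->camera);
    } else {
        out.push_back(UNDEFINED);                     // '?' symbol
    }
}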
2.4. TBVO compression
The octreeimages and octree fields in the OctreeImage node are compressed separately. The described methods have been developed based on the notion that the octree field must be compressed losslessly, while some degree of visually acceptable distortion is allowed for octreeimages.
2.4.1. Octreeimages field compression
The octreeimages field is compressed by means of MPEG-4 image compression (for static models), or video compression tools (for animated models) that are allowed in MPEG-4. In our approach, we used the JPEG format for octreeimages (after some preprocessing, which we call 'minimization' of the JPEG images, retaining, for each texture, only the points necessary for 3D visualization; in other words, the parts of a given texture that are never used at the rendering stage can be compressed as roughly as we like).
2.4.2. Octree field compression
Octree compression is the most important part of the OctreeImage compression, since it deals with compression of an already very compact linkless binary tree representation. However, in our experiments, the method explained below reduced the volume of this structure to about half of the original. In the animated OctreeImage version, the octree field is compressed separately for each 3D frame.
2.4.2.1. Context model
Compression is performed by a variant of adaptive arithmetic coding (implemented as a 'range encoder') that makes explicit use of the geometric nature of the data. The octree is a stream of bytes. Each byte represents a node (i.e., subcube) of the tree, in which its bits indicate the occupancy of the subcube after internal subdivision. The bit pattern is called the filling pattern of the node. The described compression algorithm processes bytes one by one, in the following manner:
- A context for the current byte is determined.
- The 'probability' (normalized frequency) of occurrence of the current byte in this context is retrieved from the 'probability table' (PT) corresponding to the context.
- The probability value is fed to the range encoder.
- The current PT is updated by adding 1 to the frequency of the current byte occurrence in the current context (and, if necessary, renormalized afterwards; see details below).
Thus, coding is the process of constructing and updating the PTs according to the context model. In context-based adaptive arithmetic coding schemes (such as 'Prediction with Partial Matching'), the context of a symbol is usually a string of several preceding symbols. However, in our case, compression efficiency is increased by exploiting the octree structure and the geometric nature of the data. The described approach is based on two ideas that are apparently new in the problem of octree compression.
A. For the current node, the context is either its parent node, or the pair {parent node, current node position in the parent node};
B. It is assumed that the 'probability' of the given node occurrence at the particular geometric location in the particular parent node is invariant with respect to a certain set of orthogonal (such as rotations or symmetries) transforms.
Assumption 'B' is illustrated in FIG. 6 for the transform R, which is the rotation by -90° in the x-z plane. The basic notion behind 'B' is the observation that the probability of occurrence of a particular type of child node in a particular type of parent node should depend only on their relative position. This assumption is confirmed in our experiments by analysis of probability tables. It allows us to use a more complex context without having too many probability tables. This, in turn, helps to achieve quite good results in terms of data size and speed. Note that the more contexts are used, the sharper is the estimated probability, and thus the more compact is the code.
Let us introduce the set of transforms for which we will assume the invariance of probability distributions. In order to apply in our situation, such transforms should preserve the enclosing cube. Consider a set G of the orthogonal transforms in Euclidean space, which are obtained by all compositions in any number and order of the 3 basis transforms (generators) m1, m2 and m3, given by

    m1 = | 0 1 0 |     m2 = | 1 0 0 |     m3 = | -1 0 0 |
         | 1 0 0 |          | 0 0 1 |          |  0 1 0 |
         | 0 0 1 |          | 0 1 0 |          |  0 0 1 |

where m1 and m2 are reflections to the planes x=y and y=z, respectively, and m3 is the reflection to the plane x=0. One of the classical results of the theory of groups generated by reflections states that G contains 48 distinct orthogonal transforms, and is, in a sense, the maximal group of orthogonal transforms that take the cube into itself (the so-called Coxeter group). For example, rotation R in FIG. 6 is expressed through the generators as

    R = m3 · m2 · m1 · m2,

where '·' is matrix multiplication.
A transform from G, applied to an octree node, produces a node with a different filling pattern of subcubes. This allows us to categorize the nodes according to the filling pattern of their subcubes. Using group theory language, we say that G acts on the set of all filling patterns of the octree nodes. Computations show that there exist 22 distinct classes (also called orbits in group theory), in which, by definition, two nodes belong to the same class if and only if they are connected by a transform from G. The number of elements in a class varies from 1 to 24, and is always a divisor of 48.
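The figures 48 and 22 can be verified directly. The sketch below (our illustration, not part of the specification) encodes each generator as a permutation of the 8 subcube positions, indexed as i = x + 2y + 4z (an assumed convention), closes the set under composition, and counts the orbits of the 256 filling patterns; it should report 48 transforms and 22 classes.

#include <array>
#include <cstdint>
#include <iostream>
#include <set>

using Perm = std::array<int, 8>;  // permutation of the 8 subcube positions

// Build a permutation from a coordinate map applied to i = x + 2y + 4z.
static Perm fromXYZ(int (*f)(int, int, int))
{
    Perm p{};
    for (int i = 0; i < 8; ++i)
        p[i] = f(i & 1, (i >> 1) & 1, (i >> 2) & 1);
    return p;
}

// Move each set bit i of the filling pattern to position p[i].
static std::uint8_t applyPerm(const Perm& p, std::uint8_t pattern)
{
    std::uint8_t out = 0;
    for (int i = 0; i < 8; ++i)
        if (pattern & (1 << i)) out |= 1 << p[i];
    return out;
}

int main()
{
    // Generators: m1 swaps x and y, m2 swaps y and z, m3 mirrors x.
    Perm m1 = fromXYZ([](int x, int y, int z) { return y + 2 * x + 4 * z; });
    Perm m2 = fromXYZ([](int x, int y, int z) { return x + 2 * z + 4 * y; });
    Perm m3 = fromXYZ([](int x, int y, int z) { return (1 - x) + 2 * y + 4 * z; });

    // Close the generator set under composition to obtain the group G.
    std::set<Perm> g{m1, m2, m3};
    for (bool grown = true; grown; ) {
        grown = false;
        for (const Perm& a : std::set<Perm>(g))
            for (const Perm& b : std::set<Perm>(g)) {
                Perm c{};
                for (int i = 0; i < 8; ++i) c[i] = a[b[i]];
                grown |= g.insert(c).second;
            }
    }

    // Count the orbits (classes) of the 256 filling patterns under G.
    std::set<std::set<std::uint8_t>> classes;
    for (int s = 0; s < 256; ++s) {
        std::set<std::uint8_t> orbit;
        for (const Perm& p : g) orbit.insert(applyPerm(p, std::uint8_t(s)));
        classes.insert(orbit);
    }
    std::cout << g.size() << " transforms, " << classes.size() << " classes\n";
}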
The practical consequence of assumption 'B' is that the probability table depends not on the parent node itself, but only on the class to which the parent node belongs. Note that there would be 256 tables for a parent-based context and an additional 256x8 = 2048 tables for a parent-and-child position-based context in the former case, while we need only 22 tables for the parent-class-based context plus 22x8 = 176 tables in the latter case. Therefore, it is possible to use an equivalently complex context with a relatively small number of probability tables. The constructed PT would have the form shown in Table 8.
Table 8. Enumeration of probability tables.
ID of PTs     | Entries 0 .. 255         | Context description
0             | P0,0  P0,1  ...  P0,255  | 0-Context: context independent
1..22 (22)    | Pi,0  Pi,1  ...  Pi,255  | 1-Context: {parent node class}
23..198 (176) | Pj,0  Pj,1  ...  Pj,255  | 2-Context: {parent node class, current node position}
2.4.2.2. Encoding process
To make the statistics for probability tables more accurate, they are collected in different ways at three stages of the encoding process:
- At the first stage we do not use contexts at all, accepting the '0-context model', and keep a single probability table with 256 entries, starting from the uniform distribution.
- As soon as the first 512 nodes (an empirically found number) are encoded, we switch to the '1-context model' using the parent node as a context. At the switching moment, the 0-context PT is copied to the PTs for all 22 contexts.
- After the next 2048 nodes (another heuristic value) are encoded, we switch to the '2-context model'. At this moment, the 1-context PTs of the parent patterns are copied to the PTs for each position in the same parent pattern.
The key point of the algorithm is the determination of context and probability for the current byte. This is implemented as follows. In each class we fix a single element, which is called the 'standard element'. We store a class map table (CMT) indicating the class to which each of the possible 256 nodes belongs, and the precomputed transform from G that takes this particular node into the standard element of its class. Thus, in order to determine the probability of the current node N, we perform the following steps:
- Look at the parent P of the current node;
- Retrieve from the CMT the class to which P belongs, and the transform T that takes P into the standard node of the class. Let the class number be c;
- Apply T to P, and find the child position p in the standard node to which the current node N is mapped;
- Apply T to N. Then, the newly obtained filling pattern TN is at the position p in the standard node of the class c;
- Retrieve the required probability from the entry TN of the probability table corresponding to the class-position combination (c, p).
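In code, these steps reduce to a few table lookups. A minimal sketch, assuming the CMT, the transform tables and the PTs have been precomputed elsewhere; all array names are illustrative.

#include <cstdint>

// Precomputed tables (filled elsewhere): class and canonicalizing transform
// per filling pattern, the action of each of the 48 transforms on patterns
// and on child positions, and the 2-context probability tables.
extern int classOf[256];                     // CMT: class c of each pattern
extern int transformOf[256];                 // T taking a pattern to its standard element
extern std::uint8_t applyToPattern[48][256];
extern int applyToPosition[48][8];
extern int pt2[22][8][256];                  // 2-context probability tables

// Frequency entry for node N at child position 'pos' inside parent P.
int lookupFrequency(std::uint8_t parentP, int pos, std::uint8_t nodeN)
{
    int t = transformOf[parentP];            // T: takes P into its standard node
    int c = classOf[parentP];                // class number c
    int p = applyToPosition[t][pos];         // child position after applying T
    std::uint8_t tn = applyToPattern[t][nodeN]; // filling pattern TN
    return pt2[c][p][tn];                    // entry for combination (c, p)
}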

For the 1-context model, the above steps are modified in an obvious way. Needless to say, all the transforms are precomputed and implemented in a lookup table.
Note that at the stage of decoding of the node N, its parent P is already decoded, and hence the transform T is known. All the steps at the stage of decoding are absolutely similar to the corresponding encoding steps.
Finally, let us outline the probability update process. Let P be a probability table for some context. Denote by P(N) the entry of P corresponding to the probability of occurrence of the node N in this context. In our implementation, P(N) is an integer, and after each occurrence of N, P(N) is updated as:

    P(N) = P(N) + A,

where A is an integer increment parameter varying typically from 1 to 4 for different context models. Let S(P) be the sum of all entries in P. Then the 'probability' of N that is fed to the arithmetic coder (a range coder in our case) is computed as P(N)/S(P). As soon as S(P) reaches a threshold value 2^13, all the entries are renormalized: in order to avoid the occurrence of zero values in P, entries equal to 1 are left intact, while the others are divided by 2.
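A minimal sketch of this update rule; the structure and names are illustrative, and the threshold 2^13 matches the MAXCUM value used in clause 2.7.1.5.

#include <array>

constexpr int THRESHOLD = 1 << 13;   // renormalization trigger for S(P)

// One probability table: 256 integer frequencies, all initialized to 1.
struct ProbTable {
    std::array<int, 256> freq{};
    int sum = 0;
    ProbTable() { freq.fill(1); sum = 256; }

    // Called after each occurrence of node symbol n; A is the adaptive step.
    void update(int n, int A)
    {
        freq[n] += A;
        sum += A;
        if (sum >= THRESHOLD) {      // renormalize, keeping entries nonzero
            sum = 0;
            for (int& f : freq) {
                if (f > 1) f /= 2;   // entries equal to 1 are left intact
                sum += f;
            }
        }
    }
    // The value fed to the range coder is freq[n] / double(sum).
};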
2.4.2.3. Encoding of the 'camera nodes'
The stream of symbols determining the texture (camera) numbers for each voxel is compressed using its own probability table. In the terms used above, it has a single context. PT entries are updated with a larger increment than entries for octree nodes; in the rest, there is no difference with node symbols coding.
2.5. Results of TBVO compression and rendering
FIGs. 15, 17, 18, and 19 show the results of TBVO compression. In FIG. 16, peeled images of the "Angel" and "Morton" models are illustrated. The compressed size is compared with the compressed BVO: in the third column the number in brackets is the compressed geometry volume, while the first number is the total volume of the TBVO-based compressed model (i.e., textures are taken into account). As a measure of visual distortion, PSNR was computed to estimate the color difference after the LDI→(T)BVO→LDI transform. The compressed model size is the size of all the textures (stored as minimized JPEGs; see 2.4.1), plus the compressed geometry size. In the TBVO case, the compressed geometry also includes the camera information. The PSNR of TBVO is improved significantly compared with BVO.
TBVO achieves faster rendering than BVO. For the "Angel" model, the frame rate of TBVO-12 is 10.8 fps, while that of BVO is 7.5. For the "Morton" model, TBVO-12 is 3.0 fps, while BVO is 2.1 (on a Celeron 850 MHz). On the other hand, it is observed that the rendering is accelerated much further in animated TBVO. For the "Dragon" model, the frame rate of TBVO-12 is 73 fps, while that of BVO is 29 fps (on a Pentium IV 1.8 GHz).
The TBVO format provides great flexibility. For example, 2 ways of using 12 cameras are illustrated in FIG. 6: TBVO-12 and TBVO-(6+6). TBVO-12 uses 6 BVO cameras (cube faces) plus 6 images taken from the cube center, parallel to the faces. The (6+6) configuration uses 6 BVO cameras, and then it removes ('peels') all the voxels visible by these cameras and 'photographs' the parts that became visible by the same 6 cameras. Examples of such images are shown in FIG. 16.
Note the drastic difference in quality (subjective and PSNR value) between the BVO and TBVO-6 Angel models. Although the same camera locations are used, TBVO allows us to assign camera numbers to all the voxels, even those invisible from all the cameras. These numbers are chosen so as to best match the original colors (i.e., for each point the best color match in all the 'camera images' is selected, regardless of direct visibility; in the Angel case this gives a great result).
Note also the very modest 'geometry' (i.e., BVO + cameras) volume difference between the 6 and 12 camera cases. In fact, additional cameras cover, typically, small regions, and thus their identifiers are rare, and their textures are sparse (and well compressed). All this applies not only to 'Angel', but also to 'Morton', 'Palm512', and 'Robots512'.
2.6. Node specification

OctreeImage {
  field SFInt32 octreeresolution 256
  field MFInt32 octree           []   #%q=13,8
  field MFInt32 cameraID         []   #%q=13,8
  field MFNode  octreeimages     []
}

The OctreeImage node defines a TBVO structure, in which an octree structure, the corresponding camera index array, and a set of octreeimages exist.
The octreeimages field specifies a set of DepthImage nodes with SimpleTexture for the diTexture field; the depth field in these SimpleTexture nodes is not used. The orthographic field must be TRUE for the DepthImage nodes. For each SimpleTexture, the texture field stores the color information of the object, or part of the object view (for example, its cross-section by a camera plane), as obtained by the orthographic camera whose position and orientation are specified in the corresponding fields of DepthImage. Parts of the object corresponding to each camera are assigned at the stage of model construction. The object partitioning, using the values of the position, orientation, and texture fields, is performed so as to minimize the number of cameras (or, equivalently, of the involved octreeimages), at the same time including all the object parts potentially visible from an arbitrarily chosen position. The orientation fields must satisfy the condition: the camera view vector has only one nonzero component (i.e., is perpendicular to one of the enclosing cube faces). Also, the sides of the SimpleTexture image must be parallel to the corresponding sides of the enclosing cube.
The octree field completely describes the object geometry. Geometry is represented as a set of voxels that constitutes the given object. An octree is a tree-like data structure, in which each node is represented by a byte. A 1 in the ith bit of this byte means that children nodes exist for the ith child of that internal node, while 0 means that they do not. The order of the octree internal nodes shall be the order of breadth first traversal of the octree. The order of the eight children of an internal node is shown in FIG. 14 (b). The size of the enclosing cube of the total octree is 1x1x1, and the center of the octree cube shall be the origin (0, 0, 0) of the local coordinate system.
The cameraID field contains an array of camera indices assigned to voxels. At the rendering stage, the color attributed to an octree leaf is determined by orthographically projecting the leaf onto one of the octreeimages with a particular index. The indices are stored in an octree-like fashion: if a particular camera can be used for all the leaves contained in a specific node, the node containing the index of the camera is issued into the stream; otherwise, the node containing a fixed 'further subdivision' code is issued, which means that the camera index will be specified separately for the child subnodes of the current node (in the same recursive fashion). If the cameraID is empty, then the camera indices are determined during the rendering stage (as in the BVO case).
The octreeresolution field specifies the maximum allowable number of octree leaves along a side of the enclosing cube. The level of the octree can be determined from octreeresolution using the following equation:

    octreelevel = ceil(log2(octreeresolution))

2.7. Bitstream specification
2.7.1. Octree Compression
2.7.1.1. Overview
The OctreeImage node in Depth Image-Based Representation defines the octree structure and the projected textures. Each texture, stored in the octreeimages array, is defined through a DepthImage node with SimpleTexture. The other fields of the OctreeImage node can be compressed by octree compression.
2.7.1.2. Octree
2.7.1.2.1. Syntax

class Octree () {
    OctreeHeader ();
    aligned bit (32)* next;
    while (next == 0x000001C8) {
        aligned bit (32) octree_frame_start_code;
        OctreeFrame(octreeLevel);
        aligned bit (32)* next;
    }
}

2.7.1.2.2. Semantics
The compressed stream of the octree contains an octree header and one or more octree frames, each preceded by octree_frame_start_code. The value of the octree_frame_start_code is always 0x000001C8. This value is detected by look-ahead parsing (next) of the stream.
2.7.1.3. OctreeHeader
2.7.1.3.1. Syntax

class OctreeHeader () {
    unsigned int (5) octreeResolutionBits;
    unsigned int (octreeResolutionBits) octreeResolution;
    int octreeLevel = ceil(log(octreeResolution)/log(2));
    unsigned int (3) textureNumBits;
    unsigned int (textureNumBits) numOfTextures;
}
2.7.1.3.2. Semantics
This class reads the header information for the octree compression.
The octreeResolution, whose length is described by octreeResolutionBits, contains the value of the octreeResolution field of the OctreeImage node. This value is used to derive the octree level.
The numOfTextures, which is textureNumBits long, describes the number of textures (or cameras) used in the OctreeImage node. This value is used for the arithmetic coding of the camera ID for each node of the octree. If the value of textureNumBits is 0, then the texture symbols are not coded, by setting the curTexture of the root node to 255.
2.7.1.4. OctreeFrame
2.7.1.4.1. Syntax

class OctreeFrame (int octreeLevel) {
    for (int curLevel=0; curLevel < octreeLevel; curLevel++) {
        for (int nodeIndex=0; nodeIndex < nNodesInCurLevel; nodeIndex++) {
            int nodeSym = ArithmeticDecodeSymbol (contextID);
            if (curTexture == 0)
                curTexture = ArithmeticDecodeSymbol (textureContextID);
        }
    }
    for (int nodeIndex=0; nodeIndex < nNodesInCurLevel; nodeIndex++) {
        if (curTexture == 0)
            curTexture = ArithmeticDecodeSymbol (textureContextID);
    }
}
2.7.1.4.2. Semantics
This class reads a single frame of the octree in breadth first traversal order. Starting from the 1st node in level 0, after reading every node in the current level, the number of nodes in the next level is known by counting all the 1's in each node symbol. In the next level, that number of nodes (nNodesInCurLevel) will be read from the stream.
For decoding of each node, an appropriate contextID is given, as described in clause 2.7.1.6.
If the texture (or camera) ID for the current node (curTexture) is not defined by the parent node, then the texture ID is also read from the stream, using the context for texture ID, defined by textureContextID. If a non-zero value is retrieved (the texture ID is defined), then this value will also be applied to all the children nodes in the following levels. After decoding every node, the textureID will be assigned to the leaf nodes of the octree that still have not been assigned a textureID value.
2.7.1.5. Adaptive Arithmetic Decoding
In this section, the adaptive arithmetic coder used in octree compression is described, using a C++ style syntactic description. aa_decode() is the function which decodes a symbol, using a model specified through the array cumul_freq[], and PCT is an array of probability context tables, as described in clause 2.7.1.6.

int ArithmeticDecodeSymbol (int contextID)
{
    unsigned int MAXCUM = 1<<13;
    unsigned int TextureMAXCUM = 256;
    int *p, allsym, maxcum;

    if (contextID != textureContextID)
    {
        p = PCT[contextID];
        allsym = 256;
        maxcum = MAXCUM;
    }
    else
    {
        p = TexturePCT;
        allsym = numOfTextures;
        maxcum = TextureMAXCUM;
    }

    int cumul_freq[allsym];
    int cum = 0;
    for (int i=allsym-1; i>=0; i--)
    {
        cum += p[i];
        cumul_freq[i] = cum;
    }
    if (cum > maxcum)
    {
        cum = 0;
        for (int i=allsym-1; i>=0; i--)
        {
            PCT[contextID][i] = (PCT[contextID][i]+1)/2;
            cum += PCT[contextID][i];
            cumul_freq[i] = cum;
        }
    }
    return aa_decode(cumul_freq);
}
2.7.1.6. Decoding Process
The overall structure of the decoding process is described above (see also the encoding process description in clause 2.4.2.2). It shows how one obtains the TBVO nodes from the stream of bits that constitute the arithmetically encoded (compressed) TBVO model.
At each step of the decoding process we must update the context number (i.e., the index of the probability table we use), and the probability table itself. We call the union of all probability tables (integer arrays) the probabilistic model. The j-th element of the i-th probability table, divided by the sum of its elements, estimates the probability of occurrence of the j-th symbol in the i-th context.
The process of updating the probability table is as follows. At the start, probability tables are initialized so that all the entries are equal to 1. Before decoding a symbol, the context number (ContextID) must be chosen. ContextID is determined from previously decoded data, as indicated in clauses 2.7.1.6.1 and 2.7.1.6.2 below. When ContextID is obtained, the symbol is decoded using the binary arithmetic decoder. After that, the probability table is updated by adding the adaptive step to the decoded symbol frequency. If the total (cumulative) sum of table elements becomes greater than the cumulative threshold, then the normalization is performed (see 2.7.1.5).
2.7.1.6.1. Context modeling of texture symbol
The texture symbol is modeled with only one context. This means that only one probability table is used. The size of this table is equal to numOfTextures plus one. At the start, this table is initialized to all '1'-s. The maximum allowable entry value is set to 256. The adaptive step is set to 32. This combination of parameter values allows adapting to a highly variable stream of texture numbers.
2.7.1.6.2. Context modeling of node symbol
There are 256 different node symbols, each symbol representing a 2x2x2 binary voxel array. A 3D orthogonal transformation may be applied to these arrays, transforming the corresponding symbols into each other.
Consider a set of 48 fixed orthogonal transforms, that is, rotations by 90·n (n = 0, 1, 2, 3) degrees about the coordinate axes, and symmetries.
Their matrices, Orthogonal_Transforms[48], taken in a fixed numbering, are the 48 signed 3x3 permutation matrices, i.e., all 3x3 matrices with exactly one nonzero entry, equal to +1 or -1, in each row and each column.

There are 22 sets of symbols, called classes, such that 2 symbols are connected by such a transform if and only if they belong to the same class.
The coding method constructs PCT's as follows: the ContextID of a symbol equals either the number of the class to which its parent belongs, or a combined number (parent class, current node position in the parent node). This allows a great reduction in the number of contexts, reducing the time needed to gain meaningful statistics.
For each class, a single base symbol is determined (see Table 9), and for each symbol, the orthogonal transform that takes it into the base symbol of its class is precomputed (in the actual encoding/decoding process, a look-up table is used). After the ContextID for a symbol is determined, the transform inverse (i.e., the transposed matrix) to the one taking its parent into the base element is applied. In Table 10, contexts and the corresponding direct transforms for each symbol are given.
Table 9. Example of base symbol for each class (22 rows; columns: Class, Example of base symbol, Class order (number of elements)).
The context model depends on the number N of already decoded symbols:
For N < 512 there is only one context. The probability table is initialized to all '1'-s. The number of symbols in the probability table is 256. The adaptive step is 2. The maximum cumulative frequency is 8192.
For 512 ≤ N < 2560 (2560 = 512 + 2048), the 1-context model (in the sense that the context number is a single parameter, the number of the class) is used. This model uses 22 PCT's. The ContextID is the number of the class to which the parent of the decoded node belongs. This number can always be determined from the lookup table (see Table 10), because the parent is decoded earlier than the child.
Each 6s of the 22 PCT's is initialized by the PCT from previous stage. Number of symbols in each probability table is 256. Adaptive step is 3, Maximum cumulative frequency is also 8192. After symbol is decoded it is transformed using inverse orthogonal transform defined above. The orthogonal transform number can be found in Table III witfi Node Symbol Ip equal to parent of the current node symbol, When 2560 symbols are d~coded, the decoder switches to 2-context (in the sense that context number is now composed of the two parameters as explained below). This model uses 176 (=22'°8, i.e. 22 classes by 8 positions) io PCT's. ContextfD here depends on the parent class and the position of the current node in the parent node. Initial probability tables for this model depend only on its context, but not position. for all 8 positions PCT is a clone of the PCT
obtained for the given class at the previous stage, Number of symbols in each probability table is 256. Adaptive step is 4. Maximum cumulative frequency is also 8192.
After symbol Is decoded it is also transformed using the inverse orthogonal transform (to the one given in the Table 10) as is in the previous model.
One can easily obtain the geometry of base elements far each class, 2o using the Table 10. Base elements are exactly the symbols for which the Transform ID is 0 (number 0 is assigned to the identical transform).
Table 10. Joint look-up table for node symbol, its class number and the orthogonal transform that takes the symbol to the fixed base element of its class (256 rows; columns: Node Symbol ID, Class ID, Orthogonal Transform ID, repeated in three column groups).
Hereinafter, the MPEG-4 node specification and compression techniques of octree image formats used in the depth image-based 3D representing apparatus and method according to the present invention will be described in detail.
This invention describes a family of data structures, depth image-based representations (DIBR), that provide effective and efficient representations based mostly on images and depth maps, fully utilizing the advantages described above. Let us briefly characterize the main DIBR formats: SimpleTexture, PointTexture, and OctreeImage.
FIG. 20 is a diagram of an example of the relief texture image and depth map, and FIG. 21 is a diagram of an example of a layered depth image (LDI), in which (a) shows projection of the object and (b) shows layered pixels.
SimpleTexture is a data structure that consists of an image, a corresponding depth map, and a camera description (its position, orientation and type, orthogonal or perspective). Representation capabilities of a single SimpleTexture are restricted to objects like the facade of a building: a frontal image with a depth map allows reconstruction of facade views at a substantial range of angles. However, a collection of SimpleTextures produced by properly positioned cameras allows representation of the whole building, in case the reference images cover all the potentially visible parts of the building surface. Of course, the same applies to trees, human figures, cars, etc. Moreover, a union of SimpleTextures provides quite natural means for handling 3D animated data. In this case, reference images are replaced with reference videostreams. Depth maps for each 3D frame can be represented either by alpha-channel values of these videostreams, or by separate gray-scale videostreams. In this type of representation, images can be stored in lossy compressed formats like, say, JPEG. This significantly reduces the volume of the color information, especially in the animated case. However, geometry information (depth maps) should be compressed losslessly, which affects the overall reduction in storage.
For objects of complex shape, it is sometimes difficult to cover the whole visible surface with a reasonable number of reference images. A preferable representation for such cases might be PointTexture. This format also stores a reference image and depth map, but in this case both are multivalued: for each line of sight provided by the camera (orthographic or perspective), color and distance are stored for every intersection of the line with the object. The number of intersections may vary from line to line. A union of several PointTextures provides a very detailed representation even for complex objects. But the format lacks most of the 2D regularity of SimpleTexture, and thus has no natural image-based compressed form. For the same reason it is only used for still objects.

The OctreeImage format occupies an intermediate position between the 'mostly 2D' SimpleTexture and the 'mostly 3D' PointTexture: it stores geometry of the object in the octree-structured volumetric representation (hierarchically organized voxels of the usual binary subdivision of the enclosing cube), while the color component is represented by a set of images. This format also contains an additional octree-like data structure, which stores, for each leaf voxel, the index of a reference image containing its color. At the stage of rendering of the OctreeImage, the color of the leaf voxel is determined by orthographically projecting it on the corresponding reference image. We have developed a very efficient compression method for the geometry part of OctreeImage. It is a variant of adaptive context-based arithmetic coding, where the contexts are constructed with the explicit usage of the geometric nature of the data. Usage of this compression together with lossy compressed reference images makes OctreeImage a very space-efficient representation. Like SimpleTexture, OctreeImage has an animated version: reference videostreams instead of reference images, plus two additional streams of octrees representing geometry and voxel-to-image correspondence for each 3D frame. A very useful feature of the OctreeImage format is its built-in mip-mapping capability.
The DIBR family has been developed for the new version of the MPEG-4 standard, and adopted for inclusion into MPEG's Animation Framework eXtension (AFX). AFX provides more enhanced features for synthetic MPEG-4 environments, and includes a collection of interoperable tools that produce a reusable architecture for interactive animated contents (compatible with existing MPEG-4). Each AFX tool shows compatibility with a BIFS node, a synthetic stream, and an audio-visual stream. The current version of the AFX consists of higher-level descriptions of animation (e.g., bone and skin based animation), enhanced rendering (e.g., procedural texturing, light-field mapping), compact representations (e.g., NURBS, solid representation, subdivision surfaces), low bit-rate animations (e.g., interpolator compression) and others, as well as our proposed DIBR.
DIBR formats were designed so as to combine advantages of different ideas suggested earlier, providing a user with flexible tools best suited for a particular task. For example, non-animated SimpleTexture and PointTexture are particular cases of known formats, while OctreeImage is an apparently new representation. But in the MPEG-4 context, all three basic DIBR formats can be considered as building blocks, and their combinations by means of MPEG-4 constructs not only embrace many of the image-based representations suggested in the literature, but also give a great potential for constructing new such formats.
Now, Depth Image-Based Representation will be described.
Taking into account the ideas outlined in the previous section, as well as some of our own developments, we suggested the following set of image-based formats for use in MPEG-4 AFX: SimpleTexture, PointTexture, DepthImage, and OctreeImage. Note that SimpleTexture and OctreeImage have animated versions.
SimpleTexture is a single image combined with a depth image. It is equivalent to RT, while PointTexture is equivalent to LDI.
Based on SimpleTexture and PointTexture as building blocks, we can construct a variety of representations using MPEG-4 constructs. The formal specification will be given later; here we describe the result geometrically.
The DepthImage structure defines either a SimpleTexture or a PointTexture together with a bounding box, position in space and some other information. A set of DepthImages can be unified under a single structure called a Transform node, and this allows construction of a variety of useful representations. Most commonly used are the two of them that do not have a specific MPEG-4 name, but in our practice we call them Box Texture (BT) and Generalized Box Texture (GBT). BT is a union of six SimpleTextures corresponding to a bounding cube of an object or a scene, while GBT is an arbitrary union of any number of SimpleTextures that together provide a consistent 3D representation. An example of BT is given in FIG. 22, where reference images, depth maps and the resulting 3D object are shown. BT can be rendered with the aid of an incremental warping algorithm, but we use a different approach applicable to GBT as well. An example of the GBT representation is shown in FIG. 23, where 21 SimpleTextures are used to represent a complex object, the palm tree.

It should be noted that the unification mechanism allows, for instance, the use of several LDIs with different cameras to represent the same object, or parts of the same object. Hence, data structures like image-based objects, cells of the LDI tree, and cells of the surfels-based tree structure are all particular cases of this format, which obviously offers much greater flexibility in adapting the location and resolution of SimpleTextures and PointTextures to the structure of the scene.
Next, OctreeImage: Textured Binary Volumetric Octree (TBVO) will be described.
In order to utilize multiresolution geometry and texture with a more flexible representation and fast rendering, we developed the OctreeImage representation, which is based on Textured Binary Volumetric Octree (TBVO). The objective of TBVO is to contrive a flexible representation/compression format with fast high quality visualization. TBVO consists of three main components: Binary Volumetric Octree (BVO), which represents geometry, a set of reference images, and image indices corresponding to the octree nodes.
Geometric information in BVO form is a set of binary (occupied or empty) regularly spaced voxels combined in larger cells in the usual octree manner. This representation can be easily obtained from DepthImage data through the intermediate 'point cloud' form, since each pixel with depth defines a unique point in 3D space. Conversion of the point cloud to BVO is illustrated in FIG. 24. An analogous process allows converting a polygonal model to BVO. Texture information of the BVO can be retrieved from reference images. A reference image is a texture of voxels at a given camera position and orientation. Hence, BVO itself, together with reference images, does already provide the model representation. However, it turned out that an additional structure storing the reference image index for each BVO leaf allows visualizing much faster and with better quality.
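A minimal sketch of this pixel-to-point conversion for an orthographic camera, reusing the depth-to-distance convention given earlier; the [0,1] image-plane mapping and all names are illustrative assumptions.

#include <vector>

struct Point3 { double x, y, z; };

// Each pixel (x, y) with a nonzero depth sample d defines one 3D point:
// the pixel position in the image plane plus the distance recovered from d.
std::vector<Point3> depthImageToPointCloud(const std::vector<unsigned>& depth,
                                           int width, int height, unsigned dMax,
                                           double nearPlane, double farPlane)
{
    std::vector<Point3> cloud;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            unsigned d = depth[y * width + x];
            if (d == 0) continue;                  // no model point at this pixel
            double dist = nearPlane
                + (1.0 - double(d) / dMax) * (farPlane - nearPlane);
            cloud.push_back({ (x + 0.5) / width,   // pixel center in [0,1]
                              (y + 0.5) / height,
                              dist });
        }
    return cloud;
}

The resulting points are then quantized into the regular voxel grid to build the BVO.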
The main BVO visualization problem is that we must determine the corresponding camera index of each voxel during rendering. To this end, we must at least determine the existence of a camera from which the voxel is visible. This procedure is very slow if we use a brute-force approach. In addition to this problem, there are still some troubles for voxels that are not visible from any cameras, yielding undesirable artifacts in the rendered image.
A possible solution could be storing an explicit color for each voxel. However, in this case, we have experienced some problems in compressing color information. That is, if we group voxel colors as an image format and compress it, the color correlation of neighboring voxels is destroyed such that the compression ratio would be unsatisfactory.
In TBVO, the problem is solved by storing a camera (image) index for every voxel. The index is usually the same for large groups of voxels, and this allows the use of the octree structure for economic storage of the additional information. Note that, on average, only a 15% volume increase, in comparison to the representation using only BVO and reference images, was observed in the experiments with our models. Its modeling is a little bit more complex, but allows a more flexible way of representing objects of any geometry.
Note that TBVO is a very convenient representation for rendering with the aid of splats, because the splat size is easily computed from the voxel size. The voxel color is easily determined using the reference images and the image index of the voxel.
Now, streaming of the textured binary volumetric octree will be described.
We suppose that 255 cameras are enough, and assign up to 1 byte for the index. The TBVO stream is a stream of symbols. Every TBVO-symbol is a BVO-symbol or a Texture-symbol. A Texture-symbol denotes a camera index, which could be a specific number or a code of "undefined".
Let the "undefined" code be '?' for further description. The TBVO stream is traversed in breadth first order. Let us describe how to write the TBVO stream if we have a BVO and every leaf voxel has an image index. This must be done in the modeling stage. It will traverse all BVO nodes including leaf nodes (which do not have a BVO-symbol) in breadth first order. In FIG. 25, the pseudo-code which completes writing the stream is shown.
An example of writing the TBVO bitstream is shown in FIG. 14. For the TBVO tree shown in FIG. 14(a), a stream of symbols can be obtained as shown in FIG. 14(c), according to the procedure. In this example, the texture-symbols are represented in bytes. However, in the actual stream, each texture-symbol would only need 2 bits, because we only need to represent three values (two cameras and the undefined code).
Next, DIBR Animation will be described.
Animated versions were defined for two of the DIBR formats:
DepthImage containing only SimpleTextures, and OctreeImage. Data volume is one of the crucial issues with 3D animation. We have chosen these particular formats because video streams can be naturally incorporated in the animated versions, providing substantial data reduction.
For DepthImage, animation is performed by replacing reference images by MPEG-4 MovieTextures. High-quality lossy video compression does not seriously affect the appearance of the resulting 3D objects. Depth maps can be stored (in near-lossless mode) in the alpha channels of the reference video streams.
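A trivial sketch of this packing, assuming 8-bit depth quantization (the actual codec settings and the near-lossless coding mode are outside this sketch):

import numpy as np

def pack_depth_in_alpha(rgb, depth8):
    # rgb: (H, W, 3) uint8 reference frame; depth8: (H, W) uint8 depth map
    # Returns an (H, W, 4) RGBA frame whose alpha channel carries the depth
    return np.dstack([rgb, depth8])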
At the rendering stage, a 3D frame is rendered after all the reference image and depth frames are received and decompressed.
Animation of OctreeImage is similar: reference images are replaced by MPEG-4 MovieTextures, and a new stream of octrees appears.
MPEG-4 Node Specification will now be described.
The DIBR formats are described in detail in the MPEG-4 AFX node specifications [4]. DepthImage contains fields determining the parameters of the view frustum for either SimpleTexture or PointTexture. The OctreeImage node represents an object in the form of TBVO-defined geometry and a set of reference images. Scene-dependent information is stored in special fields of the DIBR data structures, allowing the correct interaction of DIBR objects with the rest of the scene. The definition of DIBR nodes is shown in FIG. 26.
FIG. 27 illustrates the spatial layout of the DepthImage, in which the meaning of each field is shown. Note that the DepthImage node defines a single DIBR object. When multiple DepthImage nodes are related to each other, they are processed as a group, and thus should be placed under the same Transform node. The diTexture field specifies the texture with depth (SimpleTexture or PointTexture), which shall be mapped into the region defined in the DepthImage node.

The OctreeImage node defines an octree structure and its projected textures. The octreeResolution field specifies the maximum number of octree leaves along a side of the enclosing cube. The octree field specifies a set of octree internal nodes. Each internal node is represented by a byte. A 1 in the i-th bit of this byte means that children nodes exist for the i-th child of that internal node, while 0 means that they do not. The order of the octree internal nodes shall be the order of breadth-first traversal of the octree. The order of the eight children of an internal node is shown in FIG. 14(b). The voxelImageIndex field contains an array of image indices assigned to voxels. At the rendering stage, the color attributed to an octree leaf is determined by orthographically projecting the leaf onto one of the images with a particular index. The indices are stored in an octree-like fashion: if a particular image can be used for all the leaves contained in a specific voxel, the voxel containing the index of the image is issued into the stream; otherwise, a voxel containing a fixed 'further subdivision' code is issued, which means that the image index will be specified separately for each child of the current voxel (in the same recursive fashion). If the voxelImageIndex is empty, then the image indices are determined during the rendering stage. The images field specifies a set of DepthImage nodes with SimpleTexture for the diTexture field. However, the nearPlane and farPlane fields of the DepthImage node and the depth field in the SimpleTexture node are not used.
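The following sketch illustrates the breadth-first, one-byte-per-internal-node convention of the octree field. The dict-based node representation and the use of octreeResolution to infer the leaf level are assumptions of this sketch, not part of the node specification.

from collections import deque
import math

def decode_octree(octree_bytes, octree_resolution):
    # The leaf level follows from octreeResolution = leaves per cube side
    max_depth = int(math.log2(octree_resolution))
    root = {"depth": 0, "children": [None] * 8}
    queue = deque([root])  # internal nodes awaiting their child-mask byte
    for byte in octree_bytes:
        node = queue.popleft()
        for i in range(8):
            if byte & (1 << i):  # i-th bit set: the i-th child exists
                child = {"depth": node["depth"] + 1, "children": [None] * 8}
                node["children"][i] = child
                if child["depth"] < max_depth:  # children above leaf level are internal
                    queue.append(child)
    return root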
Rendering methods for the DIBR formats are not part of AFX, but it is necessary to explain the ideas used to achieve simplicity, speed and quality of DIBR object rendering. Our rendering methods are based on splats, small flat color patches used as 'rendering primitives'. The two approaches outlined below are oriented at two different representations: DepthImage and OctreeImage. In our implementation, OpenGL functions are employed for splatting to accelerate the rendering. Nevertheless, software rendering is also possible, and allows optimized computation using the simple structure of DepthImage or OctreeImage.
The method we use for rendering DepthImage objects is extremely simple. It should be mentioned, however, that it depends on the OpenGL functions and works much faster with the aid of a hardware accelerator. In this method, we transform all the pixels with depth from the SimpleTextures and PointTextures that are to be rendered into 3D points, then position small polygons (splats) at these points, and apply the rendering functions of OpenGL.
Pseudo-code of this procedure for the SimpleTexture case is given in FIG. 28; the PointTexture case is treated in exactly the same way.
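A minimal numpy sketch of the lifting step, assuming an orthographic camera, 8-bit depth values, and the nearPlane/farPlane convention of the DepthImage node; the splat drawing itself would then be done with OpenGL and is omitted here.

import numpy as np

def lift_pixels(depth_map, vol_width, vol_height, near_plane, far_plane):
    # depth_map: (H, W) uint8; zero marks pixels carrying no geometry
    rows, cols = depth_map.shape
    v, u = np.nonzero(depth_map)
    # Pixel centers mapped into the view volume, centered at the origin
    x = (u + 0.5) / cols * vol_width - vol_width / 2.0
    y = vol_height / 2.0 - (v + 0.5) / rows * vol_height
    # 8-bit depth rescaled into the [near_plane, far_plane] range
    z = near_plane + depth_map[v, u] / 255.0 * (far_plane - near_plane)
    return np.column_stack([x, y, z])  # one 3D point per splat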
The size of a splat must be adapted to the distance between the point and the observer. We used the following simple approach. First, the enclosing cube of the given 3D object is subdivided into a coarse uniform grid. The splat size is computed for each cell of the grid, and this value is used for the points inside the cell. The computation is performed as follows:
- Map the cell on the screen by means of OpenGL.
- Calculate the length L of the largest diagonal of the projection (in pixels).
- Estimate D (splat diameter) as D = C·L/N, where N is the average number of points per cell side and C is a heuristic constant, approximately 1.3.
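Restated as code, the heuristic above is a one-liner (the function name is illustrative):

def splat_diameter(L, N, C=1.3):
    # L: largest projected diagonal of the grid cell, in pixels
    # N: average number of points per cell side; C: heuristic constant
    return C * L / N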
We'd like to emphasize that this method could certainly be improved by sharper radius computations, more complex splats, and antialiasing. However, even this simple approach provides good visual quality.
The same approach works for OctreeImage, where the nodes of the octree at one of the coarser levels are used in the above computation of splat size.
However, for the OctreeImage, color information should first be mapped on the set of voxels. This can be done very easily, because each voxel has its corresponding reference image index. The pixel position in a reference image is also known during the parsing of the octree stream. As soon as the colors of OctreeImage voxels are determined, splat sizes are estimated and the OpenGL-based rendering is used as described above.
DIBR formats have been implemented and tested on several 3D models.
One of the models ("Tower") was obtained by scanning an actual physical object (a Cyberware color 3D scanner was used); the others were converted from the 3DS-MAX demo package. Tests were performed on an Intel Pentium-IV 1.8GHz with an OpenGL accelerator.

In the following subsections, we explain the methods of conversion from polygonal to DIBR formats, and then present the modeling, representation, and compression results of the different DIBR formats. Most of the data is for DepthImage and OctreeImage models; these formats have animated versions and can be effectively compressed. All the presented models have been constructed with the orthographic camera, since it is, in general, the preferable way to represent 'compact' objects. Note that the perspective camera is used mostly for economic DIBR representation of distant environments.
DIBR model generation begins with obtaining a sufficient number of SimpleTextures. For a polygonal object the SimpleTextures are computed, while for a real-world object the data is obtained from digital cameras and scanning devices. The next step depends on the DIBR format we want to use.
DepthImage is simply a union of the obtained SimpleTextures. Although depth maps may be stored in compressed form, only lossless compression is acceptable, since even small distortion in geometry is often highly noticeable.
Reference images can be stored in lossy compressed form, but in this case preprocessing is required. While it is generally tolerable to use popular methods like JPEG lossy compression, boundary artifacts become more noticeable in the generated 3D object views - especially at the boundaries between object and background of the reference image, where the background color appears to 'spill' into the object. The solution we have used to cope with the problem is to extend the image in the boundary blocks into the background using the average color of the block and a fast decay of intensity, and then apply the JPEG compression. The effect resembles 'squeezing' the distortion into the background, where it is harmless since background pixels are not used for rendering. Internal boundaries in lossy compressed reference images may also produce artifacts, but these are generally less visible.
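A hedged sketch of this preprocessing, simplified to a flat average-color fill over 8x8 boundary blocks; the intensity-decay term and the matching of the block size to the JPEG codec are left out and would be assumptions of a fuller implementation.

import numpy as np

def extend_into_background(image, mask, block=8):
    # image: (H, W, 3) uint8 reference image; mask: (H, W) bool, True on object pixels
    out = image.copy()
    h, w = mask.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            m = mask[y:y + block, x:x + block]
            if m.any() and not m.all():  # block straddles the object boundary
                blk = out[y:y + block, x:x + block]
                avg = blk[m].mean(axis=0).astype(np.uint8)
                blk[~m] = avg  # fill background pixels with the block's average object color
    return out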
To generate OctreeImage models we use an intermediate point-based representation (PBR). The set of points that constitute the PBR is the union of the colored points obtained by shifting pixels in the reference images by the distances specified in the corresponding depth maps. The original SimpleTextures should be constructed so that the resulting PBR would provide a sufficiently accurate approximation of the object surface. After that, the PBR is converted into OctreeImage as outlined in FIG. 24, and is used to generate a new complete set of reference images that satisfy the restrictions imposed by this format. At the same time, the additional data structure voxelImageIndex, representing reference image indices for octree voxels, is generated. In case the reference images should be stored in lossy formats, they are first preprocessed as explained in the previous subsection.
Besides, since the TBVO structure explicitly specifies the pixel containing the color of each voxel, redundant pixels are discarded, which further reduces the volume of voxelImageIndex. Examples of the original and processed reference images in the JPEG format are shown in FIG. 29.
Note that quality degradation due to lossy compression is negligible for OctreeImages, but sometimes still noticeable for DepthImage objects.
PointTexture models are constructed using projection of the object onto a reference plane, as explained in Section 2.1. If this does not produce enough samples (which may be the case for the surface parts nearly tangent to the vector of projection), additional SimpleTextures are constructed to provide more samples. The obtained set of points is then reorganized into the PointTexture structure. Data on rendering speed will now be presented. The rendering speed of the DepthImage "Palm512" is about 2 fps (note that it consists of 21 SimpleTextures), while other static models we tested with reference image side 512 are rendered at 5 fps. Note that rendering speed depends mostly on the number and resolution of the reference images, but not on the complexity of the scene. This is an important advantage over the polygonal representations, especially in the animated case. The animated OctreeImage "Dragon512" is visualized at 24 frames per second (fps).
"Angel256" Depthlmage model is shown in FIG. 22. FIGS, 30 through 34 show several other DIBR and polygonal models. FIG. 30 compares appearance of pol~~gona! and Depfih!:ruge "I'~o;ton" mcuel. Depthlmaga model :3n uses reference images in the JPEG format and rendering is pertormed by simplest splitting described in Section 5, but image quality is quite acc~:ptable.
FIG. 31 compares two versions of the scanned "Tower" model. Black dots in the upper part of the model are due to noisy input data. FIG. 32 demonstrates more complex "Palm° model, composed of 21 SImpIeTextures. Ix also shows good quality, although leaves are, in general, wider than in the 3DS-MAX
original -- which is a consequence of simplified splitting.
FIG. ~3 presents a 3D frame from "i7ragan512" Octreelmage animation.
FIG. 34 demonstrates ability of a PointTexture format to provide models of excellent quality.
The depth image-based node structure according to the present invention includes the SimpleTexture node, PointTexture node, DepthImage node and OctreeImage node. The DepthImage node is composed of depth information and a color image. The color image is selected from the SimpleTexture node and PointTexture node.
When an object is viewed from six viewpoints (front, back, top, bottom, left and right sides), the object can be represented by six pairs of SimpleTexture nodes. The specification of the SimpleTexture node is shown in FIG. 26.
Referring to FIG. 26, the SimpleTexture node is composed of a texture field, in which a color image containing the color for each pixel is recorded, and a depth field, in which the depth for each pixel is recorded. The SimpleTexture node defines a single IBR texture. Here, a texture means a colored plane image.
A plane image containing the color for each pixel forming the image is in the texture field. The depth for each pixel forming the image is recorded in the depth field. The set of depths in the depth field forms the depth image corresponding to the plane image in the texture field. The depth images are plane images represented in gray scales according to the depths. In the case of a video format for generating animated objects, depth information and color information are multiple sequences of image frames.
The plane image in the texture field (that is, the colored image) and the plane image in the depth field (that is, the image represented in gray scales) constitute a SimpleTexture node. FIG. 20 shows "Morton" objects represented by the SimpleTexture nodes for front viewpoints. Conclusively, the objects are represented by six SimpleTexture nodes, which are pairs of images generated for six viewpoints. FIG. 22 shows "Angel" objects represented by six SimpleTexture nodes.
The color image can be represented by PointTexture nodes. FIG. 21 shows point textures generated by projecting an object onto a reference plane (in this case, a plane spaced a predetermined distance apart from the object to face the back face of the object).
FIG. 26 also shows the specification of a PointTexture node.
Referring to FIG. 26, the PointTexture node is composed of a size field, a resolution field, a depth field and a color field. Size information of the image plane is recorded in the size field. The size field is composed of width and height fields, where the width and height of the image plane are recorded, respectively. The size of the image plane is set large enough to cover the entire object projected onto the reference plane.
The resolution information on the depth for each pixel is recorded in the resolution field. For example, when the number "8" is recorded in the resolution field, the depth of an object is represented by 256 scales based on the distance from the reference plane.
Multiple pieces of depth information on each pixel are recorded in the depth field. The depth information is a sequence of the numbers of pixels projected onto the image plane and the depths for the respective pixels. Color information on each pixel is recorded in the color field. The color information is a sequence of colors corresponding to the respective pixels projected onto the image plane.
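As an illustration of this layout, a sketch of unpacking the depth and color fields follows. It assumes the per-pixel count-then-depths ordering described above, which is a reading of this description rather than the normative MPEG-4 AFX syntax.

def unpack_point_texture(width, height, depth_field, color_field):
    # depth_field: flat list of [count, depth_1, ..., depth_count] per pixel
    # color_field: flat list of colors, one per stored depth, in the same order
    points = {}  # (u, v) -> list of (depth, color) pairs
    di = ci = 0
    for v in range(height):
        for u in range(width):
            count = depth_field[di]; di += 1
            entries = []
            for _ in range(count):
                entries.append((depth_field[di], color_field[ci]))
                di += 1; ci += 1
            if entries:
                points[(u, v)] = entries
    return points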
The viewpoint information constituting the DepthImage node includes several fields such as viewpoint, visibility, projection method, and distance.
In the viewpoint field, the viewpoint from which an image plane is viewed is recorded. The viewpoint field has position and orientation fields, where the position and orientation of the viewpoint are recorded. The position in the position field is a relative location of the viewpoint to the coordinate system's origin (0, 0, 0), while the orientation in the orientation field is a rotation amount of the viewpoint relative to the default orientation.
In the visibility field, the visibility area from the viewpoint to the image plane is recorded. In the projection method field, a projection method from the viewpoint to the image plane is recorded. In the present invention, the projection method includes an orthogonal projection method, in which the visibility area is represented by width and height, and a perspective projection method, in which the visibility area is represented by a horizontal angle and a vertical angle. When the orthogonal projection method is selected, that is, when the projection method field is set to TRUE, the width and the height of the visibility area correspond to the width and height of the image plane, respectively.
When the perspective projection method is selected, the horizontal and vertical angles of the visibility area correspond to the angles formed to the horizontal and vertical sides by views ranging from the viewpoint to the image plane.
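The difference between the two conventions can be summarized in a short sketch computing the visible extent at a given distance from the viewpoint; the function and parameter names are illustrative, not field names from the specification.

import math

def visible_extent(orthographic, field_of_view, distance):
    # Orthographic: field_of_view holds (width, height) of the visibility area
    if orthographic:
        return field_of_view
    # Perspective: field_of_view holds (horizontal, vertical) angles in radians
    ax, ay = field_of_view
    return (2 * distance * math.tan(ax / 2), 2 * distance * math.tan(ay / 2))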
In the distance field, a distance from a viewpoint to a closer boundary plane and a distance from the viewpoint to a farther boundary plane are recorded. The distance field is composed of a nearPlane field and a farPlane field. The distance field defines an area for depth information.
FIGS. 35A and 35B are diagrams showing the relationships of the respective nodes when representing an object in a DepthImage format having SimpleTexture nodes and PointTexture nodes, respectively.
Referring to FIG. 35A, the object can be represented by sets of DepthImage nodes corresponding to six viewpoints. Each of the respective DepthImage nodes consists of viewpoint information and a SimpleTexture. The SimpleTexture consists of a pair of a color image and a depth image.
Referring to FIG. 35B, the object can be represented by a DepthImage node. The specification of the DepthImage node is described above. A PointTexture node is composed of plane information, in which information on the plane onto which the object is projected is recorded, and depth information and color information of the various points of the object projected onto the image plane.
In an OctreeImage node, an object is represented by the structure of internal nodes constituting the voxels containing the object, and reference images.
The specification of the OctreeImage node is shown in FIG. 26.
Referring to FIG. 26, the OctreeImage node includes the fields octreeResolution, octree, voxelImageIndex and images.
In the octreeResolution field, the maximum number of octree leaves along a side of the enclosing cube containing the object is recorded. In the octree field, the internal node structure is recorded. An internal node is a node for a subcube generated after subdividing the enclosing cube containing the whole object. Subdivision of each subcube is iteratively performed to generate 8 subcubes until a predetermined number of subcubes is reached. In the case of iteratively performing the subdivision 3 times, assuming that a node for a subcube after the second subdivision iteration is referred to as a current node, a node for a subcube after the first subdivision iteration and a node for a subcube after the third subdivision are referred to as a parent node and a child node, respectively. The order of the 8 divided subcubes is given by breadth-first priority. FIG. 14 shows a method of assigning priority numbers to subcubes.
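A sketch of this subdivision, under the assumption that the child index encodes the (x, y, z) offsets in its three bits; the actual axis convention of FIG. 14(b) may differ from this assumption.

def subcube(origin, size, i):
    # origin: (x, y, z) of the cube's minimal corner; i: child index 0..7
    half = size / 2.0
    dx, dy, dz = (i >> 0) & 1, (i >> 1) & 1, (i >> 2) & 1
    x, y, z = origin
    return (x + dx * half, y + dy * half, z + dz * half), half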
Each internal node is represented by a byte. Node information recorded in the bitstream constituting the byte represents the presence or absence of children nodes belonging to the internal node.
In the index field, reference image indices corresponding to the respective internal nodes are recorded. In the image field, reference images corresponding to the indices recorded in the index field are recorded. The reference images are DepthImage nodes, and the structure thereof is described above.
FIG. 36 is a diagram showing the structure of a pertinent OctreeImage node in representing an object using OctreeImage nodes.
Referring to FIG. 36, the OctreeImage nodes are encapsulated by bitwrappers. Each bitwrapper includes an OctreeImage node. When an object is represented in SimpleTexture nodes, the OctreeImage node includes DepthImage nodes, each DepthImage node containing a SimpleTexture node.
On the other hand, when an object is represented in PointTexture nodes, the OctreeImage node includes a single DepthImage node.
The present invention can be implemented on a computer-readable recording medium by computer-readable codes. The computer-readable recording medium includes all kinds of recording apparatus from which data readable by a computer system can be read, and examples thereof are ROM, RAM, CD-ROM, magnetic tapes, floppy disks, optical data storage devices or the like; it can also be embodied in a carrier wave, e.g., from the Internet or another transmission medium. Also, the computer-readable recording medium can be distributed in a computer system connected to a network so that computer-readable codes are stored and implemented by a distributed method.
According to the present invention, in image-based representations, since perfect information on a colored 3D object is encoded by a set of 2D images - a simple and regular structure instantly adopted into well-known methods for image processing and compression - the algorithm is simple and can be supported by hardware in many aspects. In addition, rendering time for image-based models is proportional to the number of pixels in the reference and output images, but in general, not to the geometric complexity as in the polygonal case. In addition, when the image-based representation is applied to real-world objects and scenes, photo-realistic rendering of natural scenes becomes possible without the use of millions of polygons and expensive computation.
The foregoing description of an implementation of the invention has been presented for purposes of illustration and description. It is not exhaustive and does not limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practicing the invention. The scope of the invention is defined by the claims and their equivalents.

Claims (7)

1. A node structure stored on a computer readable medium for representing a depth image-based 3D object, the node structure comprising:
a resolution field in which the maximum value of octree leaves along a side of an enclosing cube containing an object is recorded;
an octree field in which a structure of an internal node of an octree is recorded;
an index field in which an index of a reference image corresponding to the internal node is recorded; and an image field in which the reference image is recorded.
2. The node structure according to claim 1, wherein an internal node is represented by a byte, and node information recorded in bitstreams constituting the byte represents presence or absence of children nodes belonging to the internal node.
3. The node structure according to claim 1, wherein a reference image is a depth image including viewpoint information and color image corresponding to the viewpoint information.
4. The node structure according to claim 1, wherein viewpoint information comprises:
a viewpoint field in which viewpoints from which an image plane is viewed are recorded;
a visibility field in which a visibility area from the viewpoint to the image plane is recorded; and a projection method field in which a projection method from the viewpoint to the image plane is recorded.
5. The node structure according to claim 4, wherein the viewpoint field comprises:
a position field where the position of a viewpoint is recorded; and an orientation field where the orientation of the viewpoint is recorded, the position being a relative location to a coordinate system's origin, and the orientation being a rotation amount relative to a default orientation.
6. The node structure according to claim 4, wherein, when the projection method is an orthogonal projection method, the width and the height of the visibility area correspond to the width and height of an image plane, respectively.
7. The node structure according to claim 3, wherein the color image is a Simple Texture consisting of a plane image containing the color for each pixel and depth value for the pixel.
CA002517842A 2001-11-27 2002-11-27 Node structure for representing 3-dimensional objects using depth image Abandoned CA2517842A1 (en)

Applications Claiming Priority (11)

Application Number Priority Date Filing Date Title
US33316701P 2001-11-27 2001-11-27
US60/333,167 2001-11-27
US36254502P 2002-03-08 2002-03-08
US60/362,545 2002-03-08
US37656302P 2002-05-01 2002-05-01
US60/376,563 2002-05-01
US39530402P 2002-07-12 2002-07-12
US60/395,304 2002-07-12
KR2002-67971 2002-11-04
KR10-2002-0067971A KR100450823B1 (en) 2001-11-27 2002-11-04 Node structure for representing 3-dimensional objects using depth image
CA2413058A CA2413058C (en) 2001-11-27 2002-11-27 Node structure for representing 3-dimensional objects using depth image

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CA2413058A Division CA2413058C (en) 2001-11-27 2002-11-27 Node structure for representing 3-dimensional objects using depth image

Publications (1)

Publication Number Publication Date
CA2517842A1 true CA2517842A1 (en) 2003-05-27

Family

ID=35311275

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002517842A Abandoned CA2517842A1 (en) 2001-11-27 2002-11-27 Node structure for representing 3-dimensional objects using depth image

Country Status (1)

Country Link
CA (1) CA2517842A1 (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110892453A (en) * 2017-07-10 2020-03-17 三星电子株式会社 Point cloud and mesh compression using image/video codecs
EP3699870A4 (en) * 2017-10-16 2020-12-23 Sony Corporation INFORMATION PROCESSING DEVICE AND METHOD
US11657539B2 (en) 2017-10-16 2023-05-23 Sony Corporation Information processing apparatus and information processing method
CN111862053A (en) * 2020-07-22 2020-10-30 上海米哈游天命科技有限公司 Method, device, equipment and medium for searching gap
CN111862053B (en) * 2020-07-22 2023-11-28 上海米哈游天命科技有限公司 Method, device, equipment and medium for searching gap
CN118898595A (en) * 2024-07-23 2024-11-05 冰晶智能医疗科技(北京)有限公司 3D positioning and annotation method, device, equipment and medium for intracardiac ultrasound images

Similar Documents

Publication Publication Date Title
CA2413058C (en) Node structure for representing 3-dimensional objects using depth image
US8022951B2 (en) Node structure for representing 3-dimensional objects using depth image
CA2413056C (en) Apparatus and method for depth image-based representation of 3-dimensional object
RU2237284C2 (en) Method for generating structure of assemblies, meant for presenting three-dimensional objects with use of images having depth
RU2237283C2 (en) Device and method for presenting three-dimensional object on basis of images having depth
US6445389B1 (en) Compression of polygonal models with low latency decompression
EP1566769B1 (en) Method and apparatus for encoding and decoding 3D data
US6452596B1 (en) Methods and apparatus for the efficient compression of non-manifold polygonal meshes
KR100513732B1 (en) Method and apparatus for encoding and decoding 3 dimensional data
CA2514655C (en) Apparatus and method for depth image-based representation of 3-dimensional object
Okuda et al. Joint geometry/texture progressive coding of 3D models
CA2517842A1 (en) Node structure for representing 3-dimensional objects using depth image
JP3955177B2 (en) Method and apparatus for creating data for animation of graphic scene
Santa-Cruz et al. Compression of parametric surfaces for efficient 3D model coding
Bayakovski et al. Depth Image-based Representations and Compression for Static and Animated 3D Objects
KR20050103297A (en) Method for the management of descriptions of graphic animations for display, receiver and system for the implementation of said method
CN117896536A (en) Point cloud decoding and encoding method, medium, electronic equipment and product
CN117157671A (en) Methods, encoder and decoder for encoding and decoding 3D point clouds
Di Benedetto Multiresolution Techniques for Real-Time Visualization of Urban Environments and Terrains
Mudur et al. Graphics Pipeline for Interactive Visualization of Very Large 3D Models

Legal Events

Date Code Title Description
EEER Examination request
FZDE Dead