
AU747180B2 - A method and apparatus for coding an image - Google Patents


Info

Publication number
AU747180B2
AU747180B2 (application AU 64507/99A; also published as AU 6450799A)
Authority
AU
Australia
Prior art keywords
image
pixels
original image
regions
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU64507/99A
Other versions
AU6450799A (en)
Inventor
Michael Richard Arnold
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
  • Priority claimed from provisional application AUPP7719 (AUPP771998A0)
  • Application filed by Canon Inc
  • Priority to AU64507/99A
  • Publication of AU6450799A
  • Application granted
  • Publication of AU747180B2
  • Anticipated expiration
  • Ceased (current legal status)

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Description

S&F Ref: 484972
AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT
ORIGINAL
Name and Address of Applicant: Canon Kabushiki Kaisha, 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, Japan
Actual Inventor(s): Michael Richard Arnold
Address for Service: Spruson & Ferguson, St Martins Tower, 31 Market Street, Sydney NSW 2000
Invention Title: A Method and Apparatus for Coding an Image

ASSOCIATED PROVISIONAL APPLICATION DETAILS
[33] Country: AU
[31] Applic. No(s): PP7719
[32] Application Date: 15 December 1998

The following statement is a full description of this invention, including the best method of performing it known to me/us:

A METHOD AND APPARATUS FOR CODING AN IMAGE

FIELD OF THE INVENTION
The present invention relates to the field of video compression and, in particular, to a method for achieving compression in cases where relatively low quality video is acceptable. The present invention relates to a method and apparatus for encoding, and subsequently decoding, a first image. The invention also relates to a system and a computer program product including a computer readable medium having recorded thereon a computer program related to encoding, and subsequently decoding, a first image.
BACKGROUND
The field of digital data compression and in particular digital image compression has attracted great interest for some time.
In the field of digital image compression, many different techniques have been utilised. One popular technique is the JPEG standard which utilises the discrete cosine transform (DCT) to transform standard size blocks of an image into corresponding cosine components. The JPEG standard also provides for the subsequent lossless compression of the transformed coefficients.
Recently, the field of wavelet transforms has gained great attention as an alternative form of data compression. The wavelet transform has been found to be highly suitable in representing data having discontinuities such as sharp edges. Such discontinuities are often present in image data or the like.
Another technique which has recently attracted much attention is that of fractal models. Fractal modelling is particularly useful in relation to natural objects such as mountains, and recursive modelling using successively smaller fractal objects can be used to model natural forms down to any desired resolution.
(CFP1559AU Open51) (484972AU); 08/12/99; 02:34 PM [I:\ELEC\CISRA\OPEN\OPEN51\484972au.doc
SUMMARY OF THE INVENTION
In accordance with one aspect of the present invention, there is disclosed a method of compressing an original image, said method comprising the steps of: generating a grey-scale image corresponding to said original image, wherein the grey-scale image has pixels of higher intensity in areas where the original image has regions of pixels of higher intensity change and has pixels of lower intensity in areas where the original image has regions of pixels of lower intensity change; halftoning the grey-scale image to form a binary image comprising a plurality of ON and OFF pixels, wherein the binary image has fewer said ON pixels in areas where the original image has regions of lower intensity change, and more said ON pixels in regions where the original image has regions of higher intensity change; extracting information associated with those pixels in the original image that correspond to the ON pixels in the binary image; and combining the binary image with the extracted associated information to form a compressed representation of the original image.
In accordance with another aspect of the present invention, there is disclosed a method of decoding a compressed representation of an original image, wherein said compressed representation comprises a binary image comprising a plurality of ON and OFF pixels, and information associated with those pixels in the original image corresponding to the ON pixels of the binary image, wherein the binary image has fewer said ON pixels in areas where the original image has regions of lower intensity change, and more said ON pixels in regions where the original image has regions of higher intensity change, the method comprising the steps of: (a) generating polygons connecting the ON pixels of said binary image; and (b) rendering each said polygon according to the associated information to form a reproduction of the original image.
In accordance with another aspect of the present invention, there is disclosed an apparatus for compressing an original image, said apparatus comprising: generating means for generating a grey-scale image corresponding to said original image, wherein the grey-scale image has pixels of higher intensity in areas where the original image has regions of pixels of higher intensity change and has pixels of lower intensity in areas where the original image has regions of pixels of lower intensity change; halftoning means for halftoning the grey-scale image to form a binary image comprising a plurality of ON and OFF pixels, wherein the binary image has fewer said ON pixels in areas where the original image has regions of lower intensity change, and more said ON pixels in regions where the original image has regions of higher intensity change; extracting means for extracting information associated with those pixels in the original image that correspond to the ON pixels in the binary image; and combining means for combining the binary image with the extracted associated information to form a compressed representation of the original image.
In accordance with another aspect of the present invention, there is disclosed an apparatus for decoding a compressed representation of an original image, wherein said compressed representation comprises a binary image comprising a plurality of ON and OFF pixels, and information associated with those pixels in the original image corresponding to the ON pixels of the binary image, wherein the binary image has fewer said ON pixels in areas where the original image has regions of lower intensity change, and more said ON pixels in regions where the original image has regions of higher intensity change, the apparatus comprising: generating means for generating polygons connecting the ON pixels of said binary image; and rendering means for rendering each said polygon according to the associated information to form a reproduction of the original image.
In accordance with another aspect of the present invention, there is disclosed a computer program for compressing an original image, said program comprising: generating means for generating a grey-scale image corresponding to said original image, wherein the grey-scale image has pixels of higher intensity in areas where the original image has regions of pixels of higher intensity change and has pixels of lower intensity in areas where the original image has regions of pixels of lower intensity change; halftoning means for halftoning the grey-scale image to form a binary image comprising a plurality of ON and OFF pixels, wherein the binary image has fewer said ON pixels in areas where the original image has regions of lower intensity change, and more said ON pixels in regions where the original image has regions of higher intensity change; extracting means for extracting information associated with those pixels in the original image that correspond to the ON pixels in the binary image; and combining means for combining the binary image with the extracted associated information to form a compressed representation of the original image.
In accordance with another aspect of the present invention, there is disclosed a computer program for decoding a compressed representation of an original image, wherein said compressed representation comprises a binary image comprising a plurality of ON and OFF pixels, and information associated with those pixels in the original image corresponding to the ON pixels of the binary image, wherein the binary image has fewer said ON pixels in areas where the original image has regions of lower intensity change, and more said ON pixels in regions where the original image has regions of higher intensity change, said program comprising: generating means for generating polygons connecting the ON pixels of said binary image; and rendering means for rendering each said polygon according to the associated information to form a reproduction of the original image.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention are described with reference to the drawings, in which:
Fig. 1 presents a block diagram representation of one preferred embodiment of the invention;
Fig. 2 depicts an input image;
Fig. 3 illustrates a difference image, formed by performing edge detection on the input image;
Fig. 4 depicts a pixel-level segment of a halftoned binary image;
Fig. 5 illustrates a segment of a polygon image, where the polygons are Delaunay triangles;
Fig. 6 depicts a single Delaunay triangle which is to be rendered using an average colour;
Fig. 7 is the single Delaunay triangle, which is to be rendered using a merging method;
Fig. 8 depicts alternate polygons comprising Delaunay triangles or Voronoi diagrams;
Fig. 9 depicts a height map using an averaging method;
Fig. 10 depicts a height map using a merging method; and
Fig. 11 illustrates a conventional general-purpose computer upon which the embodiments can be practiced.
DETAILED DESCRIPTION
The terms "grey-scale" and "monochrome" are used interchangeably throughout the specification unless the contrary intention is expressed.
The preferred embodiment finds its main application in relation to compression of video signals, particularly where relatively low quality video images are acceptable.
The embodiment can, however, be arranged to be applicable in a number of other areas.
When applied to 2-dimensional image data, interesting and varied colourisation effects can be produced through selection of different colour rendering methods. Furthermore, an embodiment can be arranged to compress data-sets associated with height maps.
Height maps comprise height information which is associated with the 2-dimensional image data, which data typically represents geographic or other mapping information.
Finally, an embodiment can also be arranged to approximate continuous non-linear scientific data in three or more dimensions.

Fig. 1 depicts an overall block diagram of a preferred embodiment of the present invention. An input image 200 (see Fig. 2) is input on line 100 to an edge detection process 102.
The following description is presented in terms of a colour image; however, the description can in general, unless a contrary intention is evident, equally apply to a grey-scale image. In a first embodiment, the input image 200 is typically a two-dimensional colour image in which colour information is associated with each pixel in the two-dimensional image 200. In a second embodiment, the input image 200 takes the form of a height map where one-dimensional height information is associated with each pixel in the two-dimensional image 200. In a third embodiment, the inventive concept is generalised to applications to data in three or more dimensions.
Considering the first embodiment of a colour image in which colour information is associated with each pixel in the two-dimensional image 200, the edge detection process 102 converts the input image 200 to a difference image 300 (see Fig. 3), which is output on the line 104. The edge detection process 102 measures areas of "local change" in the input image 200, and produces a grey-scale image (i.e. the difference image 300).
This difference image 300 has higher intensity corresponding to regions of greater change in input image 200, and lower intensity corresponding to regions of low change in input image 200. The difference image 300 output on the line 104, therefore, emphasises areas of changing intensity in input image 200, and de-emphasises areas where changes of intensity in the input image 200 are low, or absent. The monochrome difference image 300 output on the line 104 is input to a halftoning process 106. The halftoning process 106 converts the grey-scale difference image 300 to a monochrome binary image, which is output on the line 108. The halftoning process 106 may implement error diffusion, or dithering techniques.
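The halftoning process 106 is described only at the block level; as one concrete possibility, its error-diffusion variant can be sketched in Python as follows (the function name and the 0–255 grey convention are illustrative assumptions, not part of the specification):

```python
def halftone(grey, threshold=128):
    """Floyd-Steinberg error diffusion: convert a grey-scale image
    (values 0-255) into a binary image of ON (1) and OFF (0) pixels
    whose local dot density follows the local grey level, as required
    of the binary image on line 108."""
    h, w = len(grey), len(grey[0])
    work = [list(row) for row in grey]      # working copy that accumulates error
    binary = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = work[y][x]
            new = 255 if old >= threshold else 0
            binary[y][x] = 1 if new else 0
            err = old - new                 # quantisation error
            # push the error onto unvisited neighbours (standard FS weights)
            for dx, dy, wgt in ((1, 0, 7 / 16), (-1, 1, 3 / 16),
                                (0, 1, 5 / 16), (1, 1, 1 / 16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    work[ny][nx] += err * wgt
    return binary
```

A flat black or white input yields no dots or all dots, while a mid-grey input yields a scattered half-density of dots; applied to the difference image, which is bright only near edges, the dots therefore cluster along edges.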
The input image 200 on the line 100 is also input to a corresponding pixel information extraction process 118. In the first embodiment, the information extraction process 118 extracts colour information from the input image 200. Such extraction however is preferably only performed in relation to the pixels which have been turned "on" or "enabled" in the binary image output on the line 108 which has been produced by the halftoning process 106. A decision on which pixels to select is provided to the information extraction process 118 by the halftoning process 106 via line 122. In relation to the second embodiment, the information extraction process 118 extracts height information from the input image 200 rather than colour information. This is discussed in more detail with reference to Fig. 9.
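In Python-like terms, the information extraction process 118 and the subsequent combining amount to the following sketch (the pairing of pixel coordinates with values is an assumed representation; the value at each pixel may be a colour in the first embodiment or a height in the second):

```python
def compress(image, binary):
    """Keep, for every ON pixel of the halftone binary image, the
    associated value from the original image. The (binary, info) pair
    is the compressed representation of the original image."""
    h, w = len(binary), len(binary[0])
    info = {(x, y): image[y][x]
            for y in range(h) for x in range(w)
            if binary[y][x]}
    return binary, info
```

Because only ON pixels carry associated information, the size of `info` shrinks wherever the image is flat, which is the source of the compression.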
The combined effect of edge detection, halftoning and colour information extraction can be seen at plane 122 in Fig. 1, at which plane the input image 200 has been processed to produce two data sets, namely (i) the monochrome halftone image on the line 108, and (ii) colour information for the pixels in the halftone image corresponding to the equivalent pixels of the input image 200 on the line 120. The combined quantity of information in the data sets (i) and (ii) is significantly less than the quantity of information in the original input image 200, and thus significant compression of the input image 200 has been performed.
The difference image 300 is an "edge transformed" version of the input image 200, and has the same dimensions. The difference image 300 can be produced as a grey-scale image as described above, or alternatively, can comprise three or four separate grey-scale images, each representing a single colour channel of the input image 200.
The binary image output on the line 108 comprises a plurality of dots, having few dots in areas where the original image 200 has regions of low intensity change, and many dots in regions where the original image 200 has regions of high intensity change.
The binary image output on the line 108 together with the extracted colour information output on the line 120 comprises a set of data from which the original input image 200 can be reconstructed, albeit at a lower resolution than the original image 200.
The binary image on the line 108 plus the colour information on the line 124 therefore, together represent a compressed version of the input image 200. This compression effect is an unusual and unexpected use of the edge detection process 102 and the halftoning process 106.
The binary image is now input on line 108 to a polygon processing block 110.
This processing block 110 performs polygonisation on the 2-D binary image, by treating the "on" pixels as locations in a 2-dimensional plane. This polygonisation process forms a set of multiply-connected polygons which substantially completely covers the 2-D image, except, possibly, in some spaces near the edges of the image. The polygons can be Delaunay triangles, Voronoi diagrams, or other polygons which retain the vertex density of the "on" pixels in the binary image. Considering the 2-D image, for Delaunay triangles, the vertices of such triangles lie on pixels which have been switched "on" in the binary image. Voronoi diagrams have pixels situated within them. This is explained in more detail below in relation to Figs. 4 and 5. The polygonised image which is output on line 112 is a low-resolution approximation to the original input image 200.
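A minimal polygonisation sketch using SciPy's Delaunay triangulation follows (the use of SciPy is an assumption of convenience; the patent does not prescribe any particular triangulation algorithm or library):

```python
import numpy as np
from scipy.spatial import Delaunay

def polygonise(on_pixels):
    """Treat the ON pixels as point sites in the 2-D plane and join
    them into Delaunay triangles; each triangle is returned as three
    indices into the point array, so triangle density follows the dot
    density of the binary image."""
    pts = np.asarray(on_pixels, dtype=float)
    tri = Delaunay(pts)
    return pts, tri.simplices
```

For the four corners of a square plus its centre, this yields four triangles fanning out from the centre; in the same way, closely spaced dots (regions of rapid intensity change) produce many small triangles.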
The extracted colour information on the line 120 is utilised in a rendering/interpolation process 114 which, in the present embodiment, renders the polygonised images on line 112 to produce a coloured, substantially lower-resolution image on line 116.
Fig. 2 illustrates a continuous tone input image 200, comprising a face made up of a grey-scale outline 206, two grey-scale eyes 202 and 204, and a grey-scale nose 208. The background 210 surrounding the face is a constant white intensity, as is the actual surface 212 of the face.
Fig. 3 depicts the difference image 300, which results from the application of the edge detection process 102 to the input image 200. Any method of edge detection can be employed, such methods including difference methods, and Laplacian methods. The effect of the edge detection process 102 can be understood by considering the effect on the grey-scale face boundary 206 in the input image 200, and the corresponding edges 302 and 304 which are produced in the difference image 300. The two edges 302 and 304 correspond to the intensity change between the background 210 and the face outline 206 in Fig. 2, and the intensity change between the face surround 206 and the texture of the face 212 respectively.
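As an illustration of the difference method mentioned above, a minimal edge detector can be written in a few lines of Python (the function name and the choice of neighbours are illustrative assumptions, not the patent's prescribed operator):

```python
def edge_detect(image):
    """Difference-method edge detection: each output pixel is the
    largest absolute intensity step to its right and lower neighbours,
    so the result is bright along edges and dark in flat regions."""
    h, w = len(image), len(image[0])
    diff = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = abs(image[y][x] - image[y][min(x + 1, w - 1)])
            dy = abs(image[y][x] - image[min(y + 1, h - 1)][x])
            diff[y][x] = max(dx, dy)
    return diff
```

Applied to an image like Fig. 2, the constant background produces zeros, while each intensity step (background 210 to outline 206, outline 206 to face surface 212) produces a bright ridge, i.e. the double edges 302 and 304.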
Fig. 4 depicts a small region of binary dots in the halftone image produced by the halftoning process 106 and output on the line 108. Dots 406 and 408, which are closely spaced, represent areas of rapid intensity change in the original image 200. This might occur, for example, at the boundary between the face surround 206 and the background region 210. Dots 400 and 402, which are more widely spaced, represent areas of slow intensity change in the input image 200.
Fig. 5 illustrates the Delaunay triangulation process which is performed in the polygon processing block 110. Individual triangles 500 are formed, each such triangle having individual pixels e.g. 400, 402 and 404 at its vertices.
Fig. 6 depicts a single Delaunay triangle 500 with pixels 400, 402 and 404 at its vertices. The colour rendering performed by the rendering/interpolation process 114 can be performed by forming an average colour based upon the colours at the vertices 400, 402 and 404. The triangle 500 is then rendered using this average colour.
Fig. 7 depicts the same triangle 500 sub-divided into three smaller triangles 702, 704 and 700, the three smaller triangles sharing a common vertex 706. The triangle 500 can be colour rendered by the rendering/interpolation process 114 by forming a convex combination of the colours at the vertices 400, 402 and 404. Thus in Fig. 7, where vertices A, B and C have colours α, β and γ, and where p is an interior point in the triangle, the colour P of the pixel at position p can be expressed mathematically as follows:

P(p) = aα + bβ + cγ

where:

a + b + c = 1

and:

0 ≤ a, b, c ≤ 1

The aforementioned rendering can be performed on triangle 500 using the colours at vertices 400, 402 and 404. Alternatively, rendering can be performed on the smaller triangles 700, 702 and 704 to obtain a different colourisation effect. Thus, for example, triangle 700 can be rendered by using the colours at vertices 400 and 404, and the calculated colour at vertex 706.
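The two rendering options can be sketched directly from the formula above, taking colours as RGB tuples and using barycentric coordinates for the convex combination (Python; the function names are illustrative):

```python
def average_colour(colours):
    """Fig. 6 rendering: one flat colour for the whole triangle, the
    mean of the three vertex colours."""
    return tuple(sum(ch) / len(colours) for ch in zip(*colours))

def convex_colour(p, A, B, C, colours):
    """Fig. 7 rendering: P(p) = a*alpha + b*beta + c*gamma, where
    (a, b, c) are the barycentric coordinates of p in triangle ABC,
    so a + b + c = 1 and 0 <= a, b, c <= 1 for interior points."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, A, B, C
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    a = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    b = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    c = 1.0 - a - b
    alpha, beta, gamma = colours
    return tuple(a * u + b * v + c * w
                 for u, v, w in zip(alpha, beta, gamma))
```

At a vertex the convex combination degenerates to that vertex's colour, and at the centroid all three weights are 1/3, which is why this rendering shades smoothly across each triangle while the average-colour rendering fills it flat.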
Fig. 8 illustrates how the halftone image can be overlaid with multiply-connected polygons of a more general type. Delaunay triangles (e.g. 500) and Voronoi diagrams (e.g. 800) are illustrated. Detailed information in regard to Voronoi diagrams and Delaunay triangles can be found in L. Guibas and J. Stolfi, "Primitives for the Manipulation of General Subdivisions and the Computation of Voronoi Diagrams", ACM Transactions on Graphics, Volume 4, No. 2, April 1985.
In a second embodiment where the input image 100 is a height map, the combined effect of edge detection, halftoning and height information extraction can again be seen at plane 122 in Fig. 1, at which point the input image 200 has been processed to produce two data sets, namely (i) the monochrome halftone image on the line 108, and (ii) height information for the pixels in the halftone image corresponding to the equivalent pixels of the input image 200 on the line 120. The volume of information in the data sets (i) and (ii) again is significantly less than the volume of information in the input image 200, and thus significant compression of the input image 200 has been performed.
Fig. 9 depicts how the rendering/interpolation process 114 produces an approximation of the height map. In Fig. 9 pixels 902, 904 and 906 represent three spatial dots associated with the Delaunay triangle 922 which forms a single facet of the height map. Where the input image 100 is a height map, the polygon processing block 110 forms a Delaunay triangle 924 in the 2-D plane 900. The Delaunay triangle 924 represents a decision made by the polygon processing block 110 as to the location where the Delaunay triangle 922 should, thereafter, be formed. The Delaunay triangle 922 is formed by the rendering/interpolation process 114. In order to form the Delaunay triangle 922, the rendering/interpolation process 114 utilises height information associated with each pixel 902, 904, and 906, the height information being represented by solid lines 916, 918, and 920 respectively. The rendering/interpolation process 114 forms the Delaunay triangle 922 as indicated, this triangle 922 lying in a plane 914 which is parallel to the plane 900 in which the three spatial dots 902, 904 and 906 lie. This illustration represents use of the "average height" method for approximating a height map. The overall height map using this method will comprise a large number of triangles (e.g. 922), each of which will be parallel to the base plane 900 upon which the 2-dimensional representation of the map is depicted.
Fig. 10 depicts another method by which height map approximations can be formed. The same base plane 900 with its associated pixels is shown. The polygon processing block 110 again forms a Delaunay triangle 1002, this triangle representing the projection upon the plane 900 where the Delaunay triangle 1000 will be formed. In the present embodiment, Delaunay triangle 1000 is formed spanning the vertices of the heights shown by solid lines 916, 918 and 920, and thus, the Delaunay triangle 1000 is not parallel to the base plane 900. Using this method, the height map 100 will be approximated by a multitude of triangles e.g. 1000, which will form a more continuous approximation of the height map 200 than the aforementioned method described in relation to Fig. 9.
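The two facet constructions of Figs. 9 and 10 can be sketched as follows, with vertices given as (x, y, z) triples (Python; the function names and the barycentric formulation are illustrative assumptions):

```python
def average_height_facet(v1, v2, v3):
    """Fig. 9 construction: a flat facet parallel to the base plane
    900, placed at the mean of the three vertex heights."""
    return (v1[2] + v2[2] + v3[2]) / 3.0

def spanning_facet_height(p, v1, v2, v3):
    """Fig. 10 construction: the facet spans the three 3-D vertices,
    so the height at an interior 2-D point p varies linearly across
    the triangle (barycentric interpolation of the vertex heights)."""
    (x, y) = p
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = v1, v2, v3
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    a = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    b = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    c = 1.0 - a - b
    return a * z1 + b * z2 + c * z3
```

Adjacent Fig. 10 facets agree along shared edges because both interpolate the same two edge vertices, which is why that method yields a more continuous surface than the flat Fig. 9 facets.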
It is noted that images approximated by Delaunay triangles scale gracefully.
Thus, as the scale of the image is altered, the overall image representation remains stable.
The average colourisation method produces colourisation which has a "crystal" or "technical" effect. In contrast, the continuous rendering method produces a "water colour" or "impressionist" effect. Thus, the choice of colour rendering methods provides a user with significant flexibility in creating artistic effects in the colourisation of input image 100.
Fig. 11 shows how the system is preferably practised using a conventional general-purpose computer 1100 wherein the various processes described above are implemented as software executing on the computer 1100. In particular, the various process steps are effected by instructions in the software that are carried out by the computer 1100. The software is stored in a computer readable medium, is loaded onto the computer 1100 from the medium, and is then executed by the computer 1100. The use of the computer program product in the computer creates an apparatus for processing source video, image, or data-set inputs, and performing compression and/or colourisation. The computer system 1100 as illustrated is equipped for image processing and includes a computer module 1102, a graphics input card 1116, and input devices 1118 and 1120. In addition, the computer system 1100 can have any of a number of other output devices including a graphics output card 1110 and output display 1124. The computer system 1100 can be connected to one or more other computers using an appropriate communication channel such as a modem communications path, a computer network, or the like. The computer network can include a local area network (LAN), a wide area network (WAN), an Intranet, and/or Internet.
Thus, for example, images 200 can be input via graphics input card 1116.
Control commands can be input via keyboard 1118, and/or mouse 1120. The computer 1102 itself includes one or more central processing unit(s) (simply referred to as a processor hereinafter) 1104, a memory 1106 which can include random access memory (RAM) and read-only memory (ROM), an input/output interface 1108, a graphics input interface 1122, and one or more storage devices generally represented by a block 1112. The storage device(s) 1112 can include one or more of the following: a floppy disk, a hard disk drive, a magneto-optical disk drive, CD-ROM, magnetic tape or any other of a number of non-volatile storage devices well known to those skilled in the art. Each of the components 1104, 1106, 1108, 1112 and 1122 is typically connected to one or more of the other devices via a bus 1114 that in turn can include data, address, and control buses.
The graphics interface 1122 is connected to the graphics input 1116 and graphics output 1110 cards, and provides graphics input from the graphics input card 1116 to the computer 1102 and from the computer 1102 to the graphics output card 1110.
The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including" and not "consisting only of". Variations of the word comprising, such as "comprise" and "comprises" have corresponding meanings.

Claims (23)

1. A method of compressing an original image, said method comprising the steps of:
(a) generating a grey-scale image corresponding to said original image, wherein the grey-scale image has pixels of higher intensity in areas where the original image has regions of pixels of higher intensity change and has pixels of lower intensity in areas where the original image has regions of pixels of lower intensity change;
(b) halftoning the grey-scale image to form a binary image comprising a plurality of ON and OFF pixels, wherein the binary image has fewer said ON pixels in areas where the original image has regions of lower intensity change, and more said ON pixels in regions where the original image has regions of higher intensity change;
(c) extracting information associated with those pixels in the original image that correspond to the ON pixels in the binary image; and
(d) combining the binary image with the extracted associated information to form a compressed representation of the original image.
2. A method according to claim 1, wherein the associated information in step (c) is colour information.
3. A method according to claim 1, wherein the associated information in step (c) is height information.
4. A method according to claim 1, wherein the generating step uses a difference method.
5. A method according to claim 1, wherein the generating step uses a Laplacian method.
6. A method according to claim 1, wherein the halftoning in step (b) is dither matrix halftoning.
7. A method according to claim 1, wherein the halftoning in step (b) is error diffusion halftoning.
8. A method of decoding a compressed representation of an original image, wherein said compressed representation comprises a binary image comprising a plurality of ON and OFF pixels, and information associated with those pixels in the original image corresponding to the ON pixels of the binary image, wherein the binary image has fewer said ON pixels in areas where the original image has regions of lower intensity change, and more said ON pixels in regions where the original image has regions of higher intensity change, the method comprising the steps of:
(a) generating polygons connecting the ON pixels of said binary image; and
(b) rendering each said polygon according to the associated information to form a reproduction of the original image.
9. A method according to claim 8, wherein said polygons comprise Delaunay triangles.
10. A method according to claim 8, wherein the polygons comprise Voronoi diagrams.
11. A method according to claim 9, wherein the [first and second images] original image and the reproduction are height maps.
12. A method according to claim 8, wherein the associated information is height map information, and wherein the [second image] reproduction is an output height map.
13. A method according to claim 9, wherein the associated information is colour information, and each triangle is rendered in accordance with the colour information associated with vertices of the triangle, said method comprising, for each triangle, the steps of: 00 00 forming an average colour based on the colour associated with each vertex of the triangle; and rendering the triangle using the average colour.
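The averaging of claim 13 is just a per-channel mean of the three vertex colours; a one-function sketch (function name and integer rounding are my own choices):

```python
def flat_shade(vertex_colours):
    """One colour for the whole triangle: the per-channel integer mean
    of the (r, g, b) colours attached to its three vertices."""
    return tuple(sum(channel) // 3 for channel in zip(*vertex_colours))
```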
14. A method according to claim 9, wherein the associated information is colour information, and each triangle is rendered in accordance with the colour information associated with vertices of the triangle, said method comprising, for each triangle, the steps of:

forming a convex combination of the colours associated with the vertices of the triangle, said convex combination to vary as a function of location within the triangle; and

rendering the triangle according to the convex combination.

15. A method according to claim 11, wherein the associated information is height information, and a three dimensional surface, comprising a plurality of triangular surface elements, is formed according to the steps of:

determining, for each triangle, an average height dependent upon a height associated with the vertices of the triangle; and

forming, for each triangle, a corresponding triangular surface element for the triangle dependent upon said average height.
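Claim 14's location-dependent convex combination is what barycentric (Gouraud-style) interpolation provides: inside the triangle the three barycentric coordinates are non-negative and sum to one, so the blended colour is a convex combination that varies with position. The function names below are illustrative, not from the patent.

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w1, w2, 1.0 - w1 - w2

def interpolate_colour(p, verts, colours):
    """Convex combination of the three vertex colours, varying with p."""
    weights = barycentric(p, *verts)
    return tuple(sum(wt * col[ch] for wt, col in zip(weights, colours))
                 for ch in range(3))
```

At a vertex the weight vector collapses to that vertex's colour; at the centroid the three colours contribute equally.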
16. A method according to claim 1, wherein the original image is generalised to a first data-set in three or more dimensions, and wherein the [processes of edge detection] generating and halftoning steps are each generalised to be operative in relation to a corresponding number of dimensions.
17. A method according to claim 8, wherein the [second image] reproduction is generalised to a second data-set in three or more dimensions, and wherein the [processes] steps of generating polygons and rendering are each generalised to be operative in relation to a corresponding number of dimensions.
18. An apparatus for [encoding a first] compressing an original image, said apparatus comprising:

[detecting means for detecting edges within the first image to form a grey-scale difference image] generating means for generating a grey-scale image corresponding to said original image, wherein the grey-scale image has pixels of higher intensity in areas where the original image has regions of pixels of higher intensity change and has pixels of lower intensity in areas where the original image has regions of pixels of lower intensity change;

halftoning means for halftoning the [difference] grey-scale image to form a binary image comprising a plurality of [on] ON and [off] OFF pixels, wherein the binary image has fewer said ON pixels in areas where the original image has regions of lower intensity change, and more said ON pixels in regions where the original image has regions of higher intensity change;

extracting means for extracting information associated with those pixels in the [first] original image that correspond to the [on] ON pixels in the binary image; and

combining means for combining the binary image with the extracted associated information to form [an encoded] a compressed representation of the [first] original image.

19. An apparatus for decoding a [coded] compressed representation of [a first] an original image, wherein said [coded] compressed representation comprises a binary image comprising a plurality of [on] ON and [off] OFF pixels, and information associated with those pixels in the [first] original image corresponding to the [on] ON pixels of the binary image, wherein the binary image has fewer said ON pixels in areas where the original image has regions of lower intensity change, and more said ON pixels in regions where the original image has regions of higher intensity change, the apparatus comprising:

generating means for generating polygons connecting the [on] ON pixels of said binary image; and

rendering means for rendering each said polygon according to the associated information to form a [second image] reproduction of the [first] original image.

20. A computer program for [encoding a first] compressing an original image, said program comprising:

[detecting means for detecting edges within the first image to form a grey-scale difference image] generating means for generating a grey-scale image corresponding to said original image, wherein the grey-scale image has pixels of higher intensity in areas where the original image has regions of pixels of higher intensity change and has pixels of lower intensity in areas where the original image has regions of pixels of lower intensity change;

halftoning means for halftoning the [difference] grey-scale image to form a binary image comprising a plurality of [on] ON and [off] OFF pixels, wherein the binary image has fewer said ON pixels in areas where the original image has regions of lower intensity change, and more said ON pixels in regions where the original image has regions of higher intensity change;

extracting means for extracting information associated with those pixels in the [first] original image that correspond to the [on] ON pixels in the binary image; and

combining means for combining the binary image with the extracted associated information to form [an encoded] a compressed representation of the [first] original image.
21. A computer program for decoding a [coded] compressed representation of [a first] an original image, wherein said [coded] compressed representation comprises a binary image comprising a plurality of [on] ON and [off] OFF pixels, and information associated with those pixels in the [first] original image corresponding to the [on] ON pixels of the binary image, wherein the binary image has fewer said ON pixels in areas where the original image has regions of lower intensity change, and more said ON pixels in regions where the original image has regions of higher intensity change, said program comprising:

generating means for generating polygons connecting the [on] ON pixels of said binary image; and

rendering means for rendering each said polygon according to the associated information to form a [second image] reproduction of the [first] original image.
22. A method of encoding an image, substantially as described herein, with reference to any one of the embodiments, as that embodiment is shown in the accompanying drawings.
23. A method of decoding an image, substantially as described herein, with reference to any one of the embodiments, as that embodiment is shown in the accompanying drawings.
24. An apparatus for encoding an image, substantially as described herein, with reference to any one of the embodiments, as that embodiment is shown in the accompanying drawings.

25. An apparatus for decoding an image, substantially as described herein, with reference to any one of the embodiments, as that embodiment is shown in the accompanying drawings.
26. A computer program for encoding an image, substantially as described herein, with reference to any one of the embodiments, as that embodiment is shown in the accompanying drawings.

27. A computer program for decoding an image, substantially as described herein, with reference to any one of the embodiments, as that embodiment is shown in the accompanying drawings.

DATED this thirteenth Day of March, 2002
Canon Kabushiki Kaisha
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
AU64507/99A 1998-12-15 1999-12-14 A method and apparatus for coding an image Ceased AU747180B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU64507/99A AU747180B2 (en) 1998-12-15 1999-12-14 A method and apparatus for coding an image

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AUPP7719A AUPP771998A0 (en) 1998-12-15 1998-12-15 A method and apparatus for coding an image
AUPP7719 1998-12-15
AU64507/99A AU747180B2 (en) 1998-12-15 1999-12-14 A method and apparatus for coding an image

Publications (2)

Publication Number Publication Date
AU6450799A AU6450799A (en) 2000-06-22
AU747180B2 true AU747180B2 (en) 2002-05-09

Family

ID=25634388

Family Applications (1)

Application Number Title Priority Date Filing Date
AU64507/99A Ceased AU747180B2 (en) 1998-12-15 1999-12-14 A method and apparatus for coding an image

Country Status (1)

Country Link
AU (1) AU747180B2 (en)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"REGION SEGMENTATION USING EDGEBASED CIRCLE GROWING" (SHIMBASHI ET. AL.) ICIP '95 PP23-26 OCT 1995 *

Also Published As

Publication number Publication date
AU6450799A (en) 2000-06-22

Similar Documents

Publication Publication Date Title
US8218908B2 (en) Mixed content image compression with two edge data representations
EP0841636B1 (en) Method and apparatus of inputting and outputting color pictures and continually-changing tone pictures
Criminisi et al. Object removal by exemplar-based inpainting
US7397946B2 (en) Color distribution for texture and image compression
US5754697A (en) Selective document image data compression technique
US10825128B2 (en) Data processing systems
US6281903B1 (en) Methods and apparatus for embedding 2D image content into 3D models
JP3376129B2 (en) Image processing apparatus and method
RU2340943C2 (en) Method of simulating film grain by mosaicking precomputed models
US20050063596A1 (en) Encoding of geometric modeled images
EP3649618A1 (en) Systems and methods for providing non-parametric texture synthesis of arbitrary shape and/or material data in a unified framework
Hou et al. Image companding and inverse halftoning using deep convolutional neural networks
JP7371691B2 (en) Point cloud encoding using homography transformation
CN110383696B (en) Method and apparatus for encoding and decoding super-pixel boundaries
CN112184585A (en) Image completion method and system based on semantic edge fusion
US20200366938A1 (en) Signal encoding
US5915046A (en) System for and method of processing digital images
Desai et al. Edge and mean based image compression
CN110766117B (en) A method and system for generating a two-dimensional code
AU747180B2 (en) A method and apparatus for coding an image
JP2005275854A (en) Image processor, image processing method, image processing program and recording medium with this program stored thereon
WO2003045045A2 (en) Encoding of geometric modeled images
JPH04236574A (en) Picture coding system
US20190295293A1 (en) System and method for compressing and decompressing surface data of a 3-dimensional object using an image codec
US20030063812A1 (en) System and method for compressing image files while preserving visually significant aspects

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)