AU747064B2 - Editing adviser - Google Patents
- Publication number
- AU747064B2, AU22325/00A, AU2232500A
- Authority
- AU
- Australia
- Prior art keywords
- positions
- vision
- image
- frame
- field
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Landscapes
- Television Signal Processing For Recording (AREA)
Description
S&F Ref: 495345
AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT
ORIGINAL
Name and Address of Applicant: Canon Kabushiki Kaisha, 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo 146, Japan
Actual Inventor(s): John Richard Windle
Address for Service: Spruson & Ferguson, St Martins Tower, 31 Market Street, Sydney NSW 2000
Invention Title: Editing Adviser
Associated Provisional Application Details: [33] Country: AU; [31] Applic. No(s): PP9265; [32] Application Date: 17 Mar 1999
The following statement is a full description of this invention, including the best method of performing it known to me/us:

Editing Adviser

Field of the Invention
The present invention relates to image editing and, in particular, to image editing using position and orientation information to assist the editing task.
Background Art
In recent years, various proposals have been made for video camera recordings to incorporate positioning information. Known arrangements for position measurements include a navigation satellite receiver for receiving a plurality of microwave radio transmission signals from a constellation of orbiting navigation satellites forming part of the global positioning system (GPS). The GPS is operated by the United States Department of Defence. These positioning measurement data can be recorded on recording media together with image data. The recorded positioning information is then used in special effects post-production editing.
Summary of the Present Invention
It is an object of the present invention to provide a system for editing images using position and orientation information to assist the editing task.
In accordance with one aspect of the present invention there is provided a method of automated editing of a first image, said first image having associated position and orientation data of a camera used for capturing said first image, said orientation data including at least camera direction and camera inclination data, said method comprising the steps of: determining a field of vision of said camera from said position and orientation data; determining a first group of positions from a plurality of positions, wherein each position of said first group of positions falls within said field of vision; and adding to said first image a marker associated with each position of said first group of positions, each said marker indicating where the associated position appears in said first image.
In accordance with another aspect of the invention there is provided a method of automated editing of a first video sequence, each frame of said first video sequence having associated position and orientation data of a camera used for capturing said frames, said orientation data including at least camera direction and camera inclination data, said method comprising the steps of: determining for at least one frame a field of vision of said camera from said position and orientation data; determining a first group of positions from a plurality of positions, wherein each position of said first group of positions falls within said field of vision; and adding to said at least one frame a marker associated with each position of said first group of positions, each said marker indicating where the associated position appears in said frame.
In accordance with another aspect of the present invention, there is provided an apparatus for automated editing of a plurality of video clips, said apparatus being configured to perform the aforementioned method.
In accordance with yet another aspect of the present invention, there is provided a computer program product including a computer readable medium incorporating a computer program for performing the aforementioned method.
Brief Description of the Drawings
Preferred embodiments of the present invention are described hereinafter with reference to the drawings, in which:
Fig. 1 is a schematic block diagram of a general-purpose computer upon which the preferred embodiment of the present invention can be practised;
Fig. 2 is a frame of a video clip taken from location 1;
Fig. 3 is a frame of a video clip taken from location 2;
Fig. 4 is a side view of a field of vision of the frame taken from location 2;
Fig. 5 is a top view of a field of vision of the frame taken from location 2;
Fig. 6 is a side view of a field of vision of a frame taken from location 4; and
Fig. 7 is a simplified flow diagram of a method for automated video editing according to one embodiment of the present invention.
Detailed Description
The preferred embodiment of the present invention can be implemented as a computer application program using a conventional general-purpose computer system, such as the computer system 100 shown in Fig. 1, in which the application program to be described with reference to the other drawings is implemented as software executed on the computer system 100. The computer system 100 includes a computer module 102, input devices such as a keyboard 110 and mouse 112, and output devices including a display device 104. A Modulator-Demodulator (Modem) transceiver device 106 is used by the computer module 102 for communicating to and from a communications network, for example connectable via a telephone line or other functional medium. The modem 106 can be used to obtain access to the Internet and other network systems, which allows access to third party databases (not illustrated).
The computer module 102 typically includes at least one processor unit 114 and a memory unit 118, for example formed from semiconductor random access memory (RAM) and read only memory (ROM). A number of input/output interfaces, including a video interface 122, and an I/O interface 116 for the keyboard 110 and mouse 112, are also included. A storage device 124 is provided and typically includes a hard disk drive 126 and a floppy disk drive 128. A CD-ROM drive 120 is typically provided as a non-volatile source of data, such as audio-visual data. The components 114 to 128 of the computer module 102 typically communicate via an interconnected bus 130 and in a manner which results in a conventional mode of operation of the computer system 100 known to those in the relevant art. Examples of computers on which the embodiments can be practised include IBM-PCs and compatibles, or alike computer systems evolved therefrom. Typically, the application program of the preferred embodiment is resident on the hard disk drive 126 and read and executed using the processor 114. Intermediate storage of the program and any data processed can be accomplished using the semiconductor memory 118, possibly in concert with the hard disk drive 126. In some instances, the application program can be supplied to the user encoded on a CD-ROM or floppy disk, or via a computer network such as the Internet.
However, the present invention is not limited to implementation on a conventional general-purpose computer system. For example, the present invention may be implemented on a camera (not illustrated) including a processor, user controls, a storage device, and a display.
The method of automated editing performed by a Video Editing Adviser, described in relation to Figs. 2 to 7 herein, is performed in accordance with instructions contained in the software executed on the computer system 100.
A library of pre-digitised video clips, each containing a number of frames, together with their position, a position title and orientation information, is stored on the storage device 124 or CD-ROM 120. Additional video clips may be made available from the database connected to the system 100 via the communication network 140. The orientation information typically includes camera direction and inclination, as well as the focal distance of the frame.
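For illustration only, the per-clip metadata described above could be modelled as a pair of simple records. This is a minimal sketch; the record and field names (FrameMeta, VideoClip, and so on) are hypothetical and not prescribed by the specification:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FrameMeta:
    """Per-frame capture metadata, as described for the clip library."""
    position: Tuple[float, float, float]  # (x, y, z) camera coordinates, e.g. from a GPS receiver
    direction: float                      # camera bearing in degrees (assumed clockwise from north)
    inclination: float                    # camera tilt in degrees above the horizontal
    focal_distance: float                 # distance to the in-focus plane, in metres

@dataclass
class VideoClip:
    """A pre-digitised clip with its position title and per-frame metadata."""
    title: str                            # position title, e.g. "British Museum"
    frames: List[FrameMeta]
```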
The video clips include video footage taken at location 1. As an example, the video clips include frames 250 taken from a location 1 of the British Museum 255, as schematically illustrated in Fig. 2. The position title can be set to "British Museum".
Location 1 has position coordinates (X1, Y1, Z1). In the preferred embodiment of the invention, these coordinates are obtained using a GPS receiver. Alternatively, the coordinates can be recorded using more conventional means, including maps or trigonometric methods.
The video clips also include video footage taken from another location 2 with position coordinates (X2, Y2, Z2). A frame 260 taken from location 2 is illustrated in Fig. 3.
In the example used in Figs. 2 and 3, location 2 corresponds to the Tower of London.
When the video clip from location 2 was taken, the pan shot swept past the museum 255, which was also captured in the footage taken from location 1.
During the automated editing process of the video clips which include frames 250 and 260 taken from locations 1 and 2, the system 100, as illustrated in Fig. 7, commences in step 200 by detecting that the video clip from location 2 swept past location 1, by analysing the position and orientation data of all the video clips loaded on the system 100. The system 100 informs its user of such an occurrence in step 202.
In analysing each of the frames 250 and 260 of the video clips for 'overlap' in step 200, the system 100 considers the field of vision of each frame 250 and 260. Figs. 4 and 5 illustrate the field of vision of frame 260, taken from location 2, from a side view 301 and a top view 401 respectively. The fields of vision 301 and 401 are dependent on the orientation of the camera lens. Data indicating this orientation is stored with the video frames 250 and 260. As can be seen, the fields of vision 301 and 401 include the position 1 with coordinates (X1, Y1, Z1). It is clear that a further location 3 with coordinates (X3, Y3, Z3) does not fall inside the field of vision 301 and would therefore not be suggested by the system 100 for automated editing.
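The specification does not give a formula for this containment test, so the following is only a sketch of one plausible implementation: the field of vision is modelled as a cone around the viewing axis derived from the camera's bearing and inclination, with an optional range limit (which could take the focal distance or a minimum-radius setting). The function name, the half-angle default, and the axis convention (x east, y north, z up) are all assumptions:

```python
import math

def in_field_of_vision(cam_pos, bearing_deg, inclination_deg, target_pos,
                       half_angle_deg=20.0, max_range=None):
    """Rough test: does `target_pos` lie inside the camera's view cone?"""
    dx = target_pos[0] - cam_pos[0]
    dy = target_pos[1] - cam_pos[1]
    dz = target_pos[2] - cam_pos[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist == 0 or (max_range is not None and dist > max_range):
        return False
    # Unit vector of the viewing axis from bearing (clockwise from north)
    # and inclination (above horizontal); x east, y north, z up.
    b = math.radians(bearing_deg)
    i = math.radians(inclination_deg)
    axis = (math.sin(b) * math.cos(i), math.cos(b) * math.cos(i), math.sin(i))
    # Cosine of the angle between the viewing axis and the target direction.
    cos_angle = (dx * axis[0] + dy * axis[1] + dz * axis[2]) / dist
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```

In step 200 the system would evaluate such a test for every stored position against the metadata of each frame to detect the 'overlap'.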
Returning to Fig. 7, in addition to informing the user in step 202 of an overlap, the user is also provided with a preview, in steps 204 to 210, of an automated edit result.
In step 204 the system 100 shows on display 104 the video clip taken from location 2.
This video clip is called an overview video clip. The system 100 then additionally shows the video clip taken from location 2 from the end of the video clip, backwards (in a reverse direction) in step 206, up to the point where location 1 is again in frame 260.
Preferably, location 1 is as close to the centre of the frame 260 as possible. This frame 260 is held on the display 104 for a moment in step 208, with a title for location 1, say "British Museum", added to a position on the frame 260 that corresponds to coordinates (X1, Y1, Z1). Alternatively, an icon or visual marker representing the British Museum can be used in place of a title. After a further few seconds, a digital zoom is performed on this frame 260 in step 210, towards the position on the frame that corresponds to coordinates (X1, Y1, Z1). At the same time, the video clip taken from location 1 fades in, and is played in full. In an alternative embodiment, the transition effect can be selected by the user.
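Assuming the hypothetical helpers sketched above, the preview of steps 204 to 210 could be assembled along the following lines. This is a sketch only; the frame-hold count and the string marker standing in for the zoom-and-crossfade transition are simplifications:

```python
def build_preview(overview_clip, detail_clip, target_pos, hold_frames=25):
    """Assemble the edit preview of steps 204 to 210 (illustrative sketch)."""
    timeline = list(overview_clip.frames)                  # step 204: play overview clip
    # Step 206: scan backwards from the end to the last frame in which
    # the target position is visible.
    for idx in range(len(overview_clip.frames) - 1, -1, -1):
        meta = overview_clip.frames[idx]
        if in_field_of_vision(meta.position, meta.direction,
                              meta.inclination, target_pos):
            timeline += [meta] * hold_frames               # step 208: hold frame with title marker
            timeline.append("zoom-and-crossfade")          # step 210: transition into detail clip
            break
    timeline += list(detail_clip.frames)                   # detail clip then plays in full
    return timeline
```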
The user is provided with an option to accept or reject the proposed edit in step 214. Accepted edits are saved in step 216, upon which the system 100 returns to step 200.
Unacceptable edits are rejected and the system 100 loops back to step 202.
In an alternative embodiment, the system 100 has access to a spatial volume associated with each location. This can be derived from a database of tourist attractions, or alternatively the system 100 can calculate a volume using the extremes of the position coordinates of the frames of the location. This allows the system to more accurately place the title of the location in the frame in step 208.
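A minimal sketch of that volume calculation, reading "the extremes of the position coordinates" as an axis-aligned bounding box (an assumption; the specification does not fix the volume's shape):

```python
def bounding_volume(positions):
    """Axis-aligned bounding box over a location's recorded (x, y, z) coordinates."""
    xs, ys, zs = zip(*positions)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```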
In a further embodiment, the system 100 detects whether the footage is taken outdoors or indoors. The system 100 advantageously only suggests outdoor footage as overview video clips. Lighting measurements can be used to detect footage taken indoors.
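The specification only states that lighting measurements can be used for this detection; the following naive sketch assumes a mean-luminance threshold whose value is arbitrary, and the function name is hypothetical:

```python
def looks_indoor(frame_luma, threshold=60.0):
    """Guess indoor footage from a lighting measurement.

    `frame_luma` is a sequence of per-pixel luma values (0-255); the
    threshold is an arbitrary assumption, not taken from the patent.
    """
    mean = sum(frame_luma) / len(frame_luma)
    return mean < threshold
```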
Fig. 6 shows a similar situation to that illustrated in Fig. 4, but in this situation video clips are taken from a location 4 which is at a lower altitude than location 2. The field of vision 504 of a frame taken from location 4 includes both coordinates (X1, Y1, Z1) and (X3, Y3, Z3). However, using the known camera focal length, the system 100 determines that the footage from location 4 was taken of location 3. Position (X1, Y1, Z1), which corresponds with location 1, can be ignored because it is either out of focus or, in this case, assumed to be obstructed by location 3.
The system 100, in an alternative embodiment, also has a user selectable minimum radius setting, allowing only video clips taken with a focal length longer than the minimum radius setting to be considered as an overview video clip. Alternatively, the system 100 can, using the difference in the height coordinate of the position coordinates and a heuristic of a common level or ground level, reduce the minimum radius setting as the difference in height becomes smaller. This follows from an assumption that the closer the camera is to the ground level, the shorter its unobstructed view is. The minimum radius feature prevents overuse of the transitions suggested by the system 100, which would otherwise dilute the artistic effect.
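A sketch of that height heuristic: the lower the camera sits above the assumed ground level, the shorter its assumed unobstructed view, so the effective minimum radius shrinks. The linear scaling and the reference height are assumptions not found in the specification:

```python
def effective_min_radius(user_min_radius, camera_z, ground_z,
                         reference_height=50.0):
    """Scale the user's minimum radius by camera height above ground level."""
    height = max(camera_z - ground_z, 0.0)
    scale = min(height / reference_height, 1.0)  # assumed linear ramp, capped at 1
    return user_min_radius * scale
```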
The system 100 can have access to a database, either from the storage device 124, CD-ROM 120 or from other sources via the communication network 140. The system 100 can access the database for the location or feature name of locations where the video clips have been taken, allowing for automatic titling. It further allows for inclusion of third party content, such as commentaries, music and graphics, to enhance the edited result.
The foregoing describes only some embodiments of the present invention, and modifications can be made thereto without departing from the scope of the present invention.
In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have corresponding meanings.
Claims (39)
1. A method of automated editing of a first image, said first image having associated position and orientation data of a camera used for capturing said first image, said orientation data including at least camera direction and camera inclination data, said method comprising the steps of: determining a field of vision of said camera from said position and orientation data; determining a first group of positions from a plurality of positions, wherein each position of said first group of positions falls within said field of vision; and adding to said first image a marker associated with each position of said first group of positions, each said marker indicating where the associated position appears in said first image.
2. A method according to claim 1 wherein each of said plurality of positions has an associated title, and said marker is said title.
3. A method according to claim 1 or 2 wherein said orientation data further includes a focal distance, and said field of vision is limited to an in-focus region.
4. A method according to any one of claims 1 to 3 wherein said field of vision is limited to a predetermined distance from said position of said first image.
5. A method according to any one of claims 1 to 3 wherein said field of vision is limited to a distance from said position of said first image, said distance being a function of a height component of said position of said first image.
6. A method of automated editing of a first video sequence, each frame of said first video sequence having associated position and orientation data of a camera used for capturing said frames, said orientation data including at least camera direction and camera inclination data, said method comprising the steps of: determining for at least one frame a field of vision of said camera from said position and orientation data; determining a first group of positions from a plurality of positions, wherein each position of said first group of positions falls within said field of vision; and adding to said at least one frame a marker associated with each position of said first group of positions, each said marker indicating where the associated position appears in said frame.
7. A method according to claim 6 wherein each of said plurality of positions has an associated image, said method further comprising the step of: displaying said image where the associated position appears in said frame.
8. A method according to claim 6 wherein each of said plurality of positions has an associated video sequence, said method further comprising the step of: displaying said video sequence when the associated position appears in said frame.
9. A method according to claim 6 wherein each of said plurality of positions has an associated title, and said marker is said title.
10. A method according to any one of claims 6 to 9 wherein said orientation data further includes a focal distance, and said field of vision is limited to an in-focus region.
11. A method according to any one of claims 6 to 10 wherein said field of vision is limited to a predetermined distance from said position of said frame.
12. A method according to any one of claims 6 to 10 wherein said field of vision is limited to a distance from said position of said frame, said distance being a function of a height component of said position of said frame.
13. Apparatus for automated editing of a first image, said first image having associated position and orientation data of a camera used for capturing said first image, said orientation data including at least camera direction and camera inclination data, said apparatus comprising: means for determining a field of vision of said camera from said position and orientation data; means for determining a first group of positions from a plurality of positions, wherein each position of said first group of positions falls within said field of vision; and means for adding to said first image a marker associated with each position of said first group of positions, each said marker indicating where the associated position appears in said first image.
14. Apparatus according to claim 13 wherein each of said plurality of positions has an associated title, and said marker is said title.
15. Apparatus according to claim 13 or 14 wherein said orientation data further includes a focal distance, and said field of vision is limited to an in-focus region.
16. Apparatus according to any one of claims 13 to 15 wherein said field of vision is limited to a predetermined distance from said position of said first image.
17. Apparatus according to any one of claims 13 to 15 wherein said field of vision is limited to a distance from said position of said first image, said distance being a function of a height component of said position of said first image.
18. Apparatus for automated editing of a first video sequence, each frame of said first video sequence having associated position and orientation data of a camera used for capturing said frames, said orientation data including at least camera direction and camera inclination data, said apparatus comprising: means for determining for at least one frame a field of vision of said camera from said position and orientation data; means for determining a first group of positions from a plurality of positions, wherein each position of said first group of positions falls within said field of vision; and means for adding to said at least one frame a marker associated with each position of said first group of positions, each said marker indicating where the associated position appears in said frame.
19. Apparatus according to claim 18 wherein each of said plurality of positions has an associated image, said apparatus further comprising: means for displaying said image where the associated position appears in said frame.
20. Apparatus according to claim 18 wherein each of said plurality of positions has an associated video sequence, said apparatus further comprising: means for displaying said video sequence when the associated position appears in said frame.
21. Apparatus according to claim 18 wherein each of said plurality of positions has an associated title, and said marker is said title.
22. Apparatus according to any one of claims 18 to 21 wherein said orientation data further includes a focal distance, and said field of vision is limited to an in-focus region.
23. Apparatus according to any one of claims 18 to 22 wherein said field of vision is limited to a predetermined distance from said position of said frame.
24. Apparatus according to any one of claims 18 to 22 wherein said field of vision is limited to a distance from said position of said frame, said distance being a function of a height component of said position of said frame.
25. A computer program product including a computer readable medium incorporating a computer program for automated editing of a first image, said first image having associated position and orientation data of a camera used for capturing said first image, said orientation data including at least camera direction and camera inclination data, said computer program comprising: code for determining a field of vision of said camera from said position and orientation data; code for determining a first group of positions from a plurality of positions, wherein each position of said first group of positions falls within said field of vision; and code for adding to said first image a marker associated with each position of said first group of positions, each said marker indicating where the associated position appears in said first image.
26. A computer program product according to claim 25 wherein each of said plurality of positions has an associated title, and said marker is said title.
27. A computer program product according to claim 25 or 26 wherein said orientation data further includes a focal distance, and said field of vision is limited to an in-focus region.
28. A computer program product according to any one of claims 25 to 27 wherein said field of vision is limited to a predetermined distance from said position of said first image.
29. A computer program product according to any one of claims 25 to 27 wherein said field of vision is limited to a distance from said position of said first image, said distance being a function of a height component of said position of said first image.

30. A computer program product including a computer readable medium incorporating a computer program for automated editing of a first video sequence, each frame of said first video sequence having associated position and orientation data of a camera used for capturing said frames, said orientation data including at least camera direction and camera inclination data, said computer program comprising: code for determining for at least one frame a field of vision of said camera from said position and orientation data; code for determining a first group of positions from a plurality of positions, wherein each position of said first group of positions falls within said field of vision; and code for adding to said at least one frame a marker associated with each position of said first group of positions, each said marker indicating where the associated position appears in said frame.
31. A computer program product according to claim 30 wherein each of said plurality of positions has an associated image, said computer program further comprising: code for displaying said image where the associated position appears in said frame.
32. A computer program product according to claim 30 wherein each of said plurality of positions has an associated video sequence, said computer program further comprising: code for displaying said video sequence when the associated position appears in said frame.
33. A computer program product according to claim 30 wherein each of said plurality of positions has an associated title, and said marker is said title.
34. A computer program product according to any one of claims 30 to 33 wherein said orientation data further includes a focal distance, and said field of vision is limited to an in-focus region.
35. A computer program product according to any one of claims 30 to 34 wherein said field of vision is limited to a predetermined distance from said position of said frame.
36. A computer program product according to any one of claims 30 to 34 wherein said field of vision is limited to a distance from said position of said frame, said distance being a function of a height component of said position of said frame.
37. A method substantially as described herein with reference to Figs. 2 to 7.
38. Apparatus substantially as described herein with reference to the accompanying drawings.
39. A computer program product substantially as described herein with reference to the accompanying drawings.

DATED this eleventh Day of February 2002
Canon Kabushiki Kaisha
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU22325/00A AU747064B2 (en) | 1999-03-17 | 2000-03-17 | Editing adviser |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AUPP9265A AUPP926599A0 (en) | 1999-03-17 | 1999-03-17 | Video editing adviser |
AUPP9265 | 1999-03-17 | ||
AU22325/00A AU747064B2 (en) | 1999-03-17 | 2000-03-17 | Editing adviser |
Publications (2)
Publication Number | Publication Date |
---|---|
AU2232500A AU2232500A (en) | 2000-09-21 |
AU747064B2 true AU747064B2 (en) | 2002-05-09 |
Family
ID=25618552
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU22325/00A Ceased AU747064B2 (en) | 1999-03-17 | 2000-03-17 | Editing adviser |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU747064B2 (en) |
- 2000-03-17: AU application AU22325/00A granted as patent AU747064B2 (en); status: not active (Ceased)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5684514A (en) * | 1991-01-11 | 1997-11-04 | Advanced Interaction, Inc. | Apparatus and method for assembling content addressable video |
US5508736A (en) * | 1993-05-14 | 1996-04-16 | Cooper; Roger D. | Video signal processing apparatus for producing a composite signal for simultaneous display of data and video information |
US5790188A (en) * | 1995-09-07 | 1998-08-04 | Flight Landata, Inc. | Computer controlled, 3-CCD camera, airborne, variable interference filter imaging spectrometer system |
Also Published As
Publication number | Publication date |
---|---|
AU2232500A (en) | 2000-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101423928B1 (en) | Image reproducing apparatus which uses the image files comprised in the electronic map, image reproducing method for the same, and recording medium which records the program for carrying the same method. | |
KR101600115B1 (en) | Imaging device, image display device, and electronic camera | |
US8893026B2 (en) | System and method for creating and broadcasting interactive panoramic walk-through applications | |
US8228413B2 (en) | Photographer's guidance systems | |
US7272498B2 (en) | Method for incorporating images with a user perspective in navigation | |
CN101910936B (en) | Guided photography based on image capturing device rendered user recommendations | |
US20050046706A1 (en) | Image data capture method and apparatus | |
US20040183918A1 (en) | Producing enhanced photographic products from images captured at known picture sites | |
US20120128205A1 (en) | Apparatus for providing spatial contents service and method thereof | |
US8527261B2 (en) | Portable electronic apparatus capable of multilingual display | |
US20020001032A1 (en) | Portable computer, data management system using the same, and method of producing a map stored with actual photo-image data using the same portable computer and data management system | |
US20020076217A1 (en) | Methods and apparatus for automatic recording of photograph information into a digital camera or handheld computing device | |
US20100002084A1 (en) | Video sharing system, Photography support system, And camera | |
JP2000013722A (en) | Image recorder | |
WO2005124594A1 (en) | Automatic, real-time, superimposed labeling of points and objects of interest within a view | |
CN101799621A (en) | Shooting method and shooting equipment | |
JP2003198918A (en) | Method and device for recording and reproducing picture | |
CN100438605C (en) | Imaging apparatus and recording method | |
JP4244972B2 (en) | Information processing apparatus, information processing method, and computer program | |
CN111680238B (en) | Information sharing method, device and storage medium | |
US20040066391A1 (en) | Method and apparatus for static image enhancement | |
US20030146985A1 (en) | Data recording device and method, data reproducing device and method, data recording/reproducing device and method, map image data format | |
AU747064B2 (en) | Editing adviser | |
JP2943263B2 (en) | Image search system | |
JP2007190831A (en) | Image institution-name printing device and the method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGA | Letters patent sealed or granted (standard patent) |