WO2004008744A1 - Planar development image processing method for images of planar objects such as road surfaces, inverse development image conversion processing method, and planar development image processing apparatus and inverse development image conversion processing apparatus therefor - Google Patents
- Publication number
- WO2004008744A1 · PCT/JP2003/008817
- Authority
- WO
- WIPO (PCT)
- Legal status: Ceased (an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Definitions
- Planar development image processing method for images of planar objects such as road surfaces, inverse development image conversion processing method, and planar development image processing apparatus and inverse development image conversion processing apparatus therefor
- the present invention provides a method of processing a planar development image, in which an image of a planar object such as a road surface, captured in perspective when photographed by an ordinary camera, is developed into a plane like a map screen.
- the present invention also relates to an inverse development image conversion processing method, a planar development image processing apparatus, and an inverse development image processing apparatus.
- images taken with an ordinary camera are developed into planes; for example, the surroundings of a bus are photographed with multiple cameras, and each image is developed as a flat road surface.
- the developed images are combined so that they are displayed as a single planar road surface around the bus. Because this plane development produces a linear scale, the images can be combined directly.
- image recognition and measurement of objects can be performed within the developed plane image. Furthermore, by using this image conversion principle in reverse, the road surface is developed into a plane, the viewpoint is moved, and the result is converted back into a perspective image, yielding an image with a viewpoint different from that of the original perspective image. This principle of plane development is intended to apply not only to the road surface but to any plane in the image.
- an arbitrary plane is separated from the other planes by applying optical flow (or parallax, matching, or similar) processing to the plane-developed image. In addition, the unevenness of the plane is detected as the deviation of each point from the plane.
- from the optical flow, the moving distance, speed, and direction can be extracted; an object consisting of a CG (computer graphics) image can be placed on the developed plane image, and the corresponding texture can be pasted onto it.
- the camera dip angle θ is either measured directly, or obtained from the image by finding the point where line components that should be parallel in the real world intersect in the screen, that is, the vanishing point. By shifting the image so that the dip angle θ stays constant, camera shake can be cancelled.
- the present invention thus relates to a method and an apparatus for developing an image onto a plane, which also enables correction of blur, and to applications thereof.
- CCTV (Closed-Circuit Television)
- various monitor cameras naturally photograph in perspective, due to the nature of the lens.
- a camera that captures the situation behind a bus, mounted at the rear of the bus, will of course shoot a perspective image unless it is mounted pointing vertically downward; an image of the road surface viewed from directly above could not be obtained.
- conventionally, transmission data compression is performed.
- a method of separating and compressing only the moving parts of the image is adopted (for example, the MPEG-2 method).
- with the MPEG-2 method, however, a moving image in which the camera itself moves gives the whole image a motion component.
- in that case a sufficient compression effect could not be obtained, and as a result the data could not be transmitted in practice. The present invention has therefore been created in view of these existing circumstances: a perspective image obtained by a camera is developed using only the shooting conditions of the camera and the information in the captured image, without using multiple actually measured points of the real object for each image, and the developed images are then combined into a plan view such as is represented on a map.
- information necessary for plane development is read from the perspective image obtained from an ordinary camera, and the image is developed into a plan view, such as a map, by means of mathematical formulas.
- ordinary video cameras are mounted around the bus; multiple video cameras shoot the road surface outside the bus at an angle, that is, at a dip angle, so as to cover the entire field of view.
- the road surface image is developed like a map image; images of building walls are processed in the same way, the images are joined together, and a combined image of the road surface around the bus and the buildings is created.
- similarly, images of the floor and walls inside a building, taken at an angle, are developed into planes; by opening out the walls of a room, for example, a drawing resembling an unfolded view of the room is obtained. This is obtained by calculation using mathematical formulas.
- a desired three-dimensional moving image can be reconstructed from a plurality of plane-developed image data and the position information data of the camera.
- the original moving image can therefore be reproduced on the monitor side by transmitting the plane-developed images and the camera position information from the camera side.
- plane-developed images are still images, and their data volume is much smaller than that of a moving image of oblique images; data can therefore be transmitted even between devices connected by a narrow-band line.
- reconstruction at the receiving end enables data communication of a moving image of the desired object.
- the oblique-image plane development device for real-time processing consists, for example, of a video input device such as a CCTV camera or digital still camera, a video reproducing unit, an image correcting unit, a spherical aberration correcting unit, a video plane development processing unit, a developed image combining unit, a display unit, and a recording unit.
- the oblique-image plane development device for offline processing consists of a video reproducing device holding recorded oblique images, an image correcting unit, a video plane development processing unit, a developed image combining unit, and a display unit.
- an image of a real scene containing a plane is photographed obliquely, and mathematical operations are used to display that plane as a flat image in a proportional (similar-shape) relationship to the plane in the real scene.
- a plurality of plane development images obtained by the above method are combined and expressed as one large plane development image.
- multiple CCTV images are flattened by the above device and combined into one image to display the entire target area; if necessary, the CCTV oblique image corresponding to a displayed location is also shown at the same time.
- images in the direction of travel, obtained by mounting cameras on a moving vehicle, aircraft, ship, or the like, are developed onto a plane and joined continuously into one image.
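Because developed tiles share one linear scale, combining them reduces to translating each tile and pasting it onto a common canvas. The following is a simplified sketch under assumed conditions (grayscale tiles as 2-D lists, known offsets, no blending of overlaps); all names and values are illustrative, not from the patent.

```python
def combine_developed(images, offsets, size):
    """Paste several plane-developed tiles into one canvas.

    images  -- list of 2-D lists (grayscale tiles)
    offsets -- (row, col) of each tile's top-left corner on the canvas
    size    -- (rows, cols) of the output canvas
    """
    canvas = [[0] * size[1] for _ in range(size[0])]
    for img, (r0, c0) in zip(images, offsets):
        for r, row in enumerate(img):
            for c, px in enumerate(row):
                canvas[r0 + r][c0 + c] = px  # translation-only paste
    return canvas

# Two hypothetical 2x2 tiles placed side by side on a 2x4 canvas
tile_a = [[1, 1], [1, 1]]
tile_b = [[2, 2], [2, 2]]
mosaic = combine_developed([tile_a, tile_b], [(0, 0), (0, 2)], (2, 4))
```

A real implementation would additionally blend or cross-check the overlapping regions of adjacent tiles.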
- by using expressions such as (1) and (2), or expressions with the same meaning, the real-world coordinate system and the coordinate system on the image monitor are related without giving the position coordinates or sizes of real-world objects as known information; only information that can be read from the captured video and shooting-condition information such as θ, h, and f is given.
- coordinate transformation is performed, where:
- θ is the angle between the camera's optical axis and the road surface
- f is the camera's focal length
- h is the camera's height
- β is the angle between the optical axis and the line of sight from the camera to the target point
- v is the vertical coordinate from the origin on the CCD (acquired image) plane, which is the projection plane of the camera
- u is the horizontal coordinate from the origin on the CCD (acquired image) plane
- y is the distance on the road surface measured from the origin in the optical-axis direction, the origin being taken at the road point a distance h from the point just below the camera
- x is the horizontal distance on the road surface, i.e. the lateral coordinate. If a vertical wall surface is to be developed onto a plane, the coordinates may be processed with the angle rotated by 90 degrees. The formulas are not limited to these; any other formulas relating a perspective view to a plane may be used.
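As a rough illustration of such a coordinate transformation, the sketch below maps an image point to road-plane coordinates using standard pinhole-camera geometry under the same variable definitions (θ, f, h, u, v, x, y). It is a sketch under stated assumptions, not the patent's published equations (1) and (2), which are not legible in this text; all numeric parameters are hypothetical.

```python
import math

def develop_to_plane(u, v, theta, f, h):
    """Map an image point (u, v) to road-plane coordinates (x, y).

    theta -- angle between the optical axis and the road surface (rad)
    f     -- focal length, in the same units as u and v (e.g. pixels)
    h     -- camera height above the road
    u, v  -- horizontal / vertical image coordinates from the optical-axis
             point, v positive toward the near road
    Returns (x, y): lateral offset and forward distance on the road,
    measured from the point directly below the camera.
    """
    denom = f * math.sin(theta) + v * math.cos(theta)
    if denom <= 0:  # ray at or above the horizon never meets the road
        raise ValueError("point does not project onto the road plane")
    y = h * (f * math.cos(theta) - v * math.sin(theta)) / denom
    x = h * u / denom
    return x, y

# Sanity check with hypothetical parameters: the optical-axis point
# (u = v = 0) must land at y = h / tan(theta) straight ahead.
x0, y0 = develop_to_plane(0.0, 0.0, math.radians(30.0), 800.0, 2.5)
```

The linear scale of the output (x, y) is what makes the later combination, measurement, and recognition steps straightforward.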
- the numerical values required for the calculation, that is, the angle θ between the target plane and the optical axis, the angle between the camera and the real point corresponding to any point in the image, and values such as (θ−β), are physical quantities in the real world (live-action video), so they can in principle be obtained by actual measurement. As a practical matter, however, it is nearly impossible to measure at multiple locations in each frame while the camera is moving, so they are obtained from the image by the following methods.
- the geometric center of the image approximates the optical-axis position. To find it accurately, the optical center of the imaging equipment is determined once, by collimation or the like; the position is then a value unique to the camera system including the lens, giving the optical-axis position as one point in the image.
- an example of the most important measurement, that of θ in the above conversion formulas, from the image is as follows: a portion that forms parallel lines in the real world is found empirically in the image, and the extensions of those lines define an intersection point in the image.
- that intersection is the vanishing point: the point at which lines receding into the distance meet in a perspective drawing. For example, when a straight road is drawn in perspective, the two edges of the road converge to a single point, and that point is the vanishing point.
- let plane a be the plane parallel to the target plane that contains the optical-axis point, and plane b be the plane parallel to the target plane that contains the intersection (vanishing) point.
- with d the distance between plane a and plane b, θ can be obtained as the arctangent of the ratio of d to the virtual focal length f, that is, θ = arctan(d / f).
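The relation θ = arctan(d / f) is straightforward to evaluate once the vanishing point and the optical-axis point have been located in the image; a minimal sketch, with hypothetical pixel values:

```python
import math

def dip_angle_from_vanishing_point(vanishing_y, optical_axis_y, focal_px):
    """Camera dip angle theta = arctan(d / f).

    vanishing_y    -- vertical pixel coordinate of the vanishing point
    optical_axis_y -- vertical pixel coordinate of the optical-axis point
    focal_px       -- virtual focal length in pixels (camera-specific)
    """
    d = abs(vanishing_y - optical_axis_y)  # distance between plane a and plane b
    return math.atan2(d, focal_px)         # theta in radians

# Hypothetical example: vanishing point 240 px above the optical-axis
# point, virtual focal length 800 px
theta = dip_angle_from_vanishing_point(0.0, 240.0, 800.0)
```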
- an example of determining the virtual focal length is as follows.
- it suffices to find, on the display image, the distance along the optical axis such that the angle at which an arbitrary object is seen in real space equals the angle at which the same object is seen in the displayed image. If this is expressed in pixels, it is a value specific to the camera system including the lens, and it need only be determined once.
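Assuming the calibration described above has measured the real-world viewing angle of a target and its extent on the display, the virtual focal length in pixels follows from simple trigonometry. This is a sketch of that reading of the procedure; the numbers are hypothetical.

```python
import math

def virtual_focal_length_px(pixel_extent, subtended_angle_rad):
    """Pixel focal length such that an object subtending
    `subtended_angle_rad` in the real scene spans `pixel_extent` pixels
    centred on the optical axis: f = (s / 2) / tan(alpha / 2).
    A camera-system constant, measured once."""
    return (pixel_extent / 2.0) / math.tan(subtended_angle_rad / 2.0)

# Hypothetical example: a target seen under 10 degrees spans 140 px
f_px = virtual_focal_length_px(140.0, math.radians(10.0))
```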
- alternatively, a portion of the scene that forms parallel lines in the real world is found; in the image these appear as lines with an intersection on their extensions. θ is then selected, or fine-tuned, so that these lines become parallel when the image is developed onto the plane.
- θ can also be obtained by direct measurement.
- the inclination of a camera mounted facing downward on a vehicle or other moving body, i.e. its dip angle, can be determined by a simple method using a protractor; if higher precision is desired, it can be measured with a dedicated angle-measuring device.
- moving each image so that the position of its vanishing point is fixed, and displaying the result, stabilizes an image that shakes due to camera shake or the like.
- that is, the position of the vanishing point, defined by the parallel-line components in the target plane of the image, fluctuates with the movement or shaking of the camera and causes blur; fixing the vanishing-point position cancels that blur.
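Stabilization by fixing the vanishing point amounts to translating each frame by the drift of its detected vanishing point; a minimal sketch with hypothetical per-frame detections (real stabilization would also handle rotation and apply the shift to the pixels):

```python
def stabilize_offsets(vanishing_points, reference):
    """Per-frame (dx, dy) shifts that move each frame's detected
    vanishing point onto a fixed reference position, cancelling the
    translational component of camera shake."""
    return [(reference[0] - vx, reference[1] - vy)
            for (vx, vy) in vanishing_points]

# Hypothetical drifting vanishing point over three frames
shifts = stabilize_offsets([(320, 100), (324, 97), (317, 103)], (320, 100))
```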
- optical flow refers to the amount of movement per unit time of a small area in a moving image (abbreviated in this specification and drawings as "Opt.F"). In a plane-developed moving image composed of several different planes, the flow is computed over the required range, and each single plane can be separated by extracting regions with the same flow components from the component distribution map.
- the optical flow of a perspective image generally takes different values with distance, even within the same plane and the same moving direction.
- after plane development, the optical flow within one plane takes the same value, and this property is exploited.
- the optical flow indicates how each corresponding point has moved across multiple images: if there is movement, the flow can be displayed as a line; if there is no movement, no line appears. What matters is whether, and how, corresponding points have moved across the images.
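One minimal way to obtain the movement of a small area between two frames, in the spirit of the matching-based reading of optical flow above, is a brute-force search for the displacement that best matches a patch. The sketch below uses tiny synthetic frames and a single central patch; real implementations process many patches over larger search ranges.

```python
def block_flow(prev, curr, block, search):
    """Block-matching 'optical flow': for the block x block patch at the
    centre of `prev`, find the (dx, dy) within +/- `search` that minimises
    the sum of absolute differences in `curr`. Frames are 2-D lists."""
    h, w = len(prev), len(prev[0])
    cy, cx = (h - block) // 2, (w - block) // 2
    patch = [row[cx:cx + block] for row in prev[cy:cy + block]]

    def sad(dy, dx):  # sum of absolute differences at this displacement
        return sum(abs(curr[cy + dy + i][cx + dx + j] - patch[i][j])
                   for i in range(block) for j in range(block))

    return min(((dx, dy) for dy in range(-search, search + 1)
                         for dx in range(-search, search + 1)),
               key=lambda d: sad(d[1], d[0]))

# Hypothetical 8x8 frames: a bright square shifted 2 px to the right
prev = [[1 if 2 <= x < 5 and 2 <= y < 5 else 0 for x in range(8)]
        for y in range(8)]
curr = [[1 if 4 <= x < 7 and 2 <= y < 5 else 0 for x in range(8)]
        for y in range(8)]
flow = block_flow(prev, curr, block=4, search=2)
```

On a plane-developed image, repeating this for patches across the frame should yield a near-uniform flow field wherever the patches lie on the same plane.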
- an optical flow distribution map is generated, and the unevenness of the plane is detected, from the small differences in the map, as deviations from the plane; alternatively, different frames of the planar moving image are used.
- parallax is detected by comparing plane images obtained from different angles of view; the unevenness component within the plane is detected from the component distribution, and a corrected plan view including the deviation of each point from the plane is generated from the detected unevenness values. The comparison of the multiple developed plan views is performed by a method such as the correlation method or the matching method.
- the amount of movement of each corresponding small area is determined by the parallax method, the optical flow method, or the like.
- three-dimensional unevenness can thus be detected, and a corrected plan view including the deviation of each point of the original plan view from the plane can be generated from the detected unevenness values.
- that is, a distribution map of the optical flow is generated; using the above principle, the unevenness of the plane is detected from the small differences as deviation from the plane, and plane images obtained from different angles of view are compared by calculation.
- a parallax component within the plane is detected from the component distribution, and a corrected plan view including the deviation of each point of the original plan view from the plane is generated from the detected unevenness values.
- wherever the plane has undulations or irregularities, a small difference appears in the optical flow of the converted plane image; such a difference signifies a deviation from the plane.
- by a method such as the correlation method or the matching method, the amount of movement of each minute area of the multiple planar images, such as road-surface images, is determined from the difference between the components of the minute areas by parallax calculation or the like, and the corresponding points are matched and computed to detect the unevenness of the road surface.
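Once a flow (or parallax) distribution map exists, detecting unevenness as deviation from the plane reduces, in the simplest reading, to subtracting the uniform plane value from each small-area component; a minimal sketch with hypothetical flow magnitudes (the sign convention and scaling to physical height are illustrative):

```python
def unevenness_map(flow_mags, plane_flow):
    """Deviation of each small region's flow magnitude from the uniform
    value expected for a flat plane; nonzero entries mark candidate
    bumps or dips on the surface."""
    return [[cell - plane_flow for cell in row] for row in flow_mags]

# Hypothetical 2x3 grid of flow magnitudes over a developed road image
flows = [[3.0, 3.0, 3.4], [3.0, 2.7, 3.0]]
dev = unevenness_map(flows, 3.0)
```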
- "optical flow" in the present invention thus means that any processing such as optical flow, parallax, or matching may be used.
- the average optical flow value of the continuous plane-developed image, or the moving distance of matched corresponding positions, is calculated, and from it the moving distance, speed, and direction of the target plane, or of the camera that captured it, are obtained. That is, because the optical flow within a single developed plane is constant, the moving speed of the camera can be obtained from the optical flow of the target plane; and since this is the relative position and relative speed between the camera and the target plane, it applies equally whether the camera or the plane is the moving system.
- parallax has the same geometric meaning as optical flow; the optical flow or the parallax of a wide area of the continuous plane-developed image is obtained and used to calculate the movement of the target plane.
- the moving distance, moving speed, and moving direction, either of the target plane or of the camera that captured the image, can thereby be obtained.
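Given that flow on a developed plane is uniform and the developed image has a linear scale, the camera's relative speed follows from the mean flow vector, the plane scale, and the frame interval; a sketch with hypothetical values:

```python
def camera_speed(flow_vectors, metres_per_pixel, frame_dt):
    """Relative speed of the camera over the developed plane.

    flow_vectors     -- per-patch (dx, dy) flow on the developed image
    metres_per_pixel -- linear scale of the developed image (assumed)
    frame_dt         -- time between the two frames, in seconds
    Returns (speed in m/s, mean flow vector in pixels per frame).
    """
    n = len(flow_vectors)
    mx = sum(v[0] for v in flow_vectors) / n
    my = sum(v[1] for v in flow_vectors) / n
    dist = (mx ** 2 + my ** 2) ** 0.5 * metres_per_pixel
    return dist / frame_dt, (mx, my)

# Hypothetical: uniform 3-px/frame forward flow, 1 px = 0.05 m, 30 fps
speed, direction = camera_speed([(0.0, 3.0)] * 10, 0.05, 1.0 / 30.0)
```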
- a real image can be captured into a CG image or map image and displayed as a plane image, or inversely transformed and displayed as a perspective image. That is, since a single developed plane has a uniform optical flow, the texture of just the target plane can be cut out from a mixture of several planes.
- by pasting the texture of the target plane, isolated in the separated plane development, onto the corresponding plane in the CG image or map image, the actual image is imported into the CG image or map image, and can also be inversely transformed and displayed as a perspective image.
- the parallel lines in the actual video are extracted from the image; the distance is taken between plane a, the plane parallel to the target plane containing their intersection, and plane b, the plane parallel to the target plane containing the optical-axis point.
- call this distance d and the virtual focal length f.
- θ can then be obtained as θ = arctan(d / f).
- the angle θ between the target plane and the optical axis is a physical quantity in the real world (actual video) and could be measured directly, but here it must be obtained from the image, frame by frame, in the moving image.
- a portion forming parallel lines in the real world is found empirically in the image, giving the intersection point they create; the position of the optical axis in the image relative to the target plane, such as a road plane, is either measured in advance or approximated by the geometric center of the image, and the plane parallel to the target plane containing the optical-axis point is taken.
- the ratio of the distance between these planes to the virtual focal length is calculated, and its arctangent gives θ.
- parallax is detected by acquiring multiple simultaneous images of the same spot with multiple ordinary cameras installed at different locations, and comparing the plane development images of those simultaneous images by calculation.
- a three-dimensional shape of the object can then be generated. In the examples above, a plane image was obtained by flattening the image from one camera, or by combining several cameras with different viewpoints.
- here, images of the same point taken from different points are acquired as plane development images, and the parallax is then detected in the overlapping portion of those plane development images.
- a map or plan may also be prepared from the beginning; based on the plan view, the plan, or a CG (computer graphics) image, the inverse transformation, i.e. the transformation opposite to the one above, is performed. In this way a perspective image can be generated from an arbitrary viewpoint different from the original one. Moreover, by continuously inverse-transforming each frame of a video image, the viewpoint is moved repeatedly, and a moving image from a virtual moving camera viewpoint that was never actually photographed can be generated.
- the source may be a plane development view generated by converting a perspective image into a plan view, or a plan view combining several images taken from several directions.
- from a single large plane development view, or a plan-view CG (computer graphics) image or map, generated by combination based on the above expressions (1) and (2),
- a virtual perspective image viewed from an arbitrary viewpoint can be generated, or, by processing continuously, a moving image from a virtual moving camera viewpoint can be generated.
- the specific inverse conversion is based on the following formulas (3) and (4).
- v = y × f × sinθ / (2^(1/2) × h × cos(π/4 − θ) − y × cos(π − θ))  … (3)
- v is the vertical coordinate on the CCD plane, which is the projection plane of the camera
- u is the horizontal coordinate on the CCD plane, which is the projection plane of the camera. The formulas are not limited to these equations; any other formulas relating a perspective view to a plane may be used.
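The inverse transformation can be illustrated in the same way as the forward development: the sketch below inverts the standard pinhole relations under the document's variable definitions (θ, f, h, u, v, x, y). It is an illustrative stand-in for equations (3) and (4), not the patent's exact published formulas; the parameters are hypothetical.

```python
import math

def plane_to_image(x, y, theta, f, h):
    """Inverse transformation: road-plane point (x, y) back to image
    coordinates (u, v), for a camera at height h whose optical axis
    makes angle theta (rad) with the road; f in the units of u, v."""
    denom = y * math.cos(theta) + h * math.sin(theta)
    v = f * (h * math.cos(theta) - y * math.sin(theta)) / denom
    u = f * x / denom
    return u, v

# Sanity check with hypothetical parameters: the road point straight
# ahead at y = h / tan(theta) lies on the optical axis, so (u, v) = (0, 0)
theta, f, h = math.radians(30.0), 800.0, 2.5
u0, v0 = plane_to_image(0.0, h / math.tan(theta), theta, f, h)
```

Applying this mapping with a shifted or rotated virtual camera pose is what produces the arbitrary-viewpoint perspective images described above.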
- various types of recognition processing can be performed on the image using the plane-developed image.
- the scale of the plane-developed image is linear, which makes measurement, image processing, image recognition, and the like very easy. Since the optical flow is also obtained in proportion to the relative speed with respect to the camera, the relative speed of an object is expressed on a linear scale independent of distance, which greatly simplifies not only measurement but also image-processing-based recognition.
- examples of applicable planes include the road surface, sea surface, lake surface, river surface, ground surface, vertical walls, virtual vertical planes formed by objects arranged in the same plane, building walls and floors, ship decks, and airport facilities such as runways and taxiways.
- examples of applicable equipment include buses and other land vehicles, for which the surrounding road, buildings, utility poles, street trees, guardrails, and so on can be displayed; ships and other marine vehicles, for which the sea surface, deck, and walls can be displayed; and aircraft, for which the runway, ground surface, or other target area can be displayed.
- plane portions such as the floors and walls of a building are displayed in plane-developed and plane-joined form.
- as an application example, 3D map production uses not only continuous shooting of the road surface, ground surface, and water surface with multiple cameras on moving vehicles, aircraft, ships, and the like, but also vertical surfaces such as building walls.
- the vertical planes can be developed at the same time as the plane development image is combined and extended in the direction of travel.
- a 3D map is created by producing development views that include these wider vertical planes.
- the planar development image processing apparatus and the inverse development image processing apparatus used directly in the above planar development image processing method and inverse development image conversion processing method for planar object images such as road surfaces comprise:
- a video input unit that acquires perspective images; a video playback unit that plays back the oblique video captured by the video input unit; an image correction unit that corrects the shooting rotation angle and the like of the video input device;
- a spherical aberration correction unit that corrects spherical aberration and similar distortions of the video input device;
- a video plane development processing unit that converts the perspective image into a plane development view;
- a developed image combining unit that combines the images that have undergone the plane development processing; and
- a display unit that displays the combined image.
- an optical flow map generation unit that generates and illustrates the optical flow of the developed video, and an optical flow extraction unit that extracts only the target optical flow from the optical flow map, may also be provided.
- an image comparison unit that detects parallax from video of the same point may be included, together with a developed image comparison unit that compares several developed images of the same point and extracts road surface unevenness by calculation, and a corrected plane generation unit that takes the unevenness into account.
- the apparatus may comprise: a video input unit that generates video with a camera such as a CCTV camera or digital still camera; an input image display unit that stabilizes and displays the input image; a video recording unit that records the input image; and a video playback unit;
- an image correction unit that performs coordinate transformation to correct image distortion due to the lens, such as spherical aberration, to correct the camera rotation angle, and to align the orientation of the target planar image with the plane in the image;
- a video plane development processing unit that generates a plan view from the perspective image by mathematical operation, and an optical flow map generation unit that generates and illustrates the optical flow of the developed image;
- an optical flow extraction unit that extracts only the desired optical flow from the optical flow map, and a parallax extraction unit that detects parallax from images of the same point taken from different positions;
- a developed image combining unit that generates one continuous image, a developed image display unit that displays it, a recording unit that records it, and an arbitrary viewpoint image generation unit that inversely transforms the images to arbitrary viewpoints;
- and an arbitrary viewpoint image display unit that displays such images, a developed image comparison unit that compares several developed images of the same point, an image comparison unit that extracts road surface unevenness by calculation, and a corrected plane generation unit that takes the unevenness into account, combined as appropriate.
- an inverse development image processing device used directly in the planar development image processing method and the inverse development image conversion processing method for planar object images such as road surfaces can be configured to include an arbitrary viewpoint image generation unit that inversely transforms images to an arbitrary viewpoint, and an arbitrary viewpoint image display unit that displays them.
- the planar development image processing device and the inverse development image processing device may include a video input unit that acquires perspective images constituting a three-dimensional space, and, for those perspective images,
- a plane decomposition unit that decomposes each image into one or more plane images, a position detection unit that detects the three-dimensional position of the video input unit, and a display unit that reconstructs and displays a three-dimensional image from the plane images produced by the plane decomposition unit and the three-dimensional position of the video input unit detected by the position detection unit.
- a configuration may further include a position notation unit that writes the three-dimensional position of the video input unit, as detected by the position detection unit, into the plane images produced by the plane decomposition unit.
- the position notation unit can be configured to indicate continuously the three-dimensional position of the moving video input unit in the plane images produced by the plane decomposition unit.
- when the display unit that reconstructs the three-dimensional image is located separately from the plane decomposition unit and the position detection unit, the configuration may include a transmission/reception unit that transmits the one or more plane image signals and the three-dimensional position signal of the video input unit from the plane decomposition unit and the position detection unit to the display unit.
- with the planar development image processing method for planar object images such as road surfaces according to the present invention configured as described above, together with the inverse development image conversion method, the planar development image processing apparatus, and the inverse development image processing apparatus,
- the perspective image, i.e. the oblique image acquired by the input device, is converted into a plane development view by equations (1) and (2) and displayed as a practical, map-like image.
- the combination of the acquired and generated plane development images is displayed as a single developed image: for example, display on a map including the situation of the acquisition location and its surroundings, full display of the entire target area, or direct display of a specific area, which can easily be compared with the surrounding situation by simultaneous direct display of the input video.
- the plane development map so produced covers a wide variety of subjects, such as land, sea surface, and airport facility surfaces, as well as plane display of architectural structures; by generating plane development maps while moving, the plane is extended in the direction of travel,
- and a three-dimensional map can also be created by extending the development to vertical planes.
- a video input unit, input image display unit, video recording unit, video playback unit, image correction unit, video plane development processing unit, optical flow map generation unit, optical flow extraction unit, parallax extraction unit, object image processing unit,
- developed image combining unit, developed image display unit, recording unit, arbitrary viewpoint image generation unit, arbitrary viewpoint image display unit, developed image comparison unit, image comparison unit, corrected plane generation unit, and the like can be combined as appropriate.
- the invention can be applied to a wide range of uses: development of the road-surface perspective image into a plane image; development of perspective wall images displayed within the plane image; separation of building wall surfaces and guardrail images; changing the viewpoint of the plane image and inversely transforming it back to a perspective image; displaying the road surface or building surface with texture attached; using the inverse transformation to generate perspective images; combining parallax images from different viewpoints; and calculating road surface irregularities.
- a m-dimensional image, a three-dimensional map, etc. can be reconstructed from a plurality of plane-deployed image data and camera position information data, so that the camera for shooting and the display unit for mobile are separated. Even if the camera is installed in a remote location, the original moving image can be reproduced on the receiving side (monitor side) by transmitting the flattened image and camera position information from the image acquisition side (camera side). .
- the image data developed on a plane is a still image, and its data amount is much smaller than that of a moving image of oblique images. Therefore, by transmitting and receiving the plane developed image and the position information data for reconstructing it, data communication of the moving image with as small a transmission amount as possible becomes possible.
- FIG. 1 is a block diagram of an apparatus for developing a two-dimensional image of a flat object such as a road surface according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing another embodiment of the apparatus of the present invention.
- FIG. 3 is a block diagram showing another embodiment of the apparatus of the present invention.
- FIG. 4 is a block diagram showing the image flattening device.
- FIG. 5 is a block diagram showing an apparatus for detecting unevenness on a road.
- FIG. 6 is a block diagram showing a viewpoint-moving / texture-pasting device, which performs viewpoint movement and texture pasting by the road surface development method.
- FIG. 7 is a block diagram of an embodiment in which an image is similarly developed on a plane by the optical flow method.
- FIG. 8 is a schematic explanatory diagram of a case where a plane developed image is generated from the acquired image and an oblique image is further formed by moving the viewpoint.
- FIG. 9 is a schematic explanatory diagram showing a case of detecting unevenness on a road surface.
- FIG. 10 is a schematic diagram for obtaining θ from the vanishing point.
- Fig. 11 is an example of plane development when the θ value differs, where (A) is the perspective image before plane development, (B) is the development when the θ value differs from the actual value, and (C) is the development when the θ value equals the actual value, that is, when θ is found correctly.
- FIG. 12 is a block diagram of the three-dimensional diagram generation device.
- FIG. 13 is an image diagram of a plane similarly extracted by the optical flow.
- Fig. 14 shows the plane extracted by the optical flow and the image of the three-dimensional map generated by the three-dimensional conversion.
- FIG. 15 is an image diagram showing a state in which the locus of the camera position is similarly described on a three-dimensional map.
- FIG. 16 is a block diagram of the same three-dimensional moving object detection apparatus.
- FIG. 17 is a block diagram of a texture pasting device in the three-dimensional diagram generating device.
- FIG. 18 is a block diagram of the object recognition device.
- FIG. 19 is a specific example of an image obtained by a traffic surveillance video camera as an example of the object recognition device of FIG. 18, where (a) is the surveillance video camera image (perspective image), (b) is the plane developed image converted by the present invention, and (c) is an area analysis display showing the objects recognized by the present invention.
- Fig. 20 is a flowchart illustrating the processing steps of object recognition in the embodiment in which the object recognition device of Fig. 18 is applied to a traffic monitoring video camera.
- FIG. 21 is a table summarizing the results obtained by the object recognition processing shown in FIG.
- FIG. 22 is a situation diagram in which a plurality of cameras for photographing a road around a bus are arranged.
- Fig. 23 is a video image developed on the road like a map.
- FIG. 24 shows nine road image diagrams, (1) to (9), each similarly an oblique image of the road.
- Fig. 25 is a map-like planar image obtained by processing the same road image.
- FIG. 26 shows another embodiment: sixteen indoor image diagrams, (1) to (16), of plane parts such as floors and walls of a building structure, each taken as an oblique image.
- FIG. 27 is a composite image diagram in which the image diagrams shown in FIG. 26 are combined in a plane.
- a camera 1 such as a CCTV camera or a digital still camera as an input device, an image reproducing unit 2 for reproducing the oblique images captured by the camera 1, an image correcting unit 3, a spherical aberration correction unit 4, an image development plane processing unit 5 that develops an oblique image into a plan view according to equations (1) and (2) described later, a developed image combining unit 6 that connects the developed images by an appropriate method, a developed image display unit 7 for displaying the joined images, and a recording unit 8 for recording the joined images on a recording medium.
- the oblique images are two-dimensionally developed as real-time processing.
- the image correction unit 3 corrects the rotation angle and the like of the camera 1 while inputting the image from the camera 1 and playing back the video in real time so that it can be configured as an oblique image plane development device as real-time processing.
- the spherical aberration corrector 4 corrects the spherical aberration of the camera 1 to suit the purpose
- the image development plane processing unit 5 converts the perspective image into a map-like plane development view and displays it on the developed image display unit 7. Further, if necessary, the images obtained by performing the development process on images from a plurality of cameras 1 are combined by the developed image combining unit 6, displayed, and recorded in the recording unit 8. Also, as shown in FIG. 2,
- the apparatus consists of an oblique image reproducing section 11 in which oblique images are recorded, an image correcting section 12, an image development plane processing section 13, a developed image combining section 14, and a developed image display section 15, and plays back the image taken by the normal camera 1 from the oblique image reproducing section 11.
- the rotation angle, etc. are corrected to suit the purpose, and the image development plane processing unit 13 develops it into a flat image such as a map and displays it on the development image display unit 15.
- a plurality of images developed by the developed image combining unit 14 are connected by an appropriate method, and the developed image is displayed by the developed image display unit 15. Further, as shown in FIG. 3,
- the oblique image plane developing apparatus can be installed with the camera side (transmitting side), which acquires the image, separated from the monitor side (receiving side), which displays, records, and reconstructs the planar image decomposed and developed from the oblique image.
- the camera side is provided with a camera 1 as an input device, an image reproducing unit 2, an image correcting unit 3, a spherical aberration correcting unit 4, and an image development plane processing unit 5, while the monitor side (receiving side) is provided with a developed image combining unit 6, a developed image display unit 7, and a recording unit 8.
- the camera side is provided with a transmitting unit 5a for transmitting the developed and decomposed planar image signal to the monitor side, and the monitor side is provided with a receiving unit 6a for receiving the planar image signal transmitted from the camera side.
- These transmitting / receiving units 5a and 6a are connected via a communication line so that data can be communicated.
- the monitor side can reconstruct a moving image on the basis of the received plane developed image and camera position information, so that a desired moving image can be transmitted and reproduced while minimizing the amount of transmission data.
- a mathematical operation is applied to an image of an object in a real scene including a plane that is photographed obliquely, and the plane is developed and displayed as a plane image proportional (similar) to the plane of the real scene. The source is an image obtained by photographing a scene including a plane with an ordinary camera, that is, an image photographed from an oblique direction; a road surface, for example, is developed into a plane like a map screen.
- a plurality of planar developed images obtained by the above method are combined and expressed as a single large planar developed image.
- a plurality of planar developed images are combined by an appropriate method and expressed as a single large planar developed image.
- when a planar development diagram such as a map is created for each image, they are connected to form a single large planar development image.
- since the image is developed on a plane, the images can be combined freely, whereas perspective images cannot be combined unless the camera position is the same.
- FIG. 4 shows a processing block diagram when the image is developed on a plane.
- a video image 21 or a still image 22 is input to the device, and the input image correction unit 23 performs spherical aberration correction, rotation correction, and the like.
- the image plane expansion processing unit 24 performs the image plane expansion processing according to the following equations (1) and (2).
- the present invention obtains the image by a method referred to as the θ method.
- in the θ method, in developing the oblique image into a planar image, the angle θ of the optical axis is read from the oblique image, the camera height h and the f-value of the taking lens (or the virtual f-value on the monitor) are read, and the coordinates of the target location are obtained by equations (1) and (2), where:
- x and y are the plane development coordinates,
- u and v are the coordinates in the image,
- θ is the angle between the camera's optical axis and the road surface,
- f is the focal length of the camera,
- h is the height of the camera,
- β is the angle between the road surface and the line connecting the camera lens to the target point on the road surface,
- v is the vertical coordinate from the origin on the CCD surface, which is the projection surface of the camera,
- u is the horizontal coordinate from the origin on the CCD surface,
- y is the distance or coordinate advanced in the optical axis direction along the road surface from the origin directly below the camera (camera height h), and
- x is a lateral distance or coordinate on the road surface.
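- the equation bodies of (1) and (2) are not reproduced in this excerpt, so the sketch below reconstructs the θ-method development from the symbol definitions above; the sign convention (v measured upward from the optical-axis origin) and the name `image_to_plane` are assumptions, not the patent's literal formulas:

```python
import math

def image_to_plane(u, v, theta, f, h):
    """Develop an image point (u, v) to road-plane coordinates (x, y).

    Reconstruction from the stated geometry (an assumption): theta is the
    optical-axis / road-surface angle, f the focal length, h the camera
    height, and v is measured upward from the optical-axis origin on the CCD.
    """
    # beta: depression angle (toward the road) of the ray through (u, v)
    beta = theta - math.atan2(v, f)
    if beta <= 0.0:
        raise ValueError("point lies on or above the horizon")
    y = h / math.tan(beta)                                     # forward distance
    x = u * h * math.cos(theta - beta) / (f * math.sin(beta))  # lateral offset
    return x, y

# The optical-axis point (u = v = 0) at a 45-degree tilt and h = 2
# develops to a point 2 units ahead of the camera foot.
x, y = image_to_plane(0.0, 0.0, math.pi / 4, 1.0, 2.0)
```

- with this convention, points higher in the image develop to larger ground distances y, matching the map-like development described above.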
- the perspective image obtained by photographing the road surface is converted into a map-like planar image and developed.
- the plane image thus transformed and developed is displayed and recorded by the plane developed image display / recording unit 25.
- each time a perspective image is developed on a plane, the developed image combining unit 26 connects the planar developed images, providing a planar developed image of a large screen; the combined developed image display / recording unit 27 displays and records this.
- to generate an image from an arbitrary viewpoint, the viewpoint is specified on the developed video and the arbitrary viewpoint image is generated by the arbitrary viewpoint generation unit 28.
- the inverse development conversion processing unit 29 uses the following equations (3) and (4), which are the inverse conversion equations of the above equations (1) and (2), to obtain a perspective image from a viewpoint different from the original one, and the inversely developed image display / recording unit 30 displays and records this.
- u = x · f · sinβ / (h · cos(θ − β)) … (4)
- h is the height of the camera from the road surface, and θ is the angle between the optical axis of the camera and the road surface,
- f is the focal length of the camera,
- β is the angle between the road surface and the line connecting the camera lens to the point advanced by y from the point just below the camera,
- x is the lateral coordinate on the road surface, perpendicular to the line segment obtained by orthogonally projecting the camera optical axis onto the road surface, that is, the horizontal direction when viewed from the camera, and y is the coordinate along that line segment,
- v is the vertical coordinate on the CCD plane, which is the projection plane of the camera, and
- u is the horizontal coordinate on the CCD plane.
- the inverse transformation equation may be not only these equations (3) and (4) but also an equation that relates other similar perspectives to a plane.
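- as a consistency check on the inverse reading of equations (3) and (4), a round trip through the forward development and back should recover the original image coordinates. The sketch below uses reconstructed conventions (β = arctan(h/y), v = f·tan(θ − β)); these forms are assumptions derived from the symbol list, not the patent's literal equations:

```python
import math

def plane_to_image(x, y, theta, f, h):
    """Inverse development: road-plane (x, y) back to image (u, v)."""
    beta = math.atan2(h, y)            # depression angle of the ground point
    v = f * math.tan(theta - beta)                              # cf. (3)
    u = x * f * math.sin(beta) / (h * math.cos(theta - beta))   # cf. (4)
    return u, v

def image_to_plane(u, v, theta, f, h):
    """Forward development, cf. equations (1) and (2)."""
    beta = theta - math.atan2(v, f)
    y = h / math.tan(beta)
    x = u * h * math.cos(theta - beta) / (f * math.sin(beta))
    return x, y

# Round trip: develop an image point onto the plane, then invert it back.
theta, f, h = 0.6, 800.0, 2.5
x, y = image_to_plane(120.0, -60.0, theta, f, h)
u, v = plane_to_image(x, y, theta, f, h)
```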
- In FIG. 5, new blocks are added to a part of the block diagram of FIG. 4. That is, the video image 21 and the still image 22 are input, and the perspective image is developed into a plane image by equations (1) and (2); this is performed by the image plane development processing unit 24.
- the developed image comparison unit 31 compares developed images from different viewpoints to determine the unevenness of the road surface, and the processing is performed by the unit 32.
- Fig. 6 shows a structure in which not only the road surface but also buildings, street trees, guardrails, etc. are developed on a plane and converted into arbitrary viewpoint images, or textures are pasted on buildings to build 3D CG using actual textures; that is, viewpoint movement and texture pasting are performed by the road surface development method.
- a road surface plane development unit 42 obtains a road development plan from an image input from the video image input unit 41, while an elevation plane development unit 43 develops road side portions such as building wall surfaces into a plan view. Thereafter, an optical flow map is generated by the optical flow map generation unit 44, and a target portion is selectively extracted by the following optical flow map selection and extraction unit 45.
- the curb surface extraction unit 46 extracts the curb of the road
- the building surface extraction unit 47 extracts the building wall and the like.
- the sidewalk development unit 48 performs sidewalk development, matches the data from the road surface development unit 42 described above, and combines the sidewalk and roadway parts with the road horizontal integration unit 49.
- street trees are extracted by the street tree etc. plane extraction unit 50, and this is combined with the data from the building surface extraction and sidewalk surface development by the road vertical plane integration unit 51, so that the vertical component planes on the sides of the road can be constructed.
- the 3DCG positioning unit 52 performs positioning, the vertical surface texture pasting unit 53 pastes textures onto vertical surfaces such as building walls, and the actual-shot texture pasting unit 54 pastes the textures taken from the actual footage.
- by integrating the data from the road horizontal plane integration unit 49 and the data from the road vertical plane integration unit 51 with the road vertical / horizontal plane integration unit 55, a three-dimensional figure (solid map) having horizontal and vertical planes is obtained. This is converted into an image from an arbitrary viewpoint by the arbitrary viewpoint perspective display unit 56, and a perspective image from the changed viewpoint is displayed.
- FIG. 7 shows an example of the configuration of the apparatus of the present invention.
- An image input section 61 is a section for inputting an image of a real photograph acquired by a camera such as a CCTV camera or a digital still camera.
- the central image recording section 62 records the input video
- the video playback section 63 reproduces the recorded image
- the image correction section 64 corrects image distortion caused by the lens, such as spherical aberration, and orients the target plane image to the plane in the image.
- the image development plane processing unit 65 develops the image onto a plane using the above equations (1) and (2).
- the optical flow map generation unit 66 generates the optical flow of the developed video by mathematical operation, the optical flow selection and extraction unit 67 extracts only the desired optical flow from the optical flow map, and the image processing unit 68 deletes unnecessary object images while leaving only the necessary objects in the image, or inserts images.
- the developed image combining unit 69 combines the developed and processed individual images to generate one continuous image.
- the generated developed image is displayed on the display unit 70 and recorded by the recording unit 71.
- the arbitrary viewpoint image generating unit 72 is a part for inversely converting to an arbitrary viewpoint and displaying it as a perspective image.
- the developed image comparison unit 73 compares a plurality of developed images of the same points and extracts road surface irregularities by calculation.
- according to the purpose, a perspective road surface image can be developed into a planar image, a building wall image displayed in perspective can be developed into a planar image, and building wall images and guardrail images can be separated,
- the viewpoint can be changed and then inversely converted back to a perspective image, road surfaces, building surfaces, etc. with texture pasted can be displayed, or the inverse transformation can be used to produce a perspective image; it is also possible to obtain a parallax image by combining images from different viewpoints, and then perform processing such as calculating the unevenness of the road surface.
- FIG. 8 shows the procedure for moving the viewpoint, deleting the moving object, etc., and the top part of the figure shows the perspective image of the road surface viewed from a certain viewpoint.
- after developing many images onto the three planes, the system removes moving objects on the road surface, for example the vehicles ahead that would hide the road in a perspective view, so that a road surface image with the vehicles removed can be obtained. Below the combining arrow, a left-side plane development image, a road-surface plane development image, and a right-side plane development image developed on three planes are obtained; in the left-side and right-side plane development images, the optical flow is used to separate street trees, guardrails, and building walls.
- the road surface is displayed as a map, and the two sides are displayed as flat surfaces with open walls, as shown in FIG. 8.
- FIG. 9 shows a procedure for detecting an uneven surface on a road surface.
- Fig. 9 shows the example of measuring the depth of a hole; in addition to the depth of a hole, asphalt swelling after road construction, rutting caused by vehicle traffic, etc. can be measured.
- FIG. 10 shows a specific method for obtaining θ. A parallel line portion in the real-world live-action video is searched for; the extensions of the parallel lines appear in the image as lines meeting at an intersection (vanishing point), which defines a plane parallel to the target plane. With d the distance between that plane a and plane b, the plane parallel to the target plane that includes the optical axis point, θ is obtained from the ratio of d to f.
- an example of calculating the virtual focal length is as follows: find, on the display image, the distance along the optical axis such that the angle at which an arbitrary object in real space is seen and the angle at which the same object is seen in the displayed image are the same. If this distance is expressed in pixels, it is a value specific to the camera system including the lens, so it is sufficient to find it once.
- a parallel line portion of an object in the real world is searched for; it is represented in the image as lines whose extensions meet at an intersection, and when these lines are developed on a plane, they become parallel.
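- the construction just described can be sketched numerically: intersect the images of two real-world parallel lines to get the vanishing point, then read θ from the distance d between the optical-axis point and the vanishing point together with the (virtual) focal length f. The formula θ = arctan(d / f) is one reading of the "ratio of d and f" in the text and is an assumption:

```python
import math

def vanishing_point(p1, p2, q1, q2):
    """Intersection of two image lines, each given by two points: the point
    where the extensions of real-world parallel lines meet in the image."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, q1, q2
    d1x, d1y = x2 - x1, y2 - y1
    d2x, d2y = x4 - x3, y4 - y3
    denom = d1x * d2y - d1y * d2x        # zero if the image lines are parallel
    t = ((x3 - x1) * d2y - (y3 - y1) * d2x) / denom
    return (x1 + t * d1x, y1 + t * d1y)

def theta_from_vanishing_point(d, f):
    """theta from the image distance d between the optical-axis point and the
    vanishing point, with d and f in the same units (e.g. pixels)."""
    return math.atan2(d, f)

# Two converging road edges whose extensions meet at (1, 1):
vp = vanishing_point((0.0, 0.0), (2.0, 2.0), (0.0, 2.0), (2.0, 0.0))
```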
- FIG. 11 shows a method of measuring the actual value of θ by developing a perspective image into a plane. FIG. 11(A) shows the perspective image as it is, and FIG. 11(B) shows the state when the plane is developed with θ set to some assumed value. If the assumed θ differs from the actual value, the line segments on both sides of the road, which are parallel lines in the real world (on the road surface), do not become parallel lines, as shown in (B). When the assumed θ equals the actual value, the line segments on both sides of the road become parallel lines in the developed view, as shown in (C); θ can thus be obtained.
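- the (A)/(B)/(C) comparison suggests a simple search: develop the two road-edge segments under trial values of θ and keep the value that renders them parallel. The sketch below verifies this on synthetic data; the projection helpers reuse reconstructed development formulas (an assumption) and the grid-search criterion is illustrative:

```python
import math

def plane_to_image(x, y, theta, f, h):
    # Project a road-plane point into the image (reconstructed convention:
    # v measured upward from the optical-axis point).
    beta = math.atan2(h, y)
    v = f * math.tan(theta - beta)
    u = x * f * math.sin(beta) / (h * math.cos(theta - beta))
    return u, v

def image_to_plane(u, v, theta, f, h):
    # Forward development under a trial theta; None if above the horizon.
    beta = theta - math.atan2(v, f)
    if beta <= 0.0:
        return None
    y = h / math.tan(beta)
    x = u * h * math.cos(theta - beta) / (f * math.sin(beta))
    return x, y

def parallelism_error(edges, theta, f, h):
    # Develop both road edges and measure deviation from parallel
    # via the cross product of the normalized direction vectors.
    dirs = []
    for a, b in edges:
        pa = image_to_plane(a[0], a[1], theta, f, h)
        pb = image_to_plane(b[0], b[1], theta, f, h)
        if pa is None or pb is None:
            return float("inf")
        dx, dy = pb[0] - pa[0], pb[1] - pa[1]
        n = math.hypot(dx, dy)
        dirs.append((dx / n, dy / n))
    (ax, ay), (bx, by) = dirs
    return abs(ax * by - ay * bx)

# Synthetic check: project two parallel road edges with a known true theta,
# then recover theta as the trial value that makes them parallel again.
f, h, true_theta = 800.0, 2.0, 0.5
edges = [(plane_to_image(xe, 5.0, true_theta, f, h),
          plane_to_image(xe, 20.0, true_theta, f, h)) for xe in (-2.0, 2.0)]
best = min((0.42 + k * 0.0005 for k in range(400)),
           key=lambda t: parallelism_error(edges, t, f, h))
```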
- next, a case where a three-dimensional map is created will be described with reference to FIGS. 12 to 18, taking as an example a three-dimensional map generated from video taken substantially in the moving direction by a video camera mounted on a vehicle traveling on a road.
- a moving image input unit 81 inputs video taken substantially in the advancing direction from a video camera mounted on a vehicle running on a road.
- the image is decomposed into images by a multi-plane decomposition unit 82.
- the reference plane designation section 83 interprets the image as being composed of a plurality of planes and sets one of the planes in the image as the reference plane.
- the arbitrary-purpose plane designation section 84 sets a street light plane, assuming that a plurality of street lights and the like lie in one plane because they are regularly installed; similarly, multiple planes such as a curbstone plane, a street tree plane, and an entire building plane can be set.
- Fig. 13 shows an image of a plane extracted by optical flow. As shown in the figure, assuming that the vertical distance of the plane to which each object belongs from the standard position of the camera is D, all planes can be separated and extracted as a group of multiple parallel planes.
- the θ and h detection unit 85 reads from the image the angle θ between the road surface and the optical axis and the distance h between the optical center of the camera system and the road surface. To obtain these automatically,
- θ is adjusted, by giving f and h in the above equations (1) and (2), so that the intersecting lines become parallel when developed on a plane, or θ is obtained from the ratio of d and f relating to the planes a and b; this θ may also be read by actual measurement when it can be measured.
- the coordinate conversion unit 86 substitutes θ and h into the plane development conversion formulas of the above equations (1) and (2) to perform the operation, and the image plane development unit 87 obtains a plane development image.
- the Opt.F (optical flow) value calculation unit 88 divides the image into small areas, since the video is a moving image, and calculates the optical flow of each part of the image by matching and correlation methods.
- the Opt.F (optical flow) map generator 89 displays the above calculation result as an image map.
- the reference plane image extraction unit 90 obtains the reference plane image by extracting, from the developed image, only the unique optical flow value indicating the road surface.
- on the road surface developed on a plane, the relative speed within the same plane is always constant, so the optical flow is the same and the road surface can easily be extracted,
- in the undeveloped perspective image, by contrast, the apparent speed changes with distance even within the same plane, so the reference plane cannot be extracted by a unique optical flow value,
- and the comparison is not simple because the apparent size also changes with distance.
- the parallel plane extracting unit 91 is configured to obtain an optical flow value different from that of the reference plane in the same manner as when extracting the above-mentioned reference plane. Since the plane parallel to the reference plane only has a different eigenvalue from the reference plane, the parallel plane can be obtained separately from the reference plane.
- the plane image forming unit 92 treats the obtained planes as image planes as they are, so that an image within the set planes can be acquired.
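- because each developed parallel plane carries one (near-)unique optical flow value, the plane extraction can be sketched as 1-D clustering of flow magnitudes over the developed image. The toy greedy clusterer below is illustrative only; the function name, tolerance, and data layout are assumptions, and the patent's actual matching/correlation pipeline is not reproduced:

```python
def extract_planes_by_flow(flow, tol=0.05):
    """Group pixels of a plane-developed optical-flow magnitude map into
    planes: on the developed image every plane parallel to the reference
    plane shows a (near-)unique flow value, so clustering flow magnitudes
    separates the planes (a sketch, not the patent's pipeline)."""
    clusters = []  # list of (representative value, list of (row, col))
    for r, row in enumerate(flow):
        for c, val in enumerate(row):
            for rep, pixels in clusters:
                if abs(val - rep) <= tol:
                    pixels.append((r, c))
                    break
            else:
                clusters.append((val, [(r, c)]))
    return clusters

# Toy flow map: the road surface moves at ~1.0 everywhere, a parallel plane
# (e.g. a building wall) at ~2.0; two clusters come out.
flow = [[1.00, 1.01, 2.00],
        [0.99, 2.01, 2.00]]
planes = extract_planes_by_flow(flow)
```

- a real implementation would cluster dense flow fields robustly (e.g. via histogram peaks) and map each cluster back to a plane at its distance D from the camera, as in Fig. 13.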
- the three-dimensional map generation unit 93 generates a three-dimensional map having a configuration of a reference plane and a plane parallel thereto by assembling each parallel plane with three-dimensional coordinates. However, since not all objects can be represented using only the reference plane and a plane parallel to it, another plane different from the reference plane must be constructed in the same way.
- one of the initially set planes can be handled in the same way as the reference plane, and can be generated in the same process as a three-dimensional map.
- θ and h must be converted for each plane from the obtained three-dimensional coordinates of the reference plane. That is, the position of an arbitrary plane is calculated from the obtained three-dimensional image of the reference plane by the θ and h value conversion and designation unit 95 for the specified plane; the conversion can also simply be performed manually.
- the target image is subjected to the same processing through the θ, h designation section 96, and a three-dimensional map is generated by the three-dimensional map generation unit 93 through the target plane image extraction section 97.
- Fig. 14 shows a plane extracted by optical flow and an image of a three-dimensional map generated by three-dimensional conversion. Further, the position and direction of the camera acquiring the video are detected by the processing of the θ and h detection unit 85, the coordinate conversion unit 86, and the image plane development unit 87 described above, and are plotted on the reconstructed 3D map. That is, by this processing the angle between the camera optical axis and the target plane such as the road surface is given, and the camera position and camera direction calculated from the coordinates in the image can be described, via the conversion formula and by specifying the origin, in the image developed on the plane or in the coordinates of the target plane.
- the above equations (1) and (2) are calculated by giving the focal length of the camera, the angle between the road surface and the optical axis of the camera, and the coordinate origin to the target plane such as the camera optical axis and the road surface. Then, a plane developed image of the road image acquired by the camera can be obtained, and at that time, the camera position and the camera direction are obtained by calculation from the conditions of the conversion formula. As a result, the camera position and camera direction can be detected in the converted image and plotted on a three-dimensional map.
- a single image is generated by combining a plurality of images taken by a moving camera and developed on a plane, and the combined image is expressed in a new common coordinate system.
- the camera position and camera direction obtained by the above equations (1) and (2) can be described one after another. For example, a video from an on-board camera is flattened, the corresponding points on the target plane in each frame image are searched automatically or manually, and the corresponding points are combined so as to match, and a combined image of the target plane is generated. And display them in the same coordinate system. Then, the camera position and camera direction are detected one after another in the common coordinate system, and the position, direction, and locus can be plotted.
- FIG. 15 shows an image of a state in which the locus of the camera position is described on a three-dimensional map.
- since the three-dimensional map can be generated as described above, a plurality of images obtained by planar decomposition of the perspective image, together with the camera position information, can be generated, and a desired three-dimensional image can be reconstructed from that information.
- as a conventional approach to moving image compression, it has been known to separate moving parts in a moving image, predict the movement, and eliminate signal redundancy.
- this type of conventional method is effective for partial movement, such as when there is a moving object on a still background, but for moving images where the camera itself moves, Since the whole image has a motion component and the moving speed is not the same, the entire video must be updated, and the compression effect is significantly reduced.
- a moving image is analyzed and treated as an image composed of a three-dimensional plane.
- the image can be reconstructed by extracting each plane with this device and reassembling those planes. Since each plane is defined three-dimensionally, the reconstructed planes are placed in three-dimensional space, so the final image is a three-dimensional image. As long as the camera moves in a fixed direction, the relative speed between the camera and each plane is constant, and the speed is uniquely obtained for each plane.
- each plane has its own unique velocity component in the range in which the camera moves at a constant speed.
- the image is thus compressed, and the receiving side can reproduce the original moving image by moving each plane at its defined speed; moreover, since the image contains three-dimensional information, it can be expressed three-dimensionally.
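- the per-plane motion model described above can be sketched as a toy decoder: each extracted plane stores its pixels once plus a constant velocity, and any frame is rebuilt by shifting each plane accordingly. The names and the sparse dictionary layout are assumptions for illustration:

```python
def reconstruct_frame(planes, t):
    """Rebuild frame t from plane patches and per-plane constant velocities.

    planes: list of (pixels, (vr, vc)) where pixels maps (row, col) -> value
    and (vr, vc) is the plane's displacement per frame (constant while the
    camera moves uniformly, as described in the text).
    """
    frame = {}
    for pixels, (vr, vc) in planes:
        for (r, c), val in pixels.items():
            frame[(r + round(vr * t), c + round(vc * t))] = val
    return frame

# Two one-pixel "planes": a road patch sliding sideways and a wall patch
# sliding downward; frame 3 places each at its own shifted position.
planes = [({(0, 0): 1}, (0.0, 1.0)),
          ({(5, 5): 2}, (1.0, 0.0))]
frame3 = reconstruct_frame(planes, 3)
```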
- FIG. 16 shows another embodiment, and the same parts as those in the embodiment shown in FIG. 12 are denoted by the same reference numerals, and the detailed description thereof is omitted.
- the optical flow of a moving object takes a value different from the Opt.F values of the reference plane and the planes parallel to it.
- the Opt.F value that appears as an abnormal value on the road surface (reference plane) is detected; that part is the area of a moving object on the road surface. If the purpose is to delete the moving object, this moving object area is extracted and deleted by the moving object Opt.F (optical flow) extraction unit 101 and the moving object part extraction unit 102,
- and the deleted region may be complemented from the overlapping developed images before and after.
- Reference numeral 103 in the figure denotes a simple moving body relative speed extraction unit that simply extracts the relative speed of the moving body.
- the processing process on the right side of FIG. 16 is performed via the moving object plane designation unit 105.
- the entire Opt.F value of the moving object area obtained on the flattened reference plane does not directly mean the Opt.F value specific to the moving object such as a vehicle.
- the moving body is further decomposed into multiple planes by the Opt.F (optical flow) extraction unit 106 and the moving object plane extraction unit 107, etc.
- the three-dimensional shape of the moving object can be obtained by the moving object plane image forming unit 108 by calculating the plane development of each plane constituting the moving object in the same process.
- the speed vector of the moving object can be obtained by the vector extraction unit 109. Furthermore, it can be taken into a three-dimensional map by a three-dimensional map generation unit 110 that includes a moving object.
- Next, as an application example of the three-dimensional map generation, a method of pasting the texture of an acquired image onto a CG (computer graphics) image or the like will be described with reference to FIG. 17.
- The plane-developed image carries three-dimensional coordinates, and because only the image of the target surface is extracted, the texture of a building wall that was partially hidden by a street tree at the time of shooting can be obtained with the image of the street tree deleted.
- The texture of the building wall taken from the video image can thus be applied to the CG image, and street trees, guardrails, and the like can be attached as well.
- In FIG. 17, the same parts as in the embodiment shown in FIG. 12 are denoted by the same reference numerals, and their detailed description is omitted.
- A parallel-plane image is extracted from the Opt.F (optical flow) map generation unit 89 by the parallel-plane image extraction unit 111, and a target-plane image is extracted by the target-plane image extraction unit 97; from these, a texture signal is obtained through the texture signal generation unit 112.
- Three-dimensional coordinates are acquired from the plane image forming unit 92 by the three-dimensional coordinate acquisition unit 113, and the CG coordinate matching unit 114 matches these coordinates with those of the CG image.
- FIG. 18 shows an embodiment in which a three-dimensional map is constructed from recognized objects; components identical to those in the embodiments shown in FIGS. 12 to 17 are denoted by the same reference numerals, and their detailed description is omitted.
- Since the optical flow of an object in the image depends only on the relative speed between the camera and the object, objects can be tracked, and speed can be extracted even for moving objects, making them easy to follow.
- Image recognition is also facilitated by using the change of an object's position and shape over time as a clue for recognition, and an object in the image can be replaced directly with a 3D CG model without using tracking. Accordingly, a specific object in the screen is selected and extracted from the image formed by the plane image forming unit 92 by the object selection and tracking unit 121, the object is tracked, and the result is input through the object recognition unit 122 to the attribute adding unit 123, which adds the attributes of the object.
- Similarly, a specific moving object in the screen is selected and extracted by the moving-object selection and tracking unit 124, and the moving object is tracked.
- Through the moving-object recognition unit 125, which recognizes the attributes of the moving object and other information, the result is input to the attribute adding unit 126, which adds the attributes of the moving object.
- Three-dimensional maps composed of the recognized objects are then generated by the three-dimensional map generation unit 127 from the outputs of the attribute adding units 123 and 126.
- FIG. 19 is a specific example of an image obtained by a traffic surveillance video camera.
- FIG. 19 (a) is the surveillance video camera image (perspective image),
- (b) is the plane-developed image converted according to the present invention, and
- (c) is an area-analysis display showing the objects recognized according to the present invention.
- FIG. 20 is a flowchart showing processing steps of object recognition in the traffic monitoring video camera.
- Fig. 21 is a list summarizing the results obtained by object recognition.
- As shown there, the types of passing vehicles, vehicle colors, traffic volume, speeds, accelerations, and vehicle trajectories in the surveillance camera images are obtained from the traffic-monitoring video by image recognition.
- The surveillance video obtained by a traffic surveillance camera is a perspective image (oblique image), as shown in Fig. 19 (a), in which the apparent size and speed of the target vehicles are not uniform.
- This perspective image is digitized and plane-developed to obtain the plane-developed image of the present invention.
- The resulting plan view is as shown in Fig. 19 (b).
- Since the parameters f and θ are determined so that the road surface is developed into a plane, dimensions lying on the road surface, such as the width and length of a vehicle (though not its height), have a uniform, measurable scale at any position. Image recognition for traffic-volume monitoring can therefore be performed.
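The role of f and θ can be illustrated with a generic pinhole road-plane mapping. This is a hypothetical stand-in for the patent's equations (1) and (2), which are not reproduced in this passage; the camera height h and the particular projection model below are assumptions for the sketch only.

```python
import math

def road_to_image(X, Y, f, theta, h):
    """Project road-plane point (X lateral, Y ahead, metres) to pixels (u, v)
    for a camera at height h, tilted down by theta, focal length f in pixels."""
    zc = Y * math.cos(theta) + h * math.sin(theta)   # depth along optical axis
    yc = h * math.cos(theta) - Y * math.sin(theta)   # vertical offset in camera frame
    return f * X / zc, f * yc / zc

def image_to_road(u, v, f, theta, h):
    """Invert the projection: recover (X, Y) on the road plane from (u, v)."""
    Y = h * (f * math.cos(theta) - v * math.sin(theta)) \
        / (v * math.cos(theta) + f * math.sin(theta))
    X = u * (Y * math.cos(theta) + h * math.sin(theta)) / f
    return X, Y

# Once f and theta are fixed, the mapping is exact, so a metre is a metre
# anywhere on the developed road surface: round-trip a point 20 m ahead.
u, v = road_to_image(2.0, 20.0, f=800.0, theta=math.radians(30.0), h=5.0)
X, Y = image_to_road(u, v, f=800.0, theta=math.radians(30.0), h=5.0)
```

This uniform metric scale is what makes vehicle width and length directly measurable anywhere in the developed image, while height (off the plane) is not.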
- Conventionally, it has been common practice to perform vehicle detection and recognition on the perspective image (see Fig. 19 (a)), limiting the recognition area to a part of the screen and detecting and measuring the vehicles passing through that area.
- Since the perspective image can be developed onto a plane, the entire road surface in the image can be used as the vehicle recognition range by using the plane-developed image (see Fig. 19 (b)).
- A moving object (vehicle) captured at the upper part of the plane-developed image moves toward the lower part of the image, which corresponds to its traveling direction.
- Accordingly, a plurality of images can be obtained for the same vehicle.
- A moving-object image traveling from the upper to the lower part of the plane-developed image can typically be acquired as image data over about 30 frames.
- A detailed image analysis (area analysis) can therefore be performed; for example, the accuracy of vehicle-type identification can be improved, enabling accurate image recognition.
- Figure 19 (c) shows the area analysis display.
- On the plane-developed image, recognition accuracy can be further increased by processing such as averaging of images and averaging of processed contour images, which is very effective in image processing.
- Since the scale is the same at every point of the road surface on the plane-developed image, position information can be obtained easily, and the movement trajectory of a moving object can be tracked along with its recognition.
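Because the developed image has a uniform metric scale, speed and acceleration follow from tracked positions by simple arithmetic; a sketch (the function names and the 30 fps default are illustrative, not from the patent):

```python
def speed_kmh(displacement_m, frame_count, fps=30.0):
    """Vehicle speed from plane-developed frames: the metric scale is
    uniform on the developed road surface, so a displacement in metres
    over frame_count frames at fps frames/second gives speed directly."""
    return displacement_m / (frame_count / fps) * 3.6

def acceleration_ms2(v0_kmh, v1_kmh, frame_count, fps=30.0):
    """Acceleration from two successive speed estimates (hypothetical helper)."""
    return (v1_kmh - v0_kmh) / 3.6 / (frame_count / fps)
```

For example, a vehicle displaced 25 m over 30 frames of 30 fps video is travelling 90 km/h, at any position on the developed road surface.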
- Since the scale is uniform, the vehicle position and moving speed (calculated from the number of video frames per second) can be measured anywhere on the road surface, and acceleration and speed are easily calculated. With reference to FIG. 20, the flow of the traffic-volume recognition process using the plane-development technique described above will now be explained in more detail.
- The captured perspective image is digitized (201 in FIG. 20), and a plane-developed image is created from the perspective image (202).
- Moving-object regions are then detected in the plane-developed image: candidate regions are generated by comparing the background image with the current image (203).
- Each candidate region is refined by image arithmetic with the background image; regions with small independent areas are removed for vehicle detection, and the image region of a vehicle candidate is specified by dilation and joining of the remaining regions.
- The background image itself is updated using a Kalman filter or the like (203).
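The candidate-region step and the background update might be sketched as follows, assuming NumPy; the threshold values, the flood-fill region labelling, and the exponential update (a simplified stand-in for the Kalman-filter update mentioned above) are all illustrative choices, not the patent's specific procedure.

```python
import numpy as np

def candidate_mask(background, frame, diff_thresh=25, min_area=4):
    """Vehicle-candidate regions on the plane-developed image: background
    subtraction, then removal of small isolated regions (cf. step 203)."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > diff_thresh
    keep = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        # 4-connected flood fill to measure this region's area.
        region, stack = [], [(i, j)]
        seen[i, j] = True
        while stack:
            y, x = stack.pop()
            region.append((y, x))
            for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        if len(region) >= min_area:          # drop small isolated regions
            for y, x in region:
                keep[y, x] = True
    return keep

def update_background(background, frame, alpha=0.05):
    """Slow exponential blend as a simplified stand-in for the
    Kalman-filter background update the text mentions."""
    return (1 - alpha) * background + alpha * frame

bg = np.zeros((10, 10))
cur = bg.copy()
cur[1:4, 1:4] = 100.0   # 9-pixel vehicle candidate survives
cur[7, 7] = 100.0       # isolated noise pixel is discarded
mask = candidate_mask(bg, cur)
```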
- Each specified candidate region is then analyzed in detail, with processing such as changing the image-processing threshold (204).
- Since the movement amount of a vehicle candidate region can be predicted on the plane-developed image, the region where the vehicle will appear is predicted when the vehicle-presence-region image is obtained; shape corrections such as image size and positional shift are applied, and the corresponding region to be used is analyzed and determined (205).
- The determined corresponding region is registered in a database together with its position information.
- The vehicle type is then determined using the averaged image (208) and registered in the database together with the recognition result (210).
- In this way, information on the various items required for vehicle recognition is obtained for each vehicle ID, such as the passed time (Passed Time), average speed (Speed), acceleration (Acc), vehicle type (Type), and color.
- The registered information is not limited to the items shown in FIG. 21; other information can also be registered.
- For example, a vehicle average image, the vehicle trajectory, and the like can be registered.
- Japanese Patent Application No. Hei 11-975 / 62 Japanese Patent Application No. 2000-1990, Japanese Patent Application No.
- An outline of that method and apparatus is as follows: an information code conversion device converts object information acquired for an object into an information code registered in advance in correspondence with the object, and transmits or outputs the information code; and a reproduction conversion device receives or inputs the information code from the information code conversion device and converts it into reproduction object information registered in advance in correspondence with the information code.
- More specifically, the system comprises: information input means for inputting information on a required object; a first parts warehouse storing information, created in advance, on various objects or their parts and their attributes, together with data obtained by encoding that information; an information code conversion device that compares the information input to the information input means with the information stored in the first parts warehouse and selects and outputs the data for the corresponding information; a second parts warehouse forming a database in the same manner as the first; and an information reproduction conversion device that compares the data output from the information code conversion device with the data stored in the second parts warehouse, selects the information for reproducing the corresponding object, and reproduces and outputs the object by required output means on the basis of that information.
- With this system, one or more objects at arbitrary positions on the displayed video image taken by one or more cameras can be designated by name or attribute, by enclosing an object with the mouse, or by specifying a single point position.
- The designated object is then followed while everything around it is excluded, taking into account changes in the relative angle and direction between the camera and the object as well as changes in distance.
- The features of each image frame of the object, or of each of its components, are detected sequentially in chronological order against a rich store of image data on various features.
- Image frames, or feature image data, corresponding to the continuous change of those features are searched sequentially, and a reproduction image including the two-dimensional or three-dimensional shape pattern-matched to the target object is formed for each time-series change in accordance with the search results.
- Each image frame containing the image, or the feature image data, is fitted to the required size and registration on the required image, on the above image display or on another image display reached via a communication line or the like.
- A three-dimensional map is then constructed by combining and accumulating the three-dimensional CG models of the replaced objects.
- The texture pasting described above makes it possible to paste real textures onto the three-dimensional CG model of each object.
- The plane-developed images can also be connected continuously in the moving direction so that a surrounding image seen from a moving body can be acquired. That is, images in the moving direction, captured by a camera mounted on a moving vehicle, aircraft, ship, or the like, are developed onto a plane and continuously combined into one image.
- For example, a camera mounted on a moving vehicle, aircraft, ship, or the like sequentially captures oblique images, such as road images taken along a road; these are developed onto a plane and joined to obtain a single continuous road image.
- The arrangement of cameras for photographing the road around a bus 201, taken here as the vehicle, is as follows.
- A first camera 201A, which photographs the area in front of the bus 201, is provided at the front of the bus, and a second camera 201B, which photographs the area behind it, is provided at the rear.
- A fourth camera 201D, provided at the left of the bus 201, photographs the left front; a fifth camera 201E, provided at the front right, photographs the right rear; and a sixth camera 201F, provided at the front left, photographs the left rear.
- The areas photographed by these cameras 201A... are shown as fan shapes in the figure. The six road-surface images, each captured in perspective, are developed into a two-dimensional, map-like image by the above equations (1) and (2).
- That is, the coordinates (u, v) in each captured image are converted into the coordinates (x, y) on the map.
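One way to realize this placement of the six developed images on a common map can be sketched as follows; the mounting offsets and headings are illustrative parameters, not taken from the patent. Each camera's developed coordinates, already in metres, are rotated and translated by that camera's pose on the vehicle.

```python
import math

def camera_to_map(x_cam, y_cam, mount_x, mount_y, yaw):
    """Move a plane-developed point (metres, camera-relative) into the
    shared map frame using the camera's mounting offset and heading."""
    c, s = math.cos(yaw), math.sin(yaw)
    return (mount_x + c * x_cam - s * y_cam,
            mount_y + s * x_cam + c * y_cam)

# A point 1 m ahead of a camera mounted at (2, 0) and rotated +90 degrees
# lands at (1, 0) in the map frame.
px, py = camera_to_map(0.0, 1.0, 2.0, 0.0, math.radians(90.0))
```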
- FIG. 24 and FIG. 25 show a specific example; FIG. 24 shows oblique images of nine road sections (1) to (9).
- FIG. 25 shows these images developed into two-dimensional, map-like images by the above equations (1) and (2) and then connected and displayed as a single road, as on a map.
- When a moving-object image is unnecessary for forming the image, it can be deleted; when combining multiple images whose overlapping parts contain a moving object, the images are combined while avoiding the image of the moving object, so that a combined image of only the stationary objects is generated. For example, when vehicles appear on a road, the plane-developed images are combined and connected while avoiding the vehicles, yielding a long road photograph that shows only the road.
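A simple way to combine overlapping developed images while suppressing moving objects is a per-pixel median over the aligned frames; the NumPy sketch below with synthetic data is illustrative, not the patent's specific combining procedure.

```python
import numpy as np

def static_composite(aligned_frames):
    """Combine overlapping plane-developed images so moving objects drop
    out: at each pixel the per-frame median keeps the value seen most of
    the time (the stationary road) and rejects transient vehicles."""
    return np.median(np.stack(aligned_frames), axis=0)

# Road of uniform brightness 50; a bright "vehicle" (200) occupies a
# different pixel in each of three aligned frames, so the median image
# shows only the road.
frames = []
for pos in ((0, 0), (0, 2), (0, 4)):
    f = np.full((1, 6), 50.0)
    f[pos] = 200.0
    frames.append(f)
road_only = static_composite(frames)
```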
- Examples of applied equipment include the display of the road surface around a land vehicle such as a bus, together with building surfaces and the arrangement of telephone poles, street trees, guardrails, and the like; the sea surface around a marine vehicle such as a ship, including its deck and wall surfaces; and the runway and ground surface for an aircraft. In each case a display of all directions, or of the target area, becomes possible.
- That is, images of the surrounding road surface, building surfaces, and arrays of telephone poles, street trees, guardrails, and the like taken by a normal camera 201A attached to a land vehicle such as the bus 201, or images of the sea surface, decks, and wall surfaces of a vehicle such as a ship, or of an aircraft runway and the ground surface, are developed in plan view, enabling an all-around display of the surrounding roads, buildings, telephone poles, street trees, guardrails, and so on. Similarly, although not shown, an all-around display can be produced for a marine vehicle on the sea, including the ship's deck and wall surfaces.
- The present invention can also be applied to building structures: as shown in FIGS. 26 and 27, plane portions such as the floors and walls of a building are photographed with a normal camera inside the building, developed into planes, and displayed in plane-combined form.
- FIG. 26 shows oblique images of a room taken by a normal camera.
- The 16 images (1) to (16) are converted into two-dimensional images by the above equations (1) and (2) and connected as shown in FIG. 27. In this way, an image that cannot actually be captured is obtained: an image in which the floor and walls of a room in a building are developed flat, and in which the surrounding walls are developed relative to the floor.
- Furthermore, the plane-developed image can be extended while being joined in the moving direction.
- By creating a wider development view that also includes vertical planes, a three-dimensional map can be generated.
- That is, images of the road surface, ground surface, or water surface captured by a video camera mounted on a moving body such as a vehicle, aircraft, or ship are developed into plan views, combined by an appropriate method, and extended in the direction of movement to create a map.
- For an object having a vertical surface, such as a building wall, or for a virtual vertical plane in which multiple utility poles, guardrails, and the like are regularly arranged in a plane, an image developed in that plane can likewise be obtained.
- A three-dimensional map is then created by producing a wider vertical-plane development view, including such vertical planes, while extending the joined images in the moving direction.
- Since the present invention is configured as described above, the present method and apparatus can convert an oblique image into a planar image by means of equations (1) and (2), yielding an image that could not actually be photographed.
- Its range of application is accordingly broad.
- Moreover, a desired moving image can be reconstructed by transmitting and receiving only the planar image information and the camera position information, so moving-image data can be exchanged at high speed with a minimal amount of transmitted data. This is particularly effective for video transmission over narrow-band telephone lines and Internet connections.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Description
Claims
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2004521184A JP4273074B2 (ja) | 2002-07-12 | 2003-07-11 | 道路面等の平面対象物映像の平面展開画像処理方法、同逆展開画像変換処理方法及びその平面展開画像処理装置、逆展開画像変換処理装置 |
| AU2003248269A AU2003248269A1 (en) | 2002-07-12 | 2003-07-11 | Road and other flat object video plan-view developing image processing method, reverse developing image conversion processing method, plan-view developing image processing device, and reverse developing image conversion processing device |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2002204879 | 2002-07-12 | ||
| JP2002-204879 | 2002-07-12 | ||
| JP2002253809 | 2002-08-30 | ||
| JP2002-253809 | 2002-08-30 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2004008744A1 true WO2004008744A1 (ja) | 2004-01-22 |
| WO2004008744A9 WO2004008744A9 (ja) | 2004-03-25 |
Family
ID=30117457
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2003/008817 Ceased WO2004008744A1 (ja) | 2002-07-12 | 2003-07-11 | 道路面等の平面対象物映像の平面展開画像処理方法、同逆展開画像変換処理方法及びその平面展開画像処理装置、逆展開画像変換処理装置 |
Country Status (3)
| Country | Link |
|---|---|
| JP (1) | JP4273074B2 (ja) |
| AU (1) | AU2003248269A1 (ja) |
| WO (1) | WO2004008744A1 (ja) |
Cited By (28)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2007172541A (ja) * | 2005-12-26 | 2007-07-05 | Toyota Motor Corp | 運転支援装置 |
| JP2008172310A (ja) * | 2007-01-09 | 2008-07-24 | Fujifilm Corp | 電子式手振れ補正方法及びその装置並びに電子式手振れ補正プログラムと撮像装置 |
| JP2009015583A (ja) * | 2007-07-04 | 2009-01-22 | Nagasaki Univ | 画像処理装置及び画像処理方法 |
| WO2009013823A1 (ja) * | 2007-07-25 | 2009-01-29 | Fujitsu Limited | 動き認識装置および動き認識プログラム |
| JP2009181492A (ja) * | 2008-01-31 | 2009-08-13 | Konica Minolta Holdings Inc | 解析装置 |
| JP2009223220A (ja) * | 2008-03-18 | 2009-10-01 | Zenrin Co Ltd | 路面標示地図生成方法 |
| JP2009223213A (ja) * | 2008-03-18 | 2009-10-01 | Zenrin Co Ltd | 路面標示地図生成方法 |
| JP2009223221A (ja) * | 2008-03-18 | 2009-10-01 | Zenrin Co Ltd | 路面標示地図生成方法 |
| JP2009258651A (ja) * | 2008-03-18 | 2009-11-05 | Zenrin Co Ltd | 路面標示地図生成方法 |
| JP2010503078A (ja) * | 2006-08-30 | 2010-01-28 | ピクトメトリー インターナショナル コーポレーション | モザイク斜め画像、並びにモザイク斜め画像の作成及び使用方法 |
| JP2010175756A (ja) * | 2009-01-29 | 2010-08-12 | Zenrin Co Ltd | 路面標示地図生成方法及び路面標示地図生成装置 |
| JP4553072B1 (ja) * | 2009-03-31 | 2010-09-29 | コニカミノルタホールディングス株式会社 | 画像統合装置および画像統合方法 |
| JP4553071B1 (ja) * | 2009-03-31 | 2010-09-29 | コニカミノルタホールディングス株式会社 | 3次元情報表示装置および3次元情報表示方法 |
| JP2010237798A (ja) * | 2009-03-30 | 2010-10-21 | Equos Research Co Ltd | 画像処理装置および画像処理プログラム |
| JP2010246166A (ja) * | 2010-07-21 | 2010-10-28 | Konica Minolta Holdings Inc | 3次元情報表示装置および3次元情報表示方法 |
| KR20100122688A (ko) * | 2009-05-13 | 2010-11-23 | 삼성전자주식회사 | 카메라를 이용한 문자 인식 장치 및 방법 |
| JP2010277262A (ja) * | 2009-05-27 | 2010-12-09 | Konica Minolta Holdings Inc | 画像処理装置および方法 |
| JP2016081525A (ja) * | 2014-10-10 | 2016-05-16 | アプリケーション・ソリューションズ・(エレクトロニクス・アンド・ヴィジョン)・リミテッド | 車両用画像認識システム、及び対応法 |
| JP2017010082A (ja) * | 2015-06-16 | 2017-01-12 | 株式会社パスコ | 地図作成装置、地図作成方法、およびモービルマッピング用計測装置 |
| WO2018199663A1 (ko) * | 2017-04-26 | 2018-11-01 | (주)디렉션 | 원근법 대응 이미지 보정 방법 및 장치 |
| KR20180120122A (ko) * | 2018-05-23 | 2018-11-05 | (주)디렉션 | 원근법 대응 이미지 보정 방법 및 장치 |
| EP3306267A4 (en) * | 2015-05-27 | 2019-01-23 | Kyocera Corporation | ARITHMETIC LOGICAL DEVICE, CAMERA DEVICE, VEHICLE AND CALIBRATION METHOD |
| CN110148135A (zh) * | 2019-04-03 | 2019-08-20 | 深兰科技(上海)有限公司 | 一种路面分割方法、装置、设备及介质 |
| US11644331B2 (en) | 2020-02-28 | 2023-05-09 | International Business Machines Corporation | Probe data generating system for simulator |
| US11702101B2 (en) | 2020-02-28 | 2023-07-18 | International Business Machines Corporation | Automatic scenario generator using a computer for autonomous driving |
| JP2023110400A (ja) * | 2022-01-28 | 2023-08-09 | 株式会社コア | 情報処理装置、情報処理方法及び情報処理プログラム |
| US11814080B2 (en) | 2020-02-28 | 2023-11-14 | International Business Machines Corporation | Autonomous driving evaluation using data analysis |
| JP2024513145A (ja) * | 2021-03-22 | 2024-03-22 | タスグローバル カンパニーリミテッド | 船舶清掃ロボットの複数のカメラ映像を1つの映像に処理する映像処理方法、コンピュータ判読可能記録媒体、コンピュータプログラム及びこれを利用したロボット制御方法 |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9330471B2 (en) * | 2013-02-14 | 2016-05-03 | Qualcomm Incorporated | Camera aided motion direction and speed estimation |
| WO2014192316A1 (ja) * | 2013-05-31 | 2014-12-04 | パナソニックIpマネジメント株式会社 | モデリング装置、3次元モデル生成装置、モデリング方法、プログラム、レイアウトシミュレータ |
| CN105096252B (zh) * | 2015-07-29 | 2018-04-10 | 广州遥感信息科技有限公司 | 一种带状全方位街景影像图的制作方法 |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPS6216073B2 (ja) * | 1981-12-23 | 1987-04-10 | Hino Motors Ltd | |
| JPH10255071A (ja) * | 1997-03-10 | 1998-09-25 | Iwane Kenkyusho:Kk | 画像処理システム |
| JP2001141423A (ja) * | 1999-11-11 | 2001-05-25 | Fuji Photo Film Co Ltd | 画像撮像装置及び画像処理装置 |
| JP2001141422A (ja) * | 1999-11-10 | 2001-05-25 | Fuji Photo Film Co Ltd | 画像撮像装置及び画像処理装置 |
| JP3286306B2 (ja) * | 1998-07-31 | 2002-05-27 | 松下電器産業株式会社 | 画像生成装置、画像生成方法 |
| JP3300334B2 (ja) * | 1999-04-16 | 2002-07-08 | 松下電器産業株式会社 | 画像処理装置および監視システム |
-
2003
- 2003-07-11 WO PCT/JP2003/008817 patent/WO2004008744A1/ja not_active Ceased
- 2003-07-11 JP JP2004521184A patent/JP4273074B2/ja not_active Expired - Lifetime
- 2003-07-11 AU AU2003248269A patent/AU2003248269A1/en not_active Abandoned
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPS6216073B2 (ja) * | 1981-12-23 | 1987-04-10 | Hino Motors Ltd | |
| JPH10255071A (ja) * | 1997-03-10 | 1998-09-25 | Iwane Kenkyusho:Kk | 画像処理システム |
| JP3286306B2 (ja) * | 1998-07-31 | 2002-05-27 | 松下電器産業株式会社 | 画像生成装置、画像生成方法 |
| JP3300334B2 (ja) * | 1999-04-16 | 2002-07-08 | 松下電器産業株式会社 | 画像処理装置および監視システム |
| JP2001141422A (ja) * | 1999-11-10 | 2001-05-25 | Fuji Photo Film Co Ltd | 画像撮像装置及び画像処理装置 |
| JP2001141423A (ja) * | 1999-11-11 | 2001-05-25 | Fuji Photo Film Co Ltd | 画像撮像装置及び画像処理装置 |
Cited By (40)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2007172541A (ja) * | 2005-12-26 | 2007-07-05 | Toyota Motor Corp | 運転支援装置 |
| JP2010503078A (ja) * | 2006-08-30 | 2010-01-28 | ピクトメトリー インターナショナル コーポレーション | モザイク斜め画像、並びにモザイク斜め画像の作成及び使用方法 |
| JP2008172310A (ja) * | 2007-01-09 | 2008-07-24 | Fujifilm Corp | 電子式手振れ補正方法及びその装置並びに電子式手振れ補正プログラムと撮像装置 |
| JP2009015583A (ja) * | 2007-07-04 | 2009-01-22 | Nagasaki Univ | 画像処理装置及び画像処理方法 |
| WO2009013823A1 (ja) * | 2007-07-25 | 2009-01-29 | Fujitsu Limited | 動き認識装置および動き認識プログラム |
| US8229173B2 (en) | 2008-01-31 | 2012-07-24 | Konica Minolta Holdings, Inc. | Analyzer |
| JP2009181492A (ja) * | 2008-01-31 | 2009-08-13 | Konica Minolta Holdings Inc | 解析装置 |
| JP2009258651A (ja) * | 2008-03-18 | 2009-11-05 | Zenrin Co Ltd | 路面標示地図生成方法 |
| JP2009223221A (ja) * | 2008-03-18 | 2009-10-01 | Zenrin Co Ltd | 路面標示地図生成方法 |
| JP2009223213A (ja) * | 2008-03-18 | 2009-10-01 | Zenrin Co Ltd | 路面標示地図生成方法 |
| JP2009223220A (ja) * | 2008-03-18 | 2009-10-01 | Zenrin Co Ltd | 路面標示地図生成方法 |
| JP2010175756A (ja) * | 2009-01-29 | 2010-08-12 | Zenrin Co Ltd | 路面標示地図生成方法及び路面標示地図生成装置 |
| JP2010237798A (ja) * | 2009-03-30 | 2010-10-21 | Equos Research Co Ltd | 画像処理装置および画像処理プログラム |
| JP4553072B1 (ja) * | 2009-03-31 | 2010-09-29 | コニカミノルタホールディングス株式会社 | 画像統合装置および画像統合方法 |
| JP4553071B1 (ja) * | 2009-03-31 | 2010-09-29 | コニカミノルタホールディングス株式会社 | 3次元情報表示装置および3次元情報表示方法 |
| WO2010113239A1 (ja) * | 2009-03-31 | 2010-10-07 | コニカミノルタホールディングス株式会社 | 画像統合装置および画像統合方法 |
| WO2010113253A1 (ja) * | 2009-03-31 | 2010-10-07 | コニカミノルタホールディングス株式会社 | 3次元情報表示装置および3次元情報表示方法 |
| US9415723B2 (en) | 2009-03-31 | 2016-08-16 | Konica Minolta Holdings, Inc. | Image integration unit and image integration method |
| KR101577824B1 (ko) * | 2009-05-13 | 2015-12-29 | 삼성전자주식회사 | 카메라를 이용한 문자 인식 장치 및 방법 |
| KR20100122688A (ko) * | 2009-05-13 | 2010-11-23 | 삼성전자주식회사 | 카메라를 이용한 문자 인식 장치 및 방법 |
| JP2010277262A (ja) * | 2009-05-27 | 2010-12-09 | Konica Minolta Holdings Inc | 画像処理装置および方法 |
| JP2010246166A (ja) * | 2010-07-21 | 2010-10-28 | Konica Minolta Holdings Inc | 3次元情報表示装置および3次元情報表示方法 |
| JP2016081525A (ja) * | 2014-10-10 | 2016-05-16 | アプリケーション・ソリューションズ・(エレクトロニクス・アンド・ヴィジョン)・リミテッド | 車両用画像認識システム、及び対応法 |
| EP3306267A4 (en) * | 2015-05-27 | 2019-01-23 | Kyocera Corporation | ARITHMETIC LOGICAL DEVICE, CAMERA DEVICE, VEHICLE AND CALIBRATION METHOD |
| JP2017010082A (ja) * | 2015-06-16 | 2017-01-12 | 株式会社パスコ | 地図作成装置、地図作成方法、およびモービルマッピング用計測装置 |
| CN110546681A (zh) * | 2017-04-26 | 2019-12-06 | D睿科逊公司 | 远近法对应图像校正方法及装置 |
| WO2018199663A1 (ko) * | 2017-04-26 | 2018-11-01 | (주)디렉션 | 원근법 대응 이미지 보정 방법 및 장치 |
| US10970826B2 (en) | 2017-04-26 | 2021-04-06 | D Rection, Inc. | Method and device for image correction in response to perspective |
| CN110546681B (zh) * | 2017-04-26 | 2023-10-20 | D睿科逊公司 | 远近法对应图像校正方法及装置 |
| KR20180120122A (ko) * | 2018-05-23 | 2018-11-05 | (주)디렉션 | 원근법 대응 이미지 보정 방법 및 장치 |
| KR101978271B1 (ko) * | 2018-05-23 | 2019-05-14 | (주)디렉션 | 원근법 대응 이미지 보정 방법 및 장치 |
| CN110148135A (zh) * | 2019-04-03 | 2019-08-20 | 深兰科技(上海)有限公司 | 一种路面分割方法、装置、设备及介质 |
| US11702101B2 (en) | 2020-02-28 | 2023-07-18 | International Business Machines Corporation | Automatic scenario generator using a computer for autonomous driving |
| US11644331B2 (en) | 2020-02-28 | 2023-05-09 | International Business Machines Corporation | Probe data generating system for simulator |
| US11814080B2 (en) | 2020-02-28 | 2023-11-14 | International Business Machines Corporation | Autonomous driving evaluation using data analysis |
| US12275434B2 (en) | 2020-02-28 | 2025-04-15 | International Business Machines Corporation | Autonomous driving evaluation using data analysis |
| JP2024513145A (ja) * | 2021-03-22 | 2024-03-22 | タスグローバル カンパニーリミテッド | 船舶清掃ロボットの複数のカメラ映像を1つの映像に処理する映像処理方法、コンピュータ判読可能記録媒体、コンピュータプログラム及びこれを利用したロボット制御方法 |
| JP7621011B2 (ja) | 2021-03-22 | 2025-01-24 | タスグローバル カンパニーリミテッド | 船舶清掃ロボットの複数のカメラ映像を1つの映像に処理する映像処理方法、コンピュータ判読可能記録媒体、コンピュータプログラム及びこれを利用したロボット制御方法 |
| JP2023110400A (ja) * | 2022-01-28 | 2023-08-09 | 株式会社コア | 情報処理装置、情報処理方法及び情報処理プログラム |
| JP7770938B2 (ja) | 2022-01-28 | 2025-11-17 | 株式会社コア | 情報処理装置、情報処理方法及び情報処理プログラム |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2004008744A9 (ja) | 2004-03-25 |
| AU2003248269A1 (en) | 2004-02-02 |
| JP4273074B2 (ja) | 2009-06-03 |
| JPWO2004008744A1 (ja) | 2005-11-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2004008744A1 (ja) | 道路面等の平面対象物映像の平面展開画像処理方法、同逆展開画像変換処理方法及びその平面展開画像処理装置、逆展開画像変換処理装置 | |
| Sekkat et al. | SynWoodScape: Synthetic surround-view fisheye camera dataset for autonomous driving | |
| US8000895B2 (en) | Navigation and inspection system | |
| Zhao et al. | Alignment of continuous video onto 3D point clouds | |
| KR101235815B1 (ko) | 촬영 위치 해석 장치, 촬영 위치 해석 방법, 기록 매체 및 화상 데이터 취득 장치 | |
| Pfeiffer et al. | Efficient representation of traffic scenes by means of dynamic stixels | |
| Kanade et al. | Advances in cooperative multi-sensor video surveillance | |
| JP4185052B2 (ja) | 拡張仮想環境 | |
| JP4355535B2 (ja) | 360度画像変換処理装置 | |
| US8963943B2 (en) | Three-dimensional urban modeling apparatus and method | |
| JP4854819B2 (ja) | 画像情報出力方法 | |
| JP2020008664A (ja) | ドライビングシミュレーター | |
| JP2004265396A (ja) | 映像生成システム及び映像生成方法 | |
| Zhao et al. | Alignment of continuous video onto 3D point clouds | |
| Edelman et al. | Tracking people and cars using 3D modeling and CCTV | |
| JP4530214B2 (ja) | 模擬視界発生装置 | |
| WO2008082423A1 (en) | Navigation and inspection system | |
| CN118864778A (zh) | 一种虚实融合方法、装置、设备及存储介质 | |
| Gotoh et al. | Geometry reconstruction of urban scenes by tracking vertical edges | |
| Kokkas | An investigation into semi-automated 3D city modelling | |
| Sirmacek | Quality assessment and comparison of smartphone and leica c10 laser scanner based point clouds | |
| US7400343B2 (en) | Apparatus and method for generating image | |
| Ikeuchi et al. | Constructing virtual cities with real activities | |
| Molina et al. | Mosaic-based modeling and rendering of large-scale dynamic scenes for internet applications | |
| You | Large-Scale 3D Scene Modeling |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| COP | Corrected version of pamphlet |
Free format text: PAGE 20/27, DRAWINGS, ADDED |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2004521184 Country of ref document: JP |
|
| 122 | Ep: pct application non-entry in european phase |