
CN112215033B - Method, device and system for generating panoramic looking-around image of vehicle and storage medium - Google Patents


Info

Publication number
CN112215033B
Authority
CN
China
Prior art keywords
scene
information
image
vehicle
stereoscopic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910616903.5A
Other languages
Chinese (zh)
Other versions
CN112215033A (en)
Inventor
王泽文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910616903.5A
Publication of CN112215033A
Application granted
Publication of CN112215033B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method, a device, a system and a storage medium for generating a panoramic looking-around image of a vehicle, and belongs to the technical field of vehicles. The method comprises the following steps: if it is detected that the vehicle has switched from a first scene to a second scene, determining residual scene information based on first scene information and second scene information, wherein the first scene information comprises a first scene image around the vehicle and the second scene information comprises a second scene image around the vehicle; updating a first three-dimensional scene model based on the residual scene information to obtain a second three-dimensional scene model; and determining a panoramic looking-around image of the vehicle in the second scene based on the second scene image and the second three-dimensional scene model. The application can adaptively update the stereoscopic scene model according to dynamic scene information to obtain a dynamic stereoscopic scene model that adapts to scene changes, which effectively solves the problem of objects being severely stretched and deformed when a panoramic looking-around image is generated from a fixed stereoscopic scene model.

Description

Method, device and system for generating panoramic looking-around image of vehicle and storage medium
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to a method, an apparatus, a system, and a storage medium for generating a panoramic looking-around image of a vehicle.
Background
The panoramic looking-around image of a vehicle is a panoramic image that can show the 360-degree scene around the vehicle; it can be obtained by processing images acquired by a plurality of cameras arranged around the vehicle and mapping the acquired images into a three-dimensional space. Through the panoramic looking-around image, a driver can intuitively check whether obstacles exist at any angle around the vehicle and learn their relative positions and distances, which enlarges the driver's field of view and effectively reduces accidents such as scraping and collision.
In the related art, the panoramic looking-around image of a vehicle is generally generated based on a fixed stereoscopic scene model. Specifically, when the vehicle is driving at low speed or parked, scene information of the scene in which the vehicle is located may be acquired, including scene images around the vehicle captured by a plurality of cameras arranged around the vehicle. Then, based on the acquired scene information, a three-dimensional scene model representing the current scene in three-dimensional space is constructed; based on the scene images and the camera parameters, the spatial mapping relationship between the scene images and the actual scene is determined; and based on this spatial mapping relationship, the scene images are mapped into the three-dimensional scene model to obtain the panoramic looking-around image of the vehicle in the current scene. The constructed three-dimensional scene model is then used as a fixed stereoscopic scene model, and scene images acquired at any later moment are mapped into it to obtain the panoramic looking-around image of the vehicle at that moment.
However, the scene in which the vehicle is located is not fixed. After the scene changes, the fixed stereoscopic scene model can only represent the scene before the change in three-dimensional space and cannot accurately represent the changed scene, so if the changed scene images are still mapped into the original fixed stereoscopic scene model, objects in the resulting panoramic looking-around image will be severely stretched and deformed.
Disclosure of Invention
The embodiments of the application provide a method, a device, a system and a storage medium for generating a panoramic looking-around image of a vehicle, which address the problem in the related art that objects in the image are severely stretched and deformed when the panoramic looking-around image is generated based on a fixed three-dimensional scene model. The technical scheme is as follows:
in one aspect, a method for generating a panoramic looking-around image of a vehicle is provided, the method comprising:
if it is detected that the vehicle switches from a first scene to a second scene, determining residual scene information based on first scene information and second scene information;
wherein the residual scene information is used to indicate a scene change condition of the second scene relative to the first scene, the first scene information including a first scene image around the vehicle, the second scene information including a second scene image around the vehicle;
updating a first stereoscopic scene model based on the residual scene information to obtain a second stereoscopic scene model, wherein the first stereoscopic scene model is used for representing the first scene in a three-dimensional space, and the second stereoscopic scene model is used for representing the second scene in the three-dimensional space;
and determining a panoramic looking-around image of the vehicle in the second scene based on the second scene image and the second stereoscopic scene model.
Optionally, the determining residual scene information based on the first scene information and the second scene information includes:
determining second point cloud information based on the second scene information, wherein the second point cloud information is used for indicating a coordinate set of the second scene in a three-dimensional space;
determining the residual between the second point cloud information and the first point cloud information to obtain residual point cloud information, wherein the first point cloud information is used for indicating a coordinate set of the first scene in a three-dimensional space;
and determining the residual point cloud information as the residual scene information.
Optionally, the updating the first stereoscopic scene model based on the residual scene information to obtain a second stereoscopic scene model includes:
quantizing the residual scene information to obtain quantized residual scene information;
and updating the first stereoscopic scene model based on the quantized residual scene information to obtain the second stereoscopic scene model.
Optionally, the residual scene information is residual point cloud information, the residual point cloud information is a residual between second point cloud information and first point cloud information, the first point cloud information is used for indicating a coordinate set of the first scene in a three-dimensional space, and the second point cloud information is used for indicating a coordinate set of the second scene in the three-dimensional space;
the step of quantizing the residual scene information to obtain quantized residual scene information includes:
quantizing the residual point cloud information to obtain quantized residual point cloud information;
the updating the first stereoscopic scene model based on the quantized residual scene information to obtain the second stereoscopic scene model includes:
and summing the quantized residual point cloud information with the first stereoscopic scene model to obtain the second stereoscopic scene model.
Optionally, the determining, based on the second scene image and the second stereoscopic scene model, a panoramic looking-around image of the vehicle in the second scene includes:
acquiring a first space mapping relation between the first scene image and an actual scene;
determining a second spatial mapping relation between a first sub-image and an actual scene based on the residual scene information and the shooting parameters, wherein the first sub-image is a partial image corresponding to a changed scene in the second scene image, and the changed scene is a partial scene of the second scene which is changed relative to the first scene;
and mapping the second scene image into the second stereoscopic scene model based on the first mapping relation and the second mapping relation to obtain a panoramic looking-around image of the vehicle in the second scene.
Optionally, the mapping the second scene image into the second stereoscopic scene model based on the first mapping relationship and the second mapping relationship includes:
mapping a second sub-image in the second scene image into the second stereoscopic scene model based on the first mapping relation, wherein the second sub-image refers to a part of the second scene image corresponding to an unchanged scene, and the unchanged scene refers to a part of the second scene which is unchanged relative to the first scene;
and mapping the first sub-image in the second scene image into the second stereoscopic scene model based on the second mapping relation.
Optionally, before determining the residual scene information based on the first scene information and the second scene information, the method further includes:
acquiring first scene information corresponding to a first scene where the vehicle is located;
constructing the first stereoscopic scene model based on the first scene information;
and determining a first space mapping relation between the first scene image and an actual scene based on the first scene information and the shooting parameters.
Optionally, after determining the first spatial mapping relationship between the first scene image and the actual scene based on the first scene information and the image capturing parameter, the method further includes:
and mapping the first scene image into the first stereoscopic scene model based on the first space mapping relation to obtain a panoramic looking-around image of the vehicle in the first scene.
In one aspect, a device for generating a panoramic looking-around image of a vehicle is provided, the device comprising:
the first determining module is used for determining residual scene information based on the first scene information and the second scene information if the vehicle is detected to be switched from the first scene to the second scene;
wherein the residual scene information is used to indicate a scene change condition of the second scene relative to the first scene, the first scene information including a first scene image around the vehicle, the second scene information including a second scene image around the vehicle;
the updating module is used for updating a first stereoscopic scene model based on the residual scene information to obtain a second stereoscopic scene model, wherein the first stereoscopic scene model is used for representing the first scene in a three-dimensional space, and the second stereoscopic scene model is used for representing the second scene in the three-dimensional space;
and the second determining module is used for determining a panoramic all-round image of the vehicle in the second scene based on the second scene image and the second stereoscopic scene model.
Optionally, the first determining module is configured to:
determining second point cloud information based on the second scene information, wherein the second point cloud information is used for indicating a coordinate set of the second scene in a three-dimensional space;
determining the residual between the second point cloud information and the first point cloud information to obtain residual point cloud information, wherein the first point cloud information is used for indicating a coordinate set of the first scene in a three-dimensional space;
and determining the residual point cloud information as the residual scene information.
Optionally, the updating module includes:
the quantization unit is used for quantizing the residual scene information to obtain quantized residual scene information;
and the updating unit is used for updating the first stereoscopic scene model based on the quantized residual scene information to obtain the second stereoscopic scene model.
Optionally, the residual scene information is residual point cloud information, the residual point cloud information is a residual between second point cloud information and first point cloud information, the first point cloud information is used for indicating a coordinate set of the first scene in a three-dimensional space, and the second point cloud information is used for indicating a coordinate set of the second scene in the three-dimensional space;
the quantization unit is used for quantizing the residual point cloud information to obtain quantized residual point cloud information;
and the updating unit is used for summing the quantized residual point cloud information with the first stereoscopic scene model to obtain the second stereoscopic scene model.
Optionally, the second determining module includes:
the acquisition unit is used for acquiring a first space mapping relation between the first scene image and the actual scene;
the determining unit is used for determining a second space mapping relation between a first sub-image and an actual scene based on the residual scene information and the shooting parameters, wherein the first sub-image is a partial image corresponding to a changed scene in the second scene image, and the changed scene is a partial scene of the second scene which is changed relative to the first scene;
and the mapping unit is used for mapping the second scene image into the second stereoscopic scene model based on the first mapping relation and the second mapping relation to obtain a panoramic looking-around image of the vehicle in the second scene.
Optionally, the mapping unit is configured to:
mapping a second sub-image in the second scene image into the second stereoscopic scene model based on the first mapping relation, wherein the second sub-image refers to a part of the second scene image corresponding to an unchanged scene, and the unchanged scene refers to a part of the second scene which is unchanged relative to the first scene;
and mapping the first sub-image in the second scene image into the second stereoscopic scene model based on the second mapping relation.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring the first scene information corresponding to the first scene where the vehicle is located;
the construction module is used for constructing the first stereoscopic scene model based on the first scene information;
and the third determining module is used for determining a first space mapping relation between the first scene image and the actual scene based on the first scene information and the shooting parameters.
Optionally, the apparatus further comprises:
and the mapping module is used for mapping the first scene image into the first stereoscopic scene model based on the first space mapping relation to obtain a panoramic looking-around image of the vehicle in the first scene.
In one aspect, a vehicle panoramic looking-around system is provided, the system comprising a sensing element, a processor, and a display, the sensing element comprising a plurality of cameras disposed around the vehicle;
the sensing element is used for collecting scene information of a scene where the vehicle is located, and the scene information comprises scene images around the vehicle;
the processor is used for realizing the generation method of the panoramic looking-around image of the vehicle;
The display is used for displaying the panoramic looking-around image of the vehicle.
Optionally, the sensing element further includes a distance sensor disposed on the vehicle, the distance sensor including at least one of an optical distance sensor, an infrared distance sensor, and an ultrasonic distance sensor.
In one aspect, a non-transitory computer-readable storage medium is provided; when instructions in the storage medium are executed by a processor of a device, the device is enabled to perform any of the methods for generating a panoramic looking-around image of a vehicle described above.
The technical scheme provided by the embodiment of the application has the beneficial effects that:
in the embodiment of the application, if the vehicle is detected to be switched from the first scene to the second scene, residual scene information which can indicate the scene change condition of the second scene relative to the first scene can be determined based on the first scene information and the second scene information, then the first stereoscopic scene model corresponding to the first scene is updated based on the residual scene information, so that a second stereoscopic scene model adapting to the second scene is obtained, and then the panoramic looking-around image of the vehicle in the second scene is determined based on the second scene image and the second stereoscopic scene model. That is, the application can adaptively update the stereoscopic scene model according to the dynamically changed scene information to obtain the dynamic stereoscopic scene model which can adapt to the scene change, and then, based on the changed scene information and the dynamic stereoscopic scene model, the panoramic looking-around image which can accurately reflect the scene change can be generated, thereby effectively avoiding the problem of serious stretching and deformation of objects in the image caused when the panoramic looking-around image is generated based on the fixed stereoscopic scene model. In addition, when the first stereoscopic scene model is updated based on residual scene information, only the changed partial scenes are updated, and the partial scenes which are not changed are not updated, so that the model updating efficiency is improved, and the panoramic image generating efficiency is further improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of a panoramic all-around system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a panoramic all-around system according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for generating a panoramic looking-around image of a vehicle according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a three-dimensional scene model reconstruction process according to an embodiment of the present application;
FIG. 5 is a schematic diagram comparing point cloud information before and after quantization according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a panoramic all-around image generation process provided by an embodiment of the present application;
FIG. 7 is a block diagram of a device for generating a panoramic looking-around image of a vehicle according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a device for generating a panoramic looking-around image of a vehicle according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Before explaining the embodiment of the present application in detail, an application scenario of the embodiment of the present application is described.
The method for generating a panoramic looking-around image of a vehicle provided by the embodiments of the application can be applied to scenes in which a driver needs to check the surroundings of the vehicle while driving. For example, it may be applied to special driving scenarios such as reversing, mountain driving, curve driving or driving in congestion, as well as to normal driving scenarios; the application is not limited in this regard.
For example, in narrow and congested urban areas, parking lots and other scenes where collisions and scrapes easily occur, a panoramic looking-around image of the vehicle can be generated by the method provided by the embodiments of the application and displayed to the driver, so that the driver can perceive the 360-degree omnidirectional environment around the vehicle more clearly and intuitively, avoiding blind spots in the field of view and thus avoiding collisions and scrapes.
Next, an implementation environment related to the embodiment of the present application will be briefly described.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. Referring to Fig. 1, the environment includes an in-vehicle see-around system 100 mounted on a vehicle. The in-vehicle see-around system 100 includes, but is not limited to, a sensing element 110 and an in-vehicle see-around device 120, and the in-vehicle see-around device 120 may include a communication element 121, a processor 122, and a display 123.
The sensing element 110 may include, but is not limited to, cameras, optical sensors, infrared sensors, ultrasonic sensors, an odometer, wheel pulse sensors, and the like. The sensing element 110 may acquire a plurality of captured images around the vehicle, information on surrounding objects, the speed of the vehicle, or the wheel angle of the vehicle. As an example, the optical sensor may be a laser sensor and the ultrasonic sensor may be a radar. Illustratively, referring to Fig. 2, the sensing element 110 may include cameras 11, 12, 13, and 14 mounted around the vehicle, and can simultaneously capture 4 images of the vehicle's surroundings through these 4 cameras.
The communication element 121 is configured to transmit the scene information including the plurality of captured images around the vehicle acquired by the sensing element 110 to the processor 122. Optionally, the communication element 121 may also transmit instruction information sent by the user to the processor 122. When the display 123 is a touchable resistive display or capacitive display, the user may select an image to be displayed by clicking or sliding the display interface, thereby triggering instruction information.
After receiving the scene information, the processor 122 may generate a panoramic looking-around image of the vehicle according to the method provided by the embodiment of the present application. The processor 122 may be, for example, a general purpose central processing unit (Central Processing Unit, CPU), microprocessor, application-specific integrated circuit (ASIC), or may be one or more integrated circuits for controlling the execution of the program of the present application.
The display 123 is used to display the generated panoramic all-around image and the like. By way of example, the display 123 may be a resistive display, a capacitive display, a liquid crystal display (liquid crystal display, LCD), a light emitting diode (light emitting diode, LED) display, a Cathode Ray Tube (CRT) display, or a projector, etc.
Next, the method for generating a panoramic looking-around image of a vehicle provided by the embodiments of the present application is described in detail. Fig. 3 is a flowchart of such a method. The method may be applied to the vehicle-mounted looking-around device shown in Fig. 1 and, as shown in Fig. 3, includes the following steps:
step 301: and acquiring first scene information corresponding to a first scene where the vehicle is located, wherein the first scene information comprises first scene images around the vehicle.
The vehicle is a vehicle for obtaining a panoramic looking-around image, and can be a car, a truck, a passenger car or the like. The first scene in which the vehicle is located refers to the environment surrounding the vehicle.
The first scene information refers to scene information of a first scene, and is used to indicate the first scene, which includes at least a first scene image around the vehicle. The first scene image may be a look-around image of the vehicle, such as an image captured by a plurality of cameras disposed around the vehicle. In addition, the first scene information may further include object information acquired by a distance sensor provided on the vehicle, based on which the distance and angle of objects around the vehicle may be acquired.
In some examples, a plurality of cameras are installed around the vehicle and a radar is also installed on the vehicle. The vehicle-mounted looking-around device may acquire the scene images around the vehicle captured by the plurality of cameras together with the radar information, and use the acquired scene images and radar information as the first scene information.
In some examples, the first scene information of the vehicle may be acquired in real time, periodically, or upon satisfaction of an information acquisition condition. For example, the information acquisition condition may be that the vehicle is in a stationary state or a low-speed running state, or the like, or that a panoramic all-around function is detected to be started, or the like.
Step 302: constructing a first stereoscopic scene model based on the first scene information, the first stereoscopic scene model being used to characterize the first scene in three-dimensional space.
Based on the acquired first scene information, a first stereoscopic scene model corresponding to the first scene can be constructed first.
In some examples, first point cloud information may be determined based on the first scene information, the first point cloud information referring to point cloud information of the first scene, and then the first stereoscopic scene model is constructed based on the first point cloud information. The first point cloud information is used for indicating a coordinate set of the first scene in the three-dimensional space, namely indicating a coordinate set of the first stereoscopic scene model, so that the first stereoscopic scene model can be constructed based on the first point cloud information.
In one possible implementation, the first point cloud information may be recovered from the first scene information by a structure-from-motion (SfM) technique. Of course, the first point cloud information may also be determined in other manners, which is not limited in the embodiments of the present application.
In some examples, after the first point cloud information is obtained, it may be quantized to obtain quantized first point cloud information, and the first stereoscopic scene model is then constructed based on the quantized first point cloud information. Quantizing the first point cloud information reduces the point cloud density corresponding to the first scene and filters out point cloud outliers (singular points), making the point cloud distribution more uniform. In one possible implementation, the first point cloud information may be quantized by processing such as smooth sampling.
For example, any coordinate point included in the first point cloud information generally has coordinate values in 3 dimensions, namely x-axis, y-axis and z-axis coordinate values, and when a coordinate point is quantized, the coordinate value of each dimension is quantized. For the coordinate value of each dimension, a correspondence between coordinate values and quantized coordinate values may be preset, and the quantized coordinate value is then determined from the coordinate value of that dimension and the correspondence. The correspondence includes a plurality of coordinate intervals and a quantized coordinate value for each interval, where the quantized coordinate value of an interval is usually a specific value within it: for example, the value at the head, middle or tail of the interval, or the mean value of the interval.
For example, for any coordinate point A(x, y, z) included in the first point cloud information, the coordinate value of each dimension of the coordinate point A may be processed as follows to obtain the quantized coordinate values on the x-axis, the y-axis and the z-axis respectively.
Quantization of the x-axis coordinate:
X = x_th(i), if x_th(i) ≤ x < x_th(i+1), with 0 ≤ i < n
where x is the x-axis coordinate value before quantization; x_th(0), x_th(1), x_th(2), …, x_th(n) are n+1 preset thresholds that divide the x-axis coordinates into a plurality of coordinate intervals; and X is the quantized x-axis coordinate value, here the representative value (the head) of the interval containing x.
Quantization of the y-axis coordinate:
Y = y_th(i), if y_th(i) ≤ y < y_th(i+1), with 0 ≤ i < n
where y is the y-axis coordinate value before quantization; y_th(0), y_th(1), y_th(2), …, y_th(n) are n+1 preset thresholds that divide the y-axis coordinates into a plurality of coordinate intervals; and Y is the quantized y-axis coordinate value.
Quantization of the z-axis coordinate:
Z = z_th(i), if z_th(i) ≤ z < z_th(i+1), with 0 ≤ i < n
where z is the z-axis coordinate value before quantization; z_th(0), z_th(1), z_th(2), …, z_th(n) are n+1 preset thresholds that divide the z-axis coordinates into a plurality of coordinate intervals; and Z is the quantized z-axis coordinate value.
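A minimal sketch of this interval-based quantization (in Python with NumPy; the function names and threshold arrays are illustrative assumptions, since the patent does not disclose concrete threshold values) that takes the head of each interval as its representative value:

```python
import numpy as np

def quantize_axis(values, thresholds):
    """Replace each value by the head of the (sorted) threshold interval containing it.
    Values outside the threshold range are clamped to the first or last interval."""
    thresholds = np.asarray(thresholds)
    # interval index i such that thresholds[i] <= v < thresholds[i+1]
    i = np.clip(np.searchsorted(thresholds, values, side="right") - 1,
                0, len(thresholds) - 2)
    return thresholds[i]

def quantize_point_cloud(points, x_th, y_th, z_th):
    """Quantize an (N, 3) point cloud axis by axis."""
    return np.stack([quantize_axis(points[:, 0], x_th),
                     quantize_axis(points[:, 1], y_th),
                     quantize_axis(points[:, 2], z_th)], axis=1)
```

Swapping `thresholds[i]` for the interval midpoint or mean gives the other representative-value choices mentioned above.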
Step 303: a first spatial mapping relationship between the first scene image and the actual scene is determined based on the first scene information and the camera parameters.
The first spatial mapping relationship refers to the spatial mapping relationship between the first scene image and the actual scene, and indicates the position in actual-scene space to which a given point in the first scene image is mapped. The camera parameters are used to indicate the imaging principle between the actual scene and the scene image, and the first spatial mapping relationship between the first scene image and the actual scene can be determined based on them. For example, the camera parameters may be the parameters of the cameras that capture the first scene image.
In some examples, the camera parameters include an internal parameter for indicating a spatial mapping relationship between the image space and the camera space and an external parameter for indicating a spatial mapping relationship between the camera space and the actual space, and thus the first spatial mapping relationship may be obtained based on the first scene information, the internal parameter, and the external parameter.
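As an illustration of how the internal and external parameters combine into such a spatial mapping, the following is a minimal pinhole-model sketch (the patent does not commit to a specific camera model; real surround-view cameras are typically fisheye and would additionally require a distortion model):

```python
import numpy as np

def world_to_pixel(points_w, K, R, t):
    """Project (N, 3) world points to pixel coordinates: the external
    parameters (R, t) map world space to camera space, and the internal
    parameter matrix K maps camera space to the image plane."""
    pts_cam = points_w @ R.T + t        # world -> camera space (external parameters)
    uvw = pts_cam @ K.T                 # camera -> homogeneous image coords (internal parameters)
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide -> (u, v) pixels
```

Evaluating this relationship per vertex of the stereoscopic scene model is one way to obtain the image-to-model mapping used for rendering.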
In some examples, after step 303, the first scene image may also be mapped into the first stereoscopic scene model based on the first spatial mapping relationship, resulting in a panoramic looking-around image of the vehicle in the first scene.
Step 304: if the vehicle is detected to be switched from the first scene to the second scene, determining residual scene information based on the first scene information and the second scene information, wherein the residual scene information is used for indicating scene change conditions of the second scene relative to the first scene, and the second scene information comprises second scene images around the vehicle.
In the embodiment of the application, when the scene where the vehicle is located is detected to be changed, the changed scene information can be obtained, the residual scene information between the changed scene information and the scene information before the change is calculated, and then the first three-dimensional scene model before the change is updated based on the residual scene information to obtain the second three-dimensional scene model which can adapt to the scene after the change.
In some examples, the second point cloud information may be determined based on the second scene information, and then residual point cloud information between the second point cloud information and the first point cloud information may be determined, and the residual point cloud information may be determined as residual scene information. The first point cloud information is used for indicating a coordinate set of the first scene in the three-dimensional space, and the second point cloud information is used for indicating a coordinate set of the second scene in the three-dimensional space.
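The patent does not prescribe how the residual between the two coordinate sets is computed; one simple possibility, shown here as a sketch, is to rasterize both clouds into a common occupancy grid and take the signed difference (the grid shape, voxel size and the random stand-in clouds are illustrative assumptions):

```python
import numpy as np

def occupancy(points, shape, voxel):
    """Rasterize an (N, 3) point cloud into a dense 0/1 occupancy grid."""
    grid = np.zeros(shape, dtype=np.int8)
    idx = np.floor(points / voxel).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    grid[tuple(idx[inside].T)] = 1
    return grid

# stand-ins for the point clouds of the first and second scenes
first_pts = np.random.rand(1000, 3) * [16, 16, 4]
second_pts = np.random.rand(1000, 3) * [16, 16, 4]

# signed residual: +1 where the second scene gained structure, -1 where it lost some
shape, voxel = (64, 64, 16), 0.25
residual_grid = occupancy(second_pts, shape, voxel) - occupancy(first_pts, shape, voxel)
```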
Step 305: updating the first stereoscopic scene model based on the residual scene information to obtain a second stereoscopic scene model, wherein the first stereoscopic scene model is used for representing the first scene in three-dimensional space, and the second stereoscopic scene model is used for representing the second scene in three-dimensional space.
The advantage of updating the model based on residual scene information is that, when the scene change is small, only the small changed part of the stereoscopic scene model needs to be updated, and the unchanged part does not need to be updated at all; this improves the model update speed and thus the generation efficiency of the panoramic looking-around image.
In some examples, the residual scene information may be quantized first to obtain quantized residual scene information, and then the first stereoscopic scene model may be updated based on the quantized residual scene information to obtain the second stereoscopic scene model. By quantizing the residual scene information, the singular scene information can be filtered out, so that the residual scene information is smoother and more uniform.
In one possible implementation manner, if the residual scene information is residual point cloud information, the residual point cloud information may be quantized first to obtain quantized residual point cloud information, and then the quantized residual point cloud information is summed with the first stereoscopic scene model to obtain the second stereoscopic scene model. As an example, the reconstruction process of the second stereoscopic scene model may be as shown in fig. 4.
The implementation of quantizing the residual point cloud information to obtain quantized residual point cloud information is the same as that of quantizing the first point cloud information to obtain quantized first point cloud information; for the specific process, refer to the related description above, which is not repeated here.
For example, referring to Fig. 5, for points (1), (2) and (3) in the residual point cloud information, the positions before and after quantization may be as shown in Fig. 5. As can be seen from Fig. 5, the position deviations between points (1), (2), (3) and the other points are large before quantization, and quantization reduces these deviations, making the residual point cloud information smoother and more uniform.
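Under the grid reading sketched in step 304, the "summing" update of step 305 then reduces to an element-wise addition in which cells whose residual is zero, i.e. the unchanged part of the scene, keep their old values; a minimal continuation of the earlier sketch (the occupancy-grid representation remains an assumption):

```python
import numpy as np

def update_model(first_model, residual_grid):
    """Element-wise sum of the first model and the signed residual grid.
    Cells with residual 0 (the unchanged scene) keep their old values,
    so only the changed region of the model is actually rebuilt."""
    return np.clip(first_model + residual_grid, 0, 1)   # keep a valid 0/1 grid
```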
Step 306: a panoramic looking-around image of the vehicle in the second scene is determined based on the second scene image and the second stereoscopic scene model.
In some examples, a first spatial mapping relationship between the first scene image and the actual scene may be acquired first, then, based on residual scene information and the imaging parameters, a second spatial mapping relationship between the first sub-image and the actual scene may be determined, and based on the first mapping relationship and the second mapping relationship, the second scene image may be mapped into a second stereoscopic scene model, so as to obtain a panoramic looking-around image of the vehicle in the second scene. The first sub-image is a partial image corresponding to a changed scene in the second scene image, and the changed scene is a partial scene of the second scene which is changed relative to the first scene.
In some examples, mapping the second scene image into the second stereoscopic scene model based on the first mapping relationship and the second mapping relationship may include: and mapping a second sub-image in the second scene image into the second stereoscopic scene model based on the first mapping relation, and mapping the first sub-image in the second scene image into the second stereoscopic scene model based on the second mapping relation. The second sub-image is a partial image corresponding to an unchanged scene in the second scene image, and the unchanged scene is a partial scene of the second scene which is unchanged relative to the first scene.
Therefore, after the stereoscopic scene model is reconstructed, only the mapping relationship between the reconstructed part and the scene image needs to be recalculated and used to render the panoramic image; the non-reconstructed part is still rendered with the original mapping relationship. In this way, when the scene changes, the stereoscopic scene model can be reconstructed and the panoramic looking-around image rendered quickly.
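A minimal sketch of this selective rendering (representing the mapping relationships as dense per-texel lookup tables is an assumption; the patent only requires that the old mapping be reused for the unchanged part):

```python
import numpy as np

def render_panorama(scene_image, changed_mask, cached_mapping, new_mapping):
    """Sample the scene image onto the model surface: unchanged texels reuse
    the cached image-to-model mapping, changed texels use the recomputed one.
    cached_mapping / new_mapping: (H, W, 2) tables giving, per model texel,
    the (x, y) source pixel to sample."""
    mapping = np.where(changed_mask[..., None], new_mapping, cached_mapping)
    xs = np.clip(mapping[..., 0].astype(int), 0, scene_image.shape[1] - 1)
    ys = np.clip(mapping[..., 1].astype(int), 0, scene_image.shape[0] - 1)
    return scene_image[ys, xs]
```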
As one example, during a dynamic scene change, a panoramic looking-around image of the vehicle may be generated as shown in Fig. 6. That is, while the scene changes dynamically, structure from motion can be applied to the look-around images of the current scene captured by the cameras to recover point cloud information of the current scene; the stereoscopic scene model of the scene before the change is then reconstructed based on this point cloud information; and the panoramic looking-around image of the vehicle in the current scene is then determined based on the look-around images of the current scene and the reconstructed stereoscopic scene model.
In the embodiment of the application, if the vehicle is detected to be switched from the first scene to the second scene, residual scene information which can indicate the scene change condition of the second scene relative to the first scene can be determined based on the first scene information and the second scene information, then the first stereoscopic scene model corresponding to the first scene is updated based on the residual scene information, so that a second stereoscopic scene model adapting to the second scene is obtained, and then the panoramic looking-around image of the vehicle in the second scene is determined based on the second scene image and the second stereoscopic scene model. That is, the application can adaptively update the stereoscopic scene model according to the dynamically changed scene information to obtain the dynamic stereoscopic scene model which can adapt to the scene change, and then, based on the changed scene information and the dynamic stereoscopic scene model, the panoramic looking-around image which can accurately reflect the scene change can be generated, thereby effectively avoiding the problem of serious stretching and deformation of objects in the image caused when the panoramic looking-around image is generated based on the fixed stereoscopic scene model. In addition, when the first stereoscopic scene model is updated based on residual scene information, only the changed partial scenes are updated, and the partial scenes which are not changed are not updated, so that the model updating efficiency is improved, and the panoramic image generating efficiency is further improved.
Fig. 7 is a block diagram of a device for generating a panoramic looking-around image of a vehicle according to an embodiment of the present application, as shown in fig. 7, the device includes: a first determination module 701, an update module 702 and a second determination module 703.
A first determining module 701, configured to determine residual scene information based on the first scene information and the second scene information if it is detected that the vehicle is switched from the first scene to the second scene;
wherein the residual scene information is used for indicating a scene change condition of the second scene relative to the first scene, the first scene information comprises a first scene image around the vehicle, and the second scene information comprises a second scene image around the vehicle;
an updating module 702, configured to update a first stereoscopic scene model based on the residual scene information, to obtain a second stereoscopic scene model, where the first stereoscopic scene is used for representing the first scene in a three-dimensional space, and the second stereoscopic scene model is used for representing the second scene in the three-dimensional space;
a second determining module 703 is configured to determine a panoramic all-around image of the vehicle in the second scene based on the second scene image and the second stereoscopic scene model.
Optionally, the first determining module 701 is configured to:
determining second point cloud information based on the second scene information, wherein the second point cloud information is used for indicating a coordinate set of the second scene in a three-dimensional space;
determining the residual between the second point cloud information and the first point cloud information to obtain residual point cloud information, wherein the first point cloud information is used for indicating a coordinate set of the first scene in a three-dimensional space;
and determining the residual point cloud information as the residual scene information.
Optionally, the updating module 702 includes:
the quantization unit is used for quantizing the residual scene information to obtain quantized residual scene information;
and the updating unit is used for updating the first stereoscopic scene model based on the quantized residual scene information to obtain the second stereoscopic scene model.
Optionally, the residual scene information is residual point cloud information, the residual point cloud information is a residual between second point cloud information and first point cloud information, the first point cloud information is used for indicating a coordinate set of the first scene in a three-dimensional space, and the second point cloud information is used for indicating a coordinate set of the second scene in the three-dimensional space;
the quantization unit is used for quantizing the residual point cloud information to obtain quantized residual point cloud information;
and the updating unit is used for summing the quantized residual point cloud information with the first stereoscopic scene model to obtain the second stereoscopic scene model.
Optionally, the second determining module 703 includes:
the acquisition unit is used for acquiring a first space mapping relation between the first scene image and the actual scene;
the determining unit is used for determining a second space mapping relation between a first sub-image and an actual scene based on the residual scene information and the shooting parameters, wherein the first sub-image is a partial image corresponding to a changed scene in the second scene image, and the changed scene is a partial scene of the second scene which is changed relative to the first scene;
and the mapping unit is used for mapping the second scene image into the second stereoscopic scene model based on the first mapping relation and the second mapping relation to obtain a panoramic looking-around image of the vehicle in the second scene.
Optionally, the mapping unit is configured to:
mapping a second sub-image in the second scene image into the second stereoscopic scene model based on the first mapping relation, wherein the second sub-image refers to a part of the second scene image corresponding to an unchanged scene, and the unchanged scene refers to a part of the second scene which is unchanged relative to the first scene;
and mapping the first sub-image in the second scene image into the second stereoscopic scene model based on the second spatial mapping relationship.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring the first scene information corresponding to the first scene where the vehicle is located;
the building module is used for building the first stereoscopic scene model based on the first scene information;
and the third determining module is used for determining a first space mapping relation between the first scene image and the actual scene based on the first scene information and the shooting parameters.
Optionally, the apparatus further comprises:
and the mapping module is used for mapping the first scene image into the first stereoscopic scene model based on the first space mapping relation to obtain a panoramic looking-around image of the vehicle in the first scene.
In the embodiment of the application, if the vehicle is detected to be switched from the first scene to the second scene, residual scene information which can indicate the scene change condition of the second scene relative to the first scene can be determined based on the first scene information and the second scene information, then the first stereoscopic scene model corresponding to the first scene is updated based on the residual scene information, so that a second stereoscopic scene model adapting to the second scene is obtained, and then the panoramic looking-around image of the vehicle in the second scene is determined based on the second scene image and the second stereoscopic scene model. That is, the application can adaptively update the stereoscopic scene model according to the dynamically changed scene information to obtain the dynamic stereoscopic scene model which can adapt to the scene change, and then, based on the changed scene information and the dynamic stereoscopic scene model, the panoramic looking-around image which can accurately reflect the scene change can be generated, thereby effectively avoiding the problem of serious stretching and deformation of objects in the image caused when the panoramic looking-around image is generated based on the fixed stereoscopic scene model. In addition, when the first stereoscopic scene model is updated based on residual scene information, only the changed partial scenes are updated, and the partial scenes which are not changed are not updated, so that the model updating efficiency is improved, and the panoramic image generating efficiency is further improved.
It should be noted that: the apparatus for generating a panoramic view image of a vehicle according to the above embodiment is only exemplified by the division of the above functional modules when generating a panoramic view image of a vehicle, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above. In addition, the apparatus for generating a panoramic looking-around image of a vehicle provided in the foregoing embodiment belongs to the same concept as the embodiment of the method for generating a panoramic looking-around image of a vehicle, and detailed implementation procedures of the apparatus are shown in the method embodiment, and are not described herein.
Fig. 8 is a schematic structural diagram of a device for generating a panoramic looking-around image of a vehicle according to an embodiment of the present application, where the device 800 for generating a panoramic looking-around image of a vehicle may have a relatively large difference due to different configurations or performances, and may include one or more processors (central processing units, CPU) 801 and one or more memories 802, where at least one instruction is stored in the memory 802, and the at least one instruction is loaded and executed by the processor 801. Of course, the apparatus 800 for generating a panoramic view image of a vehicle may further have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing functions of the device, which are not described herein.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as a memory including instructions executable by a processor in the device for generating a panoramic looking-around image of a vehicle to complete the method for generating a panoramic looking-around image of a vehicle in the above embodiments. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the exemplary embodiments of the application is not intended to limit the application to the particular embodiments disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.

Claims (12)

1. A method for generating a panoramic looking-around image of a vehicle, the method comprising:
if it is detected that the vehicle switches from a first scene to a second scene, determining residual scene information based on first scene information and second scene information;
wherein the residual scene information is used to indicate a scene change condition of the second scene relative to the first scene, the first scene information including a first scene image around the vehicle, the second scene information including a second scene image around the vehicle;
updating a first stereoscopic scene model based on the residual scene information to obtain a second stereoscopic scene model, wherein the first stereoscopic scene model is used for representing the first scene in a three-dimensional space, the first stereoscopic scene model is a stereoscopic model obtained by using point cloud information of the first scene, the point cloud information is used for indicating a coordinate set of the first stereoscopic scene model, and the second stereoscopic scene model is used for representing the second scene in the three-dimensional space;
and determining a panoramic looking-around image of the vehicle in the second scene based on the second scene image and the second stereoscopic scene model.
2. The method of claim 1, wherein the determining residual scene information based on the first scene information and the second scene information comprises:
determining second point cloud information based on the second scene information, wherein the second point cloud information is used for indicating a coordinate set of the second scene in a three-dimensional space;
determining the residual between the second point cloud information and the first point cloud information to obtain residual point cloud information, wherein the first point cloud information is used for indicating a coordinate set of the first scene in a three-dimensional space;
and determining the residual point cloud information as the residual scene information.
3. The method of claim 1, wherein updating the first stereoscopic scene model based on the residual scene information to obtain the second stereoscopic scene model comprises:
quantizing the residual scene information to obtain quantized residual scene information;
and updating the first stereoscopic scene model based on the quantized residual scene information to obtain the second stereoscopic scene model.
4. The method of claim 3, wherein the residual scene information is residual point cloud information, the residual point cloud information being a residual between second point cloud information and first point cloud information, the first point cloud information being used to indicate a set of coordinates of the first scene in three-dimensional space, the second point cloud information being used to indicate a set of coordinates of the second scene in three-dimensional space;
the step of quantizing the residual scene information to obtain quantized residual scene information includes:
quantizing the residual point cloud information to obtain quantized residual point cloud information;
the updating the first stereoscopic scene model based on the quantized residual scene information to obtain the second stereoscopic scene model includes:
and summing the quantized residual point cloud information with the first stereoscopic scene model to obtain the second stereoscopic scene model.
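Claims 3-4 leave the quantization scheme open. One reading, with voxel-grid rounding as the quantization and the claimed summing realized as a union of coordinate sets (the cell size and all names are illustrative assumptions):

import numpy as np

def quantize_residual(residual_points, cell=0.1):
    # Snap each residual point to a regular grid and drop duplicates,
    # bounding the amount of data fed into the model update.
    snapped = np.round(residual_points / cell) * cell
    return np.unique(snapped, axis=0)

def update_model(first_model_points, residual_points, cell=0.1):
    # "Sum" the quantized residual with the first stereoscopic scene model:
    # here, the union of both coordinate sets.
    quantized = quantize_residual(residual_points, cell)
    return np.unique(np.vstack([first_model_points, quantized]), axis=0)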
5. The method of claim 1, wherein the determining a panoramic looking-around image of the vehicle in the second scene based on the second scene image and the second stereoscopic scene model comprises:
acquiring a first spatial mapping relation between the first scene image and an actual scene;
determining a second spatial mapping relation between a first sub-image and an actual scene based on the residual scene information and the shooting parameters, wherein the first sub-image is a partial image corresponding to a changed scene in the second scene image, and the changed scene is a partial scene of the second scene which is changed relative to the first scene;
and mapping the second scene image into the second stereoscopic scene model based on the first spatial mapping relation and the second spatial mapping relation to obtain a panoramic looking-around image of the vehicle in the second scene.
6. The method of claim 5, wherein the mapping the second scene image into the second stereoscopic scene model based on the first spatial mapping relation and the second spatial mapping relation comprises:
mapping a second sub-image in the second scene image into the second stereoscopic scene model based on the first spatial mapping relation, wherein the second sub-image refers to a part of the second scene image corresponding to an unchanged scene, and the unchanged scene refers to a part of the second scene which is unchanged relative to the first scene;
and mapping the first sub-image in the second scene image into the second stereoscopic scene model based on the second spatial mapping relation.
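One way to realize the two-stage mapping of claims 5-6 is to hold each spatial mapping relation as a per-pixel lookup table onto the model surface; the mask, the table format, and all names are assumptions of this sketch:

import numpy as np

def map_second_image(second_image, changed_mask, first_lut, second_lut, out_hw):
    # first_lut / second_lut: (H, W, 2) integer arrays giving, per source pixel,
    # the target coordinates (u, v) on the unrolled model surface.
    panorama = np.zeros((*out_hw, 3), dtype=second_image.dtype)
    h, w = second_image.shape[:2]
    for y in range(h):
        for x in range(w):
            # Changed scene (first sub-image) -> second mapping relation;
            # unchanged scene (second sub-image) -> first mapping relation.
            u, v = (second_lut if changed_mask[y, x] else first_lut)[y, x]
            panorama[v, u] = second_image[y, x]
    return panorama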
7. The method of claim 1, wherein before the residual scene information is determined based on the first scene information and the second scene information, the method further comprises:
acquiring first scene information corresponding to a first scene where the vehicle is located;
constructing the first stereoscopic scene model based on the first scene information;
and determining a first spatial mapping relation between the first scene image and an actual scene based on the first scene information and shooting parameters.
8. The method of claim 7, wherein after the first spatial mapping relation between the first scene image and the actual scene is determined based on the first scene information and the shooting parameters, the method further comprises:
and mapping the first scene image into the first stereoscopic scene model based on the first spatial mapping relation to obtain a panoramic looking-around image of the vehicle in the first scene.
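For claims 7-8, a spatial mapping relation between a scene image and the actual scene can, under a pinhole-camera assumption, be derived from shooting parameters by the standard projection below; K, R, t and the sample values are illustrative, not taken from the patent:

import numpy as np

def project_to_image(world_points, K, R, t):
    # World -> camera coordinates, then perspective projection with intrinsics K.
    cam = (R @ world_points.T + t.reshape(3, 1)).T
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])        # illustrative intrinsic matrix
pts = np.array([[0.0, 0.0, 5.0],
                [1.0, -0.5, 4.0]])     # points on the model surface, in metres
print(project_to_image(pts, K, np.eye(3), np.zeros(3)))  # -> [[640. 360.] [840. 260.]]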
9. A device for generating a panoramic all-around image of a vehicle, the device comprising:
if it is detected that the vehicle has switched from a first scene to a second scene, determining residual scene information based on first scene information and second scene information;
wherein the residual scene information is used to indicate a scene change condition of the second scene relative to the first scene, the first scene information including a first scene image around the vehicle, the second scene information including a second scene image around the vehicle;
updating a first stereoscopic scene model based on the residual scene information to obtain a second stereoscopic scene model, wherein the first stereoscopic scene model is used for representing the first scene in a three-dimensional space, the first stereoscopic scene model is a stereoscopic model obtained by using point cloud information of the first scene, the point cloud information is used for indicating a coordinate set of the first stereoscopic scene model, and the second stereoscopic scene model is used for representing the second scene in the three-dimensional space;
and determining a panoramic looking-around image of the vehicle in the second scene based on the second scene image and the second stereoscopic scene model.
10. A device for generating a panoramic all-around image of a vehicle, the device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any one of claims 1-8.
11. A vehicle-mounted look-around system, the system comprising a sensing element, a processor, and a display, the sensing element comprising a plurality of cameras disposed about a vehicle;
the sensing element is used for collecting scene information of a scene where the vehicle is located, and the scene information comprises scene images around the vehicle;
the processor is configured to implement the method for generating a panoramic looking-around image of a vehicle according to any one of claims 1-8;
the display is used for displaying the panoramic looking-around image of the vehicle.
12. A non-transitory computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the steps of the method of generating a panoramic looking-around image of a vehicle as claimed in any one of claims 1-8.
CN201910616903.5A 2019-07-09 2019-07-09 Method, device and system for generating panoramic looking-around image of vehicle and storage medium Active CN112215033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910616903.5A CN112215033B (en) 2019-07-09 2019-07-09 Method, device and system for generating panoramic looking-around image of vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN112215033A CN112215033A (en) 2021-01-12
CN112215033B true CN112215033B (en) 2023-09-01

Family

ID=74047372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910616903.5A Active CN112215033B (en) 2019-07-09 2019-07-09 Method, device and system for generating panoramic looking-around image of vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN112215033B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359225A * 2022-09-22 2022-11-18 Agricultural Bank of China Co., Ltd. Virtual simulation scene and real scene synchronization method, device and equipment
CN115909251A * 2022-12-14 2023-04-04 Hangzhou Zhenshi Intelligent Technology Co., Ltd. A method, device and system for providing a panoramic view image of a vehicle
CN118644563A * 2023-03-08 2024-09-13 Valeo Interior Controls (Shenzhen) Co., Ltd. Method, device, system, computer program product and motor vehicle for generating surround view images
CN118445013A * 2024-05-11 2024-08-06 Avatr Technology (Chongqing) Co., Ltd. Scene switching method and device and vehicle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10650608B2 (en) * 2008-10-08 2020-05-12 Strider Labs, Inc. System and method for constructing a 3D scene model from an image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009144994A1 * 2008-05-29 2009-12-03 Fujitsu Limited Vehicle image processor, and vehicle image processing system
CN106355546A * 2015-07-13 2017-01-25 BYD Co., Ltd. Vehicle panorama generating method and apparatus
CN106875467A * 2015-12-11 2017-06-20 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences 3D urban model rapid updating method
WO2018121333A1 * 2016-12-30 2018-07-05 Ideapool (Beijing) Culture & Technology Co., Ltd. Real-time generation method for 360-degree VR panoramic graphic image and video
CN109685891A * 2018-12-28 2019-04-26 Hongshixian Technology (Beijing) Co., Ltd. Building 3D modeling and virtual scene generation system based on depth images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
3D vehicle-mounted surround-view panorama generation method; Liu Dong; Qin Rui; Chen Xi; Li Qing; Computer Science (Issue 04); 302-305 *

Also Published As

Publication number Publication date
CN112215033A (en) 2021-01-12

Similar Documents

Publication Publication Date Title
CN112215033B (en) Method, device and system for generating panoramic looking-around image of vehicle and storage medium
JP7054803B2 (en) Camera parameter set calculation device, camera parameter set calculation method and program
CN110163930B (en) Lane line generation method, device, equipment, system and readable storage medium
CN111582080B (en) Method and device for realizing 360-degree looking-around monitoring of vehicle
CN114913506B (en) A 3D object detection method and device based on multi-view fusion
CN106462996B (en) Method and device for displaying vehicle surrounding environment without distortion
JP2019096072A (en) Object detection device, object detection method and program
CN112802092B (en) Obstacle sensing method and device and electronic equipment
CN114312577B (en) Vehicle chassis perspective method and device and electronic equipment
EP4386676A1 (en) Method and apparatus for calibrating cameras and inertial measurement unit, and computer device
CN114111568B (en) Method and device for determining appearance size of dynamic target, medium and electronic equipment
CN113658262B (en) Camera external parameter calibration method, device, system and storage medium
CN116486351A (en) Driving early warning method, device, equipment and storage medium
CN113221756A (en) Traffic sign detection method and related equipment
CN113850881A (en) Image generation method, device, equipment and readable storage medium
JP2003009141A (en) Processing device for image around vehicle and recording medium
CN114821544B (en) Perception information generation method and device, vehicle, electronic equipment and storage medium
JP2022544348A (en) Methods and systems for identifying objects
CN116630210A (en) Vehicle environment perception method, device, equipment and storage medium
CN117079231A (en) Object recognition method, device, vehicle and storage medium
CN117994614A (en) Target detection method and device
CN115171384A (en) Key vehicle position delay compensation method and device in vehicle-mounted display process
WO2020246202A1 (en) Measurement system, measurement method, and measurement program
CN113077503A (en) Blind area video data generation method, system, device and computer readable medium
CN113449646A (en) Head-up display system with safe distance prompt

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant