Background
In the field of autonomous driving, monocular visual simultaneous localization and mapping (SLAM) refers to the process by which an autonomous vehicle uses a single vision sensor (e.g., a monocular camera) to build a map consistent with the real environment while simultaneously determining its own position in that map.
Monocular visual SLAM is typically fused with an inertial navigation system to position the vehicle. However, a monocular camera usually has a limited field of view, and the accuracy and continuity of monocular SLAM positioning may be degraded when too few image features are detected or the parallax is insufficient.
Disclosure of Invention
The embodiment of the invention provides a method for determining a vehicle pose, a vehicle-mounted system and a vehicle, which obtain transformed poses according to the external parameters of a plurality of monocular cameras of the vehicle and fuse them to obtain a fused pose of the vehicle, thereby improving the pose estimation accuracy of the vehicle.
In view of this, a first aspect of the present invention provides a method of determining a vehicle pose, which may include:
acquiring first inertial navigation information and first automobile odometer information of a vehicle;
determining a first current estimated pose of the vehicle according to the first inertial navigation information and the first automobile odometer information;
acquiring external parameters of N monocular cameras of the vehicle, wherein N is an integer greater than 1;
fusing inertial navigation odometry with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras;
respectively carrying out coordinate transformation on the current poses of the N monocular cameras to obtain N transformed poses of the vehicle;
and obtaining the fused pose of the vehicle according to the N transformed poses.
Alternatively, in some embodiments of the present invention,
the obtaining of the fused pose of the vehicle according to the N transformed poses comprises:
determining a weight for each transformed pose according to the N transformed poses and the first current estimated pose of the vehicle;
and obtaining the fused pose of the vehicle by a weighted average method according to the weight of each transformed pose and the N transformed poses.
Alternatively, in some embodiments of the present invention,
the obtaining of the fused pose of the vehicle according to the N transformed poses comprises:
determining the weight of each transformed pose according to a preset algorithm and the N transformed poses;
and obtaining the fused pose of the vehicle by a weighted average method according to the weight of each transformed pose and the N transformed poses.
Optionally, in some embodiments of the present invention, the method further includes:
and fusing the output information of the inertial navigation odometer with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras, so as to construct N monocular camera visual feature maps.
Optionally, in some embodiments of the present invention, the method further includes:
and fusing the N monocular camera visual feature maps into a reference visual feature map according to the common vehicle trajectory in the N monocular camera visual feature maps.
Optionally, in some embodiments of the present invention, the acquiring external parameters of N monocular cameras of the vehicle includes:
acquiring visual odometer information;
and determining the external parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
Optionally, in some embodiments of the present invention, the method further includes:
acquiring second inertial navigation information and second automobile odometer information of the vehicle;
and determining a second current estimated pose of the vehicle according to the fused pose of the vehicle, second inertial navigation information of the vehicle and second automobile odometer information.
A second aspect of the present invention provides an in-vehicle system, which may include:
the acquisition module is used for acquiring first inertial navigation information and first automobile odometer information of the vehicle; and acquiring external parameters of N monocular cameras of the vehicle, wherein N is an integer greater than 1;
the processing module is used for determining a first current estimated pose of the vehicle according to the first inertial navigation information and the first automobile odometer information; fusing inertial navigation odometry with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; respectively carrying out coordinate transformation on the current poses of the N monocular cameras to obtain N transformed poses of the vehicle; and obtaining the fused pose of the vehicle according to the N transformed poses.
Alternatively, in some embodiments of the present invention,
the processing module is specifically configured to determine a weight for each transformed pose according to the N transformed poses and the first current estimated pose of the vehicle; and obtain the fused pose of the vehicle by a weighted average method according to the weight of each transformed pose and the N transformed poses.
Alternatively, in some embodiments of the present invention,
the processing module is specifically configured to determine the weight of each transformed pose according to a preset algorithm and the N transformed poses; and obtain the fused pose of the vehicle by a weighted average method according to the weight of each transformed pose and the N transformed poses.
Alternatively, in some embodiments of the present invention,
the processing module is further configured to fuse the output information of the inertial navigation odometer with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras, so as to construct N monocular camera visual feature maps.
Alternatively, in some embodiments of the present invention,
the processing module is further configured to fuse the N monocular camera visual feature maps into a reference visual feature map according to a common vehicle trajectory in the N monocular camera visual feature maps.
Alternatively, in some embodiments of the present invention,
the acquisition module is specifically used for acquiring visual odometer information; and determining the external parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
Alternatively, in some embodiments of the present invention,
the acquisition module is further used for acquiring second inertial navigation information and second automobile odometer information of the vehicle;
the processing module is further configured to determine a second current estimated pose of the vehicle according to the fused pose of the vehicle, second inertial navigation information of the vehicle, and second vehicle odometer information.
A third aspect of the invention provides a vehicle that may include an in-vehicle system as described in any implementation of the second aspect of the invention.
A fourth aspect of the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute a method of determining a vehicle pose disclosed in the first aspect of the embodiments of the present invention.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the invention, first inertial navigation information and first automobile odometer information of a vehicle are obtained; a first current estimated pose of the vehicle is determined according to the first inertial navigation information and the first automobile odometer information; external parameters of N monocular cameras of the vehicle are acquired, wherein N is an integer greater than 1; inertial navigation odometry is fused with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; coordinate transformation is respectively carried out on the current poses of the N monocular cameras to obtain N transformed poses of the vehicle; and the fused pose of the vehicle is obtained according to the N transformed poses. Transformed poses can thus be obtained according to the external parameters of the monocular cameras of the vehicle and fused to obtain the fused pose of the vehicle, so that the pose estimation accuracy of the vehicle is improved.
Detailed Description
The embodiment of the invention provides a method for determining a vehicle pose, a vehicle-mounted system and a vehicle, which obtain transformed poses according to the external parameters of a plurality of monocular cameras of the vehicle and fuse them to obtain a fused pose of the vehicle, thereby improving the pose estimation accuracy of the vehicle.
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention. It is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of them; all other embodiments obtained based on the embodiments of the present invention shall fall within the protection scope of the present invention.
The following is a brief description of the terms involved in the embodiments of the present invention:
an inertial navigation system (INS, hereinafter referred to as inertial navigation) is an autonomous navigation system that does not depend on external information and does not radiate energy to the outside, and includes an inertial measurement unit. Its working environment includes not only the air and the ground but also underwater. The basic working principle of inertial navigation is based on Newton's laws of mechanics: by measuring the acceleration of a carrier in an inertial reference frame, integrating it over time, and transforming it into the navigation coordinate system, information such as the speed, yaw angle and position in the navigation coordinate system can be obtained.
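The double-integration principle above can be sketched as follows. This is a minimal illustrative one-dimensional sketch, not the patented method itself; the sampling interval and acceleration samples are hypothetical.

```python
import numpy as np

# Minimal 1-D sketch of the inertial navigation principle described above:
# integrating body-frame acceleration over time yields velocity, and
# integrating velocity over time yields position in the navigation frame.
dt = 0.01                        # hypothetical IMU sampling interval [s]
accel = np.full(100, 0.5)        # constant 0.5 m/s^2 for 1 second

velocity = 0.0
position = 0.0
for a in accel:
    velocity += a * dt           # first integration: acceleration -> velocity
    position += velocity * dt    # second integration: velocity -> position

print(velocity)                  # ~0.5 m/s after 1 s of constant acceleration
print(position)
```

In a real INS the same integration is performed on 3-D accelerations after rotating them from the body frame into the navigation frame using the gyroscope-derived attitude; small sensor biases accumulate through this double integration, which is why fusion with other odometry is needed.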
Automobile odometer: a device mounted on a vehicle for measuring the distance traveled, such as a wheel pulse counter.
It should be noted that, in the embodiment of the present invention, the inertial navigation system and the automobile odometer are generally referred to as an inertial navigation odometer.
To address the defects of the prior art, the present invention provides a scheme that improves vehicle pose estimation by fusing multiple monocular SLAM systems with inertial navigation based on multiple monocular cameras. The technical solution of the present invention is further described below by way of an embodiment. Fig. 1 is a schematic diagram of an embodiment of a method for determining a vehicle pose in an embodiment of the present invention; the method may include:
101. first inertial navigation information and first automobile odometer information of the vehicle are obtained.
The vehicle-mounted system acquires first inertial navigation information and first automobile odometer information of the vehicle.
For example, the vehicle may be equipped with sensors such as an Inertial Measurement Unit (IMU), a wheel pulse counter, etc., and these sensors may be used as a positioning module (e.g., a body odometer) of the vehicle to calculate a driving distance of the vehicle, and further, may position the vehicle. Fig. 2 is a schematic diagram of a work flow applied to the embodiment of fig. 1 of the present invention.
102. And determining a first current estimated pose of the vehicle according to the first inertial navigation information and the first automobile odometer information.
The vehicle-mounted system estimates the first current estimated pose of the vehicle in a global coordinate system according to the first inertial navigation information and the first automobile odometer information.
The first inertial navigation information may include attitude, speed and position change information obtained by integration, and may be fused with the displacement information provided by the automobile odometer using algorithms such as Kalman filtering, so as to determine the first current estimated pose of the vehicle in the global coordinate system.
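The Kalman-filter fusion mentioned above can be sketched in its simplest one-dimensional form: the inertial displacement drives the prediction step, and the odometer position drives the update step. This is an illustrative sketch only; the noise variances and sensor values are hypothetical, not values from the patent.

```python
# Minimal 1-D Kalman-filter sketch of fusing an inertial-navigation
# displacement prediction with an automobile-odometer position measurement.
def kalman_step(x, P, u, z, Q=0.1, R=0.05):
    """One predict/update cycle.
    x, P : prior position estimate and its variance
    u    : displacement predicted from inertial navigation
    z    : position measured via the automobile odometer
    Q, R : hypothetical process and measurement noise variances
    """
    # Predict using the inertial displacement
    x_pred = x + u
    P_pred = P + Q
    # Update using the odometer measurement
    K = P_pred / (P_pred + R)        # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                      # hypothetical initial state
x, P = kalman_step(x, P, u=1.0, z=1.1)
print(x, P)                          # estimate lands between prediction and measurement
```

A production filter would carry the full pose (position and attitude) as the state vector and use matrix forms of the same predict/update equations.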
103. And acquiring external parameters of N monocular cameras of the vehicle, wherein N is an integer greater than 1.
The obtaining of the external parameters of the N monocular cameras of the vehicle may include: acquiring visual odometer information; and determining the external parameters of the N monocular cameras of the vehicle according to the visual odometer information and the first current estimated pose of the vehicle. Specifically, the external parameters of the N monocular cameras may be determined using an online external parameter calibration algorithm (such as a hand-eye calibration method) according to the visual odometer information and the first current estimated pose of the vehicle.
It can be understood that the external parameters of a monocular camera represent the rotation information and the translation information of the coordinate system of the monocular camera relative to the inertial coordinate system of the vehicle.
It should be noted that a monocular camera may be a single monocular camera provided in the front windshield of the vehicle or in the rear-view mirror of the vehicle. The technical scheme provided by the invention is applicable as long as more than one monocular camera, such as front-view, rear-view and side-view cameras, is installed on the vehicle.
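The constraint exploited by hand-eye calibration can be sketched as follows: if A is a camera motion reported by the visual odometer, B the corresponding vehicle motion reported by the inertial odometer, and X the camera extrinsics, then A·X = X·B. The planar SE(2) poses below are hypothetical and the example only verifies the constraint; a real online calibrator would solve for the X satisfying it over many motion pairs.

```python
import numpy as np

def se2(theta, x, y):
    """Planar rigid motion as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

X = se2(np.pi / 2, 0.5, 0.2)      # hypothetical camera-to-vehicle extrinsics
B = se2(0.1, 1.0, 0.0)            # vehicle motion from the inertial odometer
A = X @ B @ np.linalg.inv(X)      # implied camera motion (visual odometer)

# The extrinsics X satisfy the hand-eye equation A*X = X*B; an online
# calibration algorithm searches for the X that makes this hold across
# many (A, B) motion pairs.
assert np.allclose(A @ X, X @ B)
print("hand-eye constraint A*X = X*B holds")
```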
Optionally, the vehicle-mounted electronic device determines at least two initial frames from a real environment image sequence captured by a monocular camera of the vehicle, and obtains an actual moving distance of the vehicle between capturing moments of the at least two initial frames through a positioning module of the vehicle.
In the embodiment of the invention, the monocular camera can be a monocular camera arranged on a front windshield of the vehicle or a rearview mirror of the vehicle. The initial frame is an image frame for initializing monocular vision SLAM, and relative pose change of the monocular camera between the initial frames is determined by comparing difference of image positions of the same characteristic point in at least two initial frames, so that initialization of the monocular vision SLAM is completed. It will be appreciated that in order to increase the success rate of monocular vision SLAM initialization, the initial frame should meet certain requirements. For example, the initial frame may be set as two consecutive image frames with the number of feature points respectively greater than a preset threshold (e.g., 100).
Monocular vision SLAM initialization includes the following steps: 1. performing feature point matching in at least two initial frames; 2. based on the matched feature points, determining the inter-frame pose of the monocular camera from the initial frames using the epipolar geometry principle, namely the relative pose change (including translation and rotation) of the monocular camera between the initial frames; 3. determining the depth of the feature points in the image using the triangulation principle, based on the inter-frame pose of the monocular camera between the at least two initial frames; 4. constructing an initial SLAM map according to the inter-frame pose of the monocular camera between the at least two initial frames and the depth of the feature points in the image.
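Step 3 above can be sketched with a standard linear (DLT) triangulation: given two projection matrices and the pixel positions of the same feature in both frames, the 3-D point (and hence its depth) is recovered. The intrinsics, baseline and feature point below are hypothetical illustration values.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: P1, P2 are 3x4 projection matrices,
    x1, x2 the pixel coordinates of the same feature in each frame."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null-space solution
    X = Vt[-1]
    return X[:3] / X[3]               # homogeneous -> Euclidean

K = np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]])   # hypothetical intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])           # first initial frame
P2 = K @ np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])     # second frame, 1 m baseline

X_true = np.array([0.3, -0.2, 5.0])                         # ground-truth feature point
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]         # projections into
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]         # the two frames

print(triangulate(P1, P2, x1, x2))    # recovers ~[0.3, -0.2, 5.0]
```

With noise-free projections the recovery is exact; with real matched features a SLAM system refines such estimates through back-end optimization.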
Further, in the subsequent mapping process, the pose of the monocular camera corresponding to each image frame can be represented as the relative pose change of the monocular camera between that image frame and the preceding image frame. In general, this relative pose change can be expressed as M times the relative pose change of the monocular camera between the initial frames, where M is a real number. Therefore, if the translation amount t of the monocular camera is directly normalized during initialization, so that the inter-frame pose of the monocular camera and the recovered feature point depths contain no scale information, the resulting SLAM map is accurate only in the relative pose changes of the monocular camera and cannot reflect the scale of the real geographic environment.
The finally constructed SLAM map therefore comprises the correspondence between the poses of the monocular camera and the three-dimensional spatial positions of the feature points, with the images in the real environment image sequence as the constraint: when the monocular camera is at a certain pose and captures certain feature points, the resulting image is a certain frame in the real environment image sequence. That is, the SLAM map includes a pose sequence of the monocular camera and the three-dimensional spatial positions of the feature points contained in each image of the real environment image sequence.
104. And fusing inertial navigation odometry with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras.
The vehicle-mounted system can determine the current pose of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose of the vehicle.
For example, for a monocular camera system such as camera 1, if the camera external parameters (the pose of the camera with respect to the vehicle body/inertial navigation system) have not been calibrated, the vehicle-mounted system may calibrate them using an optimization method based on the pose estimated by the visual odometer and the pose estimated by the inertial odometer, i.e., the first current estimated pose.
If the camera external parameters have been calibrated, the current pose of the camera can be estimated in real time using currently mature visual SLAM algorithms for each monocular camera fused with the inertial navigation odometer (such as the monocular version of VI-ORB-SLAM2 or VINS), and a visual feature map can be established. It can be understood that the other monocular cameras follow the same principle to determine their corresponding current poses.
105. And respectively carrying out coordinate transformation on the current poses of the N monocular cameras to obtain N transformation poses of the vehicle.
Then, the current camera poses obtained by the monocular SLAM systems are each transformed between coordinate systems according to the camera external parameters, so as to obtain the corresponding N transformed poses of the vehicle in the global coordinate system.
It can be understood that each SLAM system finds the current pose of its camera in the visual global coordinate system. From the camera external parameters, i.e., the relative pose of the camera with respect to the vehicle inertial coordinate system, a transformation matrix from the visual coordinate system to the vehicle coordinate system can be obtained; this transformation matrix and the aforementioned current pose of the camera in the visual coordinate system are then used to calculate the transformed pose of the vehicle in the vehicle global coordinate system (namely the navigation coordinate system in this scheme).
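The coordinate transformation described above can be sketched with 4x4 homogeneous matrices: composing the camera-in-world pose from SLAM with the inverse of the camera extrinsics yields the vehicle's transformed pose. All numeric poses below are hypothetical illustration values.

```python
import numpy as np

def pose(yaw, x, y, z):
    """4x4 homogeneous pose with a yaw rotation and a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [x, y, z]
    return T

T_cam_in_world = pose(0.3, 10.0, 5.0, 0.0)      # camera pose from monocular SLAM
T_cam_in_vehicle = pose(np.pi, 1.5, 0.0, 1.2)   # camera extrinsics (camera in vehicle frame)

# vehicle-in-world = camera-in-world composed with the inverse extrinsics
T_vehicle_in_world = T_cam_in_world @ np.linalg.inv(T_cam_in_vehicle)
print(T_vehicle_in_world[:3, 3])                # vehicle position in the global frame
```

Each of the N monocular SLAM subsystems applies this composition with its own extrinsics, yielding the N transformed poses of the vehicle.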
106. And obtaining the fusion pose of the vehicle according to the N conversion poses.
The obtaining of the fused pose of the vehicle according to the N transformed poses comprises:
determining a weight for each transformed pose according to the N transformed poses and the first current estimated pose of the vehicle, and obtaining the fused pose of the vehicle by a weighted average method according to the weight of each transformed pose and the N transformed poses;
or,
determining the weight of each transformed pose according to a preset algorithm and the N transformed poses, and obtaining the fused pose of the vehicle by a weighted average method according to the weight of each transformed pose and the N transformed poses. If a time synchronization problem exists among the N poses, pose synchronization processing is performed before fusion, for example, pose extrapolation or interpolation using inertial navigation, so as to obtain time-synchronized poses.
It can be understood that the vehicle-mounted system can calculate a pose estimation weight for each monocular camera system (reflecting its pose estimation accuracy) according to its process parameters (such as external parameter reliability, the residual between the poses estimated by the visual odometer and by inertial navigation, the back-end optimization scale, and whether a loop closure is detected), and then obtain the fused current vehicle pose by a weighted average method. If a monocular SLAM fails at the current time, it is ignored; if its process parameters are too poor, its weight may also be set to zero. The external parameter reliability is acquired for each monocular camera when its external parameters are acquired. Information such as the residual between the poses estimated by the visual odometer and by inertial navigation, the back-end optimization scale, and whether a loop closure is detected is acquired during the fusion of monocular SLAM and inertial navigation.
Illustratively, as shown in the figure, the transformed poses corresponding to the N cameras are A1, A2, A3, …, AN, with corresponding weights M1, M2, M3, …, MN whose sum is 1; the fused pose of the vehicle is then:
A1*M1 + A2*M2 + A3*M3 + … + AN*MN.
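The weighted-average fusion A1*M1 + … + AN*MN can be sketched as follows for planar poses (x, y, yaw). The poses and raw weights are hypothetical; a failed subsystem simply receives weight zero, and the remaining weights are normalized to sum to 1.

```python
import numpy as np

poses = np.array([             # hypothetical [x, y, yaw] from each camera subsystem
    [10.0, 5.0, 0.30],
    [10.2, 4.9, 0.32],
    [ 9.9, 5.1, 0.29],
])
raw_weights = np.array([0.8, 0.5, 0.0])    # third monocular SLAM failed -> weight 0

weights = raw_weights / raw_weights.sum()  # normalize so the weights sum to 1
fused = weights @ poses                    # weighted average A1*M1 + ... + AN*MN
print(fused)
```

Note that directly averaging yaw angles is only valid when they are close together, as here; a full 3-D implementation would average rotations with quaternions or on the rotation manifold.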
optionally, in some embodiments of the present invention, the output information of the inertial navigation odometer is fused with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras, so as to construct N monocular camera visual feature maps. Further, the N monocular camera visual feature maps are fused into a reference visual feature map according to a common vehicle trajectory in the N monocular camera visual feature maps.
Illustratively, to facilitate direct reuse of the visual feature maps created by each monocular camera system (e.g., for map matching relocation), the sets of visual feature maps may be kept separate. If the scene mapping requirement exists, the multiple sets of visual feature maps can be fused into the same visual feature map according to the common vehicle track.
It can be understood that the visual feature map in the embodiment of the present invention is a map composed of visual features (e.g., ORB (Oriented FAST and Rotated BRIEF) feature points) constructed by the SLAM algorithm. FAST is a common visual corner extraction method, and BRIEF is a common feature point description method.
Further, after the vehicle-mounted system obtains the fusion pose of the vehicle, the vehicle-mounted system can also obtain second inertial navigation information and second automobile odometer information of the vehicle; and determining a second current estimated pose of the vehicle according to the fused pose of the vehicle, second inertial navigation information of the vehicle and second automobile odometer information.
Illustratively, given the current pose, the pose at the next step is estimated as x(k+1) = x(k) + dx(k), where dx(k) is derived from the inertial navigation and odometer outputs over the time interval dt.
Here x comprises position and attitude vectors; the attitude change in dx(k) between x(k+1) and x(k) is calculated by integrating the gyroscope output (angular velocity) of the IMU over time, and the position change is calculated by multiplying the odometer output (distance traveled) by the trigonometric functions of the attitude.
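The propagation x(k+1) = x(k) + dx(k) described above can be sketched for the planar case: the yaw change comes from integrating the gyroscope rate, and the position change from projecting the odometer distance through the trigonometric functions of the yaw. The sensor values are hypothetical, and the mid-point yaw is a common discretization choice rather than a detail specified here.

```python
import numpy as np

def propagate(x, y, yaw, gyro_z, dist, dt):
    """One dead-reckoning step for a planar pose (x, y, yaw).
    gyro_z : gyroscope yaw rate [rad/s]
    dist   : distance traveled from the automobile odometer [m]
    """
    yaw_new = yaw + gyro_z * dt            # integrate angular velocity over dt
    yaw_mid = yaw + 0.5 * gyro_z * dt      # mid-point yaw for the step
    x_new = x + dist * np.cos(yaw_mid)     # odometer distance projected onto
    y_new = y + dist * np.sin(yaw_mid)     # the navigation-frame axes
    return x_new, y_new, yaw_new

x, y, yaw = 0.0, 0.0, 0.0                  # hypothetical current pose x(k)
x, y, yaw = propagate(x, y, yaw, gyro_z=0.1, dist=0.5, dt=0.1)
print(x, y, yaw)                           # x(k+1)
```

Each such prediction drifts slowly, which is why the fused pose from the multi-camera SLAM systems is fed back as the starting point of the next prediction.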
The vehicle-mounted system outputs the fused pose of the vehicle and multiple sets of independent visual feature maps built by the multi-camera system. The fused pose is used for the next inertial navigation pose prediction, and that prediction is in turn used to calculate the next fused pose. This is equivalent to fusing the outputs of the multi-monocular-camera SLAM system, iteratively optimizing each subsystem, and reducing the positioning drift error.
In the embodiment of the invention, first inertial navigation information and first automobile odometer information of a vehicle are obtained; a first current estimated pose of the vehicle is determined according to the first inertial navigation information and the first automobile odometer information; external parameters of N monocular cameras of the vehicle are acquired, wherein N is an integer greater than 1; inertial navigation odometry is fused with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; coordinate transformation is respectively carried out on the current poses of the N monocular cameras to obtain N transformed poses of the vehicle; and the fused pose of the vehicle is obtained according to the N transformed poses. Transformed poses can thus be obtained according to the external parameters of the monocular cameras of the vehicle and fused to obtain the fused pose of the vehicle, so that the pose estimation accuracy of the vehicle is improved. The defect of the limited field of view of a single monocular camera is overcome, and, since the scheme builds on relatively mature monocular visual SLAM algorithms (such as the monocular version of ORB-SLAM2, VINS, and the like), algorithm integration and porting from a single camera to an arbitrary multi-camera system are easy to realize. The multiple cameras are not required to have a common viewing area, and no external parameter calibration among the multiple cameras is required.
That is, the scheme provides a system design that fuses parallel multi-monocular-camera visual SLAM algorithms with inertial navigation odometry on a vehicle-mounted system; weights are calculated according to the process parameters of each monocular camera SLAM system, and the fused pose is calculated according to the weights; the current fused pose is used for the next pose prediction, so that each monocular SLAM can be iteratively optimized to reduce its drift error.
Fig. 3 is a schematic diagram of an embodiment of an in-vehicle system according to an embodiment of the present invention. The method can comprise the following steps:
the acquiring module 301 is configured to acquire first inertial navigation information and first automobile odometer information of a vehicle; acquiring external parameters of N monocular cameras of the vehicle, wherein N is an integer greater than 1;
a processing module 302, configured to determine a first current estimated pose of the vehicle according to the first inertial navigation information and the first automobile odometer information; fuse inertial navigation odometry with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; respectively carry out coordinate transformation on the current poses of the N monocular cameras to obtain N transformed poses of the vehicle; and obtain the fused pose of the vehicle according to the N transformed poses.
Alternatively, in some embodiments of the present invention,
a processing module 302, configured to determine a weight for each transformed pose according to the N transformed poses and the first current estimated pose of the vehicle; and obtain the fused pose of the vehicle by a weighted average method according to the weight of each transformed pose and the N transformed poses.
Alternatively, in some embodiments of the present invention,
a processing module 302, configured to determine the weight of each transformed pose according to a preset algorithm and the N transformed poses; and obtain the fused pose of the vehicle by a weighted average method according to the weight of each transformed pose and the N transformed poses.
Alternatively, in some embodiments of the present invention,
the processing module 302 is further configured to fuse the output information of the inertial navigation odometer with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras, so as to construct N monocular camera visual feature maps.
Alternatively, in some embodiments of the present invention,
the processing module 302 is further configured to fuse the N monocular camera visual feature maps into a reference visual feature map according to a common vehicle trajectory in the N monocular camera visual feature maps.
Alternatively, in some embodiments of the present invention,
an obtaining module 301, specifically configured to obtain visual odometer information; and determining the external parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
Alternatively, in some embodiments of the present invention,
the obtaining module 301 is further configured to obtain second inertial navigation information and second vehicle odometer information of the vehicle;
the processing module 302 is further configured to determine a second current estimated pose of the vehicle according to the fused pose of the vehicle, the second inertial navigation information of the vehicle, and the second vehicle odometer information.
Fig. 4 is a schematic diagram of an embodiment of a vehicle according to an embodiment of the present invention, where the vehicle includes an on-board system shown in fig. 3.
Fig. 5 is a schematic diagram of an embodiment of an in-vehicle system according to an embodiment of the present invention. The method can comprise the following steps:
a memory 501 in which executable program code is stored;
a processor 502 coupled to a memory 501;
a vehicle positioning module 503;
the vehicle positioning module 503 acquires first inertial navigation information and first vehicle odometer information of the vehicle and transmits the first inertial navigation information and the first vehicle odometer information to the processor 502, and the processor 502 calls the executable program code stored in the memory 501 to execute any one of the methods for determining the vehicle pose shown in fig. 1.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or a data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.