
SE2050258A1 - Machine learning based system, methods, and control arrangement for positioning of an agent - Google Patents

Machine learning based system, methods, and control arrangement for positioning of an agent

Info

Publication number
SE2050258A1
Authority
SE
Sweden
Prior art keywords
landmarks
agent
region
observed
computational model
Prior art date
Application number
SE2050258A
Inventor
Alireza Razavi
Navid Mahabadi
Original Assignee
Scania Cv Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Scania Cv Ab filed Critical Scania Cv Ab
Priority to SE2050258A priority Critical patent/SE2050258A1/en
Priority to PCT/SE2021/050189 priority patent/WO2021177887A1/en
Publication of SE2050258A1 publication Critical patent/SE2050258A1/en


Classifications

    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20 Instruments for performing navigational calculations
    • G01C 21/26 Navigation specially adapted for navigation in a road network
    • G01C 21/28 Navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 Map- or contour-matching
    • G01C 21/3602 Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles, using a camera
    • G01C 21/3804 Creation or updating of map data
    • G01S 7/4808 Evaluating distance, position or velocity data
    • G01S 13/862 Combination of radar systems with sonar systems
    • G01S 13/865 Combination of radar systems with lidar systems
    • G01S 13/867 Combination of radar systems with cameras
    • G01S 13/87 Combinations of radar systems, e.g. primary radar and secondary radar
    • G01S 13/931 Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S 15/06 Systems using reflection of acoustic waves for determining the position data of a target
    • G01S 15/931 Sonar systems specially adapted for anti-collision purposes of land vehicles
    • G01S 17/89 Lidar systems specially adapted for mapping or imaging
    • G01S 17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S 2013/9323 Radar for anti-collision purposes of land vehicles, alternative operation using light waves
    • G01S 2013/9324 Radar for anti-collision purposes of land vehicles, alternative operation using ultrasonic waves
    • G05D 1/43 Control of position or course in two dimensions
    • G05D 2101/10 Details of software or hardware architectures used for the control of position using artificial intelligence [AI] techniques
    • G06N 3/02 Neural networks
    • G06N 3/084 Learning by backpropagation, e.g. using gradient descent
    • G06V 20/56 Context or environment of the image exterior to a vehicle, by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Acoustics & Sound (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Navigation (AREA)

Abstract

Method (600) and machine learning based system (400) comprising a neural network (300), configured to create a computational model to be applied for positioning of an agent (100) within a region (200), by training the machine learning based system (400), based on the method (600). Further, a method (700) for positioning of an agent (100) within the region (200), based on a computational model and a control arrangement (510) in the agent (100), configured to determine position of the agent (100) within the region (200), are disclosed.

Description

MACHINE LEARNING BASED SYSTEM, METHODS, AND CONTROL ARRANGEMENT FOR POSITIONING OF AN AGENT

TECHNICAL FIELD

This document discloses a machine learning based system comprising a neural network, a control arrangement and methods for positioning of an agent. More particularly, a machine learning based system, a control arrangement and methods are disclosed for creating a computational model to be applied for positioning of an agent within a region, by training the machine learning based system and then determining the position of the agent within the region, based on the computational model.
BACKGROUND

The technical development for partly or fully autonomous vehicles is progressing rapidly, and fully autonomous vehicles will soon have a place in our everyday life. A key to making a smart and anticipating autonomous vehicle is to use reliable solutions for determining the position of the vehicle.
A well-known solution to this problem is to use a satellite-based radio navigation system such as e.g. the Global Positioning System (GPS) or any other similar Global Navigation Satellite System (GNSS) that provides geolocation and time information to a receiver maintained in the vehicle.
However, the receiver of the vehicle may not be able to detect the satellite signals in certain environments, such as for example mines, tunnels, drive-through shopping facilities, parking garages, and similar locations where the sky/GPS satellites are occluded by obstacles. So-called urban canyon environments present another problem for reliable navigation, where the GPS signals often are blocked by high buildings and there are not enough available satellite signals to estimate the position.
A vehicle may be parked inside a garage, for example at night, for charging the batteries of the vehicle when the vehicle is an electric vehicle. In the morning when the vehicle is started, no prior information may be available in terms of vehicle location, i.e. the onboard logic may not know where it is situated and has no possibility to receive satellite signals.

In case the vehicle position is known, a conventional way to estimate the position when the GPS signal is lost, for example when entering a tunnel, is to either use dead-reckoning methods or landmark based localisation. However, these methods do not address the problem of having no available position history and yet being able to find the current position in the map.
The currently existing satellite-based radio navigation systems are controlled by a rather few respective nations. In case of an international conflict, the functionality of the respective satellite-based radio navigation systems may be switched off for other users by the controlling nation, or alternatively be subject to foreign aggression, making satellite-based radio navigation unreliable or impossible in case of a large scale conflict. This is in particular a problem for military vehicles (both manned and autonomous) and other military applications such as guided missiles, which are expected to be able to navigate in particular in case of an armed conflict.
Yet another problem is navigation for autonomous robotic devices intended primarily or at least partly for indoor usage, such as for example a floor cleaning robot intended to operate at a shopping mall and other similar applications. Indoor navigation using satellite-based radio navigation systems is normally not a possible solution, so there is a need to find a solution enabling reliable positioning independently from satellite detection.
Reliable vehicle localisation, in particular for autonomous vehicles, is fundamental for true autonomous vehicle driving and traffic safety. Initial vehicle positioning is required when the operation is started. It is consequently important to find a solution to the positioning problem which is independent from satellite signal reception. The positioning solution needs to be made in real time, or with only a negligible time lag, so the solution needs to be computationally effective enough to be performed on the (relatively) limited computational resources onboard the vehicle.
Document US20190114507 presents a solution for improving navigation accuracy of a vehicle. A navigation system comprises an image sensor that generates a plurality of images, each image comprising one or more features. These images are analysed by the processor and a number of features are determined in each image. A computation engine generates navigation information, based on the features included as constraints in the navigation inference engine. The computation engine outputs the navigation information to improve navigation accuracy for the vehicle.
The purpose of the described solution seems to aim at improving the accuracy of GPS positioning, see paragraphs [0031]-[0032]. The analysis of images and features of the images, and the matching of sensor-captured images/features against prestored images/features in a database, is very computationally heavy and will require multiple processors operating in parallel for avoiding a time lag, see paragraph [0035].

It is desired to find a positioning solution which is completely independent from any availability of GPS/other satellite-based radio navigation systems. Also, computational resources are/will be a limiting factor for mass produced entities due to cost issues. It is desired to develop a solution which is independent from satellite-based radio navigation systems, and which enables real time positioning also with limited computational resources.
Document US20190033867 concerns a method for vehicle positioning. Images are captured and analysed, wherein objects/features in the images are detected. A relationship between the established objects/features is determined, after which a motion trajectory and a camera pose relative to a ground plane are determined using the plurality of objects/features.
The solution seems to require a satellite-based radio navigation system, see paragraph [0038]. Again, the disclosed method seems to require extensive computations for matching objects/features detected by a vehicle sensor against previously recorded objects/features stored in a database. It is not discussed in the document how real time navigation could be achieved.
Document US20190137280 discloses a method for localisation and mapping including recording an image at a camera mounted to a vehicle, the vehicle associated with a global system location. A landmark depicted in the image is identified with a landmark identification module of the computing system. A set of landmark parameters is extracted from the image with a feature extraction module of the computing system. A relative position between the vehicle and the landmark geographic location is determined at the computing system, based on a comparison between the extracted set of landmark parameters and the known parameters; and the global system location is updated at the computing system based on the relative position.
A very large number of plausible mathematical methods for realising the described solution is enumerated, see paragraph [0034], which however seems to rather engender a haze concerning the actual implementation of the solution.
None of the cited documents addresses the above-mentioned problem of localisation at start-up of the vehicle. It appears that further development is required for implementing non-satellite-based positioning of autonomous vehicles, enabling real time, or almost real time, positioning of the vehicle.
SUMMARY

It is therefore an object of this invention to solve at least some of the above problems and provide an enhanced positioning of an autonomous agent/vehicle.
According to a first aspect of the invention, this objective is achieved by a method for creating a computational model to be applied for positioning of an agent within a region. The method comprises the steps of selecting a number of landmarks in the region. Further, the method also comprises determining coordinates and an identity reference of the selected respective landmarks. The method in addition comprises observing at least two landmarks, out of the selected landmarks, from a first position within the region, and determining the vehicle position in relation to the observed landmarks. The method in addition comprises providing a neural network with data comprising the determined coordinates and identity references of the selected landmarks, the observed landmarks, and the determined vehicle position in relation to the observed landmarks. The method also comprises determining weights of the data provided to the neural network for successfully mapping the observed landmarks with the selected landmarks, thereby creating the computational model to be applied for positioning of the agent within the region.
According to a second aspect of the invention, this objective is achieved by a machine learning based system, comprising a neural network, configured to create a computational model to be applied for positioning of an agent within a region, by training the machine learning based system, based on the method according to the first aspect.
According to a third aspect of the invention, this objective is achieved by a method for positioning of an agent within a region, based on a computational model. The method comprises the steps of observing at least two landmarks within the region, from a position which is desired to determine. Also, the method further comprises obtaining the computational model of the region. The method in addition comprises providing data concerning the observed landmarks to the obtained computational model. In addition, the method also comprises obtaining identities and coordinates of the landmarks in the region, corresponding to the observed landmarks, based on the computational model. The method also comprises determining the position of the agent based on the relational position of the agent in relation to the observed landmarks and the obtained coordinates of the landmarks.
According to a fourth aspect of the invention, this objective is achieved by a control arrangement in an agent, configured to determine the position of the agent within a region, based on a computational model, according to the method of the third aspect.
According to a fifth aspect of the invention, this objective is achieved by a computer program comprising program code for performing a method according to any one of the first aspect or the third aspect, when the computer program is executed in a computer.
Thanks to the described aspects, by training a neural network to generate a computational model, positioning may be made in real time, or almost real time, by applying the ready-trained computational model at the agent/vehicle in question, with limited computational resources.
By performing the computationally heavy, but not time critical, preparation and training of the computational model on an advanced computational resource, which may be remote from the region in question, the ready trained computational model could be provided to the agent.
The agent then only has to make sensor detections of landmarks in the surroundings, insert this sensor data into the computational model and then obtain the most likely identities of the observed landmarks from the computational model. The position of the agent could then be determined by trigonometric methods based on the extracted known positions of the observed landmarks.
The positioning is unrelated to coverage of satellite-based positioning methods. Thus, a reliable positioning of the agent could be made also in an indoor environment, or in any other scenario where satellite-based positioning is not available or reliable.
Other advantages and additional novel features will become apparent from the subsequent detailed description.
FIGURES

Embodiments of the invention will now be described in further detail with reference to the accompanying figures, in which:

Figure 1 illustrates a vehicle with sensors, according to an embodiment.
Figure 2A illustrates a scenario of a region wherein a vehicle according to an embodiment is situated.
Figure 2B illustrates a scenario of a region wherein a vehicle according to an embodiment is situated.
Figure 3 schematically illustrates a network architecture according to an embodiment.
Figure 4 schematically illustrates a vehicle interior of a vehicle, according to an embodiment.
Figure 5 schematically illustrates a vehicle interior of a vehicle, according to an embodiment.
Figure 6 is a flow chart illustrating an embodiment of a method for creating a computational model to be applied for positioning of an agent.
Figure 7 is a flow chart illustrating an embodiment of a method for positioning of an agent within a region.
DETAILED DESCRIPTION

Embodiments of the invention described herein are defined as a machine learning based system comprising a neural network, a control arrangement and methods for positioning of an agent, which may be put into practice in the embodiments described below. These embodiments may, however, be exemplified and realised in many different forms and are not to be limited to the examples set forth herein; rather, these illustrative examples of embodiments are provided so that this disclosure will be thorough and complete.
Still other objects and features may become apparent from the following detailed description, considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for purposes of illustration and not as a definition of the limits of the herein disclosed embodiments, for which reference is to be made to the appended claims. Further, the drawings are not necessarily drawn to scale and, unless otherwise indicated, they are merely intended to conceptually illustrate the structures and procedures described herein.
Figure 1 illustrates an agent 100, embodied as a vehicle, driving on a road 105. The agent 100 comprises a number of environmental sensors 110a, 110b, 110c for assisting the onboard logic when driving the vehicle autonomously.
The sensors 110a, 110b, 110c may comprise e.g. a camera, a stereo camera, an infrared camera, a video camera, a radar, a lidar, an ultrasound device, a Passive Infrared (PIR) sensor, a time-of-flight camera, or similar devices, in different embodiments. The vehicle 100 may comprise several sensors 110a, 110b, 110c, directed in distinct directions around the vehicle 100. However, the majority of the sensors 110a, 110b, 110c may be directed in the driving direction. The sensors 110a, 110b, 110c may be of the same, or different, types. An advantage of using different types of sensors is that different sensor types have different capabilities, i.e. a camera may function well in daylight while a laser sensor may be advantageous in darkness, etc.
The agent 100 may comprise a vehicle, i.e. any means for transportation in a broad sense, such as e.g. a truck, a car, a motorcycle, a trailer, a bus, an aircraft, a watercraft, an Unmanned Aerial Vehicle (UAV), a spacecraft, or other similar manned means of conveyance running e.g. on wheels or similar means, rails, air, water or similar media. The agent 100 may alternatively comprise a drone, a lawnmower, a robot, an autonomous movable vending machine, an autonomous floor cleaner, etc., i.e. an autonomous moving device perhaps in particular intended for indoor usage or at least partly indoor usage.
However, the herein provided examples of the agent 100 are primarily illustrated by an autonomous road vehicle having wheels.
According to the provided solution, machine learning is utilised to match current landmark observations obtained by sensors 110a, 110b, 110c of the agent 100 to a given map, to find corresponding identities of the map landmarks that best describe the made observations. The positions of the identified landmarks have previously been determined and stored associated with the identity reference of the landmark in question. Having found the matches, the vehicle position of the agent 100 may be estimated in the map.
The solution may be divided into six main blocks: feature selection, data collection, neural network architecture, data preparation, training, and inference.
Figure 2A illustrates an agent 100 situated/parked at an unknown position in a region 200, for example in a garage. The region 200 comprises a number of landmarks 210. The landmarks 210 may comprise various non-mobile objects such as for example road signs, poles, lamps, dashed lines and/or other road marks, building corners, marks on a wall, ceiling and/or floor, and other permanently non-mobile objects.

In order to solve a problem using machine learning techniques, a first step may be feature engineering or feature selection, i.e. determining which landmarks 210 within sight to utilise. Thus, features/landmarks 210 are selected which can help the machine learning algorithm to discriminate between classes. Feature selection is not trivial because the correspondences of landmarks 210 are unknown: when a landmark is observed, it is not known which landmark 210 on the map it is, i.e. there is normally no observable identity reference of landmarks 210.
To solve this issue, at each potential location of the agent 100 within the region 200, the coordinates (x, y) or features of a number K of observed landmarks 220a, 220b, 220c with respect to the vehicle position are selected. In the illustrated embodiment of Figure 2B, K = 3. However, K may be set to any number within 2 ≤ K ≤ ∞, where a higher number of observed landmarks 220a, 220b, 220c may provide higher precision in the positioning, while a low number of observed landmarks 220a, 220b, 220c within 2 ≤ K ≤ ∞ leads to less complex computations.
For each position in the region 200, there are K observed landmarks 220a, 220b, 220c, each having two coordinates/features, in total 2K features.

In some embodiments wherein it is desired to determine more features to get better results, the agent 100 may be moved a bit and coordinates to the same K landmarks 220a, 220b, 220c which were observed at the initial location may be collected. In case this procedure is repeated L-1 times, the result will comprise a collection of 2KL features (including the features collected at the initial position).
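As an illustration, the following is a minimal sketch of how such a 2KL feature vector could be assembled, assuming landmarks are observed as (x, y) offsets relative to the agent and ordered by increasing distance. The function name, the ordering convention and the use of NumPy are illustrative assumptions, not part of the disclosed method.

```python
import numpy as np

def collect_features(poses, landmark_xy, K):
    """Build one training sample from L poses of the same agent.

    poses       : (L, 2) array of agent positions in the region
    landmark_xy : (N, 2) array of map landmark coordinates
    Returns a 2KL feature vector and the identities (map indices) of the
    K landmarks observed from the first pose, i.e. the ground truth.
    """
    feats, observed_ids = [], None
    for i, p in enumerate(poses):
        d = np.linalg.norm(landmark_xy - p, axis=1)
        nearest = np.argsort(d)[:K]            # K closest landmarks, by distance
        if i == 0:
            observed_ids = nearest             # ground-truth identity references
        feats.append((landmark_xy[nearest] - p).ravel())  # 2K relative coords
    return np.concatenate(feats), observed_ids            # shape (2KL,)
```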
Having selected the 2KL features, data is collected. Data is collected from the region 200 described above. The data may be collected from all "feasible" locations in the map of the region 200. For example, in case data is collected concerning a road passing the region 200, then the feasible locations are all the points on the road.

In case the region 200 is indoors, inside a building, then the feasible locations are all the walkable/drivable/passable locations inside the building. In addition to the collected data features, a ground truth concerning the collected data features is also recorded. The ground truth in this context may comprise a respective identity reference of the K landmarks 220a, 220b, 220c which are truly observed at each position, in other words, the entire map with corresponding identity references.
Figure 3 illustrates a general structure of a neural network architecture 300, according to an example.
A machine learning based system may comprise or be based on an artificial neural network 300 and/or a connectionist system. Thus, the machine learning based system may at least to some extent be inspired by biological neural networks that constitute animal/human brains. Such a system may learn (progressively improve performance on) tasks by considering examples, generally without task-specific programming.
The neural network 300 may be used for various purposes that require complex data handling and computations, for example in autonomous vehicles. Neural networks 300 may be trained to detect and classify specific patterns or signatures in data and can be used in various types of applications. Thus, the neural network 300 may be trained to detect and identify the landmarks 210 of the region 200.
The neural network architecture 300 may comprise a fully connected multilayer neural network. The number of neurons in the input layer may be 2N + 2KL, where N is the number of landmarks 210 in the map of the region 200, and the number of neurons in the output layer will also be N. K is the number of observed landmarks at each position of the agent 100. L is the number of sampling iterations, i.e. at each position where K landmarks are observed, the process is iterated L times.
There may be one or several hidden layers in between to have a better representation. The activation functions of the hidden layers may be some nonlinear function (e.g. tanh) and the activation of the output layer may comprise softmax (sigmoids which sum up to 1). The different layers may perform different kinds of transformations on their inputs. Signals may travel from the first (input) to the last (output) layer, possibly after traversing the layers multiple times. The neural network 300 may thereby solve problems much in the same way that a human brain would, i.e. by studying examples and making conclusions.

In a subsequent step, the data is prepared for feeding into the network 300. The input to the network 300 comprises the (x, y) coordinates of all landmarks 210 in the map of the area 200 plus the features as described above. This means that with N landmarks 210 in the map, where N is an arbitrary positive integer, and at each point the (x, y) coordinates of K observed landmarks 220a, 220b, 220c, the input size is 2N + 2KL, where N, K and L are defined above.
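For concreteness, a minimal sketch of such a network in PyTorch is given below. The hidden layer count and widths, and the example values of N, K and L, are illustrative assumptions; the description above only fixes the input size 2N + 2KL, the output size N, nonlinear hidden activations (e.g. tanh) and a softmax output.

```python
import torch.nn as nn

N, K, L = 50, 3, 2                    # assumed example values
input_size = 2 * N + 2 * K * L        # map coordinates plus observation features

model = nn.Sequential(
    nn.Linear(input_size, 128),       # hidden widths are illustrative
    nn.Tanh(),
    nn.Linear(128, 128),
    nn.Tanh(),
    nn.Linear(128, N),                # one output neuron per map landmark
    nn.Softmax(dim=-1),               # outputs sum to 1, as described
)
```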
A success factor of creating a good neural network 300 may be to train the network 300 with a big and well labelled data set. The training data may be appropriately labelled in order for the neural network 300 to learn and be able to distinguish the unique identifiers or patterns in the data, to be able to achieve the high level goal, which can be to detect objects, surfaces or other unique landmarks 210.
The output layer may then have size N. This is because in map matching the task is to find the identity of the K observed landmarks 220a, 220b, 220c at each point. Since there are a total of N landmarks 210, there may be N neurons in the output layer. The expectation is to have the K neurons corresponding to the true K observed landmarks 220a, 220b, 220c at the initial location of the agent 100 attain the highest values in the output during a test phase.
Artificial neurons of the network 300 may have a weight that adjusts as learning proceeds. The weight may increase or decrease the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is sent only if the aggregate signal crosses that threshold. During the training of the network 300, the target vectors fed to the output layer of the network 300 are N-dimensional vectors where N − K elements are zero and the K nonzero elements correspond to the K observed landmarks 220a, 220b, 220c.
The value of these K nonzero elements may be proportional to their initial distance to the agent 100 and may be normalised such that the sum of their values becomes 1. Thereby, a representation of a probability distribution may be achieved in some embodiments.
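A minimal sketch of such a target vector, following the description above (values proportional to the initial distances and normalised to sum to 1); the function name and the NumPy usage are illustrative assumptions.

```python
import numpy as np

def make_target(observed_ids, distances, N):
    """N-dimensional target: zero everywhere except at the K observed
    landmarks, whose values are proportional to their initial distance
    to the agent and normalised to sum to 1."""
    target = np.zeros(N)
    target[observed_ids] = distances / distances.sum()
    return target

# e.g. landmarks 4, 17 and 23 observed at distances 2, 5 and 9 metres:
# make_target(np.array([4, 17, 23]), np.array([2.0, 5.0, 9.0]), N=50)
```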
The training of the network 300 may be done using the celebrated backpropagation algorithm, with input and target data prepared as explained above. The loss function may be the cross entropy between the target vectors and the output vectors produced by the network 300 when fed the input vectors.
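As a sketch, a training loop of this kind could look as follows in PyTorch, reusing the `model` from the earlier sketch and assuming a `loader` that yields (input, target) float-tensor batches; the optimiser choice, learning rate and epoch count are illustrative assumptions. Since the targets are soft distributions rather than class indices, the cross entropy is written out explicitly.

```python
import torch

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimiser

def soft_cross_entropy(pred, target, eps=1e-9):
    # cross entropy between a soft target distribution and the softmax output
    return -(target * torch.log(pred + eps)).sum(dim=-1).mean()

for epoch in range(200):              # epoch count is illustrative
    for x, t in loader:               # batches of (input, target) pairs
        optimiser.zero_grad()
        loss = soft_cross_entropy(model(x), t)
        loss.backward()               # backpropagation
        optimiser.step()
```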
Finally, an inference phase may be performed. Inference in this context means that the trained model is utilised in the test phase to find the respective identity references of the K observed landmarks 220a, 220b, 220c. To this end, new data is collected during the test phase, an input vector is built similarly to the way it is prepared for training, and it is fed to the network 300. The K neurons which get the highest values in the output layer estimate the K observed landmarks 220a, 220b, 220c.
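A sketch of this selection step, again assuming the PyTorch `model` above; `torch.topk` simply returns the indices of the K highest-scoring output neurons.

```python
import torch

def identify_landmarks(model, x, K):
    """Feed one 2N+2KL input vector and return the indices of the K
    highest-valued output neurons, i.e. the estimated identities of
    the K observed landmarks."""
    with torch.no_grad():
        scores = model(x)
    return torch.topk(scores, K).indices
```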
Having determined the identity references of the observed landmarks 220a, 220b, 220c, the position of the agent 100 may be determined based on extraction of the stored positions of the observed landmarks 220a, 220b, 220c.
Figure 4 illustrates an example of a scenario wherein an agent 100, such as the one previously discussed and illustrated in Figure 1, Figure 2A and/or Figure 2B, is shown as it may be perceived by a potential passenger of the agent 100.
Also, Figure 4 illustrates a machine learning based system 400 comprising a neural network 300. The neural network 300 is configured to create a computational model to be applied for positioning of an agent 100 within a region 200 by training the machine learning based system 400. The neural network 300 is provided with a selected number of landmarks 210 in the region 200. Also, coordinates and an identity reference of the selected respective landmarks 210 are provided to the neural network 300.
The sensors 110a, 110b of the agent 100 may detect/observe at least two landmarks 220a, 220b out of the selected landmarks 210, from the agent 100 in a position in the region 200.
Although one single sensor 110a, 110b of the agent 100 typically is sufficient for performing the disclosed method, a plurality of sensors 110a, 110b of the agent 100 may alternatively be utilised for observing the at least two landmarks 220a, 220b. Using the plurality of sensors 110a, 110b may improve precision of the sensor detections, thereby providing a statistically better result.
Sensor data related to the observed landmarks 220a, 220b as captured by the sensors 110a, 110b may be provided over a wireless communication interface via a wireless transceiver 410.
The wireless communication may be made over a wireless communication interface, such as e.g. Vehicle-to-Vehicle (V2V) communication, or Vehicle-to-Infrastructure (V2I) communication. The common term Vehicle-to-Everything (V2X) is sometimes used.

In some embodiments, the wireless communication between the agent 100 and the machine learning based system 400 may be performed via V2X communication, e.g. based on Dedicated Short-Range Communications (DSRC) devices. DSRC works in the 5.9 GHz band with a bandwidth of 75 MHz and an approximate range of 1000 m in some embodiments.
The wireless communication may be made according to any IEEE standard for wireless vehicular communication like e.g. a special mode of operation of IEEE 802.11 for vehicular networks called Wireless Access in Vehicular Environments (WAVE). IEEE 802.11p is an extension to the 802.11 Wireless LAN medium access layer (MAC) and physical layer (PHY) specification.
Such a wireless communication interface may comprise, or at least be inspired by, wireless communication technology such as Wi-Fi, Wireless Local Area Network (WLAN), Ultra Mobile Broadband (UMB), Bluetooth (BT), Near Field Communication (NFC), Radio-Frequency Identification (RFID), to name but a few possible examples of wireless communications in some embodiments.
The communication may alternatively be made over a wireless interface comprising, or at least being inspired by, radio access technologies such as e.g. 3GPP LTE, LTE-Advanced, E-UTRAN, UMTS, GSM, or similar, just to mention some few options, via a wireless communication network.
Based on the sensor data related to the observed landmarks 220a, 220b, the vehicle position in relation to the observed landmarks 220a, 220b may be determined and provided to the machine learning based system 400.
The neural network 300 is then provided with data comprising the determined coordinates and identity references of the selected landmarks 210, the observed landmarks 220a, 220b, and the determined vehicle position in relation to the observed landmarks 220a, 220b.
Having prepared the data for the neural network 300, the training phase may be done by determining weights of the data provided to the neural network 300 for successfully mapping the observed landmarks 220a, 220b, 220c with the selected landmarks 210, thereby creating the computational model to be applied for positioning of the agent 100 within the region 200.
The created computational model may be stored in a database 405, from which it later may be obtained by the same or (more likely) another agent 100 approaching/entering the region 200.
The training of the neural network 300 may be time consuming and computationally heavy. However, once the computational model is created, the performance of the test phase/inference phase is computationally effective and enables real time positioning, also with limited computational resources, as often may be the case onboard an autonomous vehicle.
Figure 5 illustrates an example of a scenario wherein an agent 100, such as the one previously discussed and illustrated in Figure 1, Figure 2A, Figure 2B, and/or Figure 3, is shown as it may be perceived from a perspective of the agent 100.
The agent 100 comprises a control arrangement 510, configured to determine the position of the agent 100 within a region 200, based on a computational model created by the neural network 300 of the machine learning based system 400.
The control arrangement 510 is configured to observe at least two landmarks 220a, 220b within the region 200, from a position which is desired to determine, via the sensors 110a, 110b. Then the computational model of the region 200 is obtained, for example from the database 405 as described in Figure 4 and the corresponding text segment, and/or from a database 520 of the agent 100, to which the computational model previously has been provided.
The control arrangement 510 is also configured to provide data concerning the observed landmarks 220a, 220b to the obtained computational model. By adding the data of the observed landmarks 220a, 220b to the computational model, identities and coordinates of the landmarks 210 in the region 200, corresponding to the observed landmarks 220a, 220b, are obtained by running the computational model. Hereby, the position of the agent 100, based on the relational position of the agent 100 in relation to the observed landmarks 220a, 220b and the obtained coordinates of the landmarks 210, could be determined by knowledge of the respective geographical/absolute positions of the observed landmarks 220a, 220b and the relative respective position (cartesian coordinates, or polar coordinates and/or distance) between the landmarks 220a, 220b and the vehicle, using triangulation, trilateration, true range multilateration or a similar trigonometric method or combination of methods.
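To illustrate the last step, a minimal sketch of a true-range multilateration fix (one of the trigonometric options mentioned above) is given below; it assumes at least three known landmark positions and measured distances to them, and linearises the range equations by subtracting the first one. The function name and the least-squares approach are illustrative assumptions.

```python
import numpy as np

def trilaterate(landmark_xy, ranges):
    """Least-squares position fix from known landmark positions (M, 2)
    and measured distances (M,) to them, M >= 3. Subtracting the first
    range equation from the others turns |x - p_i|^2 = r_i^2 into a
    linear system A x = b."""
    p0, r0 = landmark_xy[0], ranges[0]
    A = 2.0 * (landmark_xy[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(landmark_xy[1:]**2, axis=1) - np.sum(p0**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# e.g. landmarks at (0,0), (10,0), (0,10) with measured ranges
# 5.0, 8.06, 6.71 give a position near (3, 4).
```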
Hereby, the positioning of agents 100 operating indoors or in areas without coverage of GPS/GNSS, and/or of agents 100 not having a receiver for GPS/GNSS, could thus be made. The initial position of the agent 100 when starting the agent 100 after having been parked for a period of time, for example over night, may thereby be determined.
As the computationally heavy establishment of the computational model is performed by the machine learning based system 400 comprising the neural network 300, as described in Figure 4 and in the corresponding text segments of the specification, the positioning of the agent 100 could be made in real time without time lag, with relatively unsophisticated computational resources.
Figure 6 illustrates an example of a method 600 according to an embodiment. The flow chart in Figure 6 shows the method 600 for creating a computational model to be applied for positioning of an agent 100 within a region 200.

In order to be able to create the computational model for agent positioning, the method 600 may comprise a number of steps 601-606. However, some of these steps 601-606 may be performed solely in some alternative embodiments or performed differently in different embodiments. Further, the described steps 601-606 may be performed in a somewhat different chronological order than the numbering suggests. The steps 601-606 may in some embodiments be repeated for a plurality of possible positions of the agent 100 within the region 200. The method 600 may comprise the subsequent steps:

Step 601 comprises selecting a number of landmarks 210 in the region 200.
The landmarks 210 may be selected to be easily distinguished from the environment. An important factor is to select landmarks 210 which are permanent, such as building parts, poles, road marks and similar structures. A parked vehicle or oil stains on the ground underneath are very inappropriate to be selected as landmarks 210, as the vehicle may be repositioned and the oil stains may suddenly disappear during a cleaning campaign.
Step 602 comprises determining coordinates and an identity reference of the selected 601 respective landmarks 210.
The identity reference may comprise any kind of locally, i.e. within the region 200, unique number or string of characters, for avoiding misidentification of the landmark 210.
The coordinates may be absolute or relative in different embodiments, comprising cartesian coordinates, polar coordinates, etc.
Step 603 comprises observing at least two landmarks 220a, 220b, 220c, out of the selected 601 landmarks 210, from a first position within the region 200.
The observed landmarks 220a, 220b, 220c may be arranged in the neural network 300 in a predetermined order, such as in order of decreasing (or alternatively increasing) distance from the agent 100: the first landmark may be the closest, the second landmark may be the second closest, the third may be the third closest, etc., in some embodiments.
Step 604 comprises determining the vehicle position in relation to the observed 603 landmarks 220a, 220b, 220c.
The vehicle position may be a relative position, for example a distance and/or an angle in relation to the landmarks 220a, 220b, 220c, but may in alternative embodiments be expressed as absolute coordinates.

Step 605 comprises providing a neural network 300 with data comprising the determined 602 coordinates and identity references of the selected 601 landmarks 210, the observed 603 landmarks 220a, 220b, 220c, and the determined 604 vehicle position in relation to the observed 603 landmarks 220a, 220b, 220c.
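Relating to the relative position of step 604, a (distance, angle) observation of a landmark can be converted to the (x, y) offset used as a feature; a minimal sketch, with the function name and the agent-local frame convention as illustrative assumptions:

```python
import numpy as np

def relative_to_xy(distance, bearing_rad):
    """Convert a (distance, bearing) landmark observation into an (x, y)
    offset in the agent's local frame."""
    return np.array([distance * np.cos(bearing_rad),
                     distance * np.sin(bearing_rad)])
```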
Step 606 comprises determining weights of the data provided 605 to the neural network 300 for successfully mapping the observed 603 landmarks 220a, 220b, 220c with the selected 601 landmarks 210, thereby creating the computational model to be applied for positioning of the agent 100 within the region 200.
Figure 7 illustrates an example of a method 700 for positioning of an agent 100 within a region 200, based on a computational model. The computational model has been appropriately trained by the method 600 for creating the computational model, disclosed in Figure 6 and described in the corresponding text segment of the specification.

In order to be able to position the agent within the region, the method 700 may comprise a number of steps 701-706. However, some of these steps 701-706 may be performed solely in some alternative embodiments, like step 701, or be performed somewhat differently in different embodiments. Further, the described steps 701-706 may be performed in a somewhat different chronological order than the numbering suggests. The method 700 may comprise the subsequent steps:

Step 701, which only may be performed in some embodiments, comprises determining that the agent 100 is approaching the region 200, based on an identity reference of the region 200.

In some embodiments where the agent 100 continuously operates within the region 200, the trained computational model of that particular region 200 may be stored permanently in a database 520 of the agent 100.
Step 702 comprises observing at least two landmarks 220a, 220b, 220c within the region 200, from a position which is desired to determine.
The observation may typically be made by one single sensor 110a, 110b, 110c. Alternatively, several sensors 110a, 110b, 110c may be used, which may comprise the same or different types of sensors such as a camera, a stereo camera, an infrared camera, a video camera, a radar, a lidar, an ultrasound device, a time-of-flight camera, or similar devices in different embodiments.
Step 703 comprises obtaining the computational model of the region 200.

The computational model has previously been established and trained by the neural network 300 of the machine learning based system 400 by performance of the above described method steps 601-606.
The computational model may be obtained from the database 405 of the machine learning based system 400 in some embodiments, or from a database 520 of the control arrangement 510.
Step 704 comprises providing data concerning the observed 702 landmarks 220a, 220b, 220c to the obtained 703 computational model.
The provided data comprises sensor-based data of the observed 702 landmarks 220a, 220b, 220c, distance estimation, position in relation to the respective landmark 220a, 220b, 220c, etc.
Step 705 comprises obtaining identities and coordinates of the landmarks 210 in the region 200, corresponding to the observed landmarks 220a, 220b, 220c, based on the computational model.
By running the trained computational model of the machine learning based system 400 that has been obtained, the most plausible landmarks 210 in the region 200, corresponding to the observed landmarks 220a, 220b, 220c, are output by the computational model.
Step 706 comprises determining the position of the agent 100 based on the relational position of the agent 100 in relation to the observed landmarks 220a, 220b, 220c and the obtained 705 coordinates of the landmarks 210.
Hereby a solution is provided for finding the position of the agent 100 in environments where GPS positioning is denied, where GPS satellites have very poor coverage, and/or where satellite-based positioning is inaccurate. The obtained 703 computational model forms a map over the region, against which sensor detections of observed landmarks 220a, 220b, 220c may be matched.
The localisation of the agent 100 may hereby be made by matching currently made detections of landmarks 220a, 220b, 220c with identities and coordinates of the landmarks 210 in the region 200, using machine learning techniques.
The control arrangement 510 in the agent 100 is configured to determine the position of the agent 100 within a region 200, based on at least some of the method steps 701-706.
The described method steps 701-706 may be performed by a computer algorithm, a machine executable code, a non-transitory computer-readable medium, or a software instruction programmed into a suitable programmable logic such as a processor in the control arrangement 510, in various embodiments.
The computer program product mentioned above may be provided for instance in the form of a data carrier carrying computer program code for performing at least some of the steps 701-706 according to some embodiments when being loaded into the one or more processors of the control arrangement 510. The data carrier may be, e.g., a hard disk, a CD ROM disc, a memory stick, an optical storage device, a magnetic storage device or any other appropriate medium such as a disk or tape that may hold machine readable data in a non-transitory manner. The computer program product may furthermore be provided as computer program code on a server and downloaded to the control arrangement 510 remotely, e.g., over an Internet or an intranet connection.
The terminology used in the description of the embodiments as illustrated in the accompanying drawings is not intended to be limiting of the described methods 600, 700, machine learning based system 400, control arrangement 510, computer program, and/or computer-readable medium. Various changes, substitutions and/or alterations may be made without departing from invention embodiments as defined by the appended claims.
As used herein, the term "and/or" comprises any and all combinations of one or more of the associated listed items. The term "or" as used herein is to be interpreted as a mathematical OR, i.e., as an inclusive disjunction, not as a mathematical exclusive OR (XOR), unless expressly stated otherwise. In addition, the singular forms "a", "an" and "the" are to be interpreted as "at least one", thus also possibly comprising a plurality of entities of the same kind, unless expressly stated otherwise. It will be further understood that the terms "includes", "comprises", "including" and/or "comprising" specify the presence of stated features, actions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, actions, integers, steps, operations, elements, components, and/or groups thereof. A single unit such as e.g. a processor may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless communication systems.

Claims (9)

1. A method (600) for creating a computational model to be applied for positioning of an agent (100) within a region (200), wherein the method (600) comprises the steps of:
selecting (601) a number of landmarks (210) in the region (200);
determining (602) coordinates and an identity reference of the selected (601) respective landmarks (210);
observing (603) at least two landmarks (220a, 220b, 220c), out of the selected (601) landmarks (210), from a first position within the region (200);
determining (604) vehicle position in relation to the observed (603) landmarks (220a, 220b, 220c);
providing (605) a neural network (300) with data comprising the determined (602) coordinates and identity references of the selected (601) landmarks (210), the observed (603) landmarks (220a, 220b, 220c), and the determined (604) vehicle position in relation to the observed (603) landmarks (220a, 220b, 220c); and
determining (606) weights of the data provided (605) to the neural network (300) for successfully mapping the observed (603) landmarks (220a, 220b, 220c) with the selected (601) landmarks (210), thereby creating the computational model to be applied for positioning of the agent (100) within the region (200).
2. The method (600) according to claim 1, wherein the steps 601-606 are repeated for a plurality of possible positions of the agent (100) within the region (200).
3. The method (600) according to any one of claim 1 or claim 2, wherein the observed (603) landmarks (220a, 220b, 220c) are arranged in the neural network (300) in a predetermined order.
4. A machine learning based system (400) comprising a neural network (300), configured to create a computational model to be applied for positioning of an agent (100) within a region (200), by training the machine learning based system (400), based on the method (600) according to any one of claims 1-3.
5. A method (700) for positioning of an agent (100) within a region (200), based on a computational model, wherein the method (700) comprises the steps of:
observing (702) at least two landmarks (220a, 220b, 220c) within the region (200), from a position that is to be determined;
obtaining (703) the computational model of the region (200);
providing (704) data concerning the observed (702) landmarks (220a, 220b, 220c) to the obtained (703) computational model;
obtaining (705) identities and coordinates of the landmarks (210) in the region (200), corresponding to the observed landmarks (220a, 220b, 220c), based on the computational model; and
determining (706) position of the agent (100) based on relational position of the agent (100) in relation to the observed landmarks (220a, 220b, 220c) and the obtained (705) coordinates of the landmarks (210).
(An illustrative sketch of this positioning method is given after the claims.)
6. The method (700) according to claim 5, further comprising the step of:
determining (701) that the agent (100) is approaching the region (200), based on an identity reference of the region (200).
7. A control arrangement (510) in an agent (100), configured to determine position of the agent (100) within a region (200), based on a computational model, created according to the method (700) according to any one of claims 5-6.
8. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method (600, 700) according to any one of claims 1-3 and/or claims 5-6.
9. A computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the steps of the method (600, 700) according to any one of claims 1-3 and/or claims 5-6.
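
The following is a minimal, non-authoritative sketch of the training method of claims 1-3 (method 600), written in Python with PyTorch. Everything beyond what the claims state is an assumption of this sketch: the application specifies neither a network architecture nor a feature encoding, so the classifier layout, the per-landmark features (relative position, range, and bearing), and all names such as LandmarkMatcher and N_LANDMARKS are hypothetical.

    # Hypothetical sketch only; architecture, features, and names are
    # assumptions, not taken from the application.
    import torch
    import torch.nn as nn

    # Steps 601-602: a map of N selected landmarks, each identified by its
    # index and known 2-D coordinates (synthetic here).
    N_LANDMARKS = 50
    landmark_coords = torch.rand(N_LANDMARKS, 2) * 100.0

    class LandmarkMatcher(nn.Module):
        """Maps an observation of one landmark to the identity of the
        selected landmark it most likely corresponds to."""
        def __init__(self, n_landmarks: int, obs_dim: int = 4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, n_landmarks),  # one logit per selected landmark
            )

        def forward(self, x):
            return self.net(x)

    model = LandmarkMatcher(N_LANDMARKS)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Steps 603-606: simulate agent positions, observe landmarks relative to
    # the agent, and learn the weights that map observations back to landmark
    # identities.
    for step in range(1000):
        true_ids = torch.randint(0, N_LANDMARKS, (32,))      # observed landmarks
        agent_pos = torch.rand(32, 2) * 100.0                # step 604 ground truth
        rel = landmark_coords[true_ids] - agent_pos          # relative position
        rng = rel.norm(dim=1, keepdim=True)                  # range to landmark
        brg = torch.atan2(rel[:, 1:2], rel[:, 0:1])          # bearing to landmark
        obs = torch.cat([rel, rng, brg], dim=1)              # observation features
        loss = loss_fn(model(obs), true_ids)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

A single observation encoded this way is not always unambiguous, which is one reason the claimed method repeats steps 601-606 over a plurality of positions (claim 2) and fixes the order in which observed landmarks are presented to the network (claim 3).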
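
For the positioning method of claims 5-6 (method 700), once step 705 has returned the identities and coordinates of the observed landmarks, step 706 reduces to estimating the agent's position from the known landmark coordinates and the relational measurements. The claims leave this computation open; the sketch below shows one conventional choice, linear least-squares trilateration from range measurements, and the function name estimate_position is hypothetical.

    # Hypothetical sketch of step 706; the claims do not prescribe this method.
    import numpy as np

    def estimate_position(landmark_xy: np.ndarray, ranges: np.ndarray) -> np.ndarray:
        """Least-squares trilateration: landmark_xy holds the coordinates
        obtained in step 705, ranges the measured distances from step 702.
        Three or more landmarks give a unique 2-D fix; with exactly two
        landmarks there are in general two mirror solutions, so bearing
        information would be needed to disambiguate."""
        x0, y0 = landmark_xy[0]
        r0 = ranges[0]
        # Subtracting the first circle equation from the others linearises
        # the system: 2 (L_i - L_0) p = r0^2 - r_i^2 + |L_i|^2 - |L_0|^2.
        A = 2.0 * (landmark_xy[1:] - landmark_xy[0])
        b = (r0**2 - ranges[1:]**2
             + (landmark_xy[1:]**2).sum(axis=1) - x0**2 - y0**2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Usage: three identified landmarks and exact ranges for clarity.
    landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    true_pos = np.array([3.0, 4.0])
    ranges = np.linalg.norm(landmarks - true_pos, axis=1)
    print(estimate_position(landmarks, ranges))  # approximately [3. 4.]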
SE2050258A 2020-03-06 2020-03-06 Machine learning based system, methods, and control arrangement for positioning of an agent SE2050258A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
SE2050258A SE2050258A1 (en) 2020-03-06 2020-03-06 Machine learning based system, methods, and control arrangement for positioning of an agent
PCT/SE2021/050189 WO2021177887A1 (en) 2020-03-06 2021-03-04 Machine learning based system, methods, and control arrangement for positioning of an agent

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
SE2050258A SE2050258A1 (en) 2020-03-06 2020-03-06 Machine learning based system, methods, and control arrangement for positioning of an agent

Publications (1)

Publication Number Publication Date
SE2050258A1 true SE2050258A1 (en) 2021-09-07

Family

ID=74875261

Family Applications (1)

Application Number Title Priority Date Filing Date
SE2050258A SE2050258A1 (en) 2020-03-06 2020-03-06 Machine learning based system, methods, and control arrangement for positioning of an agent

Country Status (2)

Country Link
SE (1) SE2050258A1 (en)
WO (1) WO2021177887A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12025985B2 (en) * 2021-05-26 2024-07-02 Drobot, Inc. Methods and apparatus for coordinating autonomous vehicles using machine learning
CN113867342B (en) * 2021-09-18 2023-09-26 中国人民解放军海军工程大学 An anti-ship missile formation recognition target selection system based on Hough transform and optimized K-means clustering

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10788830B2 (en) 2017-07-28 2020-09-29 Qualcomm Incorporated Systems and methods for determining a vehicle position
US10929713B2 (en) 2017-10-17 2021-02-23 Sri International Semantic visual landmarks for navigation
US10767996B2 (en) * 2018-05-08 2020-09-08 Honeywell International Inc. System and methods for reducing the map search space requirements in a vision-inertial navigation system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150228077A1 (en) * 2014-02-08 2015-08-13 Honda Motor Co., Ltd. System and method for mapping, localization and pose correction
US20190137280A1 (en) * 2016-08-09 2019-05-09 Nauto, Inc. System and method for precision localization and mapping
US20180188736A1 (en) * 2016-08-16 2018-07-05 Faraday&Future Inc. System and method for vehicle localization assistance using sensor data
WO2018104563A2 (en) * 2016-12-09 2018-06-14 Tomtom Global Content B.V. Method and system for video-based positioning and mapping
EP3346418A1 (en) * 2016-12-28 2018-07-11 Volvo Car Corporation Method and system for vehicle localization from camera image

Also Published As

Publication number Publication date
WO2021177887A1 (en) 2021-09-10

Similar Documents

Publication Publication Date Title
US11527084B2 (en) Method and system for generating a bird&#39;s eye view bounding box associated with an object
EP3916623B1 (en) Devices and methods for accurately identifying objects in a vehicle&#39;s environment
US10203695B2 (en) Autonomous vehicle re-tasking during performance of a programmed task based on detection of a task interruption scenario
CN109859533B (en) Vision-based Collaborative Conflict Avoidance
Barry et al. High‐speed autonomous obstacle avoidance with pushbroom stereo
Yu et al. Cooperative path planning for target tracking in urban environments using unmanned air and ground vehicles
Coppola et al. On-board communication-based relative localization for collision avoidance in micro air vehicle teams
Alhafnawi et al. A survey of indoor and outdoor uav-based target tracking systems: Current status, challenges, technologies, and future directions
CN116540706A (en) An unmanned aerial vehicle provides local path planning system and method for ground unmanned vehicles
SE2050258A1 (en) Machine learning based system, methods, and control arrangement for positioning of an agent
Scerri et al. Transitioning multiagent technology to UAV applications.
Patel et al. Point me in the right direction: Improving visual localization on UAVs with active gimballed camera pointing
Cheung et al. UAV-UGV Collaboration with a PackBot UGV and Raven SUAV for Pursuit and Tracking of a Dynamic Target
Cao et al. Cooperative lidar localization and mapping for v2x connected autonomous vehicles
Toussaint et al. Localizing RF targets with cooperative unmanned aerial vehicles
Ramirez et al. Moving target acquisition through state uncertainty minimization
Kim et al. Detecting and localizing objects on an unmanned aerial system (uas) integrated with a mobile device
Alhafnawi et al. A review of indoor uav-based tracking systems: classification, status, and challenges
Jia et al. A distributed method to form UAV swarm based on monocular vision
Sharma et al. An insight on UAV/drone autonomous navigation methods and applications: a review
Talwandi et al. An Automatic Navigation System for New Technical Advanced Drones for Different Applications
Shu et al. An imu/sonar-based extended kalman filter for mini-uav localization in indoor environment
Guler et al. Infrastructure-free Localization of Aerial Robots with Ultrawideband Sensors
WO2021106388A1 (en) Information processing device, information processing method, and information processing program
Choi et al. Collision avoidance scheme for micro UAVs delivering information

Legal Events

Date Code Title Description
NAV Patent application has lapsed