
CN104378735B - Indoor orientation method, client and server - Google Patents


Info

Publication number
CN104378735B
Authority
CN
China
Prior art keywords
positioning
user
server
picture
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410643391.9A
Other languages
Chinese (zh)
Other versions
CN104378735A (en)
Inventor
徐涵
杨铮
赵弋洋
苗欣
毛续飞
刘克彬
刘云浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruan Internet Of Things Technology Group Co ltd
Run Technology Co ltd
Original Assignee
WUXI RUIAN TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUXI RUIAN TECHNOLOGY CO LTD filed Critical WUXI RUIAN TECHNOLOGY CO LTD
Priority to CN201410643391.9A priority Critical patent/CN104378735B/en
Publication of CN104378735A publication Critical patent/CN104378735A/en
Application granted granted Critical
Publication of CN104378735B publication Critical patent/CN104378735B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W64/00Locating users or terminals or network equipment for network management purposes, e.g. mobility management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Navigation (AREA)

Abstract

The invention discloses an indoor positioning method, client, server and system. The method includes: acquiring an indoor positioning picture taken by a user with a terminal device, together with the sensor positioning data of the terminal device at the time of shooting; sending the positioning picture and the sensor positioning data to a server, so that the server locates the user's position according to the positioning picture and the sensor positioning data; and receiving the positioning result returned by the server.

Description

Indoor positioning method, client and server
Technical Field
The invention relates to the technical field of mobile positioning, in particular to an indoor positioning method, a client and a server.
Background
With the continued deepening of pervasive computing research and the need of ever more applications to know the location of a target, location-based services are attracting increasing attention, and the accompanying indoor positioning problem has likewise come to the fore.
With the continuous progress of society, high-rise buildings keep springing up in cities, people spend an ever larger share of their daily lives indoors, and the demand for accurate and convenient indoor positioning services grows increasingly urgent. The main application scenarios of indoor positioning include indoor navigation of complex buildings (such as airports, superstores, museums, etc.), pervasive computing based on geographic position (such as crowd sensing), security monitoring related to position information, accurate placement of advertisements, serving social network functions, etc.
Most existing indoor positioning technologies are based on radio frequency signals. The main idea is to deploy additional fixed reference tags (also called auxiliary tags) as reference points in the positioning system, and to calculate the coordinates of the tag to be positioned by comparing the signal strength measured at the reference points with that of the tag to be positioned. In a real environment, the complicated indoor structure and human movement cause reflection, refraction and absorption of wireless signals, so their propagation is uncertain and accurate indoor positioning is difficult.
Disclosure of Invention
In view of this, the present invention provides an indoor positioning method, a client and a server, which can achieve accurate indoor positioning.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention discloses an indoor positioning method, which comprises the following steps:
acquiring an indoor positioning picture shot by a user by using terminal equipment and sensor positioning data of the terminal equipment during shooting;
sending the positioning picture and the sensor positioning data to a server so that the server can position the position of the user according to the positioning picture and the sensor positioning data;
and receiving a positioning result returned by the server.
Further, the server locates the position of the user according to the positioning picture and the sensor positioning data, including:
the server searches a database to match the sensor positioning data to obtain an initial positioning scene candidate set;
and matching the positioning picture within the initial positioning scene candidate set to determine the position of the user.
Further, the matching the positioning picture in the initial positioning scene candidate set to determine the position of the user includes:
determining a shooting scene of a user and a corresponding 3D model;
calculating the orientation of a user in the 3D model by adopting an SFM technology;
the orientation in the 3D model is converted into a physical orientation by coordinate conversion.
Further, after determining the position of the user, the method further includes:
and updating the data in the database according to the positioning picture and the sensor positioning data.
Further, after displaying the position of the user, the method further includes:
receiving a target position input by a user, sending the target position to the server, so that the server calculates an optimal path for the user to reach the target position according to the position of the user and the target position, and returning navigation information to a client, wherein the navigation information comprises the optimal path;
and receiving the navigation information returned by the server.
The invention also discloses a client, comprising:
the acquisition module, used for acquiring an indoor positioning picture taken by a user with a terminal device and the sensor positioning data of the terminal device at the time of shooting;
the first sending module is used for sending the positioning picture and the sensor positioning data to a server so that the server can position the position of the user according to the positioning picture and the sensor positioning data;
and the first receiving module is used for receiving the positioning result returned by the server.
Further, the first receiving module is further configured to receive a target location input by a user after receiving the positioning result returned by the server, and receive navigation information returned by the server;
the first sending module is further configured to send the target location to the server, so that the server calculates an optimal path for the user to reach the target location according to the position of the user and the target location, and returns navigation information to a client, where the navigation information includes the optimal path.
Further, an indoor scene positioning picture associated with the optimal path is also included in the navigation information.
The invention also discloses a server, comprising:
the second receiving module is used for receiving a positioning picture and sensor positioning data sent by the client, wherein the positioning picture is an indoor picture which is acquired by the client and is shot by a user through terminal equipment, and the sensor positioning data is acquired by the client and is sensor positioning data of the terminal equipment when the user shoots;
the positioning module is used for positioning the position of the user according to the positioning picture and the sensor positioning data;
and the second sending module is used for returning the positioning result to the client so that the client receives the positioning result.
Further, the positioning module is specifically configured to determine a user shooting scene and a corresponding 3D model; calculating the orientation of a user in the 3D model by adopting an SFM technology; the orientation in the 3D model is converted into a physical orientation by coordinate conversion.
The invention uses the camera or photographing function of the terminal device: the client obtains the indoor picture taken by the user for positioning and the sensor positioning data generated by the terminal device at shooting time, sends the positioning picture and the sensor positioning data to the server, and the server locates the user's position from them. Because positioning relies on indoor positioning pictures and sensor positioning data rather than on indoor radio frequency signal strength, indoor positioning becomes more accurate.
Drawings
Fig. 1 is a schematic flow chart of an indoor positioning method according to embodiment 1 of the present invention;
fig. 2 is a schematic flow chart of an indoor positioning method according to embodiment 2 of the present invention;
FIG. 3 is a schematic diagram of a method for estimating a scaling factor of a 3D model according to embodiment 2 of the present invention;
fig. 4a and 4b are schematic diagrams illustrating simulation of 3D model estimation rotation parameters by using a K-Edges algorithm according to embodiment 2 of the present invention;
fig. 5a and 5b are schematic diagrams illustrating the simulation of the reference point for estimating the translation of the 3D model according to embodiment 2 of the present invention;
fig. 6 is a schematic flow chart of an indoor positioning method according to embodiment 3 of the present invention;
fig. 7 is a schematic diagram of a client structure provided in embodiment 4 of the present invention;
FIG. 8 is a schematic diagram of a server according to embodiment 5 of the present invention;
fig. 9 is a schematic structural diagram of a system provided in embodiment 6 of the present invention.
Detailed Description
The technical solution of the present invention will be further described in the following detailed description with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings.
It should be noted that the main application scenarios of the indoor positioning method, the client, the server and the system provided by the embodiment of the present invention include, but are not limited to, the following scenarios: indoor navigation of complex buildings (e.g., airports, superstores, museums, etc.), pervasive computing based on geographic location (e.g., crowd awareness), security monitoring related to location information, accurate placement of advertisements, serving social network functions, etc.
Fig. 1 is a schematic flow chart of an indoor positioning method according to embodiment 1 of the present invention, as shown in fig. 1, including the following steps:
s101, acquiring an indoor positioning picture shot by a user through the terminal equipment and sensor positioning data of the terminal equipment during shooting.
Specifically, the execution body of the embodiment of the present invention is an application client installed on a terminal device, and the terminal device is preferably a mobile terminal device having a camera or photographing function, such as an iPad, an iPhone, an ordinary mobile phone, a notebook computer, and the like. When a user is in an unfamiliar indoor environment and does not know his or her position, the user can use a terminal device carried along, such as a mobile phone, to photograph a distinctive indoor feature, such as a shop sign, a poster, or a road sign, and the sensor positioning data of the terminal device is acquired at the moment the picture is taken. The sensor positioning data may be the compass direction and WIFI signal strength.
S102, sending the positioning picture and the sensor positioning data to a server so that the server can position the position of the user according to the positioning picture and the sensor positioning data.
Specifically, after the client acquires the positioning picture and the sensor positioning data, the positioning picture and the sensor positioning data are uploaded to the server, so that the server positions the position of the user according to the positioning picture and the sensor positioning data, and returns a positioning result to the client.
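The upload in S102 can be pictured as a simple request payload. This is a minimal sketch only; the field names, encoding and wire format are assumptions, as the patent does not specify them:

```python
import base64
import json

def build_positioning_request(picture_bytes, compass_deg, wifi_rssi):
    """Bundle the positioning picture with the sensor data captured at shutter time.
    wifi_rssi maps an access-point identifier to its signal strength in dBm.
    (Field names are illustrative, not the patent's format.)"""
    return json.dumps({
        "picture": base64.b64encode(picture_bytes).decode("ascii"),
        "compass_deg": compass_deg,
        "wifi_rssi": wifi_rssi,
    })

req = build_positioning_request(b"\x89PNG...", 87.5, {"ap1": -52, "ap2": -71})
print(json.loads(req)["compass_deg"])  # → 87.5
```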
S103, receiving a positioning result returned by the server.
Specifically, the client receives a positioning result returned by the server, where the positioning result includes the position of the user, and the client may display the positioning result on a screen of the terminal device or notify the positioning result to the user in a voice manner.
The embodiment of the invention uses the camera or photographing function of the terminal device: the client obtains the indoor picture taken by the user for positioning and the sensor positioning data generated by the terminal device at shooting time, sends them to the server, and the server locates the user's position from the positioning picture and the sensor positioning data. Positioning with indoor positioning pictures and sensor positioning data is not limited by indoor radio frequency signal strength, making indoor positioning more accurate and allowing the user to observe his or her geographical position in real time with a carried terminal device.
Fig. 2 is a schematic flow chart of an indoor positioning method according to embodiment 2 of the present invention, as shown in fig. 2, including the following steps:
s201, the client acquires an indoor positioning picture shot by a user through the terminal device and sensor positioning data of the terminal device during shooting.
S202, the client sends the positioning picture and the sensor positioning data to a server.
S203, the server searches a database to match the sensor positioning data, and an initial positioning scene candidate set is obtained.
Specifically, when the database storing indoor scene pictures is large and the number of indoor scenes is high, accurately locating the user by image matching on the positioning picture alone incurs so much image-processing overhead that the usability of the system is greatly reduced. The embodiment of the invention therefore provides multi-modal indoor positioning: an initial positioning scene candidate set is determined before image matching is performed on the positioning picture, reducing the amount of image data to be matched and thus greatly shrinking the search space of image matching.
Before this step is performed, a database is pre-established for storing sensor positioning data, preferably compass direction and WIFI signal strength. When a user photographs an indoor scene, the sensor data of the terminal device at that moment, such as the compass direction and WIFI signal strength, are recorded while the image is acquired. From several groups of compass-direction and WIFI-signal-strength data about a scene, a Gaussian distribution parameter for the compass data and a WIFI signal fingerprint are generated for that scene. At actual positioning time, the WIFI signal strength and compass data uploaded together with the positioning picture are matched against the database to obtain a small candidate scene set, i.e. the initial positioning scene candidate set. Fine-grained accurate positioning is then performed by image matching.
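The coarse matching described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the scene records, field names, score combination and the -100 dBm default for unseen access points are all assumptions:

```python
import math

# Hypothetical per-scene fingerprints: compass readings modeled as a Gaussian
# (mean, std in degrees) and a WiFi fingerprint of mean RSSI per access point.
SCENES = {
    "shop_A": {"compass": (90.0, 10.0), "wifi": {"ap1": -50, "ap2": -70}},
    "shop_B": {"compass": (270.0, 10.0), "wifi": {"ap1": -80, "ap2": -45}},
}

def compass_likelihood(reading, mean, std):
    """Gaussian likelihood of a compass reading, using the shortest angular distance."""
    d = min(abs(reading - mean), 360 - abs(reading - mean))
    return math.exp(-0.5 * (d / std) ** 2)

def wifi_distance(observed, fingerprint):
    """Euclidean distance between an observed RSSI vector and a stored fingerprint;
    access points missing from either side default to -100 dBm (an assumption)."""
    aps = set(observed) | set(fingerprint)
    return math.sqrt(sum((observed.get(a, -100) - fingerprint.get(a, -100)) ** 2 for a in aps))

def candidate_scenes(compass_reading, observed_rssi, k=1):
    """Rank scenes by combined compass likelihood and WiFi proximity; return the top k."""
    scored = []
    for name, rec in SCENES.items():
        mean, std = rec["compass"]
        score = compass_likelihood(compass_reading, mean, std) / (1.0 + wifi_distance(observed_rssi, rec["wifi"]))
        scored.append((score, name))
    return [n for _, n in sorted(scored, reverse=True)[:k]]

print(candidate_scenes(85.0, {"ap1": -52, "ap2": -72}))  # → ['shop_A']
```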
S204, the server matches the positioning picture in the initial positioning scene candidate set to determine the position of the user.
Specifically, before executing this step, a 3D model database needs to be established in advance; fine-grained position and direction estimation for the user is then performed by extracting features of the positioning picture and matching them against the 3D model. The initial database may be built using crowd sensing, i.e. by giving certain rewards to users who actively upload indoor positioning pictures and by having merchants upload such pictures themselves. Specifically, a scene can be conveniently added to the database through the following steps:
A. the user extends the left arm, takes a positioning picture of the scene and uploads it to the database;
B. the user extends the right arm, takes another positioning picture of the scene and uploads it to the database;
C. the user is encouraged to repeat steps A and B several times, then proceeds directly to step D;
D. the user marks the photographed scene on the electronic floor plan and uploads the screenshot to the database;
E. the user enters his or her arm span or height.
Through these simple operations, a scene can be added to the database, and the pictures are then processed to build a simple 3D model of the scene. In the initial stage of the system, the 3D model is limited by the number of pictures and not very accurate, but it already suffices, together with assisted localization and image recognition, to obtain a rough position of the user. As uploaded data accumulate, the 3D model is continuously refined through continuous updating of the database, so that higher positioning accuracy can be obtained. For example, after a user makes a positioning query, the uploaded positioning picture is added to the 3D model training set of the scene, yielding a more accurate 3D model of that scene. Moreover, users tend to photograph salient indoor scenes, so those scenes naturally accumulate many positioning pictures and their models become correspondingly more accurate.
Specifically, the server determines a shooting scene and a corresponding 3D model of the user according to the indoor positioning picture, calculates the orientation of the user in the 3D model by adopting an SFM technology, and converts the orientation in the 3D model into a physical orientation through coordinate conversion, so that the actual orientation of the user is determined.
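The coordinate conversion mentioned here amounts to a 2D similarity transform p_phys = λ·R(θ)·p_model + t, whose three parameter groups (scale, rotation, translation) are estimated in the steps below. A minimal sketch, with illustrative parameter names:

```python
import math

def model_to_physical(p_model, scale, theta, t):
    """2D similarity transform: p_phys = scale * R(theta) @ p_model + t.
    scale, theta (rotation angle in radians) and t correspond to the scaling,
    rotation and translation parameters estimated below; names are illustrative."""
    x, y = p_model
    c, s = math.cos(theta), math.sin(theta)
    xr = scale * (c * x - s * y) + t[0]
    yr = scale * (s * x + c * y) + t[1]
    return (xr, yr)

# Rotate (1, 0) by 90 degrees, scale by 2, then shift by (5, 5) → approximately (5, 7)
print(model_to_physical((1.0, 0.0), 2.0, math.pi / 2, (5.0, 5.0)))
```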
The pose obtained by SFM is only a structure relative to the scene; a coordinate transformation is needed to convert the pose in the 3D model into a physical position usable for indoor positioning. The coordinate transformation comprises scaling, rotation and translation. As shown in fig. 3, the embodiment of the present invention estimates the scaling factor by the following method:
The scaling-coefficient estimation is converted into a constrained optimization problem, Formula I or Formula II [the equation images are not reproduced in this text; the symbol names below are placeholders for the lost inline symbols]:
where (x_i^1, y_i^1) and (x_i^2, y_i^2) are the coordinates of the i-th pair of positioning pictures in the top view of the scene's 3D model; θ_i^1 and θ_i^2 are the compass readings of the terminal device's sensor when the two positioning pictures were taken, with respective reading errors ε_i^1 and ε_i^2; (a, b) are the coordinates of the photographed scene in the 3D model; S_i denotes the absolute physical distance between the two shooting positions of the i-th picture pair; s denotes the user's arm span; δ_i is the error between the distance separating the two positioning pictures and the user's arm span; d_i^1 and d_i^2 are the absolute physical distances from the two shooting positions to the photographed object; and λ is the scaling factor to be solved for.
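Since the equation images for Formulas I and II are not reproduced here, the following is only a plausible least-squares sketch consistent with the variable definitions above, not the patent's actual formulation: choose λ so that the scaled model-space distance between each picture pair best matches the user's arm span s.

```python
def estimate_scale(model_pairs, arm_span):
    """Least-squares scale: minimize sum_i (lambda * m_i - s)^2 over lambda,
    where m_i is the model-space distance between the i-th picture pair and
    s is the user's arm span. Closed form: lambda = s * sum(m_i) / sum(m_i^2).
    (A sketch under assumptions; the patent's Formulas I/II are not reproduced.)"""
    m = [((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for (x1, y1), (x2, y2) in model_pairs]
    return arm_span * sum(m) / sum(mi * mi for mi in m)

# Two picture pairs whose model-space distance is 2.0; arm span 1.7 m
pairs = [((0.0, 0.0), (2.0, 0.0)), ((1.0, 1.0), (1.0, 3.0))]
print(estimate_scale(pairs, 1.7))  # → 0.85
```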
The embodiment of the invention adopts the following method to estimate the rotation parameters:
Taking the entrance of a certain scene as an example, as shown in fig. 4a, where the abscissa represents the x-coordinate of the user in the 3D model and the ordinate represents the y-coordinate, the four edges of the scene are found with the "K-Edges" algorithm from the user's photographing positions and the feature points in the 3D model, as shown in fig. 4b, so as to determine the entrance of the scene. By comparing the orientation of the entrance in the 3D model with its orientation in the actual physical world, the rotation parameters of the 3D model relative to the physical world can be derived.
The fitted straight line obtained by the above algorithm is the geometric representation of the scene entrance in the 3D model. And then, by comparing the indoor plane maps, the rotation of the whole 3D model relative to the physical world can be determined.
The embodiment of the invention adopts the following method to estimate translation:
A reference point needs to be found whose coordinates are known both in the physical world and in the 3D model. According to the shooting habits of most users, the most prominent part of a scene tends to be placed in the middle of the positioning picture. Regarding each positioning picture as a ray emitted from the terminal device, the rays generated by the positioning pictures of a scene tend to converge at one point, which is the sought reference point. For example, again taking the entrance of a certain scene as shown in fig. 5a, if there are n positioning pictures of the scene, n shooting rays can be determined, which theoretically yield up to n(n-1)/2 intersection points. The DBSCAN algorithm can be used to cluster these intersections, and the center of the most concentrated cluster is the reference point, as shown in fig. 5b. The translation between the 3D model and the physical world is then determined from this reference point.
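The ray-intersection construction can be sketched as follows. This is a minimal illustration: rays are intersected pairwise, and for brevity the intersections are simply averaged rather than clustered with DBSCAN as described above:

```python
def ray_intersection(p1, d1, p2, d2):
    """Intersection of two 2D rays p + t*d (t >= 0); None if parallel or behind a camera."""
    (x1, y1), (dx1, dy1) = p1, d1
    (x2, y2), (dx2, dy2) = p2, d2
    det = dx1 * (-dy2) - (-dx2) * dy1
    if abs(det) < 1e-12:
        return None  # parallel rays never meet
    # Solve p1 + t1*d1 = p2 + t2*d2 via Cramer's rule
    bx, by = x2 - x1, y2 - y1
    t1 = (bx * (-dy2) - (-dx2) * by) / det
    t2 = (dx1 * by - bx * dy1) / det
    if t1 < 0 or t2 < 0:
        return None  # intersection lies behind a shooting position
    return (x1 + t1 * dx1, y1 + t1 * dy1)

def reference_point(rays):
    """All pairwise ray intersections (up to n(n-1)/2 of them), averaged into one point."""
    pts = []
    for i in range(len(rays)):
        for j in range(i + 1, len(rays)):
            p = ray_intersection(*rays[i], *rays[j])
            if p is not None:
                pts.append(p)
    xs, ys = zip(*pts)
    return (sum(xs) / len(pts), sum(ys) / len(pts))

# Three cameras all aimed at the point (1, 1)
rays = [((0.0, 0.0), (1.0, 1.0)), ((2.0, 0.0), (-1.0, 1.0)), ((1.0, -1.0), (0.0, 1.0))]
print(reference_point(rays))  # → (1.0, 1.0)
```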
S205, the client receives the positioning result returned by the server.
Compared with other existing indoor positioning methods, the embodiment of the invention has the advantages of low cost, convenient deployment, high precision and the like. Specifically, it realizes indoor positioning with an ordinary terminal device such as a mobile phone and its built-in sensors including the camera, requiring no additional special equipment such as radio frequency transmitters. It achieves high-precision indoor positioning using 3D reconstruction and image matching, and experimental results show that the positioning error of the method is within 20 cm. By analyzing and exploiting the sensor positioning data of the terminal device, a mechanism for updating the database from user queries is introduced, reducing the number of pictures required to build the 3D model. In addition, multi-modal positioning shrinks the search space of image matching, greatly reducing the positioning overhead of the system and achieving a near-real-time indoor positioning effect. At actual positioning time, the WIFI signal strength and compass direction data uploaded together with the picture are first matched by the server to obtain a small candidate scene set, after which fine-grained accurate positioning is performed by image matching. This ensures the real-time performance of the system; experimental results show that multi-modal positioning can reduce system overhead by more than 75%.
Besides the most basic indoor positioning function, the embodiment of the invention can provide indoor navigation with auxiliary images. The basic idea is to use the rich image information in the database to provide an indoor navigation service for users. Embodiment 3 below describes in detail how the present invention provides this service.
Fig. 6 is a schematic flow chart of an indoor positioning method according to embodiment 3 of the present invention, as shown in fig. 6, including the following steps:
s301, the client receives the target position input by the user and sends the target position to the server.
Specifically, after the client finishes user positioning, the client can also receive a target position input by the user, that is, a place to which the user is going to go, and send the target position to the server, so as to provide a navigation service for the user.
S302, the server calculates the optimal path of the user to the target position according to the position of the user and the target position, and returns navigation information to the client, wherein the navigation information comprises the optimal path.
Specifically, the server calculates the shortest path from the user's position, i.e. the current position, to the input target position, i.e. the destination, and returns the optimal path information to the client. The server can also identify, along the path, the salient scenes the user will pass and return, for each scene, a picture taken from the viewing angle the user is expected to have.
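The shortest-path computation can be sketched with standard Dijkstra over an indoor corridor graph; the graph, node names and edge weights below are purely illustrative:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: graph maps node -> {neighbor: distance}.
    Returns (total_distance, path as a list of nodes)."""
    pq = [(0, start, [start])]
    seen = set()
    while pq:
        dist, node, path = heapq.heappop(pq)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(pq, (dist + w, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical mall corridors: entrance -> shop_B via the atrium is shortest
mall = {
    "entrance": {"atrium": 10, "corridor": 25},
    "atrium": {"entrance": 10, "shop_B": 12},
    "corridor": {"entrance": 25, "shop_B": 5},
    "shop_B": {"atrium": 12, "corridor": 5},
}
print(shortest_path(mall, "entrance", "shop_B"))  # → (22, ['entrance', 'atrium', 'shop_B'])
```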
S303, the client receives the navigation information returned by the server.
Specifically, the client receives the navigation information returned by the server, which includes the optimal path and photos of salient scenes likely to be encountered along the path, for the user's reference. The client may also display an indoor floor plan with the optimal path marked on it.
Fig. 7 is a schematic structural diagram of a client according to embodiment 4 of the present invention, as shown in fig. 7, including: an acquisition module 11, a first sending module 12 and a first receiving module 13. Wherein,
an obtaining module 11, configured to obtain an indoor positioning picture taken by a user using a terminal device and sensor positioning data of the terminal device during shooting;
a first sending module 12, configured to send the positioning picture and the sensor positioning data to a server, so that the server locates an orientation of a user according to the positioning picture and the sensor positioning data;
a first receiving module 13, configured to receive a positioning result returned by the server.
Further, the first receiving module 13 is further configured to receive a target location input by a user after receiving the positioning result returned by the server, and receive navigation information returned by the server;
the first sending module 12 is further configured to send the target location to the server, so that the server calculates an optimal path for the user to reach the target location according to the position of the user and the target location, and returns navigation information to a client, where the navigation information includes the optimal path.
Further, the navigation information further includes an indoor scene positioning picture associated with the optimal path.
Further, the sensor positioning data comprises compass direction and WIFI signal strength.
The client described in this embodiment is used for executing the method steps related to the client in the indoor positioning method shown in fig. 1, fig. 2 and fig. 6; the technical principle and the resulting technical effect are similar, and reference may be made to the related description of the embodiments shown in fig. 1, fig. 2 and fig. 6.
Fig. 8 is a schematic structural diagram of a server according to embodiment 5 of the present invention, as shown in fig. 8, including: a second receiving module 21, a positioning module 22 and a second sending module 23. Wherein,
a second receiving module 21, configured to receive a positioning picture and sensor positioning data sent by the client, where the positioning picture is an indoor picture taken by a user using a terminal device and acquired by the client, and the sensor positioning data is sensor positioning data of the terminal device, acquired by the client, when the user takes a picture;
a positioning module 22, configured to position the location of the user according to the positioning picture and the sensor positioning data;
a second sending module 23, configured to return a positioning result to the client, so that the client receives the positioning result.
Further, the positioning module 22 is specifically configured to search a database to match the sensor positioning data, so as to obtain an initial positioning scene candidate set; and to match the positioning picture in the initial positioning scene candidate set to determine the position of the user.
Further, the positioning module 22 is specifically configured to determine a shooting scene of the user and a corresponding 3D model; calculating the orientation of a user in the 3D model by adopting an SFM technology; the orientation in the 3D model is converted into a physical orientation by coordinate conversion.
Further, the server further includes:
and an updating module 24, configured to update the data in the database according to the positioning picture and the sensor positioning data after the positioning module 22 determines the position of the user.
Further, the second receiving module 21 is further configured to receive a target position sent by the client after the client displays the location of the user, where the target position is input by the user at the client;
the server further includes:
a navigation module 25, configured to calculate an optimal path for the user to reach the target location according to the position of the user and the target location;
the second sending module 23 is further configured to return navigation information to the client, where the navigation information includes the optimal path.
Further, the navigation information further includes an indoor scene positioning picture associated with the optimal path.
Further, the sensor positioning data comprises compass direction and WIFI signal strength.
The server according to this embodiment is used for executing the method steps related to the server in the indoor positioning method shown in fig. 1, fig. 2 and fig. 6; the technical principle and the resulting technical effect are similar, and reference may be made to the related description of the embodiments shown in fig. 1, fig. 2 and fig. 6.
Fig. 9 is a schematic structural diagram of a system according to embodiment 6 of the present invention. As shown in fig. 9, the system includes a client 31, a server 32 and a terminal device 33, wherein:
the client 31 is installed on the terminal device 33, and performs image capturing or photographing using a camera of the terminal device 33.
The client 31 is configured to obtain an indoor positioning picture taken by a user using the terminal device 33 and the sensor positioning data of the terminal device at the time of shooting; to send the positioning picture and the sensor positioning data to the server 32, so that the server 32 locates the position of the user according to the positioning picture and the sensor positioning data; and to receive the positioning result returned by the server 32.
Further, the client 31 is further configured to, after receiving the positioning result returned by the server 32, receive a target position input by the user and send the target position to the server 32, so that the server 32 calculates the optimal path for the user to reach the target position according to the position of the user and the target position; and to receive the navigation information returned by the server 32.
The server 32 is configured to receive the positioning picture and the sensor positioning data sent by the client 31, where the positioning picture is an indoor picture taken by the user using the terminal device 33 and acquired by the client 31, and the sensor positioning data is the sensor positioning data of the terminal device 33 at the time the user took the picture, acquired by the client 31; to locate the position of the user according to the positioning picture and the sensor positioning data; and to return a positioning result to the client 31 so that the client 31 receives the positioning result.
Further, the server 32 is specifically configured to search a database to match the sensor positioning data, so as to obtain an initial positioning scene candidate set, and to match the positioning picture within the initial positioning scene candidate set to determine the position of the user.
Further, the server 32 is specifically configured to determine the shooting scene of the user and the corresponding 3D model, calculate the orientation of the user in the 3D model by adopting an SFM (structure from motion) technique, and convert the orientation in the 3D model into a physical orientation by coordinate conversion.
Further, the server 32 is further configured to update the data in the database according to the positioning picture and the sensor positioning data after determining the position of the user.
Further, the server 32 is further configured to receive a target position sent by the client 31 after the client 31 displays the location of the user, where the target position is input by the user at the client 31;
the server 32 is further configured to calculate an optimal path for the user to reach the target position according to the position of the user and the target position, and to return navigation information to the client 31, where the navigation information includes the optimal path.
Further, the navigation information further includes an indoor scene positioning picture associated with the optimal path.
Further, the sensor positioning data comprises compass direction and WIFI signal strength.
The system of this embodiment is used for executing the method steps of the indoor positioning method shown in fig. 1, fig. 2 and fig. 6; the technical principle and technical effects are similar, and reference may be made to the related description of the embodiments shown in fig. 1, fig. 2 and fig. 3.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent replacements, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (1)

1. An indoor positioning method, comprising:
acquiring an indoor positioning picture shot by a user by using terminal equipment and sensor positioning data of the terminal equipment during shooting, wherein the sensor positioning data comprises compass direction and WIFI signal intensity;
sending the positioning picture and the sensor positioning data to a server, so that the server locates the position of the user according to the positioning picture and the sensor positioning data; wherein locating, by the server, the position of the user according to the positioning picture and the sensor positioning data comprises: searching, by the server, a database to match the sensor positioning data, so as to obtain an initial positioning scene candidate set; a 3D model database is established in advance, and fine-grained positioning and orientation estimation are performed for the user by extracting features of the positioning picture and matching the positioning picture against the 3D model; the 3D model may be built as follows: establishing an initial database by crowd sensing, that is, by giving certain rewards to users who actively participate in uploading indoor positioning pictures, and by merchants actively uploading indoor positioning pictures; wherein establishing the initial database comprises the following steps: A. the user extends the left arm, shoots a positioning picture of the scene and uploads it to the database; B. the user extends the right arm, shoots another positioning picture of the scene and uploads it to the database; C. the user is encouraged to repeat steps A and B several times, and then proceeds directly to step D; D. the user marks the shot scene in the electronic plan and uploads the screenshot to the database; E.
the user inputs his or her own arm span or height; matching the positioning picture in the initial positioning scene candidate set to determine the position of the user; wherein matching the positioning picture in the initial positioning scene candidate set to determine the position of the user comprises: determining the shooting scene of the user and the corresponding 3D model; calculating the orientation of the user in the 3D model by adopting an SFM (structure from motion) technique; and converting the orientation in the 3D model into a physical orientation by coordinate conversion; after determining the position of the user, the method further comprises: updating the data in the database according to the positioning picture and the sensor positioning data; wherein the coordinate conversion comprises scaling, rotation and translation;
the parameter estimation for the scaling comprises:
wherein, for the i-th pair of positioning pictures, the coordinates of the two pictures in the top view of the 3D model of the scene, the compass readings of the terminal-device sensor when each of the two positioning pictures was shot, and the errors of the two readings are used; the coordinates of the shooting scene in the 3D model are (a, b); the absolute physical distance between the two positioning pictures of a pair when they were taken corresponds to the arm span s of the user, up to an error between the distance between the two positioning pictures and the user's arm span; and the required scaling factor is estimated from these quantities, together with the absolute physical distances from the shooting positions of the two positioning pictures to the photographed object;
receiving the positioning result returned by the server, and displaying the positioning result on a screen of the terminal device or announcing it to the user by voice through the client; further comprising: receiving a target position input by the user, and sending the target position to the server, so that the server calculates an optimal path for the user to reach the target position according to the position of the user and the target position and returns navigation information to the client, wherein the navigation information comprises the optimal path; and receiving the navigation information returned by the server.
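The arm-span constraint in the claim implies a simple way to estimate the scaling factor: each picture pair was taken roughly one arm span apart, so the scale mapping model distances to metres can be fit by least squares over all pairs. The closed form below is an assumed illustration, not the patent's formula; all names are hypothetical.

```python
# Hedged sketch of scaling-factor estimation: minimise
# sum_i (scale * d_i - arm_span)^2 over the model-space distances d_i
# between the two shooting positions of each picture pair.
import math

def estimate_scale(model_pairs, arm_span):
    """model_pairs: [((x1, y1), (x2, y2)), ...] in model coordinates;
    arm_span: the user's arm span in metres."""
    dists = [math.dist(p1, p2) for p1, p2 in model_pairs]
    # Closed-form least-squares solution for a single scale parameter.
    return arm_span * sum(dists) / sum(d * d for d in dists)

# Invented example: three pairs whose model-space separations hover
# around one model unit, for a user with a 1.7 m arm span.
pairs = [((0, 0), (1, 0)), ((2, 2), (2, 3.1)), ((5, 1), (5.9, 1))]
scale = estimate_scale(pairs, arm_span=1.7)  # metres per model unit
print(round(scale, 3))
```

With the scale fixed, rotation and translation can be solved from the compass readings and the scene coordinates (a, b) in the same least-squares spirit.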
CN201410643391.9A 2014-11-13 2014-11-13 Indoor orientation method, client and server Active CN104378735B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410643391.9A CN104378735B (en) 2014-11-13 2014-11-13 Indoor orientation method, client and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410643391.9A CN104378735B (en) 2014-11-13 2014-11-13 Indoor orientation method, client and server

Publications (2)

Publication Number Publication Date
CN104378735A CN104378735A (en) 2015-02-25
CN104378735B true CN104378735B (en) 2018-11-13

Family

ID=52557331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410643391.9A Active CN104378735B (en) 2014-11-13 2014-11-13 Indoor orientation method, client and server

Country Status (1)

Country Link
CN (1) CN104378735B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107851264A (en) * 2015-07-29 2018-03-27 诺基亚技术有限公司 For the device of budget allocation, method and computer program product in Data Collection
CN106470478B (en) * 2015-08-20 2020-03-24 西安云景智维科技有限公司 Positioning data processing method, device and system
CN105792353B (en) * 2016-03-14 2020-06-16 中国人民解放军国防科学技术大学 Crowd-sensing WiFi signal fingerprint-assisted image matching indoor positioning method
CN105865471A (en) * 2016-04-01 2016-08-17 深圳安迪尔智能技术有限公司 Robot navigation method and navigation robot
CN105975967B (en) * 2016-04-29 2019-04-23 殳南 A kind of object localization method and system
CN106289263A (en) * 2016-08-25 2017-01-04 乐视控股(北京)有限公司 Indoor navigation method and device
CN106658409A (en) * 2016-12-07 2017-05-10 雷蕾 Positioning method and system
CN107105410A (en) * 2017-05-17 2017-08-29 深圳市伊特利网络科技有限公司 Using the realization method and system for being positioned at middle historical path on foot
WO2019000461A1 (en) * 2017-06-30 2019-01-03 广东欧珀移动通信有限公司 Positioning method and apparatus, storage medium, and server
CN108053447A (en) * 2017-12-18 2018-05-18 纳恩博(北京)科技有限公司 Method for relocating, server and storage medium based on image
CN110290455A (en) * 2018-03-15 2019-09-27 奥孛睿斯有限责任公司 Method and system are determined based on the target scene of scene Recognition
WO2020244576A1 (en) * 2019-06-05 2020-12-10 北京外号信息技术有限公司 Method for superimposing virtual object on the basis of optical communication apparatus, and corresponding electronic device
CN110764514A (en) * 2019-11-27 2020-02-07 四川虹美智能科技有限公司 Washing machine and method for controlling movement of washing machine
CN111256701A (en) * 2020-04-26 2020-06-09 北京外号信息技术有限公司 Equipment positioning method and system
CN111879305B (en) * 2020-06-16 2022-03-18 华中科技大学 A multi-modal perception and positioning model and system for high-risk production environments

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103249142A (en) * 2013-04-26 2013-08-14 东莞宇龙通信科技有限公司 Locating method, locating system and mobile terminal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8180146B2 (en) * 2009-12-22 2012-05-15 The Chinese University Of Hong Kong Method and apparatus for recognizing and localizing landmarks from an image onto a map
US9154919B2 (en) * 2013-04-22 2015-10-06 Alcatel Lucent Localization systems and methods
CN103424113B (en) * 2013-08-01 2014-12-31 毛蔚青 Indoor positioning and navigating method of mobile terminal based on image recognition technology
CN103491631A (en) * 2013-09-26 2014-01-01 舒泽林 Indoor positioning system and method based on two-dimension code and wifi signals

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103249142A (en) * 2013-04-26 2013-08-14 东莞宇龙通信科技有限公司 Locating method, locating system and mobile terminal

Also Published As

Publication number Publication date
CN104378735A (en) 2015-02-25

Similar Documents

Publication Publication Date Title
CN104378735B (en) Indoor orientation method, client and server
US9324003B2 (en) Location of image capture device and object features in a captured image
Verma et al. Indoor navigation using augmented reality
US9292936B2 (en) Method and apparatus for determining location
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
US11243288B2 (en) Location error radius determination
US20120120101A1 (en) Augmented reality system for supplementing and blending data
US20140126769A1 (en) Fast initialization for monocular visual slam
CN104331423B (en) A kind of localization method and device based on electronic map
Feng et al. Augmented reality markers as spatial indices for indoor mobile AECFM applications
CN105431708A (en) Image processing device, image processing method, and program
CN104977003A (en) Indoor people search method, cloud server, and system based on shared track
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
KR101413011B1 (en) Augmented Reality System based on Location Coordinates and Augmented Reality Image Providing Method thereof
TWM560099U (en) Indoor precise navigation system using augmented reality technology
Zhang et al. Seeing Eye Phone: a smart phone-based indoor localization and guidance system for the visually impaired
WO2025037291A2 (en) Enhancement of the 3d indoor positioning by augmenting a multitude of 3d imaging, lidar distance corrections, imu sensors and 3-d ultrasound
JP5920886B2 (en) Server, system, program and method for estimating POI based on terminal position / orientation information
Shi et al. A novel individual location recommendation system based on mobile augmented reality
US20260000976A1 (en) Estimating Pose for a Client Device Using a Pose Prior Model
US10614308B2 (en) Augmentations based on positioning accuracy or confidence
US9188444B2 (en) 3D object positioning in street view
CN111783849B (en) Indoor positioning method and device, electronic equipment and storage medium
WO2019081754A2 (en) Orientation determination device and method, rendering device and method
CN108512888A (en) A kind of information labeling method, cloud server, system, electronic equipment and computer program product

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 214135 Room 501, A District, Qingyuan Road, Wuxi science and Technology Park, Wuxi New District, Jiangsu

Patentee after: RUN TECHNOLOGY CO.,LTD.

Address before: 214135 Room 501, A District, Qingyuan Road, Wuxi science and Technology Park, Wuxi New District, Jiangsu

Patentee before: WUXI RUN TECHNOLOGY CO.,LTD.

CP03 Change of name, title or address

Address after: 201800 room j1958, building 6, 1288 Yecheng Road, Jiading District, Shanghai

Patentee after: Ruan Internet of things Technology Group Co.,Ltd.

Address before: 214135 Room 501, A District, Qingyuan Road, Wuxi science and Technology Park, Wuxi New District, Jiangsu

Patentee before: RUN TECHNOLOGY CO.,LTD.