CN113961659A - Navigation method, system, computer device and medium based on complex terrain - Google Patents
- Publication number
- CN113961659A (application number CN202111230764.6A)
- Authority
- CN
- China
- Prior art keywords
- client
- user
- map
- sending
- receiving
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9537—Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Remote Sensing (AREA)
- Databases & Information Systems (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computer Graphics (AREA)
- Navigation (AREA)
- Instructional Devices (AREA)
Abstract
The embodiments of the invention disclose a navigation method, system, computer device and medium based on complex terrain, wherein the navigation method comprises the following steps: receiving a first instruction sent by a client in response to a first operation of a user, acquiring positioning information and sending a first map to the client; sending first positioning mode prompt information, receiving a second instruction sent by the client in response to a second operation of the user, and starting a first live-action positioning mode; receiving a third instruction sent by the client in response to a third operation of the user, receiving and identifying a live-action image uploaded by the user, and sending a second map to the client; and navigating according to the second map and sending navigation data to the client. The invention achieves accurate positioning of the user by combining GPS positioning with live-action scanning, and guides the user's trip by loading a three-dimensional live-action map of the user's location, thereby solving the problems of failed positioning and getting lost in complex environments.
Description
Technical Field
The invention relates to the technical field of the Internet of Things, and in particular to a navigation method, a navigation system, a computer device and a computer-readable storage medium based on complex terrain.
Background
In complex terrain environments such as parks, venues or scenic spots with crisscrossing terrain, users easily lose their bearings. The two-dimensional plane map obtained through GPS positioning in traditional map navigation is difficult to read, and when the field environment is complex and the GPS signal is weak, the navigation information tends to drift, so traditional navigation fails to meet users' needs.
Disclosure of Invention
In order to solve at least one of the above problems, the present application proposes a navigation method, system, computer device and medium based on complex terrain.
The first embodiment of the invention provides a navigation method based on complex terrain, which is applied to a server and comprises the following steps:
receiving a first instruction sent by a client in response to a first operation of a user, acquiring positioning information and sending a first map to the client;
sending first positioning mode prompt information, receiving a second instruction sent by the client in response to a second operation of the user, and starting a first real-scene positioning mode;
receiving a third instruction sent by the client in response to a third operation of the user, receiving and identifying a live-action image uploaded by the user, and sending a second map to the client;
and navigating according to the second map and sending navigation data to the client.
In a specific embodiment, the navigating according to the second map and sending the navigation data to the client further includes:
and receiving a fourth instruction sent by the client in response to a fourth operation of the user, determining a target location, planning and navigating according to the second map to acquire first navigation information, and sending the first navigation information to the client for display.
In a specific embodiment, the receiving a first instruction sent by the client in response to a first operation of the user, acquiring positioning information and sending a first map to the client further includes: sending user portrait prompt information, receiving a fifth instruction sent by the client in response to a fifth operation of the user, and receiving and storing the user portrait of the user;
the navigating according to the second map and sending the navigation data to the client further comprises: and navigating according to the user portrait of the user and the second map and sending navigation data to the client.
In a specific embodiment, the navigating according to the second map and sending the navigation data to the client further includes:
and sending multi-window prompt information, receiving a sixth instruction sent by the client in response to a sixth operation of the user, navigating by using the first map and the second map respectively to obtain second navigation information, and sending the second navigation information to the client for display.
In a specific embodiment, the receiving a first instruction sent by the client in response to a first operation of the user, acquiring positioning information and sending a first map to the client further includes: detecting and displaying positioning permission prompt information; and/or
The sending the first positioning mode prompt message and receiving a second instruction sent by the client in response to a second operation of the user, and the starting the first live-action positioning mode further comprises: detecting and displaying camera shooting permission prompt information; and/or
The receiving a third instruction sent by the client in response to a third operation of the user, receiving and identifying a live-action image uploaded by the user, and sending a second map to the client further comprises: if the live-action image is not identified, displaying re-upload prompt information.
A second embodiment of the present application provides a navigation method based on complex terrain, which is applied to a client, and the method includes:
responding to a first operation of a user, sending a first instruction to a server, and loading and displaying a first map;
receiving and displaying first positioning mode prompt information sent by the server, and sending a second instruction to the server in response to a second operation of a user so that the server starts a first live-action positioning mode;
responding to a third operation of a user, and sending the live-action image uploaded by the user to the server so that the server identifies the live-action image;
and loading and displaying a second map for navigation, wherein the second map is navigation data generated after the server identifies the live-action image.
A third embodiment of the present invention provides a navigation system including a server and at least one client, the server configured to:
receiving a first instruction sent by a client in response to a first operation of a user, acquiring positioning information and sending a first map to the client;
sending first positioning mode prompt information, receiving a second instruction sent by the client in response to a second operation of the user, and starting a first real-scene positioning mode;
receiving a third instruction sent by the client in response to a third operation of the user, receiving and identifying a live-action image uploaded by the user, and sending a second map to the client;
and navigating according to the second map and sending navigation data to the client.
In one embodiment, the server is a server cluster comprising a front-end server cluster, a background server cluster and a data server cluster, wherein
the front-end server cluster is used for providing page display, map model display and search functions for the client;
the background server cluster is used for providing background service, user management, map module partitioning and interface management;
the data server cluster is used for storing the map database, user portraits, the recommended-route-planning file system and caches, and for providing data access and storage.
A fourth embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method according to the first embodiment; or the program when executed by a processor implements a method as described in the second embodiment.
A fifth embodiment of the present application provides a computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method according to the first embodiment; or the processor, when executing the program, implements the method according to the second embodiment.
The invention has the following beneficial effects:
Aiming at the above problems, the invention provides a navigation method, system, computer device and medium based on complex terrain. Accurate positioning of the user is achieved by combining GPS positioning with live-action scanning, and the user's trip is guided by loading a three-dimensional live-action map of the user's location. This solves the problems of failed positioning and getting lost in complex environments, effectively improves the convenience of the user's trip, and has broad application prospects.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 shows a flow diagram of a complex terrain based navigation method according to an embodiment of the present application;
FIG. 2 illustrates an exemplary navigation system architecture diagram suitable for use in the present application;
FIG. 3 shows a swim lane diagram of a complex terrain based navigation method according to one embodiment of the present application;
FIG. 4 shows a flow diagram of a complex terrain-based navigation method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a computer device according to another embodiment of the present invention.
Detailed Description
In order to more clearly illustrate the invention, the invention is further described below with reference to preferred embodiments and the accompanying drawings. Similar parts in the figures are denoted by the same reference numerals. It is to be understood by persons skilled in the art that the following detailed description is illustrative and not restrictive, and is not to be taken as limiting the scope of the invention.
In complex terrain environments such as parks, venues or scenic spots with crisscrossing terrain, users easily lose their bearings: the two-dimensional plane map obtained through GPS positioning in traditional map navigation is difficult to read, and when the field environment is complex and the GPS signal is weak, the navigation information tends to drift, so users' navigation needs cannot be met. To this end, an embodiment of the present application proposes a navigation method based on complex terrain applied to a server, as shown in fig. 1, comprising:
s100, receiving a first instruction sent by a client in response to a first operation of a user, acquiring positioning information and sending a first map to the client;
s102, sending first positioning mode prompt information, receiving a second instruction sent by the client in response to a second operation of the user, and starting a first real-scene positioning mode;
s104, receiving a third instruction sent by the client in response to a third operation of the user, receiving and identifying a live-action image uploaded by the user, and sending a second map to the client;
and S106, navigating according to the second map and sending navigation data to the client.
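The four steps S100–S106 can be sketched as a minimal server-side handler. This is an illustrative sketch only: the class, method and field names (`NavigationServer`, `InMemoryMapStore`, the instruction dictionaries) are assumptions, since the patent does not specify an implementation.

```python
class InMemoryMapStore:
    """Hypothetical stand-in for the map database on the data server."""

    def local_map(self, gps):
        # First map: an ordinary 2D/3D map centred on the GPS fix.
        return {"type": "2d", "center": gps}

    def identify(self, image):
        # Placeholder recognition: any non-empty image "matches" a zone.
        return "zone-A" if image else None

    def live_action_map(self, zone):
        # Second map: the fine 3D live-action model of the matched zone.
        return {"type": "3d-live-action", "zone": zone}


class NavigationServer:
    def __init__(self, map_store):
        self.map_store = map_store

    def handle_first_instruction(self, instruction):
        """S100: read GPS positioning info and return the first map."""
        gps = instruction.get("gps")
        if gps is None:  # optional embodiment: location permission missing
            return {"prompt": "enable_location_permission"}
        return {"first_map": self.map_store.local_map(gps)}

    def handle_second_instruction(self, instruction):
        """S102: the user confirmed the live-action positioning mode."""
        return {"mode": "live_action", "prompt": "upload_live_action_image"}

    def handle_third_instruction(self, instruction):
        """S104: identify the image; return the second map or a re-upload prompt."""
        zone = self.map_store.identify(instruction.get("image"))
        if zone is None:
            return {"prompt": "re_upload_image"}
        return {"second_map": self.map_store.live_action_map(zone)}
```

The fallback branches mirror the optional embodiments below: a missing GPS fix triggers the location-permission prompt, and a failed image identification triggers the re-upload prompt.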
This embodiment achieves accurate positioning of the user by combining GPS positioning with live-action scanning, and guides the user's trip by loading a higher-resolution three-dimensional live-action map of the user's location. It can solve the problems of failed positioning and getting lost in complex environments, effectively improves the convenience of the user's trip, and has broad application prospects.
In one embodiment, the present application provides a navigation system, as shown in fig. 2, comprising at least one client 10 and a server 20, where information is transferred between the client and the server over a wireless network; a 5G mobile base station is currently recommended for this link.
The client 10 is a smart device with a camera, GPS positioning and a touch screen, for example a smartphone or a tablet computer. The navigation method of this embodiment may be applied to a client 10 running Android or Apple iOS, and may also be applied to a web page or WeChat applet on the client 10, which is not limited in this application.
The server 20 may be an independent server or a server cluster. For example, when the navigation area is small, a single server can satisfy the data requirements and is flexible and convenient to configure; when the navigation area or the data volume is large, a server cluster is used, comprising a front-end server cluster, a background server cluster and a data server cluster. The front-end server cluster comprises an Android server cluster, an iOS server cluster and a web (including applet) server cluster, which respectively provide page display, map model display and search functions for users. The background server cluster provides background services, user management, map module partitioning, interface management and the like, mainly serving the front-end applications. The data server cluster stores the map database (for example three-dimensional and two-dimensional map data), user portraits, the recommended-route-planning file system, caches and the like, and provides data access and storage; the algorithms and map modules are recommended to be deployed on a local physical server.
It should be noted that, because of the diversity of clients, a client may run Android or Apple iOS; therefore the front-end server cluster is set to include an Android server cluster, an Apple iOS server cluster, and a server cluster for the web and the WeChat applet, which can effectively improve loading speed at the client.
In this embodiment, as shown in fig. 3, a navigation interaction method based on complex terrain such as a park is described, taking a live-action navigation application running on a smartphone as an example:
In the first step, the server receives a first instruction sent by the client in response to a first operation of the user, acquires positioning information and sends a first map to the client.
In this embodiment, a user logs in to the live-action navigation application on a smartphone, and the local GPS positioning information is sent to the server to obtain the first map. As shown in fig. 3, the user performs a first operation (login) on the client 10, so that the client 10 sends a first instruction containing the local GPS positioning information to the server 20, and the server 20 sends a first map covering that position back to the client according to the first instruction. In other words, the client 10 sends a first instruction including local GPS positioning information to the server according to the user's login operation to acquire the first map; that is, the client sends the first instruction to the server in response to the first operation of the user, then loads and displays the first map. The first map is a local map based on the GPS positioning information of the client, and may be a two-dimensional map or an ordinary three-dimensional map.
Considering the case where the client has not granted location permission, in an optional embodiment the navigation method further includes: the receiving a first instruction sent by a client in response to a first operation of a user, acquiring positioning information and sending a first map to the client further comprises: detecting and displaying positioning permission prompt information.
In this embodiment, when the server responds to a first instruction sent after the user logs in at the client and the first instruction does not include GPS positioning information, it is determined that the client has not enabled GPS permission. The server therefore sends a location-permission prompt message to the client asking the user to enable GPS permission, so that the user's GPS positioning information can be acquired and a local map including the user's location can be provided to the client.
It should be noted that the permission detection and prompting at the client are not specifically limited in the present application; the client itself may also detect and prompt for location permission, or even network permission. Those skilled in the art should configure this according to the actual application requirements, which is not described here again.
In view of the user characteristics of different users, in an alternative embodiment, the navigation method further includes: the receiving a first instruction sent by a client in response to a first operation of a user, acquiring positioning information and sending a first map to the client further comprises: and sending user portrait prompt information, receiving a fifth instruction sent by the client in response to a fifth operation of the user, and receiving and storing the user portrait of the user.
In this embodiment, if the user logs in to the application for the first time and sends the first instruction to the server, the server sends user portrait prompt information to the client, for example prompting the user to fill in user information on the login page, such as age, gender, occupation, preferred scenic spot types, solo/couple/family tour, tour start time and estimated tour duration, so as to collect the user portrait and facilitate the planning and recommendation of subsequent navigation information.
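The user-portrait record collected above might be modelled as follows. The field names are assumptions based on the example attributes listed; the patent does not define an actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class UserPortrait:
    """Illustrative user-portrait record; all field names are hypothetical."""
    age: int
    gender: str = ""
    occupation: str = ""
    preferred_scene_types: list = field(default_factory=list)
    party_type: str = "solo"          # solo / couple / family tour
    tour_start: str = ""              # planned tour start time
    expected_duration_h: float = 0.0  # estimated tour duration in hours
```

A record like this, stored on the data server cluster, is what the portrait-based route recommendation later in this description would consume.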
In the second step, the server sends the first positioning mode prompt information, receives a second instruction sent by the client in response to a second operation of the user, and starts the first live-action positioning mode.
In this embodiment, the server sends a live-action positioning mode prompt message to the client to prompt the user to select the live-action positioning mode, and when the user selects it, the server navigates in that mode. As shown in fig. 3, the user performs a second operation on the client 10, that is, selects the live-action positioning mode according to the prompt information sent by the server, so that the client 10 sends a second instruction to the server 20, and the server 20 starts the live-action positioning mode according to the second instruction. In other words, the client 10 receives and displays the first positioning mode prompt message sent by the server, and sends a second instruction to the server in response to a second operation of the user, so that the server starts the first live-action positioning mode.
Considering the case where the client has not granted camera permission, in an optional embodiment the navigation method further includes: the sending the first positioning mode prompt information and receiving a second instruction sent by the client in response to a second operation of the user, and starting the first live-action positioning mode further comprises: detecting and displaying camera permission prompt information.
In this embodiment, when the server responds to a second instruction sent after the user selects the live-action positioning mode at the client, the server sends a camera-permission prompt message to the client asking the user to enable camera permission, so that the live-action picture taken by the user can be acquired and identified.
It should be noted that the permission detection and prompting at the client are not specifically limited in the present application; the client itself may also detect and prompt for camera permission. Those skilled in the art should configure this according to the actual application requirements, which is not described here again.
In the third step, the server receives a third instruction sent by the client in response to a third operation of the user, receives and identifies a live-action image uploaded by the user, and sends a second map to the client.
In this embodiment, the server prompts the user to upload a live-action image after entering the live-action positioning mode. As shown in fig. 3, the user uses the client to photograph a nearby actual scene and upload it, so that the client 10 sends a third instruction containing the local live-action picture to the server 20. The server 20 identifies the picture in combination with GPS positioning, and after successful identification sends a second map covering that scene to the client. In other words, the client 10 transmits the live-action image uploaded by the user to the server in response to the third operation of the user, so that the server identifies it. The second map is a local three-dimensional live-action map based on the actual scene picture from the client, making it easy for the user to match directions with the actual scenery.
Considering the case where the server fails to identify the actual scene picture, in an alternative embodiment the navigation method further comprises: the receiving a third instruction sent by the client in response to a third operation of the user, receiving and identifying a live-action image uploaded by the user, and sending a second map to the client further comprises: if the live-action image is not identified, displaying re-upload prompt information.
In this embodiment, when the server responds to a third instruction sent after the user selects the live-action positioning mode and does not recognize the actual scene picture contained in it, the server sends a re-upload prompt message to the client asking the user to retake the picture, so that it can be identified and the local three-dimensional live-action map of that scene can be provided to the client.
In this embodiment, after the server identifies the picture successfully, it sends the second map to the client, and the client loads and displays the three-dimensional live-action map of the current location. If the server does not recognize the actual scene picture, the re-upload prompt information is displayed until identification succeeds.
It should be noted that the server of this embodiment divides the complex-terrain park into a plurality of modules, performs fine modelling of each module, and stores the modules on the server. When a client requests live-action data, the corresponding module is delivered through its mapping relationship. This segmented loading reduces the loading time and the amount of data the client device must handle at once, thereby optimizing the user experience.
That is to say, when the user starts the live-action positioning mode, the three-dimensional live-action map is loaded in segments: on the one hand, the complex terrain of the current area can be displayed with an accurate three-dimensional live-action map; on the other hand, segmented loading keeps each download small and fast.
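The segmented loading above can be sketched as a lookup from a user position to the pre-modelled module that covers it. The module size and keying scheme are assumptions for illustration; the patent does not state how the park is partitioned.

```python
MODULE_SIZE = 100.0  # assumed module edge length, in metres


def module_key(x, y):
    """Map a local position to the key of the module that contains it."""
    return (int(x // MODULE_SIZE), int(y // MODULE_SIZE))


def load_module(modules, x, y):
    """Return only the fine 3D model for the module at (x, y), if modelled."""
    return modules.get(module_key(x, y))
```

Each request thus transfers one module's model rather than the whole park, which is the "smaller and faster" download the text describes.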
In this example, the live-action positioning mode provides further accurate positioning and addresses weak GPS signals in complex terrain. Specifically, when the GPS positioning signal is weak and positioning is inaccurate, actual scene images can be compared against stored data. In this embodiment, feature identification points of each scene in the park are bound in the database, establishing a one-to-one mapping between each scene, its position information and its actual scene images. The actual scene image is uploaded to the server, which identifies its feature points and compares them with the data stored in the database. This ensures that the user's current position is identified even when the GPS signal is weak, and the three-dimensional live-action map corresponding to the two-dimensional map is loaded for the user's navigation.
In the fourth step, the server navigates according to the second map and sends navigation data to the client.
In this embodiment, the server achieves accurate positioning of the user according to the identified actual scene image, and sends a three-dimensional live-action map including the user's position to the client to guide the user during the visit. In other words, the client loads and displays the second map for navigation, where the second map is navigation data generated after the server identifies the live-action image.
In an optional embodiment, the navigating according to the second map and sending the navigation data to the client further comprises:
and receiving a fourth instruction sent by the client in response to a fourth operation of the user, determining a target location, planning and navigating according to the second map to acquire first navigation information, and sending the first navigation information to the client for display.
In this embodiment, the server navigates according to the three-dimensional live-action map and sends navigation data to the client for the user's reference. The user can select a target location through gesture interaction, for example by zooming the three-dimensional live-action map in or out, or can enter the target location in the search box to obtain route navigation.
Specifically, after the user taps a scenic-spot icon away from the current position, the model switches to fine-mode navigation, path planning is performed, and the user can carry out interactive operations such as moving forward, moving backward, zooming in and zooming out.
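Route planning on the live-action map can be modelled as shortest-path search over a graph of scenic-spot nodes. Dijkstra's algorithm is used below purely for illustration; the patent does not name a specific planner, and the graph shape is an assumption.

```python
import heapq


def plan_route(graph, start, target):
    """graph: {node: [(neighbour, cost), ...]}; returns the node path or None."""
    queue = [(0.0, start, [start])]  # (accumulated cost, node, path so far)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None  # target unreachable from start
```

The returned node sequence is the first-navigation-information payload the server would send to the client for display.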
In a specific embodiment, in order to obtain global information and local information simultaneously, meet different user requirements, and improve user experience, the navigating according to the second map and sending navigation data to the client further includes:
and sending multi-window prompt information, receiving a sixth instruction sent by the client in response to a sixth operation of the user, navigating by using the first map and the second map respectively to obtain second navigation information, and sending the second navigation information to the client for display.
In this embodiment, the first map and the second map may be displayed on the smartphone screen at the same time, for example in a split-screen or floating-window multi-window mode; for instance, the first map, which covers a larger range, may be shown in a small window while the second map occupies a large window. It should be noted that several windows may also be set according to user requirements; the number is not limited to two, and this application does not limit it.
In an alternative embodiment, the server navigates according to the second map based on the collected user portrait and sends navigation data to the client.
In this embodiment, the server makes navigation recommendations according to the user portrait and user preferences: for example, path planning and recommendation take age, interests and the like into account. When the user is identified as an elderly person, the navigation system plans an optimal path that conserves physical strength; when the user is identified as a young person, the navigation system recommends a touring path that reaches multiple scenic spots, thereby improving the user experience.
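One way to realise the portrait-aware recommendation above is to re-weight the route graph per user before planning. The weighting rule below (penalising climbing segments for older users) is an illustrative assumption, not the patent's actual recommendation logic, and the three-field edge format is hypothetical.

```python
def reweight_for_user(graph, portrait):
    """graph: {node: [(neighbour, flat_cost, climb_cost), ...]};
    returns {node: [(neighbour, cost), ...]} suitable for a shortest-path planner."""
    # Assumed rule: elderly users pay a triple penalty on climbing segments,
    # so the planner naturally prefers strength-saving routes for them.
    climb_factor = 3.0 if portrait.get("age", 0) >= 60 else 1.0
    return {
        node: [(nxt, flat + climb_factor * climb) for nxt, flat, climb in edges]
        for node, edges in graph.items()
    }
```

The re-weighted graph can then be fed to any shortest-path planner, so the same map yields different recommended routes for different user portraits.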
Based on the navigation method of the foregoing embodiments, another embodiment of the present application provides a complex-terrain-based navigation method applied to a client, as shown in FIG. 4, including:
S200, responding to a first operation of a user, sending a first instruction to a server, and loading and displaying a first map;
S202, receiving and displaying first positioning mode prompt information sent by the server, and sending a second instruction to the server in response to a second operation of the user, so that the server starts the first live-action positioning mode;
S204, responding to a third operation of the user, and sending the live-action image uploaded by the user to the server, so that the server can identify the live-action image;
and S206, loading and displaying a second map for navigation, wherein the second map is navigation data generated after the server identifies the live-action image.
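Steps S200 to S206 can be sketched as a client flow against a mocked server; every name, instruction key and payload here is an assumption made for illustration, not the patent's actual API:

```python
# Minimal sketch of the client flow S200-S206, with the server mocked out.
# Method names and payloads are assumptions, not the patent's interface.
class MockServer:
    def handle(self, instruction, payload=None):
        responses = {
            "first": "first_map",                 # S200 response
            "second": "positioning_mode_started",  # S202 response
            "image": "second_map",                 # S204/S206 response
        }
        return responses[instruction]

def client_flow(server, live_action_image):
    """Run the four client steps and return the maps displayed to the user."""
    shown = []
    shown.append(server.handle("first"))                     # S200: load first map
    server.handle("second")                                  # S202: start live-action mode
    second_map = server.handle("image", live_action_image)   # S204: upload image
    shown.append(second_map)                                 # S206: display second map
    return shown
```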
This embodiment achieves accurate positioning of the user by combining GPS positioning with live-action scanning, and navigates the user's trip by loading a three-dimensional live-action map of the position. It can solve the problem of users being unable to position themselves and getting lost in complex environments, effectively improves the convenience of travel, and has broad application prospects.
Those skilled in the art will appreciate that the foregoing embodiments and their attendant advantages apply equally to this embodiment; the description of like parts is therefore omitted.
Yet another embodiment of the present application provides a navigation system, as shown in fig. 2, comprising a server 20 and at least one client 10, wherein the server 20 is configured to:
receiving a first instruction sent by a client in response to a first operation of a user, acquiring positioning information and sending a first map to the client;
sending first positioning mode prompt information, receiving a second instruction sent by the client in response to a second operation of the user, and starting a first real-scene positioning mode;
receiving a third instruction sent by the client in response to a third operation of the user, receiving and identifying a live-action image uploaded by the user, and sending a second map to the client;
and navigating according to the second map and sending navigation data to the client.
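An illustrative server-side counterpart of these four configuration steps, with the map store and image recogniser reduced to stubs; all names are assumed:

```python
# Illustrative server-side dispatcher for the four steps above; the map store
# and image recogniser are stand-in stubs, not the patent's implementation.
class NavigationServer:
    def __init__(self):
        self.positioning_mode = None

    def on_first_instruction(self, client_position):
        """Acquire positioning info and return the coarse first map."""
        return {"map": "first_map", "center": client_position}

    def on_second_instruction(self):
        """Start the live-action positioning mode."""
        self.positioning_mode = "live_action"
        return "mode_started"

    def on_third_instruction(self, image):
        """Recognise the uploaded live-action image; ask for re-upload on failure."""
        if image is None:                      # stub for a failed recognition
            return {"error": "re-upload"}
        return {"map": "second_map"}

    def navigate(self, target):
        """Return navigation data planned on the second map."""
        return ["second_map_route_to_" + target]
```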
In one embodiment, the server 20 is a server cluster, comprising a front-end server cluster, a background server cluster, and a data server cluster, wherein
the front-end server cluster is used for providing page display for the client, including map model display, search, and other related pages;
the background server cluster is used for providing background service, user management, map module partitioning and interface management;
the data server cluster is used for storing the map database, user portraits, the recommended-route-planning file system and cache, and for providing data access and storage.
In this example, the advantage of using a server cluster is high reliability: the downtime of a single server does not affect the provision of external services.
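The three-cluster division of responsibilities could be expressed, purely as an illustrative sketch, as a routing table from request type to cluster; the duty keys below are assumed labels derived from the text:

```python
# Sketch of the three-cluster split described above, expressed as a routing
# table from request type to cluster. Cluster names follow the text; the
# request-type keys are illustrative assumptions.
CLUSTER_ROLES = {
    "front_end": ["page_display", "map_model_display", "search"],
    "background": ["background_service", "user_management",
                   "map_module_partition", "interface_management"],
    "data": ["map_database", "user_portrait", "route_plan_files", "cache"],
}

def route_request(request_type):
    """Pick the cluster responsible for a given request type."""
    for cluster, duties in CLUSTER_ROLES.items():
        if request_type in duties:
            return cluster
    raise KeyError(f"no cluster handles {request_type!r}")
```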
Yet another embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the complex terrain-based navigation method applied to a server or applied to a client as described in the foregoing embodiments.
In practice, the computer-readable storage medium may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present embodiment, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
As shown in fig. 5, a schematic structural diagram of a computer device according to another embodiment of the present invention is provided. The computer device 12 shown in FIG. 5 is only an example and should not bring any limitations to the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 5, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example implementing the complex-terrain-based navigation method provided by the foregoing embodiments.
It should be understood that the above-mentioned embodiments of the present invention are only examples given to clearly illustrate the invention and are not intended to limit its embodiments. Other variations or modifications may be made by those skilled in the art on the basis of the above description; the embodiments cannot be exhaustively listed here, and all obvious variations or modifications derived therefrom fall within the scope of the present invention.
Claims (10)
1. A navigation method based on complex terrain is applied to a server, and is characterized by comprising the following steps:
receiving a first instruction sent by a client in response to a first operation of a user, acquiring positioning information and sending a first map to the client;
sending first positioning mode prompt information, receiving a second instruction sent by the client in response to a second operation of the user, and starting a first real-scene positioning mode;
receiving a third instruction sent by the client in response to a third operation of the user, receiving and identifying a live-action image uploaded by the user, and sending a second map to the client;
and navigating according to the second map and sending navigation data to the client.
2. The navigation method of claim 1, wherein navigating according to the second map and sending navigation data to the client further comprises:
and receiving a fourth instruction sent by the client in response to a fourth operation of the user, determining a target location, planning and navigating according to the second map to acquire first navigation information, and sending the first navigation information to the client for display.
3. The navigation method of claim 1,
the receiving a first instruction sent by a client in response to a first operation of a user, acquiring positioning information and sending a first map to the client further comprises: sending user portrait prompt information, receiving a fifth instruction sent by the client in response to a fifth operation of the user, and receiving and storing the user portrait of the user;
the navigating according to the second map and sending the navigation data to the client further comprises: and navigating according to the user portrait of the user and the second map and sending navigation data to the client.
4. The navigation method of claim 1, wherein navigating according to the second map and sending navigation data to the client further comprises:
and sending multi-window prompt information, receiving a sixth instruction sent by the client in response to a sixth operation of the user, navigating by using the first map and the second map respectively to obtain second navigation information, and sending the second navigation information to the client for display.
5. The navigation method according to any one of claims 1 to 4,
the receiving a first instruction sent by a client in response to a first operation of a user, acquiring positioning information and sending a first map to the client further comprises: detecting and displaying positioning authority prompt information;
and/or
The sending the first positioning mode prompt message and receiving a second instruction sent by the client in response to a second operation of the user, and the starting the first live-action positioning mode further comprises: detecting and displaying camera shooting permission prompt information;
and/or
The receiving a third instruction sent by the client in response to a third operation of the user, receiving and identifying a live-action image uploaded by the user, and sending a second map to the client further comprises: and if the live-action image is not identified, displaying re-upload prompt information.
6. A navigation method based on complex terrain is applied to a client side, and is characterized by comprising the following steps:
responding to a first operation of a user, sending a first instruction to a server, and loading and displaying a first map;
receiving and displaying first positioning mode prompt information sent by the server, and sending a second instruction to the server in response to a second operation of a user, so that the server starts the first live-action positioning mode;
responding to a third operation of a user, and sending the live-action image uploaded by the user to the server so that the server identifies the live-action image;
and loading and displaying a second map for navigation, wherein the second map is navigation data generated after the server identifies the live-action image.
7. A navigation system comprising a server and at least one client, the server configured to:
receiving a first instruction sent by a client in response to a first operation of a user, acquiring positioning information and sending a first map to the client;
sending first positioning mode prompt information, receiving a second instruction sent by the client in response to a second operation of the user, and starting a first real-scene positioning mode;
receiving a third instruction sent by the client in response to a third operation of the user, receiving and identifying a live-action image uploaded by the user, and sending a second map to the client;
and navigating according to the second map and sending navigation data to the client.
8. The navigation system of claim 7, wherein the server is a server cluster comprising a front-end server cluster, a background server cluster, and a data server cluster, wherein
the front-end server cluster is used for providing page display for the client, including map model display, search, and other related pages;
the background server cluster is used for providing background services, user management, map module partitioning and interface management;
the data server cluster is used for storing the map database, user portraits, the recommended-route-planning file system and cache, and for providing data access and storage.
9. A computer-readable storage medium having stored thereon a computer program, characterized in that,
the program when executed by a processor implementing the method of any one of claims 1-5;
or
Which program, when being executed by a processor, carries out the method of claim 6.
10. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor,
the processor, when executing the program, implementing the method of any one of claims 1-5;
or
The processor, when executing the program, implements the method of claim 6.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111230764.6A CN113961659A (en) | 2021-10-22 | 2021-10-22 | A kind of navigation method, system, computer equipment and medium based on complex terrain |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN113961659A true CN113961659A (en) | 2022-01-21 |
Family
ID=79465988
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111230764.6A Pending CN113961659A (en) | 2021-10-22 | 2021-10-22 | A kind of navigation method, system, computer equipment and medium based on complex terrain |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113961659A (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105444773A (en) * | 2015-11-26 | 2016-03-30 | 中山大学 | Navigation method and system based on real scene recognition and augmented reality |
| CN107036609A (en) * | 2016-10-18 | 2017-08-11 | 中建八局第建设有限公司 | Virtual reality air navigation aid, server, terminal and system based on BIM |
| CN110470315A (en) * | 2019-06-27 | 2019-11-19 | 安徽四创电子股份有限公司 | A kind of sight spot tourist air navigation aid |
| CN112945253A (en) * | 2019-12-10 | 2021-06-11 | 阿里巴巴集团控股有限公司 | Travel route recommendation method, system and device |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119850077A (en) * | 2025-03-19 | 2025-04-18 | 辽宁数能科技发展有限公司 | A storage cargo transportation system and method |
| CN119850077B (en) * | 2025-03-19 | 2025-06-03 | 辽宁数能科技发展有限公司 | Warehouse cargo transportation system and method |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11165959B2 (en) | Connecting and using building data acquired from mobile devices | |
| US12332060B2 (en) | Localizing transportation requests utilizing an image based transportation request interface | |
| EP3989115A1 (en) | Method and apparatus for vehicle re-identification, training method and electronic device | |
| US9661214B2 (en) | Depth determination using camera focus | |
| CN107450088B (en) | Location-based service LBS augmented reality positioning method and device | |
| KR101932003B1 (en) | System and method for providing content in autonomous vehicles based on perception dynamically determined at real-time | |
| RU2741443C1 (en) | Method and device for sampling points selection for surveying and mapping, control terminal and data storage medium | |
| US9432421B1 (en) | Sharing links in an augmented reality environment | |
| US10147399B1 (en) | Adaptive fiducials for image match recognition and tracking | |
| US10606824B1 (en) | Update service in a distributed environment | |
| US9996895B2 (en) | Image display system, information processing apparatus, and image display method | |
| US20150206353A1 (en) | Time constrained augmented reality | |
| MX2013011249A (en) | Face recognition based on spatial and temporal proximity. | |
| Anagnostopoulos et al. | Gaze-Informed location-based services | |
| CN107656962B (en) | Panoramic display method in electronic map, server and computer readable medium | |
| WO2019080747A1 (en) | Target tracking method and apparatus, neural network training method and apparatus, storage medium and electronic device | |
| CN113009908B (en) | A motion control method, device, equipment and storage medium for unmanned equipment | |
| CN112487871A (en) | Handwriting data processing method and device and electronic equipment | |
| CN109711340A (en) | Information matching method, device, instrument and server based on automobile data recorder | |
| CN112650300A (en) | Unmanned aerial vehicle obstacle avoidance method and device | |
| CN113961659A (en) | A kind of navigation method, system, computer equipment and medium based on complex terrain | |
| CN104596509B (en) | Positioning method and system, and mobile terminal | |
| CN110188833B (en) | Method and apparatus for training a model | |
| CN109270925B (en) | Human-vehicle interaction method, device, equipment and storage medium | |
| JP7577608B2 (en) | Location determination device, location determination method, and location determination system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||