
CN113191841B - Scientific and technological innovation and culture sharing intelligent platform based on augmented reality technology - Google Patents


Info

Publication number
CN113191841B
CN113191841B
Authority
CN
China
Prior art keywords
video camera
camera device
virtual
area
augmented reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110463769.7A
Other languages
Chinese (zh)
Other versions
CN113191841A (en)
Inventor
张鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202110463769.7A
Publication of CN113191841A
Application granted
Publication of CN113191841B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0623Item investigation
    • G06Q30/0625Directed, with specific intent or strategy
    • G06Q30/0627Directed, with specific intent or strategy using item specifications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00Adapting or protecting infrastructure or their operation
    • Y02A30/60Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An intelligent platform method for technological innovation and culture sharing based on augmented reality technology, applied in the smart city and science-and-culture exhibition fields, the technology transaction field, the commodity transaction field combining online with offline, and the live sports broadcasting field. To solve the problems that existing science and culture exhibitions lack an on-site feeling when experienced online, that intangible cultural heritage and art products lack social recognition, that technology transaction information is asymmetric, and that watching live sports online lacks a sense of presence, the invention designs an intelligent platform method for technological innovation and culture sharing based on augmented reality technology and establishes a science and culture information sharing platform. The platform mainly comprises mobile application software, a background software system, and on-site video camera devices; it relies on 5G communication and augmented reality technology and uses the mobile application as the user experience mode.

Description

Scientific and technological innovation and culture sharing intelligent platform based on augmented reality technology
Technical Field
The invention relates to an intelligent platform for technological innovation and culture sharing based on augmented reality technology. The platform can display users' technological innovations, share innovative ideas and achievements, and realize virtual shopping and the sharing of smart-city street landscapes and culture. It is applied in the fields of smart cities, science and culture exhibition, and virtual tours of science exhibitions, sports stadiums, and art museums; in the fields of patent examination and retrieval, book and periodical retrieval, public knowledge-innovation inquiry, and technology transaction; in the field of computer-side and mobile-side platform application software; in the field of commodity transactions combining online with offline; and in the field of live sports broadcasting.
Background
Cluttered internet forums lack orderly, scientific guidance, standards, and specialized management. The many and diverse technological innovation and design competitions lack a unified platform for information release and online hosting, so the transparency of information channels is low. Platforms showing conventional arts to the public are numerous and fragmented: performing arts such as singing and dancing are well developed, while platforms for technological innovation are rare. Existing technology transaction information is asymmetric, and the market lacks a unified platform for trading technical achievements. Existing virtual shopping-tour modes comprise a live broadcast mode and a virtual reality mode: in the live broadcast mode the audience passively follows the rhythm of the director's cut and lacks the experience of actively choosing where to browse; in the virtual reality mode viewers watch virtual street scenes through VR glasses, not real-time tour scenes. The traditional physical shopping mode, and online modes lacking an on-site humane atmosphere, cannot meet the public's cultural entertainment demands, particularly for historic cultural streets and art districts. Artworks such as paintings and sculptures and national intangible-cultural-heritage commodities are traded mainly through offline exhibition and sale, which lacks information transparency and market promotion strength, while the existing online mall mode lacks diversified, specialized economies of scale and classified authentication services for artworks, so consumers lack detailed knowledge of the commodities.
Disclosure of Invention
In order to solve the problems that existing science and culture exhibitions lack an on-site feeling when experienced online, that intangible cultural heritage and art products lack social recognition, that technology transaction information is asymmetric, and that watching live sports online lacks a sense of presence, the invention designs an intelligent platform method for technological innovation and culture sharing based on augmented reality technology:
A science and culture information sharing platform is established, whose content is divided into a science-and-technology section and a culture section. The science-and-technology section comprises a technological innovation display and sharing module, a technical challenge module, a technology transaction module, a smart-city construction opinion collection module, and an exhibition and event module; the culture section comprises a smart-city virtual tour module, a culture and art exhibition module, a culture and art commodity transaction module, and a sports stadium live broadcast module. Based on augmented reality technology, every module lets the user control a virtual figure through a mobile application to visit a real site in real time. The augmented reality approach works as follows: the real site is divided into a number of virtual grids, several video camera devices are deployed on site, and each device corresponds to one or more virtual grids; the platform enables the corresponding video camera device based on the coordinate position of the visitor (the virtual figure) and the projection point of the visitor's field of view, thereby simulating the field of view of the human eye, as sketched below.
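As a minimal illustrative sketch (not part of the claimed implementation), the grid-to-camera mapping just described can be modeled as follows; all names such as Camera and select_camera are assumptions introduced here for illustration:

```python
# Minimal illustrative sketch of the grid-to-camera mapping.
# Camera, covered_cells and select_camera are assumed names, not
# identifiers from the patent.
from dataclasses import dataclass

@dataclass
class Camera:
    camera_id: int
    covered_cells: set[tuple[int, int]]   # virtual grid cells this device covers

def select_camera(cameras: list[Camera], gaze_cell: tuple[int, int]) -> Camera | None:
    """Enable the device whose coverage contains the grid cell hit by the
    visitor's line-of-sight projection point."""
    for cam in cameras:
        if gaze_cell in cam.covered_cells:
            return cam
    return None   # caller falls back to a wide-area device (cf. device 5 in fig. 4)

cams = [Camera(2, {(0, 0), (0, 1)}), Camera(4, {(1, 0), (1, 1)})]
print(select_camera(cams, (0, 1)).camera_id)   # -> 2
```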
The platform mainly comprises mobile application software, a background software system, and on-site video camera devices. The mobile application serves as the user experience mode, the background software system runs on a cloud server, and the three parts are interconnected through 5G communication. The specific functional modules are as follows:
1. The technological innovation display and sharing module is classified step by step by technical field into a first-level classification, a second-level classification, a third-level classification, ..., and an N-level classification, as sketched below. For example, the first-level classification comprises intelligent transportation and medical treatment; the second level subdivides the first, for example intelligent transportation into signal lights, vehicle navigation, and unmanned driving, and medical treatment into medical equipment and life medicine; the third level subdivides the second, for example unmanned driving into vehicles and engineering applications, and medical equipment into detection instruments and treatment instruments; the fourth level subdivides further, vehicles into sensors and control systems, and detection instruments into imaging instruments and sample assay instruments; subdividing further, sensors are classified into lidar, ultrasonic radar, millimeter-wave radar, video, and others, and imaging into MRI (nuclear magnetism), CT, ultrasound, and others; subdividing further, lidar is divided into hardware innovations and software algorithm innovations.
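Purely for illustration, the step-by-step classification can be pictured as a tree keyed by the example fields named above; the dictionary layout and the path_exists helper are assumptions, not the platform's actual data model:

```python
# Hypothetical sketch of the N-level classification tree described above;
# node names follow the examples given in the text.
taxonomy = {
    "intelligent transportation": {
        "unmanned driving": {
            "vehicles": {
                "sensors": {"lidar": {"hardware innovation": {},
                                      "software algorithm innovation": {}},
                            "ultrasonic radar": {}, "millimeter-wave radar": {},
                            "video": {}},
                "control system": {},
            },
            "engineering application": {},
        },
        "signal lights": {}, "vehicle navigation": {},
    },
    "medical treatment": {
        "medical equipment": {
            "detection instruments": {"imaging": {"MRI": {}, "CT": {}, "ultrasound": {}},
                                      "sample assay instruments": {}},
            "treatment instruments": {},
        },
        "life medicine": {},
    },
}

def path_exists(tree: dict, path: list[str]) -> bool:
    """Walk the classification levels; publishing happens at the last level."""
    for level in path:
        if level not in tree:
            return False
        tree = tree[level]
    return True

print(path_exists(taxonomy, ["intelligent transportation", "unmanned driving",
                             "vehicles", "sensors", "lidar"]))   # -> True
```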
After the step-by-step classification, the last level of the technical field leads to a brand-new interface, the B interface, which comprises 'search', 'review', and 'publish'. 'Publish' leads to the next-level interface, the C interface, which comprises a conceptual method and a detailed scheme. The conceptual method comprises the problem to be solved and the solution idea, where the problem to be solved refers to the problem intended to be solved and the solution idea refers to the solution method and implementation mode; the problem and the solution idea each correspond to content to be filled in or imported, comprising text, pictures, and video. Through 'review' the user enters the next-level interface, the D interface, where the user can search the index of content published in the corresponding technical field and enter the specific content interface, the E interface, from the content index.
The main interface of technological innovation display and sharing provides shortcut indexes 'search', 'review', and 'publish'. 'Review' and 'publish' lead to a review interface and a publication interface respectively, where the technical field is selected level by level before entering the D interface or the C interface. 'Search' leads to a search interface where content is queried directly by input keywords; the query result is displayed as a content index list, and the E interface is entered from a content index.
The user can review the number of times published content has been browsed, the number of browsing users, the interest index, the exploration intention index, and the reference value index, and can review the user's number of fans. The content interface (the E interface) offers interest, exploration intention, and follow controls. Interest is divided into four score segments, 25, 50, 75, and 100, corresponding from low to high to 'uninterested', 'general', 'interested', and 'very interested'; a viewer expresses interest by selecting one of these four options, and the sum of the corresponding scores over all viewers is the interest index. Exploration intention is divided into 'no further exploration' and 'further exploration', corresponding to scores 0 and 10 respectively; when a reader selects further exploration, the exploration direction, i.e. the content to be further understood, must be entered as text, and the sum of the corresponding scores over all readers is the exploration intention index, as in the sketch below. The E interface provides a comment column where readers comment and the content publisher answers readers' comments. A viewer who selects 'follow' becomes a fan of the content publisher.
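A worked sketch of the scoring rules just stated, assuming the indexes are plain sums of per-viewer scores; the function names are illustrative:

```python
# Worked sketch of the stated scoring rules: interest options map to
# 25/50/75/100 points, exploration intention to 0/10 points, and each
# index is the sum over all viewers.
INTEREST_SCORES = {"uninterested": 25, "general": 50,
                   "interested": 75, "very interested": 100}
EXPLORE_SCORES = {"no further exploration": 0, "further exploration": 10}

def interest_index(votes: list[str]) -> int:
    return sum(INTEREST_SCORES[v] for v in votes)

def exploration_index(votes: list[str]) -> int:
    return sum(EXPLORE_SCORES[v] for v in votes)

print(interest_index(["interested", "very interested", "general"]))          # 225
print(exploration_index(["further exploration", "no further exploration"]))  # 10
```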
The content publisher can decide whether to publish second-level content according to the exploration intention index of the published content and the content readers want to know further. The content first published is the first-level content; second-level content is more detailed content published on the basis of, and associated with, the first-level content; third-level content can likewise be published and is associated with the second-level content, layer by layer. Every level's interface is the same: the content is divided into the problem to be solved and the solution idea, and has comments, interest, exploration intention, and follow.
Besides the conventional forms of text, pictures, and video, the method adopts augmented reality technology to display published content in real time: a reader is placed on the published real content in a virtual tour mode, realizing real-time sightseeing of the real content from different angles.
The method supports users in viewing the real objects live in an augmented reality mode.
2. The technical challenge module follows the same idea as the technological innovation display and sharing module. It adopts augmented reality technology to describe technical challenges, supports readers in observing a real-time presentation of a challenge from different angles, and is classified step by step by technical field. The difference is that published content is divided into a challenge summary and a challenge description, each with columns for entering or importing text, pictures, and video and for a virtual augmented real-time presentation, plus a 'solve' option. By selecting 'solve' a reader enters the C interface of the technological innovation display and sharing module; that is, the technical challenge module is associated with that module, and the reader can observe the virtual augmented real-time presentation of the technical challenge from different angles. The publisher of a technical challenge decides whether a responder's solution idea is disclosed; if privacy is selected, the responder's reply can be seen only by the publisher. According to the responder's answer, the publisher selects 'higher reference value', 'general reference value', or 'no reference value', each corresponding to a different score that forms the reference value index of the responder's published content.
Platform users are divided into individual users, enterprise users, scientific research institution users, and government users. In the technical challenge module, enterprise and scientific research institution users decide whether to pay rewards to responders according to the selected 'higher reference value', 'general reference value', or 'no reference value', and responders can see the user type of the challenge proposer.
All platform users register under a real-name system; the name, identity card number, contact information, and address must be real, and published original content is permanently recorded.
3. The technology transaction module lets users publish and browse intellectual property information and trade intellectual property online. It adopts augmented reality technology to display patent certificates and achievements in real time, and supports a visitor virtually placed at a technical achievement display site, observing the related achievements in real time from different angles.
After a user publishes intellectual property transfer information, the platform background searches and verifies whether the information is true; if true, the information is published on the platform, otherwise it is not.
4. The smart-city construction opinion collection module is divided into opinion collection and problem feedback for smart-city construction. A published opinion collection supports selecting an area, with regional levels classified into province or municipality, city, district or county, community or township, and street or road section; the collecting party selects the area and fills in the problem description, targets, and planning. Users can browse opinion collection information and fill in advice; the collecting party evaluates the value of each answer, selects 'higher reference value', 'general reference value', or 'no reference value', decides accordingly whether to pay a reward to the respondent, and the respondent can see the evaluation the advice received. The module also supports users publishing smart-city construction problem feedback: selecting an area, selecting a problem type (road problems such as road damage, vehicle violations, road congestion, and road waterlogging; garbage problems; building problems; river problems; pollution problems), and filling in a problem description.
The problems addressed by smart-city construction opinions are likewise realized with augmented reality technology: users virtually visit the real-time sites related to the problems and experience the real-time conditions on the spot.
5. The exhibition and event module adopts augmented reality technology to display the conditions of exhibitions and events in real time; the audience virtually visits and watches them in real time. The system supports users releasing advance notice information for technology exhibitions and events, published only after passing platform background review; it supports browsing the advance notice information sorted by time and date, and retrieval by category (such as traffic electronic equipment, unmanned driving, virtual reality technology) and by region (by province and city). A user releasing advance notice information pays a deposit online and, for a false release, assumes legal responsibility and pays liquidated damages. The user is allowed to modify the advance notice information (such as a change of time or place), but must submit the modification application online and explain the reason, and may modify it only after platform background approval.
The platform supports users virtually visiting an exhibition or event scene in real time online. Video camera devices are deployed at multiple angles and positions on the scene to simulate what human eyes would see at different positions and viewing angles, including the view from audience seats in the sightseeing area, the view from a high platform, a bird's-eye view as if from an aircraft, the view from a ground sightseeing vehicle, and the view while walking in the scene's walking area. Three modes simulate the human eye's field of view. The first adopts fixed-focus video camera devices, or fixes the angle and focal length of each device, and combines the coverage of many devices at different positions and focal lengths to simulate human eyes sightseeing at different positions and focal lengths. The second adopts pan-tilt rotating and zooming video camera devices to simulate the focus area of the human eye's view; this reduces the number of devices, but when several virtual visitors share a single device with different fields of view, the pan-tilt rotation angle and focal length conflict. The solution idea is a priority mechanism that ranks by the number of users requesting the same field of view: the view with more users is simulated first, and users of lower-ranked views must wait or turn to other views, as in the sketch below. The third mixes fixed and pan-tilt devices, i.e. deploys two or more of pan-tilt zooming, pan-tilt fixed-focus, fixed fixed-focus, and fixed zooming video camera devices on the exhibition and event site; when multiple users call the same rotating or zooming device at different angles or focal lengths, the priority mechanism is again adopted, executing first the request of the angle or focal length with the most users. Pan-tilt rotation means the device changes its lens angle by mechanical rotation; fixed focus means the device uses a fixed-focal-length lens; zooming means the focal length can be changed.
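The priority mechanism described above can be sketched as follows, assuming requests are grouped by the exact (pan angle, focal length) pair; the tie-breaking and API shape are assumptions:

```python
# Sketch of the priority mechanism for a shared pan-tilt-zoom device:
# requests are grouped by the requested (pan angle, focal length); the
# group with the most users is served first, the rest wait or switch views.
from collections import Counter

def resolve_ptz_conflict(requests: list[tuple[float, float]]) -> tuple[float, float]:
    """requests: one (pan_angle_deg, focal_length_mm) per user.
    Returns the view the device simulates first under majority priority."""
    winning_view, _count = Counter(requests).most_common(1)[0]
    return winning_view

reqs = [(90.0, 35.0), (90.0, 35.0), (270.0, 50.0)]
print(resolve_ptz_conflict(reqs))   # (90.0, 35.0): two users outrank one
```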
In the three modes for simulating the human eye's field of view, when the depth of field of a video camera device is shallower than the depth of the human eye's view, the focus area of the human eye's view is simulated by changing the focusing of the device.
The fixed fixed-focus video camera device mainly comprises a camera (fixed-focus lens, image sensor, digital signal processing chip), a communication module, and a fixing device; the fixed zooming device comprises a camera (zoom lens, image sensor, digital signal processing chip), a communication module, and a fixing device; the pan-tilt fixed-focus device comprises a camera (fixed-focus lens, image sensor, digital signal processing chip), a pan-tilt head (motor and rotating shaft), a communication module, and a fixing device; the pan-tilt zooming device comprises a camera (zoom lens, image sensor, digital signal processing chip), a pan-tilt head (motor and rotating shaft), a communication module, and a fixing device. The video camera devices are networked via 5G wireless communication.
The method supports the user controlling the moving direction of the virtual visitor through the front, back, left, and right buttons of the mobile application or the PC-side platform web page, and controlling the sightseeing focus area through a focus button; it also supports the user simulating the visitor's walking, viewing angle, and focus area through head-mounted and handheld VR equipment.
Several sound collectors are deployed at the exhibition and event site to simulate the ambient sound heard by human ears on site. There are two simulation modes. In the first, the focus area of the video camera device corresponds to the sound collectors of that area: the human-eye view area corresponds to the focus area of the device simulating the view, and the heard sound originates from that area. In the second, the heard sound originates from the region near the figure's position: the virtual figure has coordinates at the exhibition and event scene, the user can see the virtual position on the PC-side or mobile screen, and the ambient sound collected by the pickups near those coordinates is what is heard, as in the sketch below.
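A minimal sketch of the second audio mode, assuming the pickup nearest the virtual figure's coordinates is served; the data layout is hypothetical:

```python
# Serve the sound collector nearest to the visitor's virtual coordinates.
import math

def nearest_pickup(visitor_xyz: tuple[float, float, float],
                   pickups: dict[int, tuple[float, float, float]]) -> int:
    """pickups: pickup id -> installed (x, y, z) position."""
    return min(pickups, key=lambda pid: math.dist(visitor_xyz, pickups[pid]))

pickups = {1: (0.0, 0.0, 2.5), 2: (10.0, 0.0, 2.5)}
print(nearest_pickup((8.0, 1.0, 0.0), pickups))   # -> 2
```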
The platform supports holding exhibitions and events online and physically in coordination. The platform grants users permission to visit specific areas of the site, with different permissions for different logged-in users; that is, video camera devices and sound pickups in different site areas are called based on user permission. Based on permission and site area, the platform supports voice chat and video calls between logged-in users.
6. The smart-city virtual sightseeing module is similar to the exhibition and event module: it adopts augmented reality technology and, based on permission, supports users virtually touring public areas such as city squares and roads in real time.
7. The culture and art exhibition module is similar to the exhibition and event module and adopts augmented reality technology to support users virtually sightseeing culture and artwork exhibition and market sites.
8. The culture and art commodity transaction module also adopts augmented reality technology, adding an exhibit transaction function on the basis of the culture and art exhibition module. Each exhibit has a unique identification number and identification code; entering the identification number into the platform's exhibit information inquiry and purchase column lets the user view the exhibit's text introduction and price and purchase it online. Calling the identification-code reading function of the video camera device in use through a platform interface button works similarly to entering the identification number, the difference being that the device itself recognizes the identification code; the user then views the exhibit's text introduction and price and purchases it online.
9. The sports stadium live broadcast module supports users virtually placed at the competition scene, watching the details of the competition in real time from different viewing angles.
Drawings
Fig. 1 is an azimuth schematic. The azimuth is the horizontal angle, from 0° to 360°, swept clockwise from the ray pointing due north out of point o (the arrow pointing straight up) to the target direction line from o, wherein: point o is the origin; the numeral 0 marks due north, i.e. 0°, which coincides with 360° after a full rotation; the numeral 90 marks due east, i.e. 90°; the numeral 180 marks due south, i.e. 180°; the numeral 270 marks due west, i.e. 270°.
Figs. 2 and 3 are vertical angle schematics. The vertical angle is the angle between the eye's line of sight and its horizontal line of sight, wherein: o is the point where the human eye is located; oP0 is the horizontal line-of-sight vector; oP1 is the line-of-sight vector at 90°, i.e. the vertical angle is 90° when the line of sight points vertically upward; oP2 is the line-of-sight vector at -90°, i.e. the vertical angle is -90° when the line of sight points vertically downward; oPi is a line-of-sight vector whose vertical angle satisfies 0° <= angle P0oPi <= 90°, with angle P0oPi the vertical angle; d1d2 is a diameter of circle C0 and of circle C1, d1d2 is perpendicular to oPi, the plane of circle C0 is perpendicular to the plane of circle C1, the areas of circles C0 and C1 are equal, and circle C1 is the field-of-view area of the human eye at vertical angle P0oPi.
Fig. 4 is a schematic of a tour area scenario, wherein: 1, 2, 3, 4, 5 are video camera devices 1 through 5 respectively; quadrilateral adhe is a corridor (equivalently a sidewalk or street), and quadrilaterals abcd and efgh are exhibition areas (exhibition rooms, stores, artwork placement areas, building outer walls) on the two sides of the corridor; abcd is split into quadrilaterals abpq and pqdc, and efgh into quadrilaterals efmn and mnhg; video camera devices 1, 2, 3, 4, 5 cover and photograph the abpq, pqdc, efmn, mnhg, and adhe areas from different angles.
Fig. 5 is a geometric schematic of the projection point coordinate calculation, where o is the visitor's position point, T is the projection point of the visitor's line of sight on the viewing plane (e.g. the plane of quadrilateral abcd in fig. 4), W is the projection of T onto the horizontal plane through o (so T, W, V all lie in the viewing plane), V is the foot of the perpendicular from point o to that plane, and oN is the north-pointing vector.
Fig. 6 is flow chart one of the virtual tour screen calling mode, where the numbers and letters in the graphic symbols are interpreted as: 1, start; 2, match the scene, i.e. divide the virtual sightseeing area into several planes, each composed of several grids, calibrate each grid's coordinate values in a unified coordinate system, and establish the correspondence between the visitor's virtual position and the scene coordinates; 3, calculate the position projection point coordinates, i.e. the projection of the visitor's virtual position point on the corresponding surrounding viewing-area plane; 4, calculate the sight projection point coordinates, i.e. the projection of the visitor's line of sight on the corresponding surrounding viewing-area plane; 5, match the sight projection point's coordinate area, i.e. divide the surrounding viewing-area plane into several sub-areas and match the specific sub-area containing the sight projection point; 6, judge whether the sight projection point and the position projection point are in the same sub-area, N meaning 'no' (the sight projection point is not in the position projection point's sub-area) and Y meaning 'yes'; 7, call a certain video camera device (e.g. video camera device 5 in fig. 4, which is focused on area adhe); 8, call the video corresponding to the sight projection point's area (e.g. in fig. 4, call video camera device 2 when the sight projection point is in the abpq sub-area); 9, end.
Fig. 7 is flow chart two of the virtual tour screen calling mode, where the numbers and letters in the graphic symbols are interpreted as: 1, start; 2, match the scene, as in fig. 6; 3, calculate the position projection point coordinates, i.e. the projection of the visitor's virtual position point on the corresponding surrounding viewing-area plane; 4, calculate the sight projection point coordinates, i.e. the projection of the visitor's line of sight on the corresponding surrounding viewing-area plane; 5, match the sight projection point's coordinate area, i.e. divide the surrounding viewing-area plane into several sub-areas and match the specific sub-area containing the sight projection point; 6, judge whether the sight projection point is in a sub-area adjacent to the position projection point's sub-area (e.g. in fig. 4, pqdc and abpq are two adjacent sub-areas, and mnhg and efmn are two adjacent sub-areas), N meaning 'no' and Y meaning 'yes'; 7, call a video camera device whose own projection point is in the visitor's position-projection sub-area and focus it on the sight projection point's sub-area (e.g. in fig. 4, call video camera device 2 to focus on the pqdc sub-area while device 2's projection point is in the abpq sub-area); 8, judge whether the sight projection point and the position projection point are in the same sub-area, N meaning 'no' and Y meaning 'yes'; 9, call the video camera device whose projection point is in the visitor's sight-projection sub-area and focus it on that sub-area (e.g. in fig. 4, video camera device 2 focused on the pqdc sub-area); 10, call a certain video camera device (e.g. video camera device 5 in fig. 4, which is focused on area adhe); 11, end.
Fig. 8 is a schematic of an exhibition and shopping mall based on augmented reality technology. The adhe area is the corridor area, and the small squares into which the dashed lines divide it form a grid. Rectangles abcd and efgh are the viewing plane areas corresponding to the adhe area; abcd is divided into the abpq and pqdc sub-areas, and efgh into the efmn and mnhg sub-areas. Several artworks or commodities are displayed in each sub-area; the circles represent the displayed artworks or commodities, and the numbers represent the commodity number and two-dimensional code: the artworks or commodities in the abpq area are 21, 22, 23, 24, 25, 26, in the pqdc area 41, 42, 43, 44, 45, 46, in the efmn area 11, 12, 13, 14, 15, 16, and in the mnhg area 31, 32, 33, 34, 35, 36. The cylinders and their numbers represent the video camera devices: device 1 is responsible for image acquisition and code scanning of the efmn area, device 2 of the abpq area, device 3 of the mnhg area, and device 4 of the pqdc area; device 5 is responsible for image acquisition of the adhe area and at the same time acquires overhead images of the abcd and efgh areas.
Fig. 9 is a schematic of real-time viewing of a match at a football field based on augmented reality technology. The solid outline is the football field in top view; the dashed lines are virtual dividing lines of the field, and each dashed grid formed by the dashed lines (and, at the edge, by the solid outline outside the field) is the focus area of a single video camera device. The numerals 1 through 36 label the dashed grids; an axonometric view of one dashed grid is shown in fig. 10. The points a0b0, a1b0, a2b0, ..., a6b6 (that is, aibj with i, j = 0, ..., 6) are the demarcation points between the virtual grids, where the video camera arrays are placed. U1i, U2i, ..., U24i, with subscript i = 1, ..., 72, are 24 video camera arrays. Each array holds 72 video camera devices in two groups of 36, similar to the compound eye of an insect; within an array, each device of one group focuses on one virtual grid area, one group covering the upper parts of the 36 virtual grids (e.g. cuboid cdefghjk in fig. 10) and the other group the lower parts (e.g. cuboid abcdjkmn in fig. 10). Each array of 72 devices thus covers the 36 virtual grid areas, and the 24 arrays each capture the 36 virtual grid areas from different positions.
Fig. 10 is an axonometric view of any one of the virtual grids in fig. 9: each of the 36 virtual grids in fig. 9 is a projected view of fig. 10, i.e. the virtual grid quadrilateral seen in fig. 9 (e.g. a0b0-a1b0-a1b1-a0b1, virtual grid 1) is the quadrilateral efgh in fig. 10, and the quadrilaterals efgh, cdjk, and abmn overlap under vertical projection of the virtual grid. In fig. 10, the figure at point o is the viewer's virtual figure; the user controls the figure through the application to virtually tour the real football field. abmn is an area within the football field, and the dashed grid inside abmn is a coordinate grid, each cell having a coordinate value; each walking step of the virtual figure corresponds to coordinate grid cells, for example each step spans one cell.
Fig. 11 is a schematic of the video camera arrays of fig. 9, in which devices 1 through 36 form one group, the first group of video camera devices, and devices 37 through 72 form the other group, the second group of video camera devices.
Figs. 12 and 13 are schematics of automatically starting a video camera array. Fig. 12 is the same as fig. 9 except that points o, W, and Q are added, together with their connecting lines and the dashed line of the plane where the video camera array lies: point o is the visitor's position point; W is the projection on the horizontal plane of the visitor's projection point on a certain virtual grid surface (like point W in fig. 5); Q is the intersection of the vector Wo with the plane of the video camera array. Fig. 13 is a further (axonometric) illustration of that intersection: abcd and cdef are virtual grids, ghbf represents the plane of the video camera array, U1i and U2i are video camera arrays, and the shortest distances from U1i and U2i to bf are equal, i.e. U1i and U2i lie at the same height above the plane abef.
Detailed Description
1. Software and hardware system
The intelligent platform for technological innovation and culture sharing based on augmented reality technology comprises hardware and software. The hardware comprises the 5G communication network, the background cloud server, front-end mobile application equipment, and the on-site video camera devices. The front-end mobile application equipment comprises a mobile phone or tablet computer together with VR glasses; the VR glasses connect and communicate with the mobile phone or tablet via Bluetooth or WiFi. Without VR glasses, the platform's application functions are realized directly through the mobile phone or tablet, the difference being the lack of the three-dimensional stereoscopic impression of the VR-glasses view. The software comprises the background system and the mobile application: the user views the functional interface, sends requests, and views the results through the mobile application, while the platform functions are realized by the background system. The background system runs on the background cloud server, the mobile application runs on the front-end mobile equipment, and, relying on the hardware, the mobile application, the background system, and the video camera devices establish their interconnections through the 5G communication network.
2. Example implementation of the platform's mobile application functions
A portal displays the science-and-technology and culture indexes; from the portal interface the user enters the science-and-technology section interface or the culture section interface, and from the science-and-technology section interface enters the technological innovation display and sharing main interface, the technical challenge main interface, or the technology transaction main interface.
Video camera devices in public areas are uniformly installed, deployed, and networked to the platform by the public area management department; in private areas such as shops, the owners decide for themselves whether to install and deploy video camera devices and network them to the platform.
Every position point of an exhibition and event site, an art and cultural goods mall site, or a cultural street site has a three-dimensional coordinate; the smaller the radius of a position point, the higher the coordinate precision. Every camera deployed on site also has a three-dimensional coordinate.
The exhibition and event sites, art and cultural goods mall sites, and cultural street sites have corresponding geographic information system (GIS) maps, which correspond to the coordinates of every position point of the sites. GIS map software modules for these sites are embedded in the platform's mobile application. The magnetometer, gravitational accelerometer, and gyroscope in the mobile phone measure the phone's azimuth, vertical angle, and angular velocity, so the user can change the phone's direction (azimuth and vertical angle) by gesture to send the viewing angle to be watched; the viewing angle can equally be controlled by the up, down, left, and right buttons in the mobile application interface, with the same function as changing the phone's direction.
Azimuth angle: north is 0°, east is 90°, south is 180°, west is 270°, and 360° coincides with 0°; with north up, south down, west left, and east right, the azimuth rotates clockwise from 0° about the origin up to 360°, as shown in fig. 1. Vertical angle: the line of sight rotates upward from the horizontal line of sight to a maximum of 90° and downward to a maximum of -90°, as shown in fig. 2.
The visitor's field-of-view coordinate area is determined from the visitor's on-site virtual position coordinates, the azimuth and vertical angles of the line of sight, and the coordinates of the on-site position points, as sketched below.
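Under the angle conventions of figs. 1 to 3, and assuming a right-handed frame with x pointing east, y north, and z up (a convention not fixed by the text), the line-of-sight direction can be sketched as:

```python
# Convert (azimuth, vertical angle) into a unit line-of-sight vector.
# Azimuth: 0 deg = north, clockwise; vertical angle: 0 deg = horizontal,
# +90 deg = straight up. The east/north/up axis assignment is an assumption.
import math

def sight_vector(azimuth_deg: float, vertical_deg: float) -> tuple[float, float, float]:
    az, vt = math.radians(azimuth_deg), math.radians(vertical_deg)
    horiz = math.cos(vt)                  # horizontal component of the gaze
    return (horiz * math.sin(az),         # east
            horiz * math.cos(az),         # north
            math.sin(vt))                 # up

print(sight_vector(90.0, 0.0))   # due east, level: approx (1.0, 0.0, 0.0)
print(sight_vector(0.0, 90.0))   # straight up:      approx (0.0, 0.0, 1.0)
```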
Platform business mode: an exhibition or shopping mall side freely chooses whether to go online on the platform; once online, it pays a fee to the platform.
3. Site layout algorithm design based on augmented reality technology
1. Example of on-site layout algorithm design based on augmented reality technology for the smart-city virtual tour module, the culture and art exhibition module, the culture and art commodity transaction module, the exhibition and event module, the smart-city construction opinion collection module, the technological innovation display and sharing module, the technology transaction module, and the technical challenge module:
The sightseeing site is divided into several planes, namely viewing planes and walking planes. Each plane is represented by a rectangular area; each rectangular area is divided into several rectangular sub-areas, each sub-area into several grids, and each grid has unique coordinates. A rectangular area representing a walking plane is called a walking area, and one representing a viewing plane is called a viewing area; viewing areas and walking areas are associated with each other by coordinate range, i.e. each walking area (coordinate range) is associated with one or more viewing areas (coordinate ranges), as in the sketch below. As shown in fig. 4, abcd represents a viewing plane divided into the abpq and pqdc sub-areas; efgh represents a viewing plane divided into the efmn and mnhg sub-areas; adhe represents a walking plane corresponding to the abcd and efgh areas, i.e. when the visitor's coordinates lie in the adhe area, the surrounding viewing areas are abcd and efgh.
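An illustrative data layout for this decomposition, with hypothetical names and grid ranges, might look like:

```python
# Hypothetical layout: walking and viewing regions, each with named
# sub-areas given as (x0, y0, x1, y1) grid coordinate ranges.
from dataclasses import dataclass

@dataclass
class Region:
    name: str                                             # e.g. "abcd"
    sub_areas: dict[str, tuple[float, float, float, float]]

walking = Region("adhe", {"adhe": (0, 0, 40, 10)})
viewing = [Region("abcd", {"abpq": (0, 12, 20, 16), "pqdc": (20, 12, 40, 16)}),
           Region("efgh", {"efmn": (0, -6, 20, -2), "mnhg": (20, -6, 40, -2)})]
associations = {"adhe": ["abcd", "efgh"]}   # walking area -> viewing areas

def sub_area_of(region: Region, x: float, y: float) -> str | None:
    """Match the sub-area whose coordinate range contains (x, y)."""
    for name, (x0, y0, x1, y1) in region.sub_areas.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

print(sub_area_of(viewing[0], 25, 14))   # -> "pqdc"
```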
As shown in fig. 4, the video camera devices establish correspondences with the photographed areas abpq, pqdc, efmn, mnhg, adhe. (1) Correspondence I: devices 1, 2, 3, 4, 5 are fixed-focus devices, where device 1 focuses on and covers the efmn area, device 2 the abpq area, device 3 the mnhg area, device 4 the pqdc area, and device 5 the adhe area. (2) Correspondence II: devices 1, 2, 3, 4 are pan-tilt zooming devices, where device 1 focuses on and covers the efmn and mnhg areas in turn through pan-tilt rotation and zooming, device 2 the abpq and pqdc areas, device 3 the efmn and mnhg areas, and device 4 the abpq and pqdc areas; that is, devices 1 and 3 have the same focus coverage but different shooting angles, and likewise devices 2 and 4, while device 5 is a fixed-focus device focused on the adhe area. The abcd, efgh, and adhe areas are virtually divided into several equal-sized grids (each grid called a point), the coordinates of each grid (point) are calibrated, and each video camera device corresponds to an area with a set coordinate range.
As shown in fig. 5, point o is the visitor's virtual coordinate point, and point V is the point of the plane of abcd in fig. 4 closest to o, i.e. oV is perpendicular to the plane of abcd. The equation of the plane of abcd in the spatial coordinate system (the three-dimensional space formed by the x, y, and z axes) is
Ax + By + Cz + D = 0 (1)
wherein A, B, C, D are known constants. Point T is the centre point of the human eye's focus area, projected from point o (according to the visitor's azimuth and vertical angles) onto the plane abcd; T = (x_T, y_T, z_T) and V = (x_V, y_V, z_V), while o = (x_o, y_o, z_o) is known from the geospatial positioning. The point (grid) of the plane abcd closest to o is selected as the projection point V. Segment oV is perpendicular to segment VT (oV perpendicular to VT) and to segment VW (oV perpendicular to VW); the triangle TWV lies in the abcd region; oN is the azimuth vector from point o toward due north; oV is parallel to the normal vector of the plane abcd; oW is the projection line of oT on the horizontal plane through o, so oN lies in the same horizontal plane as the triangle oVW; WT is perpendicular to that horizontal plane. The azimuth angle of the line of sight, angle NoW, is known (2), and therefore the angle alpha = angle WoV is known.
The normal vector of the plane of abcd, i.e., the direction vector oV, is (A, B, C), and the equation of the straight line of oV is
x = kA + x_o,  y = kB + y_o,  z = kC + z_o (3)
Substituting formula (3) into formula (1)
k(A^2 + B^2 + C^2) + Ax_o + By_o + Cz_o + D = 0 (4)
i.e.
k = -(Ax_o + By_o + Cz_o + D) / (A^2 + B^2 + C^2) (5)
The point V lies in the plane of abcd, so substituting the coordinates of V, (x_V, y_V, z_V), into (3) gives
x_V = kA + x_o,  y_V = kB + y_o,  z_V = kC + z_o (6)
Substituting formula (5) into formula (6) gives
x_V = x_o - A(Ax_o + By_o + Cz_o + D) / (A^2 + B^2 + C^2), and likewise for y_V and z_V (7)
which simplifies to the vector form
(x_V, y_V, z_V) = (x_o, y_o, z_o) - ((Ax_o + By_o + Cz_o + D) / (A^2 + B^2 + C^2)) (A, B, C) (8)
The length of segment oV is |oV| = sqrt((x_o - x_V)^2 + (y_o - y_V)^2 + (z_o - z_V)^2), calculated from the coordinates of o and V. Assuming oN is parallel to VW (in practical application, when oN is not parallel to VW, a vector oN_0 parallel to VW is drawn from point o, the azimuth of oN_0 is measured, and the azimuth of oN_0 enters the calculation of the relevant angles instead), then alpha = angle WoV = 90° - angle NoW. In the right triangle oVW (right angle at V), the cosine relation gives |oW| = |oV| / cos(alpha) and the sine relation gives |VW| = |oV| tan(alpha), from which the y-axis coordinates of W and T are calculated. In the right triangle oWT (right angle at W, since WT is vertical and oW horizontal), with beta the vertical angle of the line of sight, |oT| = |oW| / cos(beta) and |TW| = |oW| tan(beta), which determines the z-axis coordinate of T; the x coordinates of V, W, and T follow from the coordinates of o and the length |oV|. Here WT is parallel to the z-axis of the spatial coordinate system, VW to the y-axis, and oV to the x-axis. In practical application, if WT is not parallel to the z-axis, VW not parallel to the y-axis, or oV not parallel to the x-axis, the coordinates of a point can still be calculated from the coordinates of another point, the lengths of the connecting segments, and the angles between them, using the sine, cosine, tangent, and cotangent relations, as in the sketch below.
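A sketch of the computation of V (formulas (3) to (8)) and of the gaze point T, assuming the axis alignment stated above (oV parallel to the x-axis, VW to the y-axis, WT to the z-axis) and positive offsets toward increasing y and z; all function names are illustrative:

```python
# Illustrative computation of V and of the gaze point T in fig. 5.
import math

def foot_of_perpendicular(o, plane):
    """V: the point of the plane Ax + By + Cz + D = 0 closest to o."""
    A, B, C, D = plane
    k = -(A * o[0] + B * o[1] + C * o[2] + D) / (A**2 + B**2 + C**2)
    return (k * A + o[0], k * B + o[1], k * C + o[2])

def gaze_point(o, plane, alpha_deg, beta_deg):
    """T: centre of the gaze focus area on the plane, from right-triangle
    trigonometry in oVW (alpha = angle WoV) and oWT (beta = vertical angle)."""
    V = foot_of_perpendicular(o, plane)
    oV = math.dist(o, V)
    VW = oV * math.tan(math.radians(alpha_deg))   # offset of W along the y-axis
    oW = oV / math.cos(math.radians(alpha_deg))
    WT = oW * math.tan(math.radians(beta_deg))    # offset of T along the z-axis
    return (V[0], V[1] + VW, V[2] + WT)

o = (5.0, 2.0, 1.7)               # visitor's virtual coordinates (point o)
plane = (1.0, 0.0, 0.0, -8.0)     # viewing plane x = 8, normal along the x-axis
print(gaze_point(o, plane, 30.0, 10.0))   # approx (8.0, 3.73, 2.31)
```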
The visitor's (user's) virtual figure is displayed in the video covering the figure's coordinate position. The user controls the virtual figure on screen (in the application interface) to walk along the corridor or street of the tour site in the video picture (such as the adhe area in fig. 4), and the visitor's real-time coordinates (such as point o in fig. 5) are recalculated at every walking step, i.e. every time the visitor's position in the adhe area changes. The adhe area is divided into several equal-sized grids, and the correspondence between one walking step and the number of grids can be set, for example: one step corresponds to one grid, with the correspondence parameter set to 1; one step corresponds to two grids, with the parameter set to 2; and when one step corresponds to n grids, the parameter is set to n. The size of n determines the visitor's walking step length (the distance between two successive footfalls), and the step length together with the stepping frequency (the number of steps per second) determines the visitor's walking speed (the distance walked per unit time), as in the sketch below.
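For example, with a hypothetical grid edge of 0.5 m, the step parameter n and the stepping frequency give the walking speed:

```python
# Tiny sketch of the step-to-grid parameter n: step length times stepping
# frequency gives the walking speed. The grid edge length is an assumption.
GRID_SIZE_M = 0.5   # assumed edge length of one coordinate grid, in metres

def walking_speed(n_grids_per_step: int, steps_per_second: float) -> float:
    step_length = n_grids_per_step * GRID_SIZE_M
    return step_length * steps_per_second   # metres per second

print(walking_speed(2, 1.5))   # 2 grids/step at 1.5 steps/s -> 1.5 m/s
```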
The coordinates of all grids (coordinate grids) of the adhe area are placed in correspondence with the coordinate grids of the adjacent areas abcd and efgh (divided by the same method as the adhe area). When a tourist steps onto any coordinate grid of the adhe area, the plane equations of the abcd and efgh areas are triggered, the projection points of the tourist (point o in fig. 5) onto the abcd and efgh areas (such as point V in fig. 5) are calculated, and simultaneously the coordinates of the tourist's sight projection point (such as point T in fig. 5) are calculated, based on the sight azimuth and vertical angles the tourist controls through the mobile terminal application. If the tourist's projection point (V in fig. 5) lies within the abpq or efmn area grid coordinate range and the sight projection point (T in fig. 5) lies within the abpq area grid range, the video picture of the abpq area is enabled, with video camera 2 in fig. 2 responsible for real-time image acquisition of abpq; if V lies within the abpq or efmn range and T lies within the efmn range, the video picture of the efmn area is enabled, with video camera 1 in fig. 2 responsible for efmn; if V lies within the abpq or efmn range but T lies in neither the abpq nor the efmn range, the video picture of the adhe area is enabled, with video camera 5 in fig. 2 responsible for adhe (camera 5 also covers partial images of the abcd and efgh areas).
When the tourist's projection point lies in the pqdc or mnhg area, i.e. the tourist approaches those areas: if the sight projection point (T in fig. 5) lies within the pqdc area coordinate grid range, the video picture of the pqdc area is enabled, with video camera 4 in fig. 2 responsible for real-time image acquisition of pqdc; if T lies within the mnhg range, the video picture of the mnhg area is enabled, with video camera 3 in fig. 2 responsible for mnhg; if the tourist's projection point (V in fig. 5) lies within the pqdc or mnhg range but T lies in neither, the video picture of the adhe area is enabled. A sketch of these rules appears below.
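The camera-enabling rules of the two preceding paragraphs condense into a short sketch (the in_region helper is an assumed predicate over the coordinate-grid ranges; the camera numbers follow fig. 2):

```python
def pick_camera(V, T, in_region):
    # in_region(point, name) tests whether a point lies inside the named
    # area's coordinate-grid range (assumed helper).
    if in_region(V, "abpq") or in_region(V, "efmn"):
        if in_region(T, "abpq"):
            return 2        # video camera 2 covers abpq
        if in_region(T, "efmn"):
            return 1        # video camera 1 covers efmn
        return 5            # video camera 5 covers the adhe walking area
    if in_region(V, "pqdc") or in_region(V, "mnhg"):
        if in_region(T, "pqdc"):
            return 4        # video camera 4 covers pqdc
        if in_region(T, "mnhg"):
            return 3        # video camera 3 covers mnhg
        return 5
    return 5                # default to the walking-area picture
```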
When the tourist enters the walking area adhe, the plane equations of the sightseeing areas abcd and efgh are activated, the tourist's projection points in abcd and efgh are calculated, the tourist's azimuth and vertical angles are read, and the projection points of the tourist's line of sight onto abcd and efgh are calculated. If the sight projection point lies in the plane of abcd in fig. 4, it is further determined whether it belongs to the abcd area and, if so, whether to the abpq or the pqcd sub-area. There are two calculation methods. The first is a search: check whether any coordinate grid in abpq or pqcd has the same coordinate value as the projection point; if a grid with a matching coordinate value exists in abpq, the projection point is in abpq. The second uses boundaries: segment pq is the boundary between abpq and pqcd, segment ab the boundary between abpq and the outside, and segment cd the boundary between pqcd and the outside. Assuming the grid y-axis values of the pqcd area are larger than those of the abpq area, compare the y-axis coordinate of the projection point with the grid y-axis values on segments ab, pq and cd: if it is larger than all grid y values on cd or smaller than all grid y values on ab, the projection point lies outside abcd; if it is larger than all grid y values on ab and smaller than all grid y values on pq, the projection point is in abpq; if it is larger than all grid y values on pq and smaller than all grid y values on cd, the projection point is in pqcd. The specific flows are shown in fig. 6 and fig. 7, i.e. virtual tour screen calling mode one and mode two; a sketch of the second method follows.
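A sketch of the second (boundary-comparison) method, assuming the grid points of each boundary segment share a single y value and y_ab < y_pq < y_cd:

```python
def classify_projection(y, y_ab, y_pq, y_cd):
    # Compare the projection point's y coordinate with the grid y values
    # on segments ab, pq and cd (pqcd lies above abpq).
    if y < y_ab or y > y_cd:
        return "outside abcd"
    return "abpq" if y < y_pq else "pqcd"

print(classify_projection(3.0, y_ab=0.0, y_pq=5.0, y_cd=10.0))   # abpq
print(classify_projection(7.5, y_ab=0.0, y_pq=5.0, y_cd=10.0))   # pqcd
```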
The exhibition and shopping mall of fig. 8 is entered from any of the areas abpq, pqdc, efmn, mnhg in fig. 4. Video camera images are invoked according to the principle of fig. 4, with an exhibit and commodity scanning function added. When a tourist finds an exhibit or commodity of interest, the scanning function is enabled through gesture or voice operation of the application. For example, when the tourist's sight projection point falls on the abpq area, the system automatically invokes the real-time image of video camera 2; the tourist clicks the application's 'exhibit details' button, the system enables the image code-scanning recognition function, and the tourist sees, in the application interface, two-dimensional-code recognition selection prompts for all exhibits or commodities in the abpq area. The prompts for exhibits or commodities 21, 22, 23, 24, 25 and 26 correspond to the items' placement positions in the interface; clicking the prompt of the item of interest opens its exhibit or commodity page, which displays text, pictures, video and VR (virtual reality) details and supports purchase through the page's shopping cart.
2. Football field live broadcast function module: an example of football field layout algorithm design based on augmented reality technology:
As shown in fig. 10, when the field of view of the human-shaped image at point o, i.e. of the virtual tourist, is focused on quadrangle abef, the projection point of o onto the plane of abef and its coordinates are calculated following the principles of figs. 4 and 5. The user (tourist) selects the field-of-view focus area through the focus-area button in the application; each selectable focus area corresponds to the plane equation of a virtual grid. When the selected focus area corresponds to virtual-grid quadrangle abef, the system automatically activates the plane equation of abef and calculates the tourist's projection point and projection-point coordinates on that plane. Based on the tourist's coordinates and the projection-point coordinates, the system then automatically selects the corresponding video camera array, selects the first or second video camera group of that array according to whether the focus lies on the upper or the lower part of the virtual grid, and finally starts the video camera of that group focused on the projection-point area.
The first video camera group in fig. 11 focuses on the lower part of the 36 virtual grids of fig. 9, i.e. cuboid abcdkjmn in fig. 10, and the second group focuses on the upper part, i.e. cuboid cdefghjk in fig. 10.
In fig. 9, side a0b0a6b0 is called 'down', side a6b0a6b6 'right', side a0b6a0b0 'left' and side a0b6a6b6 'up'. When the tourist focuses on a0b1a1b1 (i.e. quadrangle bckm or cfgk in fig. 10) from below it, the default field-of-view area of the video camera is virtual grid 7 in fig. 9; when the tourist focuses on a1b1a2b1 from below it, the default area is virtual grid 8; when the tourist focuses on a2b1a2b0 from its right, the default area is virtual grid 2; from its left, virtual grid 3; when the tourist focuses on a0b1a1b1 from above it, the default area is virtual grid 1; and so on for the rest. The fields of view projected onto the same virtual grid from different viewing angles differ (for example, the fields projected onto virtual grid 1 from virtual grids 8 and 9 differ), so the video camera array is first started according to the correspondence in table 1, the projection-point coordinates are then calculated from the tourist's position coordinates, the plane equation of the virtual grid and the azimuth and vertical angles of the field of view, and the specific video camera is finally started according to the correspondence between video cameras and virtual grids. A hypothetical encoding of these defaults follows.
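One hypothetical encoding of these defaults as a lookup table (face labels and side names are transliterations of the figure notation; only the cases spelled out above are listed):

```python
# (focused face, side looked from) -> default virtual grid of fig. 9
DEFAULT_GRID = {
    ("a0b1a1b1", "down"):  7,
    ("a1b1a2b1", "down"):  8,
    ("a2b1a2b0", "right"): 2,
    ("a2b1a2b0", "left"):  3,
    ("a0b1a1b1", "up"):    1,
}

def default_view_grid(face, side):
    return DEFAULT_GRID.get((face, side))   # None means not yet calibrated

print(default_view_grid("a0b1a1b1", "down"))   # -> 7
```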
The 24 video camera arrays in fig. 9 shoot local areas of the football field with shallow depth of field. In addition, video cameras with large (deep) depth of field and wide-angle shooting functions are deployed around the football field for panoramic shooting, and the application provides 'wide-angle sightseeing' and 'focused sightseeing' function buttons for free switching between local-area and panoramic sightseeing.
Football field layout design based on augmented reality technology includes the following steps:
(1) Enabling video camera arrays
When a tourist is virtually positioned at a certain coordinate and the field of view focuses on a certain virtual grid, the corresponding video camera array is started. A definite relation among the tourist's position, the field-of-view focus area and the video camera array must therefore be established, either by calibrating field-of-view areas or by automatic system calculation:
① Calibrated field-area mode: the tourist's position (virtual grid) is placed in direct correspondence with the video camera device array to be started. As shown in table 1 (whose virtual grids correspond to fig. 9, the 36 grids being distinguished by Arabic numerals): when the tourist, i.e. the figure at point o in fig. 10, is located in virtual grid 1 and the field of view focuses on any face of virtual grids 7–36 (virtual grid 7, virtual grid 8, …, virtual grid 36), video camera array U1_i is enabled; when the field of view focuses on any face of virtual grids 2–6, array U7_i is enabled. When the tourist is located in virtual grid 21: if the field of view focuses on a0b4a1b4, array U22_i is enabled; on a0b5a1b5, array U24_i; on a2b0a3b5, array U15_i; and on a5b0a6b0, array U13_i. Further calibration examples are shown in table 1.
TABLE 1 (calibration examples mapping the tourist's virtual grid and field-of-view focus to video camera arrays; rendered as an image in the original)
On the same principle, if the tourist's virtual position and the realism of the field of view require finer calibration, a correspondence with the video camera array can be established per coordinate grid of the tourist within the virtual grid (such as the coordinate grid in virtual grid abmn in fig. 10). For example: when the tourist is in grid (x_i, y_i, z_i) and the field of view focuses on a0b1a1b1 in fig. 9, video camera array U1_i is enabled; when the tourist is in grid (x_{i+1}, y_{i+1}, z_{i+1}) with the same focus, array U2_i is enabled. a0b1a1b1 in fig. 9 corresponds to the quadrilateral bfgm projection in fig. 10, a1b1a1b0 to the quadrilateral ghmn projection, and the rest follow by analogy. A table-lookup sketch of this calibrated mode follows.
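A table-lookup sketch of the calibrated field-area mode (the key and value spellings are illustrative assumptions; a deployment would enumerate the full table 1):

```python
# (tourist's virtual grid, focused grid range or face) -> camera array
CALIBRATION = {
    (1, "grids 7-36"): "U1_i",
    (1, "grids 2-6"):  "U7_i",
    (21, "a0b4a1b4"):  "U22_i",
    (21, "a0b5a1b5"):  "U24_i",
    (21, "a2b0a3b5"):  "U15_i",
    (21, "a5b0a6b0"):  "U13_i",
}

def array_for(guest_grid, focus):
    # None signals an uncalibrated pair; fall back to the automatic mode
    return CALIBRATION.get((guest_grid, focus))

print(array_for(21, "a0b4a1b4"))   # -> U22_i
```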
② Automatic system calculation mode
As shown in figs. 12 and 13, the o-point coordinates and the centre-point coordinates of video camera arrays U1_i and U2_i are known. The W-point coordinates are calculated by the principle of fig. 5, and the two-point equation of straight line oW is obtained from the o-point and W-point coordinates, namely:
(x − x_0)/(x_w − x_0) = (y − y_0)/(y_w − y_0) = (z − z_0)/(z_w − z_0) (9)
where the o-point coordinates are (x_0, y_0, z_0) and the W-point coordinates are (x_w, y_w, z_w); the centre points of video camera arrays U1_i and U2_i are (x_1, y_1, z_1) and (x_2, y_2, z_2), and a further point of the plane containing U1_i and U2_i is (x_3, y_3, z_3). The equation of the plane in which video camera arrays U1_i and U2_i lie is:
Ax + By + Cz + D = 0 (10)
Substituting the known values (x_1, y_1, z_1), (x_2, y_2, z_2) and (x_3, y_3, z_3) into formula (10) gives three equations from which the values of A, B and C are calculated (D fixes the overall scale), yielding the equation of the plane containing video camera arrays U1_i and U2_i.
From the equation of straight line oW and the plane equation of video camera arrays U1_i and U2_i, the Q-point coordinates (x_Q, y_Q, z_Q), i.e. the intersection of line oW with the plane of the arrays, are obtained from the simultaneous system:
(x_Q − x_0)/(x_w − x_0) = (y_Q − y_0)/(y_w − y_0) = (z_Q − z_0)/(z_w − z_0), Ax_Q + By_Q + Cz_Q + D = 0 (11)
Setting z_Q = 0, the values of x_Q and y_Q are calculated, giving the coordinates of point Q.
The distances QU1_i and QU2_i from point Q to the array centre points (x_1, y_1, z_1) and (x_2, y_2, z_2) are then calculated:
QU1_i = √((x_Q − x_1)² + (y_Q − y_1)² + (z_Q − z_1)²) (12)
QU2_i = √((x_Q − x_2)² + (y_Q − y_2)² + (z_Q − z_2)²) (13)
Based on formulas (12) and (13), the distances from point Q to the centre points of all 24 video camera arrays in fig. 9 (all centre coordinates are known) are calculated, the 24 distance values are compared, and the array corresponding to the minimum value, i.e. the array closest to point Q, is automatically started by the system, as in the sketch below.
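A minimal sketch of the automatic mode (formulas (9)–(13)), assuming the camera arrays sit on the z = 0 plane as the text's z_Q = 0 step implies:

```python
import math

def nearest_array(o, W, array_centers):
    # Intersect line oW with the z = 0 plane, then pick the array whose
    # centre point is closest to the intersection Q.
    t = o[2] / (o[2] - W[2])                    # line parameter where z = 0
    Q = (o[0] + t * (W[0] - o[0]), o[1] + t * (W[1] - o[1]), 0.0)
    dists = [math.dist(Q, c) for c in array_centers]   # formulas (12)/(13)
    return dists.index(min(dists))              # index of the closest array

centers = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
print(nearest_array(o=(1.0, 1.0, 5.0), W=(2.0, 1.0, 4.0), array_centers=centers))
# -> 1: Q = (6, 1, 0) is closest to the second array centre
```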
(2) Enabling a video camera device group of the video camera device array
Each video camera array is divided into two video camera groups, focusing respectively on the upper and the lower part of the virtual grid. Based on the projection-point coordinate values, video camera group 2 is started if the projection point falls within the upper coordinate range of the virtual grid, and video camera group 1 if it falls within the lower range, as in the sketch below.
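For example (z_split, the z coordinate of the plane dividing the virtual grid into its two halves, is an assumed parameter):

```python
def pick_group(projection_point, z_split):
    # Group 2 films the upper half of the virtual grid, group 1 the lower.
    return 2 if projection_point[2] >= z_split else 1

print(pick_group((6.0, 1.0, 3.2), z_split=2.5))   # -> 2 (upper half)
```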
(3) Enabling a specific video camera of the video camera group
Each video camera device group consists of 36 video camera devices; the number of devices per group is determined by the number of virtual grids the football field is divided into: if the field is divided into n virtual grids, each group has n devices. Each video camera corresponds to (focuses its field of view on) one virtual grid; here the 36 video cameras correspond one-to-one to the 36 virtual grids, video camera k in fig. 9 corresponding to virtual grid k in fig. 11 for k = 1, 2, …, 36.
As shown in fig. 9, when the tourist's position is in virtual grid 1 and the projection point (field-of-view focus) is on a0b1a1b1, the default target is virtual grid 7; when the position is in virtual grid 1 and the projection point is on a1b2a1b1, the default target is virtual grid 8; and so on. The number of the video camera to be started is thus determined on the principle that the line of sight (field-of-view projection line) extends forward, as in the sketch below.
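Since video camera k films virtual grid k, starting the right camera reduces to finding the target grid of the line of sight; a sketch using a defaults table of the kind shown earlier (the two entries here are the examples above):

```python
def camera_for_sight(focus_face, side, defaults):
    # defaults maps (focused face, side looked from) -> target virtual grid;
    # the camera number equals the grid number (identity mapping).
    target = defaults.get((focus_face, side))
    if target is None:
        raise KeyError(f"no default calibrated for {focus_face!r} from {side!r}")
    return target

defaults = {("a0b1a1b1", "down"): 7, ("a1b2a1b1", "down"): 8}
print(camera_for_sight("a0b1a1b1", "down", defaults))   # -> camera 7
```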

Claims (2)

1. Scientific and technological innovation and culture sharing intelligent platform based on augmented reality technology is characterized by:
The platform content is divided into a science and technology section and a culture section. The science and technology section comprises a scientific and technological innovation display and sharing function module, a technical challenge function module, a technology transaction function module, a smart city construction opinion collection function module, and an exhibition and event function module; the culture section comprises a smart city virtual tour function module, a cultural and artistic exhibition function module, a cultural and artistic commodity transaction function module and a sports stadium live broadcast module. Based on augmented reality technology, every function module lets the user control a virtual portrait through the mobile terminal application to visit the real site in real time;
The on-site algorithm layout based on augmented reality technology comprises the following steps:
Dividing the sightseeing site into a plurality of planes, namely sightseeing planes and walking planes, each plane represented by a rectangular area; each rectangular area is divided into a plurality of rectangular subareas, each rectangular subarea into a plurality of virtual grids, and each virtual grid has unique coordinates. A plurality of video camera devices are deployed on site, each corresponding to one or more virtual grids;
The human-eye field of view is simulated in one of three ways. First, fixed-focus video camera devices, or devices with fixed angle and focal length, are deployed so that the overlapping coverage of multiple devices at different positions and focal lengths simulates the field of view of human eyes sightseeing at different positions and focal depths. Second, pan-tilt video camera devices with rotation and zoom simulate the field of view of the human-eye focus area; when multiple users share a single device simultaneously with different fields of view, the pan-tilt rotation angle and focal length conflict, so a priority mechanism sorts the fields of view by the number of users requesting each, the device preferentially simulates the field of view with the most users, and users of lower-ranked fields of view must wait or turn to other fields of view. Third, fixed and pan-tilt rotating, fixed-focus and zoom video camera devices are deployed in a mixed manner, the same priority mechanism applying when multiple users simultaneously call the same rotating or zoom device at different angles or focal lengths;
A plurality of sound collectors are deployed at the exhibition and event site, and the on-site environmental sound heard by the human ear is simulated in either of two ways: enabling the sound collector corresponding to the focus area of the video camera device, or enabling the sound collectors in the area near the virtual portrait's coordinates according to those coordinates;
The step of enabling the video camera device comprises:
The platform enables the corresponding video camera device based on the virtual character's coordinate position, projection point and field-of-view projection point; the projection point is the intersection of the perpendicular from the virtual character's coordinate point to the plane of the sightseeing area, and the field-of-view projection point is the intersection projected onto that plane according to the virtual character's azimuth and vertical angles;
Or alternatively
In the first step, the video camera device array is enabled, specifically: the array is enabled according to the correspondence between the virtual character's coordinate position and the array, or the array closest to the virtual character's coordinate position is enabled;
In the second step, a video camera device group of the array is enabled: each array is divided into two groups focusing respectively on the upper and lower parts of the virtual grid, and the corresponding group is enabled based on the field-of-view projection-point coordinates;
In the third step, a specific video camera device of the group is enabled; the number of devices is determined by the number of virtual grids of the site, each device corresponding to one virtual grid.
2. The scientific and technological innovation and culture sharing intelligent platform based on augmented reality technology according to claim 1, wherein the scientific and technological innovation display and sharing function module displays published content on site in real time, and a reader is placed, by virtual tour, at the site of the published real object to browse it in real time from different angles;
The technical challenge function module supports a reader in viewing, in real time and from different angles, the on-site presentation of a technical challenge;
The technology transaction function module adopts augmented reality technology to display patent certificates and physical achievements, supporting a sightseer in being virtually placed at the technical achievement display site and observing the related achievements in real time from different angles;
The smart city construction opinion collection function module adopts augmented reality technology: a user virtually browses the real-time scene related to a reported problem and experiences its real-time condition immersively;
The exhibition and event function module adopts augmented reality technology to display the conditions of exhibitions and events in real time; audiences virtually visit and watch the real-time scene, and video camera devices are arranged at multiple angles and positions on site to simulate what human eyes would see from different positions and viewing angles;
The smart city virtual sightseeing function module adopts augmented reality technology to support users, subject to permissions, in virtually viewing public areas and roads of the city in real time;
The cultural and artistic exhibition function module adopts augmented reality technology to support users in virtually touring cultural and artistic work exhibition mall sites. The cultural and artistic commodity transaction function module adopts augmented reality technology and adds an exhibit transaction function on the basis of the exhibition module: entering an exhibit's identification number in the platform's exhibit information query and purchase column supports viewing the exhibit's specific text introduction and price and purchasing it online; besides manual entry of the identification number, an identification-code recognition function is provided, invoked by clicking a platform interface button, which calls the code-reading recognition function of the video camera device in use to view the exhibit's text introduction and price and purchase it online;
The sports stadium live broadcast function module supports users in being virtually placed at the competition site and watching competition details in real time from different viewing angles.
CN202110463769.7A 2021-04-28 2021-04-28 Scientific and technological innovation and culture sharing intelligent platform based on augmented reality technology Active CN113191841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110463769.7A CN113191841B (en) 2021-04-28 2021-04-28 Scientific and technological innovation and culture sharing intelligent platform based on augmented reality technology

Publications (2)

Publication Number Publication Date
CN113191841A CN113191841A (en) 2021-07-30
CN113191841B true CN113191841B (en) 2024-06-14

Family

ID=76980148

Country Status (1)

Country Link
CN (1) CN113191841B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113962694A (en) * 2021-09-29 2022-01-21 华夏文广传媒集团股份有限公司 Intelligent platform mode method for scientific and technological innovation and culture sharing
CN114554154A (en) * 2022-02-24 2022-05-27 世邦通信股份有限公司 Audio and video pickup position selection method and system, audio and video collection terminal and storage medium
CN116540872B (en) * 2023-04-28 2024-06-04 中广电广播电影电视设计研究院有限公司 VR data processing method, device, equipment, medium and product
CN119255108A (en) * 2023-07-03 2025-01-03 荣耀终端有限公司 Display control method, mobile terminal and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127554A (en) * 2016-06-24 2016-11-16 张睿卿 The business system combined based on virtual reality, augmented reality
CN109067822A (en) * 2018-06-08 2018-12-21 珠海欧麦斯通信科技有限公司 The real-time mixed reality urban service realization method and system of on-line off-line fusion

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11249714B2 (en) * 2017-09-13 2022-02-15 Magical Technologies, Llc Systems and methods of shareable virtual objects and virtual objects as message objects to facilitate communications sessions in an augmented reality environment
CN109034748B (en) * 2018-08-09 2021-08-31 哈尔滨工业大学 Construction method of mold disassembly engineering training system based on AR technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant