HK1240437A1 - Facilitating multimedia information delivery through a UAV network

Publication number: HK1240437A1 (application No. HK17113486.5A)
Authority: HK (Hong Kong)
Abstract
Embodiments are provided for delivering multimedia information to a transportation apparatus through a UAV network. After the transportation apparatus enters an area, one or more UAVs may be configured to capture one or more images of the interior of the transportation apparatus. The geographical location of the transportation apparatus can be obtained. Image analysis may be employed to analyze the images to obtain passenger information. Based on the geographical information regarding the transportation apparatus and the passenger information, specific multimedia information can be determined for presentation to the passenger(s) in the transportation apparatus. The determined multimedia information may include media content that is of interest to the passenger(s) and that is available in the geographical location the transportation apparatus is currently traveling in. The determined multimedia information can be transmitted to the transportation apparatus for presentation to the passenger(s).
Description
Cross reference to related patent applications
This application claims priority to U.S. provisional patent application No. 62/274,112, filed on December 31, 2015, the disclosure of which is incorporated by reference in its entirety for all purposes.
The present application is related to the following co-pending U.S. non-provisional patent applications: U.S. non-provisional application No. 15/341,809 (attorney docket No. 101534-0969605(004920US)) filed concurrently with the present application, U.S. non-provisional application No. 15/341,813 (attorney docket No. 101534-0969607(004930US)) filed concurrently with the present application, U.S. non-provisional application No. 15/341,818 (attorney docket No. 101534-0969608(004940US)) filed concurrently with the present application, and U.S. non-provisional application No. 15/341,831 (attorney docket No. 101534-0969609(004960US)) filed concurrently with the present application. The entire disclosure of each of these applications is incorporated herein by reference for all purposes.
Technical Field
The present disclosure relates to the directional delivery of information, in particular to the directional delivery of information to a transport device via an Unmanned Aerial Vehicle (UAV) network.
Background
Unmanned aerial vehicles (UAVs), commonly referred to as drones and also known by several other names, are aircraft with no human pilot on board. The flight of a UAV may be controlled autonomously by an onboard computer or remotely by a pilot on the ground or in another aircraft. UAVs are found primarily in military and special-operations applications, but are also increasingly used in civilian applications, such as policing, surveillance, and firefighting, as well as non-military security work, such as inspection of power lines or pipelines. UAVs are good at collecting and presenting large amounts of visual information to an operator. However, interpreting the information collected by a UAV may require a significant amount of time and labor. In many cases, the information collected by UAVs is misread by human operators and analysts who have limited time to interpret it.
It is generally known in the art to deliver media content over a computer network. Traditionally, entertainment delivery over computer networks is on demand, meaning that user-selected media content is pushed to the user only when the user requests it. Recently, live media streaming over computer networks has also gained wide popularity as an alternative to traditional cable networks. For example, the prior art allows a user to watch a live television program through a media stream on his or her portable device (e.g., a smartphone). In these technologies, certain geographic restrictions also apply to media content delivery as users travel. For example, certain television programs available for viewing via a media content stream in one geographic area may not be available to the user when the user travels to another geographic area. To enforce such restrictions, these techniques typically obtain the user's current location and determine which media content is available for the user to view based on that location.
Disclosure of Invention
To overcome the deficiencies of the prior art, according to a first aspect of the present invention there is provided a method for facilitating directional delivery of multimedia information to a transportation device via an Unmanned Aerial Vehicle (UAV) network, the method being implemented in one or more processors configured to execute programmed components, the method comprising:
receiving, via the UAV network, one or more images of the transport device taken by a UAV;
obtaining information related to the transport device in response to receiving the image of the transport device;
analyzing the one or more images to obtain passenger information related to one or more passengers in the transport device;
determining multimedia information for presentation to the one or more passengers based on the information related to the transport device and the passenger information; and
sending the multimedia information to the transport device for presentation.
The method as described above, wherein the transport device comprises a vehicle.
The method as described above, wherein the passenger information includes a gender of each passenger in the transport device, an age group of each passenger in the transport device, and an identity of each passenger in the transport device.
The method as described above, further comprising processing the passenger information and the one or more images to obtain location information regarding a location of the one or more passengers within the transport device.
The method as described above, wherein the multimedia information determined for presentation to one or more passengers comprises an interactive television channel guide and/or an on-demand entertainment guide.
The method as described above, wherein the multimedia information determined for presentation to one or more passengers comprises a video clip, an audio clip, and/or a video game.
The method as described above, wherein the multimedia information is presented to the one or more passengers via audio and/or video within the transport device.
The method as described above, wherein determining one or more items for presentation to one or more passengers based on the geographic information of the transport device and the passenger information comprises:
determining a particular display device within the transport device for presenting the one or more items based on the information related to the transport device and the passenger information.
The method as described above, wherein determining multimedia information for presentation to one or more passengers based on the geographic information of the transport device and the passenger information comprises:
determining whether a set of multimedia content is available for viewing in a geographic area indicated by the geographic information of the transport device and whether the set of multimedia content is of interest to the one or more passengers.
To overcome the deficiencies of the prior art, according to a second aspect of the present invention there is provided a system for facilitating the directional delivery of multimedia information to a transportation device via an Unmanned Aerial Vehicle (UAV) network, the system comprising one or more processors configured to:
receive, via the UAV network, one or more images of the transport device taken by a UAV;
obtain information related to the transport device in response to receiving the image of the transport device;
analyze the one or more images to obtain passenger information related to one or more passengers in the transport device;
determine multimedia information for presentation to the one or more passengers based on the information related to the transport device and the passenger information; and
send the multimedia information to the transport device for presentation.
The system as described above, wherein the transport device comprises a vehicle.
The system as described above, wherein the passenger information includes a gender of each passenger in the transport device, an age group of each passenger in the transport device, and an identity of each passenger in the transport device.
The system as described above, wherein the one or more processors are further configured to process the passenger information and the one or more images to obtain location information regarding a location of the one or more passengers within the transport device.
The system as described above, wherein the multimedia information determined for presentation to one or more passengers comprises an interactive television channel guide and/or an on-demand entertainment guide.
The system as described above, wherein the multimedia information determined for presentation to one or more passengers comprises a video clip, an audio clip, and/or a video game.
The system as described above, wherein the multimedia information is presented to the one or more passengers via audio and/or video within the transport device.
The system as described above, wherein determining one or more items for presentation to one or more passengers based on the geographic information of the transport device and the passenger information comprises:
determining a particular display device within the transport device for presenting the one or more items based on the information related to the transport device and the passenger information.
The system as described above, wherein determining multimedia information for presentation to one or more passengers based on the geographic information of the transport device and the passenger information comprises:
determining whether a set of multimedia content is available for viewing in a geographic area indicated by the geographic information of the transport device and whether the set of multimedia content is of interest to the one or more passengers.
The invention further provides embodiments for delivering multimedia information to a transport device via a UAV network. After the transport device enters an area monitored by one or more UAVs, the multimedia information may be delivered to the transport device. The UAV may be configured to capture images of an interior of the transport device. The images may include image information about one or more passengers within the transport device. The UAV may be configured to transmit the images to a processing center. In some embodiments, the UAV may be configured to transmit information about the transport device to the processing center along with the images. However, this is not intended to be limiting.
The processing center may be configured to analyze the images received from the UAV to obtain passenger information about one or more passengers within the transport device. Such image processing by the processing center may involve identifying the passengers in the transport device, their gender, their specific identity (e.g., name), their location in the transport device, and/or any other information about the passengers. The processing center may also be configured to obtain the geographic location where the transport device is currently located. As described above, in some embodiments, the UAV may obtain geographic location information about the transport device when the transport device enters a geographic location. In some embodiments, the geographic location of the transport device may be obtained by the processing center through a GPS system. In one embodiment, the current location of the transport device may be obtained periodically through the GPS system when the transport device initiates a request for multimedia information from the processing center.
In any case, based on the passenger information and the geographic location of the transport device, the processing center may be configured to determine the particular multimedia information to be sent to the transport device for presentation to the passengers within it. For example, the passenger information may indicate that two teenage or child passengers are located in the back row of the transport device, and the geographic location information may indicate that the transport device is within a particular geographic location. Based on such information, the processing center may be configured to determine a set of multimedia information for presentation to those passengers. For example, the set of multimedia information may include media content that is appropriate for the identified teenage or child passengers and that is currently available in the particular geographic location in which the transport device is located. Such media content may include a set of television channels available for viewing by children under 18 years of age, a set of non-R-rated movies, a set of children's books, and/or any other type of multimedia content.
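By way of illustration only, the selection logic sketched above could be expressed as a simple filter over a content catalog. The following Python sketch is purely hypothetical: the catalog fields, market names, and age thresholds are assumptions for illustration and are not part of the disclosure.

```python
# Hypothetical sketch: filter a content catalog by the media market the
# transport device is in and by the age group of the identified passengers.
# All names and data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MediaItem:
    title: str
    min_age: int   # minimum viewer age (e.g., 18 for R-rated movies)
    markets: set   # media markets where the item may be shown

CATALOG = [
    MediaItem("Cartoon Channel", min_age=0, markets={"market_a", "market_b"}),
    MediaItem("R-rated Thriller", min_age=18, markets={"market_a"}),
    MediaItem("Children's Audiobook", min_age=0, markets={"market_b"}),
]

def select_content(catalog, passenger_ages, market):
    """Return items available in `market` and suitable for every passenger."""
    youngest = min(passenger_ages)
    return [item for item in catalog
            if market in item.markets and item.min_age <= youngest]

# Two teenage passengers in the back row, vehicle currently in market B.
print([i.title for i in select_content(CATALOG, [13, 15], "market_b")])
# -> ['Cartoon Channel', "Children's Audiobook"]
```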
In order to present the multimedia information determined by the processing center in the transport device, one or more displays in the transport device may be equipped with a network connection. For example, a display may receive the determined multimedia information from the processing center over the network connection. In some implementations, the display can be operatively connected to a computing device, and the computing device can be configured to receive the multimedia data. In one embodiment, the transport device is a vehicle having at least one cabin. In this embodiment, the transport device is equipped with a wide-viewing-angle display, such as a dashboard covered by an LCD screen, and separate displays mounted for one or more rear passenger seats.
Other objects and advantages of the present invention will be apparent to those skilled in the art based on the following drawings and detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the detailed description serve to explain the principles of the invention. No attempt is made to show structural details of the invention in more detail than may be necessary for a fundamental understanding of the invention and the various ways in which it may be practiced.
Fig. 1 illustrates an exemplary UAV network in accordance with the present disclosure.
Fig. 2 conceptually illustrates utilizing a UAV to facilitate delivery of multimedia information to a transport device in accordance with the present disclosure.
Fig. 3 shows an example of the processing center shown in fig. 2.
Fig. 4 illustrates an example method for facilitating delivery of multimedia information to a transport device in accordance with this disclosure.
FIG. 5 illustrates a simplified computer system that may be used to implement the various embodiments described and illustrated herein.
In the drawings, similar components and/or features may have the same numerical reference. Further, various components of the same type may be distinguished by following the reference label by a letter that distinguishes among the similar components and/or features. If only the first numerical reference label is used in the specification, the description applies to any one of the similar components and/or features having the same first numerical reference label, regardless of the alphabetic suffix.
Detailed Description
Various specific embodiments of the present disclosure will be described below with reference to the accompanying drawings, which form a part of the specification. It should be understood that although the structural parts and components of various examples of the present disclosure are described by using terms expressing directions, such as "front", "rear", "upper", "lower", "left", "right", etc., in the present disclosure, these terms are used for convenience of description only and are determined based on the exemplary directions shown in the drawings. Because the disclosed embodiments of the present disclosure can be arranged in a variety of orientations, these directional terms are used for descriptive purposes only and are not intended to be limiting. Wherever possible, the same or similar reference numbers used in this disclosure refer to the same components.
UAVs are well suited for applications where the payload consists of optical imaging sensors, such as powerful, lightweight cameras suitable for various commercial applications (e.g., surveillance, video conferencing, vehicle positioning, and/or any other application). A UAV according to the present disclosure may collect multispectral images of any object in the area covered by the UAV. In certain embodiments, a UAV according to the present disclosure may fly at altitudes up to 65,000 feet and may have a coverage of up to 500 km. One motivation for the present disclosure is to use UAVs to facilitate video conferencing involving at least one transport device (e.g., a car, bus, or train). One or more UAVs may be employed to capture video images of the interior of the transport device (e.g., the cabin of the transport device). Since the UAV may be configured to move above the transport device at a speed consistent with the speed of the transport device, the UAV can simply keep taking video images of the transport device without interruption as the transport device moves.
Another advantage of using a UAV to capture video images of a moving transport device is that a wide-angle video image of the interior of the transport device can be captured using a UAV equipped with a wide-angle (e.g., 360-degree) camera, so long as there is a clear view from the UAV into the interior of the transport device. The images may be transmitted from the UAV to a processing center via a UAV network. The processing center may be configured to obtain information about the transport device, such as the make of the transport device and one or more registration numbers of the transport device, in response to receiving the images. In some embodiments, the processing center may be further configured to analyze the images to obtain passenger information and/or driver information about one or more passengers and/or drivers in the transport device. The passenger information may include information indicating the gender of each passenger within the transport device, the age group of each passenger, the identity of each passenger, the location of each passenger, and/or any other passenger information. The driver information may include similar information about the driver. Based on the passenger information and/or driver information and the information related to the transport device, the processing center may be configured to determine one or more items to be presented to the passengers and/or the driver within the transport device. For example, based on the passenger information, the processing center may determine the age group of passengers seated in the back row of the transport device and determine to present local marketing items that may be of interest to those passengers.
As used herein, a transport device may be defined as a device that is capable of moving a distance to transport people and/or goods. Examples of transport devices may include a vehicle (e.g., a car or truck), a bicycle, a motorcycle, a train, a watercraft, an aircraft, or a spacecraft, to name just a few. It should be understood that although a vehicle is used in the examples given below, this is not intended to be limiting. In some embodiments, other types of transport devices may also be used in those examples.
Fig. 1 illustrates an exemplary UAV network 100 for facilitating communication with vehicles in accordance with the present disclosure. As shown, UAV network 100 may include a plurality of UAVs 102, such as UAVs 102a-f. It should be understood that in certain embodiments, UAV network 100 may include hundreds, thousands, or even tens of thousands of UAVs 102. Each UAV 102 in UAV network 100 (e.g., UAV 102a) may fly at an altitude between 50,000 and 65,000 feet above the ground. However, this is not intended to be limiting. In some examples, some or all UAVs 102 in UAV network 100 may fly hundreds or thousands of feet above the ground. As shown, individual UAVs 102 in UAV network 100 may communicate with each other through communication hardware carried by or mounted on the UAVs 102. For example, the communication hardware on a UAV 102 may include an antenna, a high-frequency radio transceiver, an optical transceiver, and/or any other communication components for long-range communication. A communication channel may be established between any two given UAVs 102 in UAV network 100, such as UAV 102c and UAV 102d.
One way to establish a communication channel between any two given UAVs is to have them autonomously establish the channel through the communication hardware on the two UAVs 102. In this example, UAVs 102a, 102b, and 102c are neighboring UAVs covering adjacent regions 104a, 104b, and 104c, respectively. They may be configured to communicate with each other once they are within a threshold distance of each other. The threshold distance may be the maximum communication range of the transceivers on UAVs 102a, 102b, and 102c. In this way, UAVs 102a, 102b, and 102c may send data to each other without an access point.
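As a rough illustration of this threshold-distance rule, the sketch below checks whether two UAVs are close enough to open a direct link. The coordinates, the range constant, and the function names are assumptions for illustration only, not disclosed parameters.

```python
# Minimal sketch of the autonomous channel-establishment rule: two UAVs open
# a direct link once their separation falls below the maximum transceiver
# range. Coordinates and the range value are illustrative.
from math import radians, sin, cos, asin, sqrt

MAX_RANGE_KM = 200.0  # assumed maximum transceiver range

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

def within_link_range(uav_a, uav_b):
    """True if the two UAVs may establish a direct channel."""
    return haversine_km(*uav_a, *uav_b) <= MAX_RANGE_KM

print(within_link_range((22.3, 114.2), (22.9, 113.3)))  # neighbors -> True
```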
Another way to establish a communication channel between any two given UAVs 102 in UAV network 100 is to have them establish the channel through a controller. As used herein, a controller may be defined as a piece of hardware and/or software configured to control communications within UAV network 100. The controller may be provided by a ground processing station, such as ground controller 110a, 110b, or 110c. For example, a controller 110 may be implemented by a computer server housed in the ground processing station. In certain embodiments, controller 110 may be provided by a UAV 102 in UAV network 100. For example, a given UAV 102 (e.g., an unmanned helicopter or balloon) in UAV network 100 may carry a payload that includes one or more processors configured to implement controller 110. In any case, controller 110 may be configured to determine network requirements based on the applications supported by UAV network 100, and/or to perform any other operations. In various embodiments, control signals may be transmitted from controller 110 to the UAVs 102 shown in fig. 1 via a control link.
As mentioned above, one important criterion for a UAV 102 in the network is altitude. However, as the altitude of a UAV 102 increases, the signals it transmits become weaker. A UAV 102 flying at an altitude of 65,000 feet may cover an area on the ground of up to 100 kilometers, but the signal loss may be significantly higher than that of a land-based network. Radio signals typically require a large amount of power for transmission over long distances. On the other hand, a UAV 102 that stays airborne for a long time can carry only a limited payload. As described above, solar energy may be used to power the UAV 102. However, this limits the weight of the payload the UAV 102 can carry, because the rate at which solar radiation can be absorbed and converted into electrical energy is limited.
Free-space optical communication (FSO) is an optical communication technology that transmits light in free space to wirelessly transmit data for telecommunications. Commercially available FSO systems use near-infrared wavelengths of about 850 to 1550 nm. In a basic point-to-point FSO system, two FSO transceivers may be placed on either side of a transmission path with an unobstructed line of sight between them. A variety of light sources may be used to transmit data through the FSO transceivers. For example, LEDs and lasers may be used to transmit data in an FSO system.
The lasers used in FSO systems offer extremely high bandwidth and capacity, comparable to land-based fiber-optic networks, while consuming much less power than microwave systems. An FSO unit may be included in the payload of a UAV 102 for communication. The FSO unit may include an optical transceiver with a laser transmitter and a receiver to provide full-duplex, bi-directional capability. The FSO unit may use a high-power light source (i.e., a laser) and a lens to transmit the laser beam through the atmosphere to another lens that receives the information contained in the laser beam. The receiving lens may be connected to a high-sensitivity receiver by an optical fiber. An FSO unit included in a UAV 102 in accordance with the present disclosure may enable optical transmission at speeds up to 10 Gbps.
Fig. 1 also shows vehicles 106a-f. A given vehicle 106 may be equipped with communication hardware. The communication hardware in a given vehicle 106 may include the FSO unit described above, a radio transceiver, and/or any other type of communication hardware. The communication hardware included in the vehicles 106 may be used to establish a communication channel between the vehicles 106 via the UAVs 102. A controller 110 may include an FSO unit configured to establish a communication channel with the FSO unit of a UAV 102 via a laser beam. Through the communication channel, the UAV 102 may be configured to communicate its geographic location to controller 110. Since the ground controller 110 is stationary, the geographic location of the ground controller 110 may be pre-configured in an onboard computer in the UAV 102. Information intended for a vehicle 106 may be forwarded to the vehicle 106 by the ground controller 110. The ground controller 110 may be connected to a wired or wireless network, and information intended for the vehicle 106 may be communicated to or from another entity connected to that network. Information intended for the vehicle 106 may first be communicated to the UAV 102 via a laser beam, and the UAV 102 may forward the information to the vehicle 106 via laser beam 204a.
In various embodiments, to locate a vehicle 106, a tracking signal may be transmitted from a UAV 102 for tracking the vehicle 106. The tracking signal may take various forms. For example, the UAV 102 may scan its coverage area 104 with an onboard camera in a predetermined pattern. For example, the UAV 102 may scan coverage area 104 in a scan-line manner from one corner of coverage area 104 to the opposite corner. As another example, the UAV 102 may start at an outer circle of coverage area 104 and gradually move inward in a concentric manner until the center of coverage area 104 is scanned. As yet another example, the UAV 102 may scan along one or more predetermined lines in area 104, such as a portion of a road entering area 104 and another portion of a road leaving area 104. In some embodiments, the UAV 102 may carry a radio transmitter configured to broadcast radio signals within coverage area 104. In those examples, the broadcast radio signals may serve as tracking signals such that, once they are intercepted by a vehicle 106 passing through coverage area 104, the UAV 102 may be configured to determine the location of the vehicle 106 within coverage area 104.
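The scan-line pattern described above can be illustrated with a short waypoint generator. This is a minimal sketch that assumes a rectangular coverage area given as latitude/longitude bounds; the bounds, row count, and function name are made up for illustration.

```python
# Hedged sketch of the scan-line tracking pattern: generate a boustrophedon
# ("lawn-mower") sequence of waypoints that sweeps a rectangular coverage
# area from one corner toward the opposite corner. Values are illustrative.
def scan_line_waypoints(lat_min, lat_max, lon_min, lon_max, rows):
    """Yield (lat, lon) waypoints sweeping the area row by row."""
    step = (lat_max - lat_min) / (rows - 1)
    for i in range(rows):
        lat = lat_min + i * step
        # Alternate sweep direction on each row to avoid dead transits.
        if i % 2 == 0:
            yield (lat, lon_min)
            yield (lat, lon_max)
        else:
            yield (lat, lon_max)
            yield (lat, lon_min)

for wp in scan_line_waypoints(22.0, 22.3, 114.0, 114.4, rows=4):
    print(wp)
```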
After the vehicle 106 has been tracked by the UAV 102, an identification number of the vehicle 106 may be captured. In some implementations, the identification number of the vehicle 106 may be captured by a camera carried by the UAV 102. For example, the UAV 102 may be configured to take a picture of the license plate of the vehicle 106 once the vehicle 106 has been tracked. As another example, the UAV 102 may be configured to send the vehicle 106 a request for its identification number, and the vehicle 106 may send its identification number to the UAV 102 in response.
Any of the UAVs 102 shown in fig. 1 may be instructed to "monitor" or "zoom in" on a corresponding vehicle 106. For example, UAV 102a may receive location information about vehicle 106a and an instruction to zoom in on vehicle 106a. In this example, in response to receiving such location information and instructions, UAV 102a may be configured to track vehicle 106a based on the received location information. This may include moving UAV 102a into proximity with vehicle 106a such that UAV 102a has a clear view of vehicle 106a. As will be discussed below, the instructions received by UAV 102a may include taking one or more images of the interior of vehicle 106a. To accomplish this, UAV 102a may be equipped with one or more cameras. In some embodiments, the cameras carried by UAV 102a may include a wide-angle camera capable of capturing a wide field of view. In one embodiment, the wide-view camera carried by UAV 102a is an omnidirectional camera with a 360-degree field of view in the horizontal plane, or a field of view that covers (approximately) the entire sphere.
In some embodiments, the cameras carried by UAV 102a may include multiple cameras fixed at corresponding locations on the bottom of UAV 102a. In one embodiment, the multiple cameras may be arranged on the bottom of UAV 102a to form a ring. In one configuration, 8 cameras are used to form such a ring. Depending on the distance between UAV 102a and vehicle 106a, the angle between the two, and/or any other factors, one or more of the cameras may be employed to photograph the interior of vehicle 106a. For example, UAV 102a may take images of the interior of vehicle 106a from different angles using three of the cameras in the ring. In some implementations, the cameras carried by UAV 102a can have panoramic-view capabilities. For example, UAV 102a may carry various types of panoramic-view cameras, including short-rotation, full-rotation, fixed-lens, and any other type of panoramic-view camera.
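To illustrate how cameras might be selected from such a ring, the sketch below picks the cameras whose fixed headings best face the bearing from the UAV to the vehicle. The eight-camera layout follows the configuration mentioned above; the heading spacing, selection count, and function names are assumptions.

```python
# Illustrative sketch: with 8 cameras spaced 45 degrees apart on the UAV's
# underside, pick the cameras whose headings are closest to the bearing from
# the UAV to the vehicle. All values are assumptions.
NUM_CAMERAS = 8
HEADINGS = [i * 360.0 / NUM_CAMERAS for i in range(NUM_CAMERAS)]  # 0, 45, ... 315

def angular_diff(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def pick_cameras(bearing_to_vehicle, k=3):
    """Indices of the k cameras best aimed toward the vehicle."""
    return sorted(range(NUM_CAMERAS),
                  key=lambda i: angular_diff(HEADINGS[i], bearing_to_vehicle))[:k]

print(pick_cameras(100.0))  # bearing of 100 degrees -> cameras [2, 3, 1]
```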
With UAV network 100 having been generally described, attention is now directed to fig. 2, which conceptually illustrates utilizing UAVs to facilitate directional delivery of information to a transport device in accordance with the present disclosure. Fig. 2 will be described with reference to fig. 1. As shown, each UAV 102 in UAV network 100 may be instructed to take one or more images of the interior of a vehicle 106, as described above. In fig. 2, UAV 102a may be positioned such that it takes one or more images of the interior of vehicle 106a upon request. In various embodiments, as described above, UAV 102a may be configured to detect vehicle 106a when vehicle 106a enters the area 104a covered by UAV 102a. In response to detecting that vehicle 106a has entered area 104a, UAV 102a may be configured to position itself such that it has a clear line of sight to vehicle 106a. In some implementations, the position of UAV 102a relative to vehicle 106a may be adjusted based on an image of vehicle 106a as captured by UAV 102a. For example, UAV 102a, controller 110a, and/or processing center 202 may be configured to determine the quality of an image captured by UAV 102a. In that case, when the image is determined not to show a good view of the interior of vehicle 106a, UAV 102a may be instructed to reposition itself until an acceptable image of the interior of vehicle 106a is received. This may involve instructing UAV 102a to adjust its angle, distance, speed, and/or any other aspect relative to vehicle 106a. In one embodiment, such instructions may be generated by processing center 202 and transmitted to UAV 102a via controller 110a through UAV network 100.
UAV 102a may be configured to transmit the captured images of vehicle 106a to processing center 202 via UAV network 100. As shown in this example, in some embodiments, the images of vehicle 106a may first be transmitted to controller 110a on the ground. The image transmission path from UAV 102a to controller 110a may vary. For example, the image data may first be sent from UAV 102a to another UAV in UAV network 100. That UAV may have more computing power or capability than UAV 102a, which may be a lightweight UAV configured to follow a moving vehicle and take images of its interior. In this example, the UAV having more computing power may act as a relay station to relay the image data from UAV 102a to controller 110a. In some embodiments, the image data may be sent through more than one UAV in network 100 before it reaches controller 110a.
Controller 110a may be configured to: 1) communicate control instructions with processing center 202 and with UAV 102a; 2) receive image data from UAV 102a; 3) send image data from UAV 102a to processing center 202; and/or perform any other operations. However, it should be understood that in some other embodiments, it may not be necessary to transmit the image data through controller 110a. In those embodiments, the image data may be sent from UAV 102a to processing center 202 via UAV network 100 without going through controller 110a.
Processing center 202 may be configured to analyze the images taken by UAV 102a and obtain passenger information and/or driver information related to one or more passengers and/or drivers in vehicle 106a. For example, in response to receiving the images, processing center 202 may be configured to analyze them by employing an image analysis algorithm. In this example, the image analysis performed by processing center 202 may include analyzing the images to identify one or more passengers and/or drivers. For example, processing center 202 may employ facial feature analysis to extract one or more facial features for each passenger and/or driver in vehicle 106a. The extracted features may be used to match one or more passengers and/or drivers registered for vehicle 106a. Once a match is found, the identity of the passenger and/or driver may be determined, and other information about the identified driver and/or passenger may be obtained, such as gender, age, user interests, and user experience.
As another example, the facial features extracted for each passenger may be used to determine the gender of the passenger, the age group of the passenger, and/or any other characteristic information about the one or more passengers. For example, in some cases, the exact identity of a particular passenger in vehicle 106a may not be readily determined based on the received images. In that case, the facial features may still be used to determine certain characteristic information, for example, that the passenger is a male in the teenage age group. In some embodiments, processing center 202 may be configured to determine the locations of the occupants within vehicle 106a. For example, whether each passenger is in the front or rear row of vehicle 106a may be determined by analyzing the images. In some embodiments, such image analysis may include obtaining information about vehicle 106a, such as the number of rows of seats vehicle 106a has, the dimensions of the interior of vehicle 106a, and/or any other information about the specifications of vehicle 106a. In this example, the location of a particular passenger may be determined, for example, that passenger A is sitting in the left rear seat.
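A minimal sketch of the matching step might look as follows, assuming face embeddings have already been extracted by some face-recognition model (not shown). The registered-passenger data, the similarity measure, and the threshold are illustrative assumptions, not the disclosed algorithm.

```python
# Hedged sketch of the passenger-identification step: compare a face embedding
# extracted from the cabin image against embeddings of passengers registered
# for the vehicle, using cosine similarity. All data is made up.
import math

REGISTERED = {  # hypothetical registered passengers -> stored face embeddings
    "passenger_a": [0.1, 0.9, 0.2],
    "passenger_b": [0.8, 0.1, 0.5],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def identify(embedding, threshold=0.9):
    """Best-matching registered identity, or None if no match clears the threshold."""
    name, score = max(((n, cosine(embedding, e)) for n, e in REGISTERED.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else None

print(identify([0.12, 0.88, 0.21]))  # close to passenger_a's embedding -> 'passenger_a'
```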
In some implementations, processing center 202 may be configured to process the vehicle images received from UAV 102a to obtain vehicle information related to vehicle 106a. For example, in response to receiving an image of the vehicle captured by UAV 102a, processing center 202 may be configured to obtain information about vehicle 106a as captured in the image. For example, the image may contain license plate information indicating the license plate number of vehicle 106a. Based on the license plate number of vehicle 106a, processing center 202 may obtain certain information about vehicle 106a, such as the make of vehicle 106a, one or more presentation capabilities of vehicle 106a (e.g., audio, video, and multimedia presentation capabilities: whether vehicle 106a has a display device, how many display devices vehicle 106a has, what type of display devices vehicle 106a has, and/or any other capability information), one or more communication channels with vehicle 106a (e.g., an internet address of one or more display devices equipped within vehicle 106a, a telephone number of vehicle 106a), and/or any other information related to vehicle 106a.
In some embodiments, processing center 202 may be configured to obtain geographic information about vehicle 106a. In one embodiment, in response to identifying vehicle 106a through image analysis (e.g., by identifying the license plate of vehicle 106a), processing center 202 may be configured to obtain geographic location information about vehicle 106a. For example, processing center 202 may be configured to retrieve the geographic location information about vehicle 106a from a location database using the license plate number of vehicle 106a. In this example, the location database may be configured to store the geographic locations of a plurality of vehicles. For example, the location database may be configured to store the geographic locations of vehicles traveling through the areas covered by the UAVs in UAV network 100. As another example, processing center 202 may be configured to obtain the geographic location of vehicle 106a from a GPS system by providing the license plate information of vehicle 106a.
Processing center 202 may be configured to determine the particular multimedia information for presentation to one or more passengers in vehicle 106a based on the geographic information associated with vehicle 106a, the passenger information, and/or any other information (if any). The passenger information may be used by processing center 202 to select one or more sets of multimedia information from a database of such information. For example, the passenger information may indicate one or more specific passenger identities. Based on the identity of a particular passenger, processing center 202 may be configured to obtain passenger preferences regarding multimedia information. In this example, data analysis may be employed by the processing center to analyze the passenger's multimedia preferences based on the passenger's prior viewing experience. Illustratively, the passenger's preference for movies featuring a certain actor or of a certain genre may be obtained, and based on that preference, processing center 202 may select a set of movies featuring that actor or belonging to that genre. Other examples of selecting multimedia information based on the identity of the passenger are also contemplated.
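One way such preference analysis could work is sketched below: build a profile from the passenger's prior viewing history and rank catalog movies by actor and genre overlap. The history format, scoring scheme, and function names are assumptions for illustration only.

```python
# Illustrative sketch of preference-based selection: score catalog movies by
# how often the identified passenger has previously watched the same actors
# or genres. Data and weights are assumptions.
from collections import Counter

def build_profile(viewing_history):
    """Count how often each actor and genre appears in past viewing."""
    profile = Counter()
    for movie in viewing_history:
        profile.update(movie["actors"])
        profile[movie["genre"]] += 1
    return profile

def rank_movies(catalog, profile):
    """Order catalog movies by overlap with the passenger's profile."""
    def score(movie):
        return sum(profile[a] for a in movie["actors"]) + profile[movie["genre"]]
    return sorted(catalog, key=score, reverse=True)

history = [{"actors": ["Actor X"], "genre": "comedy"},
           {"actors": ["Actor X", "Actor Y"], "genre": "comedy"}]
catalog = [{"title": "Drama Z", "actors": ["Actor Q"], "genre": "drama"},
           {"title": "Comedy W", "actors": ["Actor X"], "genre": "comedy"}]
print([m["title"] for m in rank_movies(catalog, build_profile(history))])
# -> ['Comedy W', 'Drama Z']
```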
As another example, in some cases, processing center 202 may be configured to determine multimedia information for presentation to a passenger based on other aspects of the passenger that may be identified from the images, such as the passenger's gender, age group, ethnicity, and/or any other aspect. For example, in some instances, the exact identity of the passenger may not be determinable by processing center 202, and processing center 202 may still be configured to select multimedia information from the database for presentation to the passenger based on, for example, the passenger's gender, age group, and/or ethnicity. For example, the passenger may be identified by processing center 202 as a teenage male. In this case, processing center 202 may be configured to select for presentation to the passenger multimedia information that is of interest to, and suitable for viewing by, children under 18.
The geographic location information of vehicle 106a may be used by processing center 202 to determine which sets of multimedia information are available for presentation to vehicle 106a. For example, the geographic location information of vehicle 106a may indicate that vehicle 106a is currently located in media market A, where certain local television channels are available and other television channels are not. In this example, processing center 202 may determine to present only the available television channels to vehicle 106a. As yet another example, the geographic location information of vehicle 106a may indicate that certain media content (e.g., movies) may not be available in market A due to geographic restrictions. In this example, processing center 202 may be configured not to include that media content in the set of multimedia information for presentation to vehicle 106a. It should be appreciated that multimedia information, as used herein, may include media content such as video, movies, audio (e.g., music, radio programs, audio books), live television programs, still images, video games, and/or any other multimedia information. In some implementations, the multimedia information determined by processing center 202 based on the passenger information and the geographic location information of vehicle 106a may include media guide information, such as an interactive channel guide or an on-demand entertainment guide, so that once the guide is presented to the passengers in vehicle 106a, a passenger may select desired content (e.g., a television channel or an on-demand movie) for viewing by clicking on the description of the selected content in the guide. For example, once the content is selected, the actual television program or movie may be streamed from a network media server to vehicle 106a. In some implementations, the multimedia information determined by processing center 202 may contain the actual media content for presentation to the passenger. For example, the determined multimedia information may be an entire movie that may be sent to vehicle 106a for presentation to the passengers.
In some implementations, processing center 202 can be configured to receive a request for multimedia content streaming from vehicle 106a. In these embodiments, the multimedia information may be sent to vehicle 106a on demand, as requested, and the request from vehicle 106a may include geographic location information about vehicle 106a. After receiving the request from vehicle 106a, processing center 202 may be configured to generate control instructions instructing one or more UAVs to take images of the interior of vehicle 106a, as described above, and to perform image analysis to obtain passenger information as described above. Processing center 202 may then be configured to determine one or more sets of multimedia information for presentation to the passengers in vehicle 106a that requested the media content streaming.
Once such items are determined, processing center 202 may be configured to send the determined multimedia information to vehicle 106a for presentation on a display device appropriate for the passenger. For example, the image analysis described above may indicate that a passenger is sitting in the left rear seat, and the information related to vehicle 106a may indicate that the left rear seat has a display device with a particular internet address. In this example, processing center 202 may be configured to transmit the determined multimedia information to the display device at that internet address. In some embodiments, processing center 202 may transmit the multimedia information through UAV network 100.
Attention is now directed to fig. 3, wherein an example of processing center 202 is shown. As shown, processing center 202 may include one or more processors 302 configured to execute program components. The program components may include a transport device image component 304, a transport device information component 306, an image analysis component 308, a multimedia information component 310, a transmission component 312, and/or any other components. The transport device image component 304 may be configured to receive one or more images of a transport device (e.g., vehicle 106a). The images received by the transport device image component 304 may include images of the interior of vehicle 106a taken by a UAV (e.g., UAV 102a) from different angles. The images received by the transport device image component 304 may include information that readily indicates the identity of the vehicle. For example, the one or more images may indicate the license plate number of vehicle 106a. However, this is not necessarily always the case. In some cases, the images received by the transport device image component 304 may not contain such information. To address this, the transport device image component 304 may be configured to generate control instructions instructing a UAV (e.g., UAV 102a) to re-capture an image, and to send the control instructions to the UAV via UAV network 100.
The transport device information component 306 may be configured to obtain information related to the transport device based on the images received by the transport device image component 304. As described above, the images received by the transport device image component 304 may contain information indicating the license plate number of vehicle 106a. In some embodiments, the transport device information component 306 may be configured to obtain information about vehicle 106a based on such license plate information. For example, the transport device information component 306 can be configured to query a vehicle registration database for vehicle 106a using the license plate number of vehicle 106a. The information related to vehicle 106a as obtained by the transport device information component 306 may include the make of vehicle 106a (e.g., a 2014 Toyota Corolla, a 2016 Honda Accord, etc.), one or more presentation capabilities of vehicle 106a (e.g., audio, video, and multimedia presentation capabilities: whether vehicle 106a has a display device, how many display devices vehicle 106a has, what type of display devices vehicle 106a has, where each display is located within vehicle 106a in the case where vehicle 106a has more than one display, and/or any other capability information), one or more communication channels supported by vehicle 106a, one or more multimedia formats supported by vehicle 106a, and/or any other information related to vehicle 106a. For example, the information related to vehicle 106a may indicate that vehicle 106a has three display devices capable of presenting audio, video, and animation, where a first display device is located on the dashboard of vehicle 106a, a second display device is located on the back of the left front seat, and a third display device is located on the back of the right front seat.
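A registration-database lookup of this kind might be sketched as follows. The registry contents, field names, and addresses are hypothetical; the make/model strings merely echo the examples above.

```python
# Minimal sketch of the lookup performed by the transport device information
# component: map a license plate number to the vehicle's make and display
# inventory. The registry and its fields are hypothetical.
VEHICLE_REGISTRY = {
    "ABC-1234": {
        "make": "Toyota Corolla 2014",
        "displays": [
            {"location": "dashboard", "address": "10.0.0.11", "formats": ["audio", "video"]},
            {"location": "rear_left", "address": "10.0.0.12", "formats": ["audio", "video"]},
            {"location": "rear_right", "address": "10.0.0.13", "formats": ["audio", "video"]},
        ],
    },
}

def vehicle_info(license_plate):
    """Return registered capabilities for the plate, or None if unregistered."""
    return VEHICLE_REGISTRY.get(license_plate)

info = vehicle_info("ABC-1234")
print(info["make"], "-", len(info["displays"]), "displays")
```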
In some implementations, the information related to vehicle 106a as obtained by the transport device information component 306 may include information indicating various statistics about vehicle 106a. For example, the information may indicate the area (e.g., area 104a) in which vehicle 106a is traveling, how long (e.g., 5 minutes) vehicle 106a has traveled in the area, which road vehicle 106a is traveling on, the speed of vehicle 106a, which area vehicle 106a is traveling toward (e.g., area 104b), the size of vehicle 106a, and/or any other statistical information about vehicle 106a.
In some implementations, the transport device information component 306 may be configured to obtain geographic location information about vehicle 106a. As described above, the transport device information component 306 may be configured to obtain geographic location information about vehicle 106a from a location database, a GPS system, and/or vehicle 106a itself.
The image analysis component 308 may be configured to analyze the images received by the transport device image component 304 and obtain passenger information and/or driver information related to one or more passengers and/or drivers in vehicle 106a. For example, in response to the transport device image component 304 receiving the images, the image analysis component 308 may be configured to analyze them by employing an image analysis algorithm. The image analysis performed by the image analysis component 308 may include analyzing the images to identify one or more passengers and/or drivers in vehicle 106a. For example, facial feature analysis may be employed to extract one or more facial features of each passenger and/or driver in vehicle 106a. The extracted features may be used to match one or more passengers and/or drivers registered for vehicle 106a. Once a match is found, the identity of the passenger and/or driver can be determined by the image analysis component 308, and other information about the identified driver and/or passenger can be obtained, such as gender, age, user interests, and user experience.
As another example, the facial features extracted by the image analysis component 308 for each passenger may be used by the image analysis component 308 to determine the gender of the passenger, the age group of the passenger, and/or any other characteristic information about the one or more passengers. For example, in some cases, the exact identity of a particular passenger in vehicle 106a may not be readily determined by the image analysis component 308. In that case, certain characteristic information may still be determined by the image analysis component 308 using the facial features, for example, that the passenger is a male in the teenage age group. In some implementations, the image analysis component 308 may be configured to determine the locations of the occupants within vehicle 106a. For example, whether each passenger is in the front or rear row of vehicle 106a may be determined by the image analysis component 308 by analyzing and aggregating content in the images. In some embodiments, such image analysis may include obtaining information about vehicle 106a, such as the number of rows of seats vehicle 106a has, the dimensions of the interior of vehicle 106a, and/or any other information about the specifications of vehicle 106a, as obtained by the transport device information component 306. In this example, the location of a particular passenger may be determined, for example, that passenger A is sitting in the left rear seat.
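The seat-localization step could be approximated as below, assuming normalized face bounding boxes and a simple two-row seat grid derived from the vehicle's specifications. The grid coordinates and names are illustrative assumptions.

```python
# Hedged sketch of seat localization: assign each detected face to a seat by
# comparing its bounding-box center against a seat grid for a two-row car.
# Coordinates are normalized image coordinates (0..1); all values are made up.
SEATS = {  # seat name -> (row center y, column center x) in normalized coords
    "front_left": (0.30, 0.30), "front_right": (0.30, 0.70),
    "rear_left": (0.70, 0.30), "rear_right": (0.70, 0.70),
}

def assign_seat(face_box):
    """Nearest seat to the face bounding-box center (x1, y1, x2, y2)."""
    cx = (face_box[0] + face_box[2]) / 2
    cy = (face_box[1] + face_box[3]) / 2
    return min(SEATS, key=lambda s: (SEATS[s][1] - cx) ** 2 + (SEATS[s][0] - cy) ** 2)

print(assign_seat((0.20, 0.62, 0.38, 0.80)))  # lower-left of frame -> 'rear_left'
```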
The multimedia information component 310 may be configured to determine the particular multimedia information for presentation to one or more passengers based on the geographic information of vehicle 106a, the passenger information, and/or any other information, if any. For example, the passenger information may indicate that vehicle 106a has a particular passenger who is a male in his 20s. The information related to vehicle 106a may indicate that vehicle 106a has entered area 104a and has traveled within area 104a for a certain period of time. In this example, the multimedia information component 310 may be configured to determine to push, for presentation to the passenger, one or more movies or television programs that may be of interest to the passenger and are available in area 104a, based on the general interests of males in that age group and the geographic location information of vehicle 106a. In this example, the multimedia information component 310 may be configured to obtain the general interests of various age groups.
The transmission component 312 can be configured to transmit the multimedia information determined by the multimedia information component 310 to vehicle 106a for presentation on a display device suitable for the passenger. In some implementations, the transmission component 312 can be configured to determine the format of an item to be presented on the display device. For example, the passenger information as determined by the image analysis component 308 may indicate that the passenger is sitting in the left rear seat, and the information related to vehicle 106a may indicate that the left rear seat has a display device with a particular internet address that is capable of presenting an interactive channel guide. In this example, the transmission component 312 may be configured to transmit an interactive channel guide to the display device at that internet address and enable the passenger to select a television channel in the channel guide for viewing. In some implementations, the transmission component 312 may transmit the item through UAV network 100.
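A sketch of this dispatch logic follows; the display registry and the send stub are assumptions, since the actual delivery path (UAV network versus another channel) is deployment-specific.

```python
# Illustrative sketch of the transmission component's dispatch: pick the
# display matching the passenger's seat and send the determined item to that
# display's address. Addresses and names are hypothetical.
DISPLAYS = {  # display location -> internet address (hypothetical)
    "dashboard": "10.0.0.11",
    "rear_left": "10.0.0.12",
    "rear_right": "10.0.0.13",
}

def send(address, payload):
    """Stub: in a real system this would stream over the UAV network."""
    print(f"sending {payload!r} to {address}")

def deliver(passenger_seat, item):
    # Fall back to the dashboard display if no display matches the seat.
    address = DISPLAYS.get(passenger_seat, DISPLAYS["dashboard"])
    send(address, item)

deliver("rear_left", "interactive channel guide")
# -> sending 'interactive channel guide' to 10.0.0.12
```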
Attention is now directed to fig. 4, wherein an exemplary method 400 for facilitating delivery of multimedia information to a transport device in accordance with the present disclosure is illustrated. The particular sequence of processing steps depicted in fig. 4 is not intended to be limiting. It should be understood that the processing steps may be performed in an order different from that shown in fig. 4, and that not all of the steps shown in fig. 4 need be performed. In some embodiments, method 400 may be implemented by a processing center, such as one implemented by the computer system shown in fig. 5.
In some embodiments, the method depicted in method 400 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices that perform some or all of the operations of method 400 in response to electronically stored instructions on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software specifically designed to perform one or more operations of method 400.
At 402, one or more images of an interior of a transport device may be received. The images received at 402 may include images of the interior of the transport device taken by a UAV (e.g., UAV 102a) from different angles. The images received at 402 may include information that readily indicates the identity of the transport device. In some implementations, the operations involved in 402 may be implemented by a transport device image component that is the same as or substantially similar to the transport device image component 304 shown and described herein.
At 404, information related to the transport device may be obtained based on the images received at 402. As described above, the images received at 402 may contain information indicating the license plate number of the transport device. In some embodiments, based on such information, information about the transport device may be obtained. The information obtained at 404 may include the make of the transport device (e.g., a 2014 Toyota Corolla, a 2016 Honda Accord, etc.), one or more presentation capabilities of the transport device (e.g., audio, video, and multimedia presentation capabilities: whether the transport device has a display device, how many display devices the transport device has, what type of display devices the transport device has, where each display is located within the transport device in the case where the transport device has more than one display, and/or any other capability information), one or more communication channels supported by the transport device, one or more multimedia formats supported by the transport device, and/or any other information related to the transport device. In some implementations, the information related to the transport device obtained at 404 may include information indicating various statistics about the transport device. For example, the information may indicate geographic location information about the current area (e.g., area 104a) in which the transport device is traveling and how long (e.g., 5 minutes) the transport device has traveled within the area. In some implementations, the operations involved in 404 may be implemented by a transport device information component that is the same as or substantially similar to the transport device information component 306 shown and described herein.
At 406, the images received at 402 may be analyzed to obtain passenger information about one or more passengers in the transport device and/or driver information about one or more drivers. The image analysis performed at 406 may include analyzing the images to identify one or more passengers and/or drivers in the transport device. For example, facial feature analysis may be employed at 406 to extract one or more facial features of each passenger and/or driver in the transport device. The extracted features may be used to match against one or more passengers and/or drivers registered for the transport device. Once a match is found, the identity of the passenger and/or driver may be determined, and other information about the identified driver and/or passenger may be obtained, such as gender, age, user interests, and user preferences. As another example, the facial features extracted for each passenger at 406 may be used to determine the gender of the passenger, the age group of the passenger, and/or any other characteristic information about the one or more passengers. In some implementations, the operations involved in 406 may be implemented by an image analysis component that is the same as or substantially similar to the image analysis component 308 shown and described herein.
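A minimal sketch of the matching step, assuming the extracted facial features take the form of fixed-length embedding vectors compared by cosine similarity, might look like the following; the match_passenger function, the 0.6 threshold, and the embedding model producing the vectors are all illustrative assumptions, not elements of this disclosure.

```python
import numpy as np

def match_passenger(face_embedding: np.ndarray,
                    registered: dict,
                    threshold: float = 0.6):
    """Match an extracted facial-feature vector against the embeddings of
    passengers registered for the transport device, by cosine similarity.
    The model that produces these vectors is assumed, not specified here."""
    best_id, best_score = None, threshold
    for passenger_id, ref in registered.items():
        score = float(np.dot(face_embedding, ref)
                      / (np.linalg.norm(face_embedding) * np.linalg.norm(ref)))
        if score > best_score:
            best_id, best_score = passenger_id, score
    return best_id  # None when no registered passenger clears the threshold
```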
At 408, one or more sets of multimedia information may be determined for presentation to passengers in the transport device based on the geographic information of the transport device as obtained at 404 and the passenger information obtained at 406. In some implementations, the operations involved in 408 may be implemented by a multimedia information component that is the same as or substantially similar to the multimedia information component 310 shown and described herein.
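A minimal sketch of this determination, assuming a content catalog tagged with availability areas and interest tags (a schema this disclosure does not prescribe), could take the following form. An item is selected only when both conditions hold, mirroring the two inputs obtained at 404 and 406.

```python
def select_multimedia(geo_area: str,
                      passenger_interests: set,
                      catalog: list) -> list:
    """Keep catalog items that are both available in the area the transport
    device is traveling in and matched to at least one passenger interest.
    The catalog fields ('areas', 'tags') are assumptions for illustration."""
    return [item for item in catalog
            if geo_area in item["areas"]
            and passenger_interests & set(item["tags"])]

# Example: a passenger interested in basketball, traveling in area "104a".
catalog = [
    {"content_id": "game-highlights", "areas": {"104a"},
     "tags": ["basketball"], "formats": ["mp4"]},
    {"content_id": "cooking-show", "areas": {"104b"},
     "tags": ["food"], "formats": ["mp4"]},
]
print(select_multimedia("104a", {"basketball"}, catalog))  # game-highlights only
```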
At 410, the one or more sets of multimedia information determined at 408 may be transmitted to the transport device for presentation to a passenger and/or driver in the transport device. In some implementations, the operations involved in 410 may be implemented by a transmission component that is the same as or substantially similar to the transmission component 312 shown and described herein.
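One possible shape of this transmission step, reusing the hypothetical device-capability record sketched at 404 and leaving the actual UAV-network send primitive abstract, is the following; the transmit function and its arguments are assumptions for illustration only.

```python
def transmit(items: list, device: dict, send) -> None:
    """Send each selected item in a format the transport device supports,
    over the first channel the device reports. 'send' stands in for the
    UAV-network transmission primitive, which is not specified here."""
    channel = device["channels"][0]
    for item in items:
        fmt = next((f for f in item["formats"] if f in device["formats"]), None)
        if fmt is None:
            continue  # no mutually supported format; skip this item
        send(channel, item["content_id"], fmt)
```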
FIG. 5 illustrates a simplified computer system that may be used to implement the various embodiments described and illustrated herein. The computer system 500 shown in FIG. 5 may be incorporated into a device such as a portable electronic device, a mobile phone, or another device as described herein. FIG. 5 provides a schematic diagram of one embodiment of a computer system 500 that may perform some or all of the steps of the methods provided by the various embodiments. It should be noted that FIG. 5 is intended merely to provide a generalized illustration of various components, any or all of which may be utilized as appropriate. FIG. 5 therefore broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.
The computer system 500 is shown as including hardware elements that are electrically coupled via a bus 505, or may otherwise communicate as appropriate. The hardware elements may include: one or more processors 510, including but not limited to one or more general-purpose processors and/or one or more special-purpose processors such as digital signal processing chips, graphics acceleration processors, and/or the like; one or more input devices 515, which may include, but are not limited to, a mouse, a keyboard, a camera, and/or the like; and one or more output devices 520 that may include, but are not limited to, a display device, a printer, and/or the like.
The computer system 500 may also include and/or communicate with one or more non-transitory storage devices 525, which non-transitory storage devices 525 may include, but are not limited to, local and/or network accessible memory, and/or may include, but are not limited to, disk drives, arrays of drives, optical storage devices, solid state storage devices, such as random access memory ("RAM") and/or read only memory ("ROM"), which may be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any suitable data storage, including but not limited to various file systems, database structures, and/or the like.
Computer system 500 may also include a communication subsystem 530, which may include, but is not limited to, a modem, a network card (wireless or wired), an infrared communication device, a wireless communication device, and/or a chipset such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities, and/or the like. Communication subsystem 530 may include one or more input and/or output communication interfaces to allow data to be exchanged with a network (such as the network described below, to name one example), other computer systems, a television, and/or any other devices described herein. Depending on the desired functionality and/or other implementation concerns, a portable electronic device or the like may communicate images and/or other information via the communication subsystem 530. In other embodiments, a portable electronic device, such as the first electronic device, may be incorporated into the computer system 500, e.g., as an electronic device serving as input device 515. In some embodiments, the computer system 500 will also include a working memory 535, which may include a RAM or ROM device, as described above.
Computer system 500 may also include software elements, shown as being currently located within the working memory 535, including an operating system 540, device drivers, executable libraries, and/or other code, such as one or more application programs 545, which may comprise computer programs provided by the various embodiments, and/or which may be designed to implement methods and/or configure systems provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the methods discussed above (e.g., method 400 described with respect to FIG. 4) might be implemented as code and/or instructions executable by a computer and/or a processor within a computer; in an aspect, such code and/or instructions can then be used to configure and/or adapt a general purpose computer or other device to perform one or more operations in accordance with the described methods.
A set of these instructions and/or code may be stored on a non-transitory computer readable storage medium, such as storage device 525 described above. In some cases, the storage medium may be incorporated into a computer system, such as computer system 500. In other embodiments, the storage medium may be separate from the computer system, e.g., a removable medium such as an optical disk, and/or provided in an installation package, such that the storage medium may be used to program, configure and/or tune the general purpose computer via the instructions/code stored thereon. These instructions may take the form of executable code, which may be executed by computer system 500, and/or may take the form of source and/or installable code, which when compiled and/or installed on computer system 500 (e.g., using any of a variety of commonly available compilers, installation programs, compression/decompression utilities, etc.), takes the form of executable code.
It will be apparent to those skilled in the art that substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software including portable software such as applets, or both hardware and software. In addition, connections to other computing devices, such as network input/output devices, may be employed.
As described above, in one aspect, some embodiments may use a computer system, such as computer system 500, to perform methods in accordance with various embodiments of the present technology. According to one set of embodiments, some or all of the procedures of these methods are performed by computer system 500 in response to processor 510 executing one or more sequences of one or more instructions, which may be incorporated into operating system 540 and/or other code (such as application programs 545) contained in working memory 535. Such instructions may be read into working memory 535 from another computer-readable medium, such as one or more storage devices 525. By way of example only, execution of the sequences of instructions contained in the working memory 535 may cause the processor 510 to perform one or more processes of the methods described herein. Additionally or alternatively, some portions of the methods described herein may be performed by dedicated hardware.
The terms "machine-readable medium" and "computer-readable medium" as used herein refer to any medium that participates in providing data that causes a machine to operation in a specific fashion. In an embodiment implemented using computer system 500, various computer-readable media may be involved in providing instructions/code to processor 510 for execution and/or may be used to store and/or carry such instructions/code. In many implementations, the computer-readable medium is a physical and/or tangible storage medium. Such a medium may take the form of a non-volatile medium or a volatile medium. Non-volatile media includes, for example, optical and/or magnetic disks, such as storage device 525. Volatile media includes, but is not limited to, dynamic memory, such as working memory 535.
Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read instructions and/or code.
Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 510 for execution. By way of example only, the instructions may initially be carried on a magnetic and/or optical disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer system 500.
The communication subsystem 530 and/or its components will typically receive signals, and the bus 505 may then carry the signals and/or the data, instructions, etc. carried by the signals to a working memory 535, from which working memory 535 the processor 510 retrieves the instructions and executes the instructions. The instructions received by the working memory 535 may optionally be stored on a non-transitory storage device 525 before or after execution by the processor 510.
The methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For example, in alternative configurations, the methods may be performed in an order different than that described, and/or stages may be added, omitted, and/or combined. Furthermore, features described for certain configurations may be combined in various other configurations. Different aspects and elements of the configurations may be combined in a similar manner. Furthermore, technology evolves and, thus, many of the elements are examples and do not limit the scope of the disclosure or claims.
Specific details are given in the description to provide a thorough understanding of example configurations including embodiments. However, configurations may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configuration. This description provides example configurations only, and does not limit the scope, applicability, or configuration of the claims. Rather, the foregoing description of the configurations will provide those skilled in the art with a possible description for implementing the described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.
Further, the configuration may be described as a process depicted as a schematic flow chart or a block diagram. Although each configuration may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. The process may have additional steps not included in the figures. Furthermore, examples of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a non-transitory computer-readable medium such as a storage medium. The processor may perform the described tasks.
Although a number of example configurations have been described, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above-described elements may be components of a larger system, where other rules may take precedence over or otherwise modify the application of the present technology. Further, a plurality of steps may be performed before, during, or after the above-described elements are considered. Accordingly, the above description does not limit the scope of the claims.
As used herein and in the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a user" includes a plurality of such users, and reference to "the processor" includes reference to one or more processors and equivalents thereof known to those skilled in the art, and so forth.
Furthermore, the words "comprise," "comprising," "contains," "containing," "include," "including," and "having," when used in this specification and the appended claims, are intended to specify the presence of stated features, integers, components, or steps, but they do not preclude the presence or addition of one or more other features, integers, components, steps, acts, or groups thereof.
Claims (20)
1. A method for facilitating directional delivery of multimedia information to a transport device via an Unmanned Aerial Vehicle (UAV) network, the method implemented in one or more processors configured to execute programmed components, the method comprising:
receiving, via the UAV network, one or more images of the transport device taken by a UAV;
obtaining information related to the transport device in response to receiving the image of the transport device;
analyzing the one or more images to obtain passenger information related to one or more passengers in the transport device;
determining multimedia information for presentation to one or more passengers based on the information related to the transport device and the passenger information; and is
Sending the multimedia information to the transport device for presentation.
2. The method of claim 1, wherein the transportation device comprises a vehicle.
3. The method of claim 1, wherein the passenger information includes a gender of each passenger in the transport device, an age group of each passenger in the transport device, and an identity of each passenger in the transport device.
4. The method of claim 1, further comprising processing the passenger information and one or more images to obtain location information regarding a location of the one or more passengers within the transport device.
5. The method of claim 1, wherein the multimedia information determined for presentation to one or more passengers comprises an interactive television channel guide and/or an on-demand entertainment guide.
6. The method of claim 1, wherein the multimedia information determined to be presented to one or more passengers comprises a video clip, an audio clip, and a video game.
7. The method of claim 1, wherein the multimedia information is presented to the one or more passengers via audio and/or video within the transport device.
8. The method of claim 1, wherein determining one or more items for presentation to one or more passengers based on the geographic information of the transport device and the passenger information comprises:
determining a particular display device within the transport device for presenting the one or more items based on the information related to the transport device and the passenger information.
9. The method of claim 1, wherein determining multimedia information for presentation to one or more passengers based on the geographic information of the transport device and the passenger information comprises:
determining whether a set of multimedia content is available for viewing in a geographic area indicated by geographic information of the transport device and whether the set of multimedia content is of interest to the one or more passengers.
10. A system for facilitating directional delivery of multimedia information to a transport device via an Unmanned Aerial Vehicle (UAV) network, the system comprising one or more processors configured to:
receiving, via the UAV network, one or more images of the transport device taken by a UAV;
obtaining information related to the transport device in response to receiving the image of the transport device;
analyzing the one or more images to obtain passenger information related to one or more passengers in the transport device;
determining multimedia information for presentation to one or more passengers based on the information related to the transport device and the passenger information; and is
Sending the multimedia information to the transport device for presentation.
11. The system of claim 10, wherein the transportation device comprises a vehicle.
12. The system of claim 10, wherein the passenger information includes a gender of each passenger in the transport device, an age group of each passenger in the transport device, and an identity of each passenger in the transport device.
13. The system of claim 10, wherein the one or more processors are further configured to process the passenger information and the one or more images to obtain location information regarding a location of the one or more passengers within the transport device.
14. The system of claim 10, wherein the multimedia information determined to be presented to one or more passengers comprises an interactive television channel guide and/or an on-demand entertainment guide.
15. The system of claim 10, wherein the multimedia information determined to be presented to one or more passengers comprises a video clip, an audio clip, and a video game.
16. The system of claim 10, wherein the multimedia information is presented to the one or more passengers via audio and/or video within the transport device.
17. The system of claim 10, wherein determining one or more items for presentation to one or more passengers based on the geographic information of the transport device and the passenger information comprises:
determining a particular display device within the transport device for presenting the one or more items based on the information related to the transport device and the passenger information.
18. The system of claim 10, wherein determining multimedia information for presentation to one or more passengers based on the geographic information of the transport device and the passenger information comprises:
determining whether a set of multimedia content is available for viewing in a geographic area indicated by geographic information of the transport device and whether the set of multimedia content is of interest to the one or more passengers.
19. A method comprising any one or any combination of the features of claims 1 to 9.
20. A system comprising any one or any combination of the features of claims 10 to 18.
Applications Claiming Priority (7)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US62/274,112 | 2015-12-31 | | |
| US15/341,824 | 2016-11-02 | | |
| US15/341,809 | 2016-11-02 | | |
| US15/341,831 | 2016-11-02 | | |
| US15/341,797 | 2016-11-02 | | |
| US15/341,813 | 2016-11-02 | | |
| US15/341,818 | 2016-11-02 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1240437A1 (en) | 2018-05-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10097862B2 (en) | Facilitating multimedia information delivery through a UAV network | |
| WO2017114506A1 (en) | Facilitating multimedia information delivery through uav network | |
| US20170193556A1 (en) | Facilitating targeted information delivery through a uav network | |
| US10440323B2 (en) | Facilitating wide view video conferencing through a drone network | |
| US10354521B2 (en) | Facilitating location positioning service through a UAV network | |
| US11663911B2 (en) | Sensor gap analysis | |
| US10601496B2 (en) | Method, apparatus and system of providing communication coverage to an unmanned aerial vehicle | |
| US20220394213A1 (en) | Crowdsourced surveillance platform | |
| US20200126413A1 (en) | Uav network assisted situational self-driving | |
| EP3261405B1 (en) | Local network for simultaneously exchanging data between a drone and a plurality of user terminals and assigning a main single user that controls the drone | |
| US11670089B2 (en) | Image modifications for crowdsourced surveillance | |
| HK1240437A1 (en) | Facilitating multimedia information delivery through a uav network | |
| HK1242875A1 (en) | Facilitating targeted information delivery through a uav network | |
| HK1239995A1 (en) | Facilitating multimedia information delivery through a uav network | |
| HK1239994A1 (en) | Facilitating targeted information delivery through a uav network | |
| HK1241617A1 (en) | Facilitating wide view video conferencing through a uav network | |
| HK1242892A1 (en) | Facilitating location positioning service through a uav network | |
| HK1239993A1 (en) | Facilitating wide-view video conferencing through a uav network | |
| US20220390946A1 (en) | Path-based surveillance image capture | |
| US20220394426A1 (en) | Mobile device power management in surveillance platform | |
| US20220392033A1 (en) | Correction of surveillance images | |
| HK1239996A1 (en) | Facilitating location positioning service through a uav network | |
| HK1242858A1 (en) | Facilitating communication with a vehicle via a uav |