HK40009818A - Systems and methods for displaying images across multiple devices - Google Patents
Systems and methods for displaying images across multiple devices
- Publication number
- HK40009818A (application No. HK19133322.8A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- signal
- light show
- beacon
- speaker
- pixel
- Prior art date
Description
Cross Reference to Related Applications
This application claims priority to U.S. provisional application No. 62/436652, entitled "SYSTEMS AND METHODS FOR DISPLAYING IMAGES ACROSS MULTIPLE DEVICES" and filed on December 20, 2016, and, for the United States, claims the benefit of that application under 35 U.S.C. § 119. That application is incorporated herein by reference for all purposes.
Technical Field
The present invention relates to techniques for generating audience-participation light shows at live events.
Background
At concerts, shows, productions, sporting events, or other live events, audience members may be encouraged to participate in a crowd light show to enhance the experience of the live event. Typically, to generate such light shows, audience members are provided with devices having one or more Light Emitting Diodes (LEDs) that light up or pulse in rhythm during the event. Such a device may be a wearable device such as, for example, a wristband or a necklace. Alternatively, such devices may include a ball or other object that may be held by audience members or designed to float over the crowd. The device may be wirelessly controlled to turn its LEDs on and off during the live event. A transmitter installed at the venue may send commands modulated onto an Infrared (IR) or Radio Frequency (RF) signal to a receiver embedded in the device. The device may have a microprocessor that controls the LEDs based on the signal detected at the receiver. In this way, the device can be controlled to light up or pulse in rhythm during the show.
However, these crowd-engagement devices are typically disposable items that must be manufactured anew for each live event, which increases the cost of providing a crowd light show for the event. Furthermore, there are environmental costs associated with producing items that can only be used once or a few times. Specific limitations of IR-controlled devices include that stage smoke may interfere with the signal and thus affect the overall lighting effect. IR signals may also adversely affect viewing or recording of the show through a mobile phone or other camera.
U.S. patent application publication US 2015/0081071 to Wham City Lights Inc. discloses the use of mobile devices carried by active participants to generate a light show at a live event. Data is modulated onto an audio signal transmitted from a speaker to a computing device. The action triggered on the computing device by the data is based on when the audio signal is received. Thus, if the computing devices receive the audio signal at different times (an inherent limitation of using sound waves to carry the signal, since sound waves travel at a much lower speed than wireless signals traveling at the speed of light), the light show will lack synchronization between the devices. This reference also does not disclose how to determine the location of each device at the venue, and does not provide the ability to edit light show effects during the performance of the show.
In some systems, each device is assigned an address that corresponds to a location in the venue where an audience member is expected to be located. For example, the address may be the audience member's assigned seat number, or a general portion of the arena in which the audience member's seat is located. The device may be controlled based on the assigned address so that different lighting effects can be produced based on the assigned location of the audience member. However, these methods rely on audience members being in the location corresponding to the address assigned to their device. These methods do not work if audience members move to different locations during the live event. These methods are also not applicable to general admission situations, where an audience member's position during the performance in a general seating or general admission area cannot be predicted.
Another method of addressing a device, particularly for Global Positioning Satellite (GPS) enabled devices, is to use GPS information to determine the location of the device. However, since radio signals broadcast from GPS satellites have difficulty penetrating building walls, GPS signals may be unreliable for determining the location of a mobile device within a building. Even if GPS signals are receivable by the device, the position coordinates determined using GPS signals are not very accurate or precise. For example, some GPS signals may only be able to locate the device within about 8 meters of its actual position at a 95% confidence level. Thus, at best, GPS signals can identify the approximate location of the device, and sometimes they incorrectly identify the location, or cannot identify the location at all if the GPS signals are blocked by a building wall.
Other methods of addressing a device measure the signal strength at the device to determine its position relative to a transmitter, such as a Bluetooth transmitter or WiFi transmitter. However, the locations determined using these methods are often inaccurate. For example, Bluetooth Low Energy iBeacon Received Signal Strength Indicator (RSSI) values are intended to provide only three coarse range classes: far, near, and immediate. Using these three levels of feedback to triangulate at a level of granularity that is meaningful to a light show is difficult or impractical, because beacon signals bounce off walls and are absorbed by various objects, including people. To obtain a meaningful reading, beacons must be placed approximately every 3 to 4 meters. In a stadium environment this is almost impossible. Even if beacons are placed at these intervals, the accuracy of the position determination is still only about 1 to 2 meters. Furthermore, the number of simultaneous Bluetooth signals operating in the overall environment is often too large for a mobile phone to handle. Similar difficulties exist with estimating location based on signal strength using WiFi transmitters. In addition, setting up a WiFi network to handle all devices that need to connect to it during an event can present load impacts and challenges.
The foregoing examples of related art and their associated limitations are intended to be illustrative, not exclusive. Other limitations of the related art will become apparent to those of skill in the art upon a reading of the specification and a study of the drawings.
Disclosure of Invention
The following embodiments and aspects thereof are described and illustrated in conjunction with systems, tools and methods which are meant to be exemplary and illustrative, not limiting in scope. In various embodiments, one or more of the above-described problems have been reduced or eliminated, while other embodiments are directed to other improvements.
One aspect of the invention provides a light show control system for generating a light show across a plurality of pixel devices. The light show may be used to engage audiences or crowds at live events such as concerts, shows, productions, sporting or gaming events, fireworks shows, and the like, where audience members hold, carry, or wear the pixel devices for the display of the light show. In a particular embodiment, the pixel device comprises a handheld mobile device such as, for example, a smartphone or tablet computer. The system includes a controller that receives input from a light show operator and generates a plurality of light show parameters based on such input. The system further includes one or more beacon transmitters in communication with the light show controller and configured to receive the light show parameters from the light show controller, encode the light show parameters on a beacon signal, and broadcast the beacon signal to the pixel devices. For example, based on a plurality of light show parameters, the beacon transmitter may be configured to encode a play scene command on the beacon signal, wherein the play scene command is defined by one or more of: scene type, set of color IDs, fade rate, scene transition, and beats per minute (bpm). The light show controller may be configured to provide a graphical user interface through its display to receive input from a light show operator and to enable dynamic, real-time generation and modification of light show parameters based on such input. Each pixel device is configured to receive and decode a beacon signal to perform one or more display actions of the light show based on the decoded beacon signal. The beacon transmitter may comprise a Bluetooth Low Energy (BLE) beacon transmitter. Multiple beacon transmitters may be employed at a live event to ensure coverage and provide redundancy in the event of failure of one or more transmitters.
In some embodiments, a timing reference is encoded on the beacon signal to synchronize the performance of display actions across pixel devices. The timing reference may include the time since a starting reference point (e.g., the first beat of the light show or the first beat of the current light show scene, etc.). Based on the plurality of light show parameters, the beacon transmitter may be configured to encode a heartbeat message on the beacon signal, wherein the heartbeat message is defined by the timing reference and one or more of: bpm, beat pattern type, beat number, and speed of sound.
The beacon transmitter may be configured to broadcast the beacon signal as one or more repeating batches of data packets, such as a batch of 15 to 25 data packets. In a particular embodiment, the time t_m between the transmission of successive data packets is between 15 ms and 30 ms, and the timing reference encoded in each data packet following the first data packet is incremented by t_m from the timing reference encoded in the previous data packet. In some embodiments, each beacon transmitter is configured to update the transmitter's Media Access Control (MAC) address so as to encode a new MAC address for each batch of data packets. In addition, each beacon transmitter may be configured to encode on the beacon signal a new identification number for each batch of data packets.
Other aspects provide methods performed by a pixel device for facilitating display of a light show across a plurality of such pixel devices. Particular embodiments of such a method include: scanning for and receiving at the pixel device a beacon signal broadcast from a beacon transmitter; decoding the beacon signal to determine a plurality of light show parameters; and performing one or more display actions of the light show based on the light show parameters. The display action may include one or more of: displaying at least one image or a series of images on a display screen of the pixel device; flashing a light source on the pixel device; and vibrating the pixel device. Additionally, or alternatively, the display action may comprise displaying a scene of the light show, wherein the scene comprises a sequential display of colors displayed on a display screen of the pixel device, and the scene is characterized by one or more of: scene type, set of color IDs, fade rate, scene transition, and bpm.
In some embodiments, the pixel device scans for and receives heartbeat signals broadcast from a beacon transmitter and, responsive to not receiving a heartbeat signal within a heartbeat timeout period, is configured to cease one or more display actions and/or restart its Bluetooth receiver. If a heartbeat signal is received, the pixel device decodes a timing reference from the heartbeat signal and performs a display action at a start time based on the timing reference.
Another aspect of the invention provides systems and methods for generating a light show in which the display actions performed by the pixel devices are based on the individual locations of the pixels. Thus, the light show across all pixels can be controlled to show moving lines, spirals, swirls, halos, or other effects. A pixel self-location system is provided to enable pixel devices to determine their own locations. The system includes a positioning signal transmitter and a plurality of speaker nodes in communication with the positioning signal transmitter. Communication from the positioning signal transmitter to the speaker nodes may be achieved by Radio Frequency (RF) signals. The positioning signal transmitter transmits a tone-generation signal to each speaker node, and in response to receiving the tone-generation signal, each speaker node generates and emits a unique audio signal that is recorded and processed by the pixel device to enable time difference of arrival (TDOA)-based trilateration and/or multilateration for location determination by the pixel device. In a particular embodiment, the audio signal is an ultrasonic audio signal (above the frequency of audible sound), such as an audio signal having a frequency in the range of 16 kHz to 24 kHz (although the frequency range may be different in other embodiments). Each speaker node is configured to emit its audio signal simultaneously with the other speaker nodes. The audio signal may include tones, chirps, or other sounds. In particular embodiments, clusters of four to six speaker nodes are employed at live venues, and the speaker nodes are placed at different elevations to enable a single pixel device to make a three-dimensional location determination using TDOA trilateration and/or multilateration.
In some embodiments, the positioning signal transmitter is configured to transmit the tone-generation signal as a plurality of RF signals to the cluster of speaker nodes at equally spaced time intervals. Upon receiving each RF signal, each speaker node is configured to measure the time elapsed since the start of the previous time interval and to determine a signal generation time period based on the resulting set of measured times. For example, each speaker node may be configured to take the lowest time in the set of measured times as the signal generation time period; the speaker node then generates and emits its audio signal once the signal generation time period has elapsed after the start of the next time interval. If the set of measured times has a range greater than some predetermined threshold (such as 10 ms), the speaker node may be configured to wait until the end of the round of RF signals and refrain from emitting an audio signal on that round.
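The following is a minimal sketch, in Python for illustration only, of the timing rule just described. It assumes the node timestamps each RF arrival locally and knows the nominal start of each interval; the function and variable names are not from the patent.

```python
# Illustrative sketch of the speaker-node timing rule described above.
# Assumption: rf_arrival_times[i] is the local timestamp of the i-th RF signal
# and interval_starts[i] is the start of the time interval preceding it.

MAX_SPREAD_S = 0.010   # predetermined threshold (10 ms in the text)

def signal_generation_period(rf_arrival_times, interval_starts):
    """Return the delay after the next interval start at which to emit the
    audio signal, or None to skip emitting on this round."""
    elapsed = [t - s for t, s in zip(rf_arrival_times, interval_starts)]
    if max(elapsed) - min(elapsed) > MAX_SPREAD_S:
        return None          # measurements too scattered; stay silent this round
    return min(elapsed)      # lowest measured time = signal generation period
```

Taking the minimum is a plausible reading of the rule: the smallest measured offset is the one least inflated by reception or processing delay, so all speaker nodes converge on nearly the same emission instant.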
In one embodiment of a method performed by a pixel device to determine its own location, the method includes: receiving a start recording signal at the pixel device; in response to receiving the start recording signal, recording a plurality of audio signals emitted simultaneously from a plurality of speaker nodes, wherein each speaker node emits an audio signal at a different frequency than the other speaker nodes; filtering and processing the audio signals based on their different frequencies to determine the time difference of arrival (TDOA) of each audio signal; receiving location information for each of the plurality of speaker nodes; and determining a location of the pixel device using trilateration and/or multilateration based at least in part on the TDOAs and the location information. Speaker node location information, speaker node tone IDs, and/or sound speed values used by the pixels for the trilateration and multilateration calculations may be decoded from beacon signals transmitted by beacon transmitters to the pixel devices. Each pixel device receives a display command for an animated scene and identifies one or more display actions to be performed by the pixel device based on the pixel device's corresponding location in a display representation of the animated scene.
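As a rough illustration of the trilateration/multilateration step, the sketch below solves for a position from TDOAs by nonlinear least squares. It is a sketch under stated assumptions, not the patented algorithm: the speaker coordinates and sound speed are taken as already decoded from the beacon signal, and speaker 0 is arbitrarily chosen as the timing reference.

```python
# Hedged sketch of a pixel-side TDOA position solve.
import numpy as np
from scipy.optimize import least_squares

def locate(speakers, tdoas, c=343.0, guess=None):
    """speakers: (N, 3) array of node coordinates in meters; tdoas[i] is the
    arrival time of speaker i's tone minus that of speaker 0 (so tdoas[0] = 0);
    c is the speed of sound communicated to the pixels."""
    speakers = np.asarray(speakers, dtype=float)
    if guess is None:
        guess = speakers.mean(axis=0)          # start at the cluster centroid

    def residuals(p):
        d = np.linalg.norm(speakers - p, axis=1)      # range to each speaker
        # Range differences relative to speaker 0 should equal c * TDOA.
        return (d[1:] - d[0]) - c * np.asarray(tdoas[1:])

    return least_squares(residuals, guess).x
```

With four or more speaker nodes at different elevations, the three or more independent range differences are enough to resolve a three-dimensional position, which is consistent with the cluster sizes described above.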
In addition to the exemplary aspects and embodiments described above, further aspects and embodiments will become apparent by reference to the drawings and by study of the following detailed description.
Drawings
Exemplary embodiments are illustrated in referenced figures of the drawings. The embodiments and figures disclosed herein are intended to be considered illustrative, not restrictive.
Fig. 1 schematically illustrates a top view of a venue for generating a light show according to the methods and systems described herein.
Fig. 2 illustrates a system for implementing a light show using multiple pixels at a live event, according to one embodiment.
Fig. 3A illustrates a representative mobile device that may be used with other mobile devices to implement a light show. Fig. 3B schematically illustrates hardware and/or software components of the mobile device of fig. 3A.
Fig. 4 schematically illustrates multiple copies of a data packet transmitted in bulk by a beacon transmitter, according to one embodiment.
Fig. 5A, 5B and 5C illustrate data packets transmitted by a beacon transmitter for a heartbeat message, a light show command message and a venue configuration message, respectively, according to one embodiment.
Fig. 6 illustrates a method for decoding beacon signals and generating a light show on a mobile device, according to one embodiment.
Fig. 7 illustrates a method for receiving input parameters at a light show controller and encoding a beacon signal based on the parameters.
Fig. 8A, 8B, and 8C are screen shots of a graphical user interface that may be used to control a light show controller, according to one embodiment.
Fig. 9 schematically illustrates a top view of a speaker node cluster for determining a location of a pixel using trilateration and/or multi-point localization methods, in accordance with an embodiment.
FIG. 10 illustrates a method performed by a processor of a mobile device for determining a location of the device.
Fig. 11 schematically illustrates a top view of a venue having multiple geo-fenced areas with different light shows.
FIG. 12 schematically illustrates a top view of an example venue having a varying rectangular display area.
Detailed Description
In the following description, specific details are set forth in order to provide a more thorough understanding to persons skilled in the art. However, well known elements may not have been shown or described in detail to avoid unnecessarily obscuring the disclosure. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Certain embodiments provide methods and systems for generating a synchronized light show. Light shows may be generated to appeal to a crowd at a concert, performance, production, sporting or athletic activity, fireworks show, or other live event. A scene of the light show is played on a plurality of mobile devices participating in the light show. The participating mobile devices collectively provide a multi-pixel display, where each pixel of the display comprises one mobile device. A light show command for display of a light show scene is broadcast to the pixels. During execution, the producer of the light show can develop different scenes for display in real time and dynamically set the light show scene parameters for broadcast to the pixels. Light show scene parameters may be transmitted to the pixels via Bluetooth Low Energy (BLE) signals broadcast by one or more transmitters (beacons) located at the site of the live event. Each pixel is configured to scan for BLE signals from the beacon and, in response to receiving a BLE signal, decode the received BLE signal and perform one or more particular actions based on information contained in the BLE signal.
In some embodiments, a heartbeat signal is also broadcast periodically by the beacon to the pixels. The heartbeat signal may be used to facilitate synchronization of the pixels in the light show. In particular embodiments, the information encoded in the heartbeat signal may include information such as beats per minute (bpm), the beat number (e.g., 2/4, 3/4, or 4/4 time, etc.), the current speed of sound given the temperature in the venue and the height of the venue above mean sea level (AMSL), and the time from the first beat or some other timing reference. By decoding this information, a pixel can determine exactly when it should start playing the next scene (e.g., to within 5 milliseconds in some embodiments). The heartbeat signal may also be used to monitor the responsiveness of the pixels and to ensure that each pixel has detected the most recent heartbeat signal for use as a timing reference. The passage of a period of time (a heartbeat timeout period) without receiving a heartbeat signal at a pixel may cause the pixel to be excluded from the light show until the pixel detects a new heartbeat signal.
Each pixel of the light show display may be controlled based on the pixel's own position. In certain embodiments, a plurality of speaker nodes are located at different locations in the venue of the live event and are configured to emit audio signals. The audio signals may comprise periodic tones in the ultrasonic frequency range, or the like. In some embodiments, the tones are emitted simultaneously by the speakers. The speakers may be controlled such that each speaker emits a unique predetermined tone different from the tones emitted by the other speakers, such that the tone serves as a unique ID (identifier) for that speaker. Each pixel is configured to listen for and record the tones from the speakers, play back and process the recorded signals to determine the time difference of arrival (TDOA) of each tone, and calculate the pixel's location using TDOA hyperbolic trilateration and/or multilateration methods and the known locations of the speaker nodes. In other embodiments, a speaker may emit another audio signal as its unique identifier. For example, a speaker may emit an audio chirp, where the frequency increases (up-chirp) or decreases (down-chirp) over time. Each speaker may emit an audio chirp having a different duration and/or a different start frequency and/or end frequency.
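One plausible pixel-side approach (an illustration, not a method prescribed by the patent) to estimating each tone's arrival time is to isolate a narrow band around the tone's known frequency and detect when the band energy first rises above the noise floor:

```python
import numpy as np

def tone_arrival(samples, fs, f_tone, bandwidth=200.0, k=6.0):
    """Estimate the arrival time (s) of a tone at f_tone Hz in a recording
    sampled at fs Hz. bandwidth and k are illustrative tuning values."""
    spec = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), 1.0 / fs)
    spec[np.abs(freqs - f_tone) > bandwidth / 2.0] = 0.0   # crude band-pass
    band = np.fft.irfft(spec, n=len(samples))

    magnitude = np.abs(band)                   # rectified signal (crude envelope)
    threshold = k * np.median(magnitude)       # rough noise-floor estimate
    above = np.nonzero(magnitude > threshold)[0]
    return above[0] / fs if above.size else None   # first crossing, if any
```

Differencing the arrival times of any two tones yields the TDOA values used for trilateration/multilateration; because only differences matter, the pixel's clock need not be synchronized with the speaker nodes.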
In a particular embodiment, four or more speaker nodes are used as tone emitters. In some embodiments, six speaker nodes are used. In other embodiments, up to 16 speaker nodes are used. The speaker nodes may be placed in a particular arrangement at the site of a live event. The speaker nodes may also be placed at different heights when the pixels of the light show display are at different heights to allow each pixel to determine three-dimensional positioning information. The self-positioning information allows the pixels to be independently self-addressed pixels of a multi-pixel display. Thus, a pixel may be instructed to perform different actions than other pixels in the display based on the location of the pixel. In this way, a more complex scene can be displayed than simply having all pixels display the same color and/or light or flash simultaneously. For example, the pixels may be controlled to display a moving line, spiral, swirl, halo or other effect, or any other animation or image, across the entire light show display formed by the pixels. This can be achieved by incorporating an effect layer that overlaps the representation displayed by the light show; the effects layer contains a representation of what the light show display (including all pixels participating in the light show) should look like for each frame of the scene. By knowing its position on the light show display and knowing what the entire light show should appear at each particular frame based on the effect layer, the pixel can determine the action it needs to perform for each frame in order to cause the desired animation or image.
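A minimal sketch of the effect-layer lookup follows; the frame format and coordinate conventions are assumptions made for illustration. Each frame of a scene is treated as a small bird's-eye image of the whole display 103, and a pixel samples that image at its own projected venue coordinates.

```python
# Sketch: map a pixel's venue position onto one frame of the effect layer.
def color_for_pixel(frame, x, y, venue_width, venue_height):
    """frame: 2-D grid (rows x cols) of RGB tuples covering the venue footprint;
    (x, y): this device's self-determined position projected onto the floor plan."""
    rows, cols = len(frame), len(frame[0])
    col = min(int(x / venue_width * cols), cols - 1)
    row = min(int(y / venue_height * rows), rows - 1)
    return frame[row][col]   # the color this device should display this frame
```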
Fig. 1 schematically illustrates a venue 100 for generating a light show using the methods and systems described herein. The venue 100 may be an indoor or outdoor stadium, arena, amphitheater, concert hall, theater, grandstand, open field, beach, or other area suitable for hosting a concert, sporting or competition event, show, production, fireworks show, or other live event. Pixels 102 are shown at different locations in the venue 100 and together form a display 103 of a light show. Typically, each pixel 102 is associated with a respective one of the active participants taking part in the light show. The pixels 102 are controlled by the light show control system 104 to turn on/off to participate in the light show. For clarity and simplicity of illustration, only representative pixels 102 are shown in fig. 1; however, it should be understood that the pixels 102 may be more densely populated and that more pixels 102 than shown may participate in the light show, collectively forming a multi-pixel display 103 displaying a series of scenes of the light show. The pixels 102 may all be located on a horizontal plane, or they may be located at different heights in a venue 100 with tiered, inclined, or multi-level seating.
As used herein, the term pixel(s) refers to the smallest addressable element or illumination area of the light show display 103. In a particular embodiment, each pixel 102 comprises a mobile device of an active participant. In other embodiments, the pixel 102 may comprise another type of handheld device capable of receiving signals broadcast by a beacon transmitter (or other transmitter for transmitting light show command signals to the pixels). Referring to fig. 3A, a representative mobile device 130 that may be used as a pixel 102 has a display screen 131 that serves as the pixel light source and is operable to illuminate and/or display a particular color, thereby contributing a pixel 102 of this color to the light show display 103. A color image displayed by the mobile device 130 may fill a portion or the entire area of the display screen 131 with one color. The mobile device 130 may be controlled to sequentially display different colors during the light show. In some embodiments, the mobile device 130 may be controlled to fill an area of the display screen 131 with more than one color at a time, such as a color gradient. For example, the display screen 131 may temporarily display multiple colors as the screen transitions to the next color over a gradual transition. In some embodiments, the mobile device 130 may be controlled to fill a portion of the display screen 131 while another portion presents an image and/or video capture user interface, which may be used, for example, to allow images and/or video of the live event to be taken while the mobile device 130 also serves as a pixel 102 in the display 103. In some embodiments, the mobile device 130 includes one or more other light sources, such as an LED camera flash 136 (seen, for example, in fig. 3B), which can be activated to add another visual element to the light show.
Further, the mobile device 130 may have a vibrator 135 that may be activated by a vibrate-mobile-device command received from a beacon of the light show control system 104 during a light show. Activation of the vibrator 135 may be used to communicate with the active participant through tactile feedback. For example, vibration of the mobile device 130 may be used to indicate to the active participant carrying the mobile device 130 that a light show is beginning. To encourage participation in the crowd light show, active participants may be instructed to lift their mobile devices 130 when they feel them vibrate.
As shown in fig. 1, a venue 100 may have multiple sections 101 including one or more stages 105, one or more general seating areas 106, and one or more distributed seating areas 108 (e.g., 108A, 108B, 108C, etc.). The site section 101 shown in FIG. 1 is for representative purposes only and may vary in number, layout and configuration between different sites 100 and different live events. Pixels 102 are located in those venue sections 101 where an active participant (and its mobile device 130) can be found. The pixels 102 may be controlled by the light show control system 104 to turn on/off and display colors. By synchronizing these actions across the pixels 102 of the display 103, a light show can be generated to attract people and enhance live activities. In some scenes of a light show, all pixels 102 at the venue 100 are controlled to perform the same action in synchronization. In other scenarios, pixels 102 are controlled to produce a particular image (stationary or moving) on multi-pixel display 103 (which means that pixels 102 may display a different color than pixels 102 in other areas of the venue and/or may have their display screen on, while other pixels 102 have their display screen off). To generate such an image, the determination of whether to turn on the display screen 131 of the mobile device and what color should be displayed on the display screen 131 at any time may be based on location information of the pixels, such as self-location information obtained using the methods and systems described herein.
Fig. 2 depicts a light show control system 104 for implementing a light show using a plurality of pixels 102 at a live event venue, such as venue 100 of fig. 1, in accordance with one embodiment. System 104 includes a light show controller 124, one or more beacon transmitters 122 (individually and collectively, beacon transmitters 122), and a positioning signal transmitter 125. Light show controller 124 communicates with and controls beacon transmitters 122 and positioning signal transmitter 125. The system 104 also includes a plurality of speaker nodes 126 that receive signals from and are controlled by the positioning signal transmitter 125. As described in further detail below, each component of the system 104 plays a role in enabling a light show to be generated for a crowd of active participants carrying mobile devices 130 by turning each mobile device 130 into a pixel 102 of a large multi-pixel display 103 extending over at least a portion of the venue 100. The display 103 may comprise a two-dimensional display in which all pixels 102 are on the same plane or are treated as if they were on the same plane (by projecting their positions onto the plane). In other embodiments, the display 103 comprises a three-dimensional display. This may be the case for venues such as arenas, stadiums, amphitheaters, theaters, and the like having tiered or multi-level seating, where the pixels 102 are at different heights and may be controlled based on their positions in three-dimensional space to produce the desired effect.
Light show controller 124 is configured to instruct beacon transmitter 122 to periodically broadcast a one-way BLE signal. For example, a fiber optic, RF, WiFi, or ethernet connection, etc. may be used to relay one or more messages from light show controller 124 to beacon transmitter 122 to configure the beacon signal from beacon transmitter 122. The beacon signal from beacon transmitter 122 may be detected by pixels 102 that are located within the transmission range of beacon transmitter 122. The number of beacon transmitters 122 used by venue 100 will depend on the range of the transmitters and/or redundancy considerations. In particular embodiments, beacon transmitter 122 has a transmission range of approximately 10m to 500 m. To provide redundancy and ensure transmission coverage of the venue 100 in the event of a transmitter failure, more than one beacon transmitter 122 may be positioned in the venue 100 to broadcast signals to the pixels 102 within the venue for generating a light show. The pixels 102 are configured to scan BLE signals broadcast by the beacon transmitters 122, decode the detected BLE signals, and perform certain actions in response to the decoded information to generate the light show.
The light show controller 124 instructs the beacon transmitter 122 to encode certain information on the BLE signal to produce a light show on the mobile device 130 that serves as the pixel 102 of the light show display 103. The pixels 102 may include a mobile device 130, such as a smartphone, tablet computer, or the like, or any other handheld device including a display screen or light source capable of receiving and decoding beacon signals and performing display actions based on the decoded signals. Such a mobile device 130 may be held by an audience member or may be carried or worn by an audience member (e.g., hanging on a neck strap lanyard or wrist strap) so that their display screen or light source is visible to participate in a light show.
According to particular embodiments, the signals broadcast by beacon transmitters 122 include, but are not limited to: (1) a light show command signal; (2) a heartbeat signal; and (3) a venue configuration signal. The structure of the messages conveyed by these signals, carried in data packets 180 (individually and collectively, data packets 180), is illustrated in figs. 5A, 5B, and 5C. Data packets 180 may be in accordance with a suitable BLE beacon communication protocol, such as, for example, the Eddystone beacon format. A light show command signal (e.g., which may be transmitted as data packet 180B shown in fig. 5B) transmits to pixels 102 the particular action to be performed to generate the light show (e.g., play a scene, stop playing a scene, play a scene to the end of a phrase and stop, turn off all pixels, vibrate the mobile device, activate an LED flash, etc.). The heartbeat signal (which may be transmitted, for example, as data packet 180A shown in fig. 5A) helps facilitate synchronization of the pixels 102 by transmitting a timing reference. In some embodiments, two or more types of heartbeat signals are transmitted: 1) a first heartbeat signal to facilitate pixel synchronization using a timing reference, and 2) one or more further heartbeat signals that may transmit tone IDs (identifying the tone originating from each speaker node 126) to help the pixels 102 determine their own positions when the pixels are operating in a position-aware mode. In some embodiments, rather than using the BLE signal to dynamically transmit light show commands to the mobile device 130, a series of commands for generating a light show may be preloaded onto the mobile device 130 prior to the live event. In such embodiments, a heartbeat signal is broadcast by the beacon transmitter 122 to the mobile device 130 to synchronize the execution of the light show commands preloaded on the mobile device 130. For example, a heartbeat signal may be transmitted to convey a timing reference indicating which scene the mobile device 130 should play and when to start the scene. The heartbeat signal may also convey node locations and venue dimensions, whether the light show commands are dynamically transmitted or preloaded on the mobile device 130. The venue configuration signal (which may be transmitted, for example, as data packet 180C shown in fig. 5C) transmits the locations of the speaker nodes 126 to the pixels 102 to help the pixels 102 determine their own locations. In other embodiments, the tone IDs may be transmitted in a venue configuration signal, and/or the speaker node locations may be transmitted in a heartbeat signal.
Another type of signal that beacon transmitter 122 may broadcast is a locate pixel message. This signals the pixels 102 to enter a recording mode, listen for and record the different localization audio sounds (tones, beeps, chirps, etc.) emitted by the plurality of speaker nodes 126, and determine the location of the pixel 102 based on the TDOA of the recorded audio sounds. The locate pixel message may include information conveying which audio sounds are expected from which speaker nodes 126. Alternatively or additionally, such information may be transmitted with the venue configuration signal or another signal.
Other types of signals that beacon transmitter 122 may broadcast include signals that cause the mobile devices 130 (serving as pixels 102) to display a particular image, word, phrase, or text/alphanumeric message on the display screen 131 of each mobile device 130. These types of signals may be applied to dynamically communicate information about the live event to active participants. For example, at a live sporting event, when a team scores, the scoring team's logo or team color(s) may be displayed on the pixel display screens 131. Other information that may be displayed on display screen 131 includes the number and/or name of the scoring player, the current score, the time remaining in the period, penalty information, game statistics, and the like. In some embodiments, advertisements and/or event-sponsor media may be displayed on the pixel display screen 131. In some embodiments, public safety messages may be displayed on pixel display screen 131. For example, the public safety messages may include messages containing information about emergency situations, safety threats, locations of emergency exits, emergency procedures, evacuation procedures, lockdown procedures, and the like. Beacon signals may also be generated and broadcast to the mobile devices 130 to cause sound effects, music, audible speech, etc. to be played on the mobile devices 130.
The signals broadcast by beacon transmitter 122 are under the control of light show controller 124. In a particular embodiment, the light show controller 124 is configured to run a main controller application that provides a Graphical User Interface (GUI) 123 for an operator to input commands to facilitate control of the pixels 102 via the beacon transmitters 122. (The main controller application and GUI 123 may also be used to control the positioning signal transmitter 125 and speaker nodes 126, as described in further detail herein.) The operator of the light show controller 124 may use the GUI 123 to set the scenes of the light show and may edit upcoming scenes while the light show is in progress. Further, the operator of the light show controller 124 may use the GUI 123 to transmit specific commands to the pixels 102 via the beacon transmitter 122 to start/stop playing a scene, play a scene to the end of a phrase and stop, turn off all pixels, vibrate the mobile devices, activate an LED flash, and so on. The operator may also use the GUI 123 to control other types of signals transmitted by the beacon transmitters 122 to the pixels 102, including, for example, heartbeat signals, venue configuration signals, locate pixel signals, and signals commanding the display of particular words, phrases, or messages on the pixel display screens. The GUI 123 may also be used to adjust parameters of batched beacon signal transmissions. Software for the main controller application may be stored in a program memory that is part of, or accessible by, the light show controller 124. Execution of the software instructions stored in the program memory causes the light show controller 124 to accept various inputs from the operator via the GUI 123 and to direct the encoding and transmission of particular signals by the beacon transmitters.
As shown in fig. 3B, the mobile device 130 serving as the pixel 102 includes a processor 132 that executes software instructions 133 loaded in a memory or computer data storage 134 contained in the mobile device 130. For example, the memory 134 may include a RAM (random access memory). The software instructions 133 loaded in the memory 134 may be provided, for example, in a mobile application downloaded to the mobile device 130 prior to a light show. Alternatively, the software instructions 133 may be stored in memory elsewhere accessible to the processor 132 or made available to the processor 132 through a suitable wireless or wired connection (e.g., a mobile web application). Execution of the software instructions 133 causes the mobile device 130 to:
scan for and decode beacon signals broadcast by the beacon transmitters 122;
distinguish between different types of messages, such as light show command signals, heartbeat signals, venue configuration signals, and locate pixel signals, each encoded with different information; and
perform particular actions in response to the decoded information.
In particular embodiments, a light show may be created from scenes, where each scene includes a particular sequence of actions to be performed by the pixels 102. The light show includes a series of scenes that are played in succession. Thus, one particular type of light show command encoded in a beacon signal is a "play scene" command, which contains various scene parameters that tell the mobile device 130 to turn its display screen 131 on/off and/or display a particular color on the screen 131. An operator may control the light show controller 124 to generate such play scene commands through the Graphical User Interface (GUI) 123 of the main controller application. Using the GUI 123, the operator can, among other functions, set the parameters of scenes before the light show begins and save them to a scene playlist, and set or edit the parameters of upcoming scenes during the light show (i.e., once the light show has begun).
Exemplary screen representations of GUIs 123A, 123B, and 123C (individually and collectively, GUI 123) of the main controller application running on the light show controller 124 are shown in figs. 8A, 8B, and 8C. As shown in fig. 8A, the GUI 123 includes a color palette 140 from which the operator can select colors to be displayed during the scene. The GUI 123 also includes two or more control layers for setting the scene parameters of play scene commands. In particular, GUI 123A includes a first image control layer 142 that determines a color sequence 146 of the scene, the gradual transitions 148 between colors, and how many bars (measures) the color sequence cycles over (i.e., the speed of the color sequence). GUI 123A also includes a second image control layer 144 that determines the flashing of the display screen (e.g., the flash speed, such as determined by the portion of the bar over which the flash occurs, the time the display is on and the time the display is off, fade-in/fade-out, etc.).
Each scene may be configured using the parameters set in the image control layers 142 and 144. The image control layers 142 and 144 may be combined, or layered together, when a scene is played on the mobile device 130. For example, if the first image control layer 142 is set to cycle through red, blue, and yellow during the scene and the second image control layer 144 is set to flash in a particular pattern during the scene, then for a play scene command that combines the image control layers 142, 144, the display screen 131 of the mobile device 130 will display the colors (red, blue, or yellow) specified by the first layer 142, but these colors will only be visible (i.e., the screen will only display these colors) during the periods when the flash specified by the second layer 144 is "on".
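A minimal sketch of this layer combination, assuming 4/4 time and illustrative parameter names (the real parameters are those listed below):

```python
# Sketch: layer 142 supplies the color for the current beat position and
# layer 144 gates whether the screen is lit at all.
def screen_output(t_beats, colors, bars_per_cycle, flash_on, flash_off):
    """Return the color to show at time t_beats (in beats), or None for dark."""
    beats_per_cycle = bars_per_cycle * 4                 # assuming 4/4 time
    i = int(t_beats / beats_per_cycle * len(colors)) % len(colors)
    color = colors[i]                                    # layer 142: color sequence
    phase = t_beats % (flash_on + flash_off)             # layer 144: flash pattern
    return color if phase < flash_on else None           # visible only while "on"
```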
Thus, the scene parameters of the "play scene" command encoded into the light show command signal may include the following parameters or a subset thereof:
a unique software ID;
current show ID;
a unique message ID;
the message type;
play type (e.g., linear, fade, blink, random fade, and random blink);
a set of color IDs (e.g., color IDs 1, 2, 3, 4, 5, 6, etc.) that identify the colors to be played on the scene in a sequential or random order;
fade rate (e.g., expressed in bars: the number of bars over which a particular color sequence in the scene should cycle);
gradual transitions (e.g., transitions to and beyond a particular color, expressed as a percentage);
the flicker speed;
scintillation fill;
flicker-in and flicker-out (determining fade-in/fade-out, expressed as a percentage, of a particular flicker);
a scene number;
beats per minute (bpm);
phrase flag (on/off) (if the phrase flag is "on", the mobile device 130 plays to the end of the phrase (e.g., a scene or sequence of scenes) and stops; otherwise, the mobile device 130 continuously loops through the scene(s));
and/or the like.
Fig. 5B illustrates an example structure of a data packet 180B of a light show command message, in accordance with certain embodiments. As with all data packets 180 transmitted by beacon transmitter 122, data packet 180B begins with a 1-byte preamble 181 for protocol management, followed by a 4-byte access address 182. After the access address 182, the data packet 180B contains a Packet Data Unit (PDU) 183. Data packet 180B ends with a Cyclic Redundancy Check (CRC) 184 for error checking. As shown in fig. 5B, PDU 183 contains a 2-byte header 185 and a variable payload 187' (e.g., 6 to 37 bytes in length) carrying the contents of the light show command message. The length of the payload may be defined in the header 185. The payload 187' includes a plurality of fields 187, each designated for a different portion of the message. The illustrated representative fields 187 include a message type 187A (e.g., identifying that the message is a "play scene" command or some other type of light show command described herein), a play type 187B (e.g., identifying the type of scene being played, if the message type is a "play scene" command), a set of color IDs 187C that define the colors to be played in the scene, a fade rate 187D (the rate at which the screen transitions to the next color), and beats per minute (bpm) 187E. Fig. 5B does not necessarily show all of the fields 187 that may be defined in the payload 187' of the light show command message; other fields 187 (not shown) needed for the light show command message may also be defined in the payload 187'.
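For illustration only, a payload carrying the fields shown in fig. 5B might be packed and unpacked as below. The byte widths, field order, and fixed six-color set here are assumptions made for the sketch; the actual layout is variable and defined by the header 185 and the protocol in use.

```python
import struct

# Assumed layout: message type, play type, six 1-byte color IDs, fade rate, bpm.
FMT = "<BB6sHB"

def encode_play_scene(play_type, color_ids, fade_rate, bpm, msg_type=0x01):
    return struct.pack(FMT, msg_type, play_type, bytes(color_ids), fade_rate, bpm)

def decode_play_scene(payload):
    msg_type, play_type, colors, fade_rate, bpm = struct.unpack(FMT, payload)
    return {"message_type": msg_type, "play_type": play_type,
            "color_ids": list(colors), "fade_rate": fade_rate, "bpm": bpm}
```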
A third image control layer 149 may be provided through the GUI 123A (see fig. 8A). Image control layer 149 contains a representation of a bird's-eye-view preview of the venue, indicating what the entire display 103 should look like for each particular frame when the light show command signal is transmitted to the pixels 102. The third image control layer may be used to select and display image representations of scene types, where each scene type identifies a particular predetermined scene of moving lines, spirals, swirls, halos, and/or other effects, or any other animation or image. The mobile device 130 may look up the scene type in a library of scene types stored on each mobile device 130 or accessible to the mobile device 130 through a wireless or wired connection (e.g., an internet or WiFi connection). By combining the image control layers 142, 144, and 149, and by knowing its location in the venue 100 and thus its location within the display 103 (as represented by the image control layer 149), each pixel 102 can determine what it should display in order to contribute to the desired animation or image displayed by the display 103. Apparatus and methods for pixel location determination in accordance with certain embodiments are further described below.
In some embodiments, one or more of image control layers 142, 144, and 149 are omitted, disabled, or configured as default configuration settings. For example, image control layer 149 may be disabled, or without any input to that layer, it may default to an "all display" mode (meaning that each pixel 102 displays the same image provided by image control layers 142 and 144 as all other pixels 102 regardless of its position; in other words, mobile devices 130 all play the same scene on their display screens 131; in this mode, image control layer 149 is not active).
Another type of signal that beacon transmitter 122 may broadcast periodically is a heartbeat signal. In some embodiments, the heartbeat signal may be broadcast every 5 to 15 seconds. The heartbeat signal carries information to synchronize the pixels 102 with the light show music beat and start playing the scene at the correct time. The parameters encoded into the heartbeat signal may include the following parameters or a subset thereof:
a unique software ID;
current show ID;
a unique message ID;
the message type;
beats per minute (bpm);
heartbeats since the last bpm reset;
a beat pattern type;
a beat number;
time since first beat (e.g., expressed in milliseconds);
sound speed (the speed of sound varies based on the temperature in the venue and the venue's altitude above mean sea level (AMSL); the sound speed for a light show at a particular venue can therefore be determined for a given room temperature and AMSL and communicated to the pixels to enable accurate location determination, as illustrated in the sketch following this list; in some embodiments, sound speed may be transmitted as part of the heartbeat signal, while in other embodiments it may be transmitted as part of another type of message, such as a venue configuration message or a locate pixel message);
and/or the like.
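Because the speed of sound in air is dominated by temperature (altitude matters mainly through its effect on temperature), the value to broadcast could be computed with a standard dry-air approximation. This sketch uses textbook constants, not values from the patent:

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in air (m/s) at temperature temp_c (Celsius)."""
    return 331.3 + 0.606 * temp_c

# e.g., at 20 C this gives ~343.4 m/s; a 1 ms timing error then corresponds
# to roughly 0.34 m of range error, which is why the venue-specific value
# is worth broadcasting to the pixels.
```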
Upon decoding the heartbeat signal to determine the time since the first beat and bpm, the pixel 102 can determine when to start playing the next scene once the current scene ends.
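The synchronization arithmetic this implies might look like the following sketch, which assumes for illustration that scenes start on bar boundaries and defaults to 4/4 time:

```python
def next_scene_start(ms_since_first_beat, bpm, beats_per_bar=4):
    """Milliseconds from now until the next bar boundary, where the next
    scene is assumed to begin."""
    ms_per_bar = (60_000.0 / bpm) * beats_per_bar
    elapsed_bars = ms_since_first_beat / ms_per_bar
    return ms_per_bar * (1.0 - (elapsed_bars % 1.0))
```

For example, at 120 bpm in 4/4 time (2000 ms per bar), a pixel decoding a heartbeat 5300 ms after the first beat would start the next scene 700 ms later, and every pixel that caught any copy of the heartbeat computes the same instant.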
Fig. 5A illustrates an example structure of a data packet 180A of a heartbeat message, in accordance with certain embodiments. Data packet structure 180A has a similar overall structure to data packet structure 180B (fig. 5B), but has a different format for its variable-length payload 186'. As shown in fig. 5A, the payload 186' includes, for example, a message type 186A (e.g., identifying that the message is a heartbeat), beats per minute (bpm) 186B, a beat pattern type 186C (which specifies when and how a scene is played), a beat number 186D (e.g., 2/4, 3/4, or 4/4 time), a time from the first beat 186E (which may serve as a timing reference point), and a speed of sound 186F (which may be set according to the particular venue). Other fields 186 (not shown) needed to provide the heartbeat message may also be defined in the payload 186'. In some embodiments, the beat pattern type 186C may be: "free mode", whereby the pixel 102 plays the scene as soon as data packet 180A is received; "beat pattern", whereby the pixel 102 plays, phrase-splits, or stops a scene at the end of a bar (e.g., at the end of 4 beats if the beat number 186D indicates 4/4 time); and "measures mode", whereby the pixel 102 plays, phrase-splits, or stops a scene at the end of a specified number of measures. In addition to or as an alternative to the foregoing, a time mode parameter may be defined, in which a light show operator may control the pixels 102 based on a duration defined in seconds rather than in beats and bars. For example, the pixel 102 may be controlled to play a scene for a certain number of seconds, as specified by the light show operator using the main controller application.
In addition to facilitating synchronization of the pixels 102 (to ensure, for example, that they start playing a scene at the same time), the heartbeat signal may also be used to monitor the responsiveness of the pixels. Occasionally, the Bluetooth receiver on a mobile device 130 may hang or freeze. For example, some Android devices have Bluetooth hardware that may become unresponsive after a large number of Bluetooth signals have been sent to it. The mobile device 130 may be configured such that, after a period of time (i.e., a heartbeat timeout period, such as 12 seconds) has elapsed without a heartbeat signal being received at the mobile device 130, the mobile device 130 stops displaying the scene it is currently displaying, clears the display screen and/or displays a blank screen, restarts the Bluetooth hardware, and/or restarts the Bluetooth scan.
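A minimal watchdog sketch of this timeout behavior follows. The 12-second timeout comes from the text; the callbacks are placeholders for whatever platform-specific display and Bluetooth APIs the app actually uses.

```python
import time

HEARTBEAT_TIMEOUT_S = 12.0

class HeartbeatWatchdog:
    def __init__(self, stop_scene, restart_ble_scan):
        self.last_heartbeat = time.monotonic()
        self.stop_scene = stop_scene              # e.g., blank the display screen
        self.restart_ble_scan = restart_ble_scan  # e.g., restart Bluetooth scanning

    def on_heartbeat(self):
        self.last_heartbeat = time.monotonic()    # call on every decoded heartbeat

    def poll(self):
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.stop_scene()
            self.restart_ble_scan()
            self.last_heartbeat = time.monotonic()   # avoid repeated restarts
```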
Yet another type of signal that beacon transmitters 122 may periodically broadcast is a venue configuration signal. For example, the venue configuration signal may be broadcast every 10 to 20 seconds. The venue configuration signal is encoded with the positions of the speaker nodes 126. The speaker node locations may be entered into the node location field 154 on the GUI 123B (see fig. 8B). By knowing the node locations and receiving the audio signals from the nodes, a pixel 102 can be configured to calculate its location, as described in further detail below. Other information that may be encoded in the venue configuration signal includes an origin offset, which allows conversion from the display 103 reference frame to another reference frame (e.g., a reference frame based on the center of a speaker node cluster), and tone information or tone IDs 188D identifying the frequency or other characteristic of the audio signal generated by each speaker node 126. Pixels 102 may also use such other information to determine their locations. In some embodiments, some of the information listed above (e.g., tone ID 188D) may be transmitted in the heartbeat signal in addition to, or instead of, being transmitted in the venue configuration signal.
Fig. 5C illustrates an example structure of a data packet 180C of a venue configuration message, according to certain embodiments. Data packet structure 180C has a similar overall structure to data packet structures 180A (fig. 5A) and 180B (fig. 5B), but has a different format for its variable-length payload 188'. As shown in fig. 5C, the payload 188' includes, for example, a message type 188A (identifying that the message is a venue configuration message), an origin offset 188B, a group of node locations 188C (one for each node), and tone IDs 188D (identifying tone characteristics). Other fields 188 (not shown) needed to provide the venue configuration message may also be defined in the payload 188'.
Single images may also be displayed on each pixel display screen (e.g., the display screen of the mobile device 130) during an event. As shown in fig. 8C, the live event selection field 190 of the GUI 123C may be used to select an image corresponding to the live event to be displayed on the mobile device 130. The selection field 192 of the GUI 123C may be used to select an advertisement or sponsored image to be displayed on the mobile device 130. A preview of the event image or the advertisement or sponsored image is displayed in preview fields 191 and 193, respectively. The event image may be sent for display on the mobile device 130 by clicking button 194 of the GUI 123C. An advertisement or sponsored image may be sent for display on the mobile device 130 by clicking button 196 of the GUI 123C.
As previously described, the beacon communication protocol for data packets 180 may be the Eddystone protocol. The Eddystone protocol allows data to be encoded in a custom format for communication with mobile devices running the iOS or Android mobile operating systems. Because Eddystone may be used to communicate with both iOS and Android devices, the beacon transmitter 122 need only broadcast one set of signals, thereby mitigating the timing or data communication problems that could result from sending multiple sets of signals encoded in different signal formats for different types of devices. In other embodiments, data packets 180 may be in accordance with other suitable communication protocols.
Manipulating BLE beacon signals to cause the receiving mobile devices 130 to perform certain actions involves certain challenges. For example, a mobile device 130 typically does not receive every individual BLE beacon signal. This is due, at least in part, to the fact that Bluetooth signals are typically transmitted for a shorter time than the scan interval of the mobile device 130 (e.g., a smartphone). To address these issues, redundancy may be provided by transmitting beacon signals in batches, where each batch includes multiple copies of the same message (in the form of repeated BLE data packets, all of which convey, for example, the same "play scene" command or other command). Each data packet within a batch is separated by a particular time period. For example, in some embodiments, the same message is sent 15 to 25 times (i.e., 15 to 25 copies per batch), each copy separated by a 20 ms interval. Fig. 4 illustrates a first batch 150A and a second batch 150B of duplicate data packets. In some embodiments, there are 15 data packets 153 per batch. For example, the time t_m between successive data packets 153 may be 25 ms, and the time t_b between the start of batch 150A and the start of the next batch 150B may be 500 ms (i.e., a batch 150 is sent every 500 ms). The number of data packets 153 per batch 150, the time between transmission of consecutive data packets 153, and the time between successive batches 150 may vary for different configurations of beacon transmitters 122 or light show control systems 104. The beacon message transmission control panel 156 on the GUI 123B of fig. 8B may be used to configure certain data packet transmission parameters, including the time t_b between batch transmissions (or the frequency of batch transmissions), the number of batch transmissions per message, the number of data packets per batch, the time t_m between consecutive data packets within a batch, and the like. By sending multiple copies of the same message in a batch, the chance of the mobile device 130 receiving each command is greatly increased compared to sending each message only once. In some embodiments, the number of data packets per batch and the time t_m between successive copies of a message are configured such that the reception rate of the mobile devices 130 is 95% or higher.
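The value of batching can be seen with a simple estimate. If a single advertisement is received with probability p, then at least one of n independent copies is received with probability 1 - (1 - p)^n (independence is an idealization; real losses can be bursty):

```python
def batch_reception_probability(p_single, n_copies):
    """P(at least one of n_copies received), assuming independent losses."""
    return 1.0 - (1.0 - p_single) ** n_copies

# e.g., with p_single = 0.25, 15 copies give about 0.987 and 25 copies about 0.999.
```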
Unlike the use of audio signals (where the pixels 102 may receive the same audio signal communication at different times due to their different distances from the transmitter), any synchronization issues caused by the propagation time of BLE signal communication are generally negligible since BLE signals propagate at the speed of light. Other sources of synchronization problems (other than the problems that may result from transmission delays due to signal speed) are more likely to be significant. These sources of delay may be related to, for example, hardware or firmware constraints or data processing constraints of the mobile devices 130 or differences in beacon signal reception/processing capabilities between different mobile devices 130. Some methods for addressing these issues are described in further detail herein.
One timing problem is that, where duplicate data packets are sent in batches for redundancy purposes as described above, the timing reference information in the heartbeat signal is no longer accurate for all instances of the data packet, since the data packets are sent at slightly different times within a batch. Furthermore, the response time may differ between different mobile devices, such as, for example, different models of smartphones, which may cause synchronization problems when a light show is played across different mobile devices that receive and respond to data packets at slightly different times. To address these issues, the timing reference information (e.g., the time since the first beat) is incremented for each packet within the same batch to ensure that the packets received by the mobile device 130 contain timing information that is precisely synchronized to the first beat. For example, assume the interval t_m between consecutive data packets within a batch is 25 ms and the timing reference of the first data packet is t_R; then the timing references of the subsequent data packets in the batch are t_R+25ms, t_R+50ms, t_R+75ms, and so on. This granular timing reference information sent with the heartbeat signal enables the mobile devices 130 to synchronize more accurately.
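A minimal sketch of this per-packet timing-reference increment, using the t_m = 25 ms example above (the function and field names are illustrative):

```python
T_M_MS = 25  # assumed interval between consecutive packets within a batch

def batch_timing_references(t_r_ms, packets_per_batch):
    """Timing reference for each duplicate packet: t_R, t_R+25, t_R+50, ..."""
    return [t_r_ms + i * T_M_MS for i in range(packets_per_batch)]

# e.g. batch_timing_references(1000, 4) -> [1000, 1025, 1050, 1075]
```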
Another problem with the use of BLE beacon signals stems from the fact that BLE beacon data packets originating from beacon devices are typically configured to have the same MAC address. On some mobile devices, once the mobile device 130 has received a data packet from one beacon transmitter, it may ignore subsequent data packets from the same beacon transmitter because the mobile device 130 recognizes the same MAC address in the subsequent data packets. To ensure that the mobile device 130 does not ignore subsequent bursts of BLE beacon signals containing new information, in certain embodiments the beacon transmitter 122 is configured, through the GUI 123 of the main controller application, to update the MAC address on the transmitter and encode the new MAC address on the signal for each group of packets conveying a new message, thereby causing the operating system of the receiving mobile device 130 to interpret the batches as originating from a new beacon device. This causes the mobile device 130 to read each subsequent burst of the signal. Updating of the MAC address on the transmitter 122 may be accomplished through custom firmware installed on the beacon transmitter 122 or other suitable means.
In the case where the operating system of the mobile device 130 does not provide access to the MAC address on the beacon signal (e.g., as is the case with at least some iOS devices), the mobile device 130 may effectively be prevented from recognizing or distinguishing a command from the previous command if the two commands are the same, because the mobile device 130 cannot read the MAC address (even though it can recognize that a new MAC address exists). This may occur, for example, if the light show operator wants to repeat a phrase or display the same scene twice in a row, and therefore sends the same light show command twice. To facilitate distinguishing the two commands and to prevent the mobile device 130 from ignoring a subsequent identical command (which could cause an iOS mobile device 130 to fall out of synchronization with an Android mobile device 130), each message is encoded with a unique identification number. This identification number may be included, for example, in the data packet payload. In this manner, mobile devices 130 that would otherwise ignore a subsequent command because they cannot read the MAC address from the beacon signal recognize the subsequent command as different from the previous command because of the different identification number transmitted with the command.
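A minimal sketch of how a receiver that cannot read the beacon MAC address might use this identification number to distinguish otherwise identical commands (the packet field name is an assumption for illustration):

```python
last_message_id = None

def is_new_message(packet):
    """Treat a packet as new only if its unique ID differs from the last one.

    Duplicate copies within a batch carry the same ID and are skipped; a
    repeated command carries a new ID and is therefore still recognized."""
    global last_message_id
    if packet["message_id"] == last_message_id:
        return False
    last_message_id = packet["message_id"]
    return True
```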
Fig. 6 illustrates a method 160 for decoding beacon signals and generating a light show on a mobile device according to one embodiment. The method 160 may be implemented by the mobile device 130 executing software instructions. The software instructions are loaded from memory or data storage on the mobile device 130, or are made available to the mobile device 130 through a wireless or wired connection (e.g., a web mobile application). The software instructions loaded in memory may be provided, for example, in a mobile application downloaded to the mobile device 130 prior to a light show. The steps of method 160 may be performed each time a new batch of beacon messages is detected by the mobile device 130. For example, in some embodiments, method 160 may be performed every 20-30 ms (e.g., the time between successive data packets 153 within a batch 150 (see Fig. 4)). Each mobile device 130 is controlled by light show commands, which may be generated by an operator of the light show controller 124 during the light show. The method 160 facilitates dynamic or real-time updating of light show scene parameters through the GUI 123 interfacing with a main controller application running on the light show controller 124, since each mobile device 130 receives and decodes light show commands shortly before each scene and determines from these commands what actions it needs to perform in order to display the scene.
The method 160 begins at block 162, where signals from the beacon transmitters 122 are scanned for and received. When a beacon signal is detected, the method proceeds to block 164, where the message type (e.g., light show command, heartbeat, venue configuration, or location pixel) is decoded from the received data packet at the mobile device 130. Other data may be decoded from the data packet to enable parsing and interpretation of the content of the data packet. At block 166, the remaining content of the data packet payload is decoded based at least on the message type. The content of the data packet payload provides instructions to the mobile device 130 to perform certain actions described herein. For example, if the message type is a light show command, the decoded content may include specific light show command parameters (such as color ID, fade rate, etc.) of the "play scene" command, which control what is played for each scene of the light show on the display screen 131 of the mobile device 130.
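The decoding performed at blocks 164 and 166 might be sketched as follows. The byte layout and numeric type codes here are purely illustrative assumptions; the actual packet structures are those of the embodiments described with reference to the figures.

```python
def decode_packet(payload: bytes) -> dict:
    """Decode message type (block 164), then type-specific content (block 166)."""
    msg_type, body = payload[0], payload[1:]
    if msg_type == 0x01:    # assumed code for a light show "play scene" command
        return {"type": "light_show_command",
                "color_id": body[0],
                "fade_rate": body[1]}
    if msg_type == 0x02:    # assumed code for a heartbeat
        return {"type": "heartbeat",
                "time_since_first_beat_ms": int.from_bytes(body[:4], "big")}
    if msg_type == 0x03:    # assumed code for a venue configuration signal
        return {"type": "venue_configuration", "raw": body}
    if msg_type == 0x04:    # assumed code for a location pixel message
        return {"type": "location_pixel", "raw": body}
    raise ValueError(f"unknown message type {msg_type:#x}")
```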
At block 168, the method 160 continues by checking whether a heartbeat signal has been received within a time period corresponding to a heartbeat timeout period. The heartbeat signal contains the timing reference information discussed herein. If no heartbeat signal is received within the timeout period, the mobile device 130 is instructed to perform certain actions, such as: stopping the display of the scene it is currently displaying, clearing the display screen on the mobile device and causing the device to display a blank screen, and/or resuming scanning for beacon signals. The check performed at block 168 may be performed only once per heartbeat timeout period (i.e., if the heartbeat timeout period has not elapsed, the method 160 jumps to the next step 170). In some embodiments, the step at block 168 is omitted. In other embodiments, the step at block 168 is optional, and the operator of the light show controller 124 may choose to disable the transmission of the heartbeat signal. For example, the operator may use the GUI 123B (Fig. 8B) to select the "enable heartbeat" bar 155 to enable or disable the broadcast of the heartbeat signal. To disable the heartbeat signal, a signal communication can be sent to the mobile devices 130 to control the devices to stop responding to the absence of the heartbeat signal within the heartbeat timeout period (e.g., their display screens 131 are not automatically cleared when the heartbeat signal is not received, their Bluetooth hardware is not restarted when the heartbeat signal is not received, etc.). To re-enable the heartbeat signal, a signal communication may be sent to the mobile devices 130 that causes them to look for the heartbeat signal and respond accordingly if they do not receive it within the heartbeat timeout period.
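The heartbeat-timeout check of block 168 might look like the following sketch; the timeout value and the device methods are assumptions for illustration, not part of any specified API.

```python
import time

HEARTBEAT_TIMEOUT_S = 5.0  # assumed heartbeat timeout period

class HeartbeatWatchdog:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.enabled = True              # the operator may disable heartbeat checks

    def on_heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def check(self, device):
        if not self.enabled:
            return
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            device.stop_current_scene()  # hypothetical device actions
            device.clear_display()
            device.resume_beacon_scan()
```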
The method 160 then proceeds to block 170, where the parameters or commands decoded at block 166 are applied or executed to display the light show. Certain message types (e.g., location pixel and venue configuration) cause the mobile device 130 to determine information such as the mobile device or pixel location and the speaker node 126 locations. Other message types (e.g., light show commands) may cause the mobile device 130 to apply the determined pixel location and speaker node locations and play the scene on the display screen 131 of the mobile device 130 based on the light show parameters. The heartbeat signal providing the timing reference point may be used to synchronize the light show display action performed by the mobile device 130 at block 170 with the light show display actions of other mobile devices 130. In some embodiments, each mobile device 130 queues the action and waits until exactly the right time (based on the timing reference information) to display the next scene of the light show. In particular embodiments, the timing accuracy is such that each mobile device 130 is synchronized to within ±5 milliseconds of the heartbeat timing reference point.
Another potential synchronization problem is that, when an appropriate signal arrives at the mobile device 130, the visual objects in the code take time to compile and store in memory or data storage. Because the mobile device 130 has a response delay in creating the visual object for a scene, in some cases this may result in an unsynchronized light show effect even when heartbeat signals are used to synchronize the mobile devices 130 for the light show. To address this issue, the processor 132 of the mobile device 130 may be configured so that, when the mobile device 130 receives a light show command signal such as a play scene command, it compiles the visual objects from the code to generate the effects in advance, and stores the visual object files in memory (such as RAM) or non-volatile data storage (such as flash memory) before the beat on which the scene is to be played. When the appropriate beat arrives (as determined from the timing reference information provided in the heartbeat signal), the processor 132 of the mobile device 130 immediately invokes and executes the pre-generated visual object to begin playing the scene, thereby avoiding or minimizing the response time delay of the mobile device 130 that would otherwise occur if the processor 132 had to generate the visual object for the scene without sufficient lead time. To ensure that there is space in memory for further visual objects, a visual object can be deleted once it has been used/invoked to play a scene and is no longer needed.
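A minimal sketch of this pre-compile-then-invoke pattern (the compile and display calls are stand-ins for whatever rendering machinery the mobile application actually uses):

```python
visual_cache = {}

def on_play_scene_command(scene_id, scene_params, compile_visual):
    # compile the visual object ahead of the scene's beat, so playback can
    # begin without any compile-time delay
    visual_cache[scene_id] = compile_visual(scene_params)

def on_scene_beat(scene_id, display):
    # pop() deletes the visual once used, freeing memory for later visuals
    visual = visual_cache.pop(scene_id, None)
    if visual is not None:
        display.play(visual)   # hypothetical display API
```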
In some embodiments, the display of the light show on the mobile device 130 is facilitated by a mobile application running on the mobile device 130 that utilizes flash hardware functionality to turn a camera flash or other light source on the mobile device on and off to generate visual effects of the light show. This flash function may be complementary to operation of the mobile device 130 in the "color mode" (i.e., displaying colors on the display screen 131 of the mobile device 130 to play the scenes of the light show as described above). Some mobile devices 130, especially Android smartphones, may generate heat when the flash hardware is engaged. To mitigate these effects, the mobile application may be programmed to cause the mobile device 130 to engage the flash hardware only when the mobile device 130 is operating in "flash mode," avoiding the need to keep the hardware engaged and ready for use at all times. When the mobile device 130 is operating only in the "color mode" and not in the flash mode, the flash hardware may be disabled.
Fig. 7 illustrates a method 200 for determining scene parameters and encoding beacon signals with the scene parameters. The encoded beacon signal generated and broadcast as a result of performing the method 200 may be received by the mobile device 130 and decoded using the method 160 of Fig. 6 to generate commands for the mobile device 130 to display a light show. As with the method 160 of Fig. 6, the method 200 of Fig. 7 facilitates dynamic or real-time updating of light show scene parameters via the GUI 123 interfacing with a main controller application running on the light show controller 124. The method 200 begins at block 202, where input from an operator of the light show controller 124 is received at the GUI 123. The inputs may correspond to scene parameters and other parameters that define and set up a light show, such as location-specific parameters and heartbeat signal parameters. The method 200 proceeds to blocks 204 and 206, where the content of the beacon data packet is determined based on the input received at block 202. These steps may include, for example, identifying a message type and identifying particular parameters for that message type. For example, if the message type identified at block 204 is a light show "play scene" command, then at block 206 the method 200 continues by identifying, based on the input received at block 202, a play type (e.g., straight, random fade, or random blink), a set of color IDs for the colors to be played in the scene, a fade rate, a fade transition, and so forth. The parameters are then encoded onto the BLE signal by the beacon transmitter 122 at block 208. The encoded signal is broadcast by the beacon transmitter 122 at block 210.
Although the apparatus and method for generating a light show display across multiple pixels 102 is described above with reference to using BLE signals broadcast from beacon transmitters 122 and received by mobile device 130, in other embodiments, other types of wireless signals may be used to control the pixel devices (e.g., by transmitting different types of messages, such as light show commands, venue configurations, and location pixels) according to methods similar to those described above. For example, a WiFi signal, an audio signal, a regular or classic bluetooth signal (as opposed to bluetooth low energy), etc. may be used to relay one or more messages to the mobile device 130 to generate a light show. When the device serving as pixel 102 comprises an RF receiver or an IR receiver, then an RF transmitter or an IR transmitter (as appropriate) may be used to relay the light show communication (as an RF or IR signal) to the device according to the methods described herein.
An apparatus for enabling the location of each pixel 102 to be determined is shown in fig. 2. The light show control system 124 includes a positioning signal emitter 125 that controls a plurality of speaker nodes 126. Each speaker node 126 includes a receiver 127, a tone generator 128, an amplifier 118, and an omnidirectional speaker array 119. In some embodiments, there are four speaker nodes 126. In other embodiments, there are six speaker nodes 126, as shown, for example, in fig. 9. Some embodiments of the light show control system 124 employ up to 16 speaker nodes 126. The positioning signal transmitter 125 transmits a signal to each receiver 127, which in turn controls the tone generator 128 to generate an electrical signal of the tone, which is provided to the audio amplifier 118. The output of the audio amplifier 118 drives the speaker array 119 to produce a sound or audio signal unique to the speaker node 126, such that each speaker node 126 can be identified by its audio signal. Fiber optic, RF, WiFi, or Ethernet connections, for example, may be used to convey the signals from the positioning signal transmitter 125 to each receiver 127. The audio signal emitted by a speaker node 126 may be a periodic tone. The tone generator 128 of each node 126 generates a different predetermined tone or other audio signal (such as an up or down chirp). In particular embodiments, the tones are in the range of 16-24 kHz, or in some other range such that they are generally inaudible to the human ear but can be detected by the audio transducer of the mobile device 130. In a particular embodiment, each speaker node 126 includes an electroacoustic transducer that emits a clear, inaudible sound signal. Each pixel 102 may include a mobile device 130 having a processor 132 that executes software 133 stored in a memory or data storage 134 of the mobile device 130, which causes the mobile device 130 to perform the steps of: listening for and recording the predetermined tones from the speaker nodes 126; playing back and processing the recorded signals to determine a time difference of arrival (TDOA) for each tone; and computing the location of the pixel 102 using TDOA hyperbolic trilateration and/or multilateration methods. The software instructions 133 stored in the memory 134 may be provided, for example, in a mobile application downloaded to the mobile device 130 prior to a light show. Alternatively, the software instructions 133 may be stored in memory or data storage elsewhere accessible to the processor 132, or made available to the processor 132 through a suitable wireless connection.
To enable identification of the speaker nodes 126 by their tone, a set of tone IDs may be defined, each tone ID corresponding to a particular tone. Particular embodiments provide up to 16 tone IDs, for a total of 16 unique tones. Each speaker node 126 is configured to emit a certain tone from this list of tones. The tone emitted by the cluster of speaker nodes 129 may be selected by a light show operator based on the acoustics of the venue 100. For example, in different settings, certain tones may work better in combination with each other to deliver audio signals for trilateration and multilateration purposes. Typically, the tone emitted by a particular speaker node 126 remains constant throughout the light show.
In particular embodiments, the clusters 129 of speaker nodes 126 are placed in a particular arrangement at the venue of a live event, and at different heights, to allow three-dimensional positioning information to be determined for each pixel 102. The position information determined for each pixel using the methods and systems described herein is more accurate than a position determined by GPS. For example, while GPS determines location coordinates to within about 8 meters of the actual location, the hyperbolic trilateration and multilateration methods and systems described herein have been applied to determine the location of a pixel 102 to within about 1 meter of the pixel's actual location. Particular embodiments of the hyperbolic trilateration and multilateration methods and systems described herein have been applied to determine the location of a pixel 102 to within 10 cm (or less) of the pixel's actual location.
The location of the speaker node 126 may be determined using a laser distance and angle gauge or a suitable device or method for determining the location of an object. The known coordinates of the speaker nodes may be entered into the light show control system 124 using the GUI 123. The known coordinates of each speaker node 126 may be encoded in the site configuration signal (e.g., having a data packet structure as shown in fig. 5C) and broadcast by the beacon transmitter 122 to the pixels 102 as described above to enable determination of the location of each pixel. In some embodiments, a plurality of fixed speaker nodes 126 at predetermined known locations may emit audio sounds to help other speaker nodes 126 in venue 100 determine their locations, similar to how pixels 102 determine their locations based on TDOA of recorded audio sounds emitted from speaker nodes 126. Once these other speaker nodes 126 have determined the coordinates of their locations, such information may be transmitted to the light show controller 124 to be included in a set of speaker node coordinates encoded in the venue configuration signal.
Fig. 9 shows an overhead view of an exemplary speaker node cluster 129 at venue 100, which includes six speaker nodes 126A, 126B, 126C, 126D, 126E, and 126F (individually and collectively, speaker nodes 126). Each speaker node 126 includes a tone generator 128 (shown in fig. 2) that causes the speaker node 126 to emit an audio signal unique to that speaker node. In particular embodiments, all speaker nodes 126 in the cluster 129 emit their audio signals simultaneously. A pixel 102 located at point x determines the coordinates of point x by receiving the sounds emitted by the speaker nodes 126. Unless the pixel 102 happens to be located equidistant from all speaker nodes 126, the pixel 102 receives the audio signals at slightly different times due to the time required for the sound to travel to the pixel 102. By knowing the location of each speaker node 126, the characteristics of the audio signal emitted by each speaker node 126 (which may be conveyed, for example, by the tone ID), and the speed of sound in the environment, the pixel 102 can calculate its distance d from each speaker node 126. The speed of sound may be determined from the current room temperature and above mean sea level (AMSL) readings entered as inputs to the main controller software application and transmitted as part of the heartbeat signal, as previously described. Each calculated distance d defines a sphere 121 around a speaker node 126, which represents a set of possible positions of the pixel 102 relative to that speaker node. For example, in fig. 9, the pixel 102 is located at distance d_2 from point P_2, which is a point on the sphere 121A around the speaker node 126A. The pixel 102 is located at distance d_3 from point P_3, which is a point on the sphere 121B around the speaker node 126B. For ease of illustration, each sphere 121 is shown as a circle in fig. 9. By finding the intersection of the spheres 121, a multilateration method can be used to determine the coordinates of the pixel 102 in three-dimensional space.
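Expressed in equations (a sketch of the geometry just described, with notation introduced here rather than taken from the figures): if speaker node i is at known position (x_i, y_i, z_i), c is the venue sound speed, and Δt_i is the measured arrival-time difference of node i's tone relative to the earliest-arriving node, then

```latex
\begin{aligned}
d_i &= d_0 + c\,\Delta t_i, \\
d_i^2 &= (x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2 ,
\end{aligned}
```

where d_0 is the unknown distance to the reference node. Each equation constrains the pixel to a sphere 121 of radius d_i; solving the resulting system for (x, y, z) and d_0 is the multilateration step.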
The locations of the pixels 102 and speaker nodes 126 may be defined based on a three-dimensional Cartesian (x, y, z) frame of reference. Fig. 9, for example, shows a Cartesian (x, y, z) coordinate reference system. To calculate the position of a pixel 102, it may be more convenient to shift the origin of the initial frame of reference by (x_0, y_0, z_0), so that a point (x, y, z) maps to (x + x_0, y + y_0, z + z_0). The origin offset information may be sent to the mobile device 130 as part of a venue configuration message. In other embodiments, different types of coordinate systems may be used for the reference system, such as, for example, spherical or cylindrical coordinate systems. In some embodiments, a three-dimensional coordinate system is not required. For example, the mobile devices 130 that make up the light show display 103 may be located entirely on the same plane (e.g., an outdoor arena, or a seating/ingress area of an arena that is level with the ground). In this case, another suitable reference system may be used, such as a reference system based on Cartesian or polar coordinates in the plane (x, y). In some other embodiments, where the mobile devices 130 are not all located on the same plane, their locations projected onto a single plane may be used to identify positions on the two-dimensional display 103 (e.g., defined in a Cartesian coordinate system in that plane).
A method 250 that may be performed by the processor 132 of the mobile device 130 to determine its location is illustrated in fig. 10. The steps of method 250 may be implemented as software instructions 133 provided by a mobile application and stored in the memory 134 of the mobile device 130. The method 250 may be performed by the mobile devices 130 when they are operating in a location-aware mode (and thus also receiving, via heartbeat signals and/or venue configuration signals, the information about tone IDs, speaker node locations, and/or sound speed that the mobile devices 130 need in order to determine their locations). Method 250 begins at block 252, where a telephone locator signal 251 is received at the mobile device 130 from the beacon transmitter 122. This signal informs the mobile device 130 that the speaker nodes 126 are about to emit audio signals. The telephone locator signal may be broadcast by the beacon transmitter 122, for example, every 10 seconds, with a 2-second recording period. At block 254, this signal causes the mobile device 130 to be placed in a recording mode. When in the recording mode, the mobile device 130 records the audio signals or tones 253 received at the mobile device (i.e., by recording for a period of 2 seconds) and saves them to memory or data storage on the mobile device 130. As described herein, the audio signals 253 are emitted simultaneously, and each speaker node 126 emits a unique audio signal. Once the audio signals are recorded, the mobile device 130 exits the recording mode (stops recording), and the method 250 proceeds to block 256, where the mobile device 130 processes the recorded audio signals 253 using filtering and audio recognition functions to identify the frequency of each audio signal and the TDOA 255 of each audio signal. The method 250 then proceeds to block 258, where the processor 132 of the mobile device 130 performs trilateration and multilateration functions to determine the location of the mobile device 130. The function performed at block 258 accepts as inputs the TDOA values 255 determined at block 256, the known node locations 259 of the speaker nodes 126, the tone IDs (identifying the tone emitted by each speaker node 126), and the venue sound speed 257 at the given temperature and AMSL. The known speaker node locations 259, tone IDs, and sound speed 257 can be communicated to the mobile device 130 through one or more of a venue configuration message, a heartbeat signal message, and/or a location pixel message. At block 260, the output of block 258, i.e., the determined pixel location 261 of the mobile device 130, is saved to memory or data storage so that it can later be applied to determine what action the mobile device 130 should perform and/or what image should be displayed at any given time to produce the desired animated scene.
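A minimal sketch of the multilateration function of block 258, using a standard linearization of the TDOA equations and a least-squares solve; this is one common formulation, not necessarily the exact computation used in the described embodiments.

```python
import numpy as np

def multilaterate(nodes, tdoas, c):
    """nodes: (N, 3) array of known speaker node positions; tdoas: (N,)
    arrival-time differences in seconds relative to nodes[0] (tdoas[0] == 0);
    c: venue sound speed in m/s. Returns the estimated (x, y, z) of the pixel."""
    p0 = nodes[0]
    rows, rhs = [], []
    for pi, tau in zip(nodes[1:], tdoas[1:]):
        # linearized constraint: 2*(pi - p0)·x + 2*c*tau*d0
        #                        = |pi|^2 - |p0|^2 - (c*tau)^2
        rows.append(np.append(2 * (pi - p0), 2 * c * tau))
        rhs.append(pi @ pi - p0 @ p0 - (c * tau) ** 2)
    solution, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return solution[:3]   # solution[3] is d0, the distance to nodes[0]
```

With the six-node cluster of fig. 9, this system is overdetermined (five equations, four unknowns), which helps tolerate noisy TDOA measurements.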
As shown in fig. 2, each speaker node 126 includes a tone generator 128 that causes the speaker node, in response to receiving a signal from the positioning signal emitter 125, to emit a unique audio signal different from the audio signals of all other speaker nodes. For purposes of explanation herein, the audio signals are referred to as "tones" and the audio signal generator 128 is referred to as a "tone generator," but it should be understood that the audio signals may alternatively include beeps, chirps, etc., or any other suitable audio signal that may be emitted by the speaker nodes 126 and received by the mobile device 130 to perform TDOA trilateration and multilateration techniques. The tone generator 128 is controlled by commands provided by the main controller application (which, as noted above, may also be used to control the mobile devices 130 through the beacon transmitter 122). The main controller application may accept input or commands from the light show operator via the GUI 123. The commands generated by the main controller application, when executed, cause the positioning signal transmitter 125 to transmit a signal to the receiver 127 of each speaker node 126. The signal sent from the positioning signal transmitter 125 to the receiver 127 may be an RF signal, in which case the positioning signal transmitter 125 is an RF transmitter. In other embodiments, the signal may be a WiFi signal, a Bluetooth transmission, or a signal using another suitable wireless transmission technique. As previously mentioned, the speaker nodes 126 are preferably controlled to emit their tones at essentially the same time (e.g., within 2 ms of each other, and in particular embodiments within 1 ms of each other) so that TDOAs can be identified for performing multilateration and trilateration. Each speaker node 126 contains configurable settings that determine what tone the speaker node 126 generates. In particular embodiments, the settings may be configured from the light show controller 124 over fiber optic, RF, WiFi, or Ethernet connections, and the like. In certain embodiments, the tones may be stored as pre-recorded wav files on the speaker node 126.
RF and Bluetooth signals may introduce lag time in the signal transmission process. This may adversely affect the accuracy of the image being displayed for the light show. One solution developed by the inventors to address this problem, particularly in the case of RF signals, is to send multiple equally spaced RF signals from the positioning signal transmitter 125 to the speaker nodes 126. Each speaker node 126 listens for and records each transmission and, as each transmission arrives, determines at which millisecond within the current second it received the signal. For example, the speaker node 126 may clock the first signal at 50 ms past one second, the second signal at 56 ms past the next second, and the third signal at 46 ms past the third second. The speaker node 126 stores these three millisecond time values for analysis. The speaker node 126 takes the lowest number in the set of stored time values as the most accurate value (46 ms in the example above), and during the next second the speaker node 126 waits until its clock reaches this most accurate time value (in ms). When the clock reaches this value, the speaker node 126 generates and emits its audio signal. Because the clock of each node 126 is separate from the clocks of the other nodes and may not be synchronized with the other nodes 126, the above-described method ensures that the nodes emit their audio signals at essentially the same time (i.e., typically within 2 ms of each other, and more particularly within 1 ms of each other in some embodiments).
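A sketch of this lowest-offset rule on the speaker node side (the function names and the use of the wall clock are illustrative assumptions; the range guard anticipates the error check described in the next paragraph):

```python
import time

def wait_and_emit(offsets_ms, emit_tone, threshold_ms=10):
    """offsets_ms: millisecond-of-second values at which the equally spaced RF
    signals were clocked, e.g. [50, 56, 46]. Emits the tone when the local
    clock next reaches the lowest (most accurate) offset."""
    if max(offsets_ms) - min(offsets_ms) > threshold_ms:
        return False                       # bad set of transmissions: skip this round
    target = min(offsets_ms) / 1000.0      # most accurate value, as seconds
    now = time.time()
    next_tick = int(now) + 1 + target      # the target offset within the next second
    time.sleep(max(0.0, next_tick - now))
    emit_tone()
    return True
```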
To prevent errors and mismatches in the beep time, if the speaker node 126 recognizes that the millisecond value it stored from the last round of received transmission has a range greater than some predetermined threshold (e.g., 10 milliseconds), the node 126 avoids emitting audio signals in this round. However, the other speaker nodes 126 may still continue to emit audio signals. Redundancy may be provided in the speaker node cluster 129 by employing more than the minimum four speakers required for trilateration, such as providing a cluster of six speakers as shown in fig. 9, so that if one or two speaker nodes 126 have determined that they need to wait for a round due to receiving a poor set of transmissions, such speaker node(s) 126 typically do not impede effective operation of the speaker node cluster because the audio signals emitted by the remaining speaker nodes 126 may be used to determine the location of the mobile device 130. Each speaker node 126 may include a single board computer, microcontroller, or any other suitable processing and control unit configured to enable it to time, store, and process the arrival times of the RF control signals as described above to control the generation of audio signals in synchronization with the other speaker nodes 126.
In some cases, a tone object (corresponding to the tone to be emitted by the tone generator 128) is generated by the speaker node 126 in response to a signal received from the positioning signal emitter 125. Dynamically generating tone objects may introduce lag time. This may adversely affect the accuracy of the image to be displayed. To solve this problem, once an audio signal has been played by a speaker node, its tone object is destroyed from memory, and a new tone object is pre-generated by pre-running a tone generator script and stored in memory for immediate playback upon receipt of the next appropriate signal. In some embodiments, erasing tone objects from memory once they have been used and pre-generating new tone objects for immediate playback in this way produces more reliable timing.
Another problem is that echoes of tones emitted by speaker nodes 126 at site 100 may adversely affect the accuracy of the image being displayed on display 103. Recording the echo of the tone at pixel 102 may result in an inaccurate determination of TDOA, thus resulting in the use of trilateration and multilateration to calculate an incorrect position. To mitigate the effects of echo, each pixel 102 may be configured to listen only for the beginning of each tone from speaker node 126. Once a tone is detected, pixel 102 is configured to not continue searching for this tone until the next round of tone is emitted.
In particular embodiments, the audio signal emitted by each speaker node 126 is in the form of a chirp, such as a short linear sinusoidal sweep. For example, signal one from the tone generator 128 of the first speaker node 126A may sweep from 19500 Hz to 19600 Hz in 100 ms, signal two from the tone generator 128 of the second speaker node 126B may sweep from 19750 Hz to 19850 Hz in 100 ms, and so on. Other sweep ranges may be used in other embodiments. One advantage of using chirps over steady-state tones is that the cross-correlation of the recorded audio data with a chirp has a more pronounced peak; the cross-correlation with a steady-state sinusoid resembles a triangle whose width is that of the test signal. Another advantage is that, when using chirps, only about 100 ms of sound from each speaker is needed to detect the chirp and perform TDOA multilateration. Appropriate audio circuitry may be integrated into the speaker node 126 system to produce the desired clear high-frequency output of the tone, beep, or chirp generated by the tone generator 128.
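A minimal sketch of chirp detection by cross-correlation on the recorded audio, using the example sweep above (the sample rate and helper names are assumptions; scipy provides the chirp template and the correlation):

```python
import numpy as np
from scipy.signal import chirp, correlate

FS = 48000                                    # assumed recording sample rate

def make_template(f0, f1, duration=0.1):
    """Linear sinusoidal sweep from f0 to f1 Hz over `duration` seconds."""
    t = np.arange(int(FS * duration)) / FS
    return chirp(t, f0=f0, t1=duration, f1=f1)

def arrival_sample(recording, template):
    """Index of the chirp onset in `recording`: the pronounced correlation peak."""
    corr = correlate(recording, template, mode="valid")
    return int(np.argmax(np.abs(corr)))

# TDOA between two nodes' chirps within the same recording, in seconds:
# (arrival_sample(rec, make_template(19500, 19600))
#  - arrival_sample(rec, make_template(19750, 19850))) / FS
```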
The self-positioning information obtained by performing method 250 of FIG. 10 allows pixel 102 to be an independently self-addressable pixel of multi-pixel display 103. Thus, pixels 102 may be directed to perform different actions than other pixels 102 in display 103 based on their location. In this way, a more complex scene may be displayed than simply having all pixels 102 display a single color or having all pixels 102 flash simultaneously. For example, the pixels 102 may be caused to display a moving spiral. This may be accomplished by incorporating a control layer 149 in the main controller application that contains a representation of the image displayed on the display 103, as discussed above with reference to FIG. 8A. The image may be associated with a scene type encoded in the light show command signal, and the scene type identifies what the display 103 should look like over a series of frames of the scene. By knowing its location within the display 103 representation on the control layer 149, each pixel 102 can determine what it should display for each frame of the scene. Display 103 may comprise a three-dimensional display in which mobile devices 130 are located at different heights in venue 100, and in which it is desirable to control devices 130 based on their location in three-dimensional space (e.g., by controlling mobile devices 130 on upper venues differently than mobile devices 130 on lower venues). Alternatively, the display 103 may comprise a two-dimensional display in which the mobile device 130 is located on the same plane in the venue 100, or is not located on the same plane, but requires control of the device as if the device's location were projected on the same display plane (e.g., only considering the (x, y) Cartesian plane coordinates of pixel locations).
Due to the limited processing capabilities of current mobile phone hardware, there are often challenges associated with real-time processing of audio signals for frequency detection. To accommodate the capabilities of the mobile device, the audio signals emitted by the speaker nodes 126 may be recorded by the receiving mobile device 130 as raw audio data. Once the mobile device 130 leaves the recording mode or stops recording audio signals, frequency detection and location determination may then be performed based on the recorded audio data.
The hardware processing power may vary from mobile device to mobile device (e.g., between different smartphone models), which may cause the lighting effects to lose synchronization after a short period of time. By way of explanation, key frames in animation, computer graphics, and movie production are drawings that define the starting and ending points of any smooth transition. These drawings are called "frames" because their position in time is measured in frames. For the light shows produced by embodiments of the techniques described herein, key frames may be represented by solid colors or various shapes (e.g., one key frame makes the screen solid green and the next key frame makes the screen solid red). The amount of time required to animate from one key frame to the next is defined by the frame rate. The frame rate is determined by the hardware of the mobile device. Not every device has the same hardware, so although the frame rate is similar across many devices, after playing an effect (e.g., a cyclic fade between red and green) across multiple devices for a period of time, the timing of the fades between the devices may drift significantly, creating a confusing or unsynchronized visual effect. Rather than having animations run at a frame rate, as typical animations do, in particular embodiments a mobile application running an animation on a mobile device in accordance with a light show command may be driven by a low-level system clock on the mobile device, which oscillates at a particular frequency (typically measured in MHz (megahertz, or million cycles per second)) and is typically exposed as a count of the number of ticks (or cycles) that have occurred since some arbitrary start time. Since the system time between mobile devices is very accurate, it can be used to coordinate the timing of effects across multiple devices. Between key frames (e.g., every 25 milliseconds), the position along its progress line that the animation should have reached can be calculated from this clock. If the animation is ahead of or behind the position where it should be with respect to the system clock, the animation is recalculated and redrawn to keep it on schedule, thereby keeping the animation on each mobile device synchronized with the animations on all other mobile devices.
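A minimal sketch of clock-driven keyframe animation (the 25 ms keyframe interval follows the example above; the drawing callback is a stand-in for the application's rendering code):

```python
import time

def frame_progress(scene_start, keyframe_interval=0.025):
    """Progress (0.0-1.0) between the current pair of keyframes, derived from
    the monotonic system clock rather than from a frame count."""
    elapsed = time.monotonic() - scene_start
    return (elapsed % keyframe_interval) / keyframe_interval

def render_frame(scene_start, draw):
    # redraw at the clock-derived position, correcting any drift that a slow
    # or fast device frame rate would otherwise accumulate
    draw(frame_progress(scene_start))
```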
As described above, a main controller application may be provided to enable a light show operator to control one or more of: beacon transmitter(s) 122, speaker node 126, and/or positioning signal transmitter 125. The GUI123 is provided by the master controller application to facilitate such control. In particular embodiments, master control of beacon transmitters 122, speaker nodes 126, and positioning signal transmitters 125 may be implemented in real-time by suitable input devices or hardware, such as a mouse, keyboard, touch screen, MIDI hardware, or DMX hardware. In some embodiments, commands issued by the master control may be preprogrammed and/or manipulated in a MIDI sequencer or programmable DMX hardware/software.
In some embodiments, beacon transmitters 122 comprise mobile beacon transmitters, which may be carried by a person or on a movable object or vehicle, such as a drone. In this manner, the location of beacon transmitters 122 within live arena 100 may vary. The location of the mobile beacon transmitter 122 may remain constant throughout the live event, or it may be moved periodically or continuously throughout the live event to create a localized light show effect in the area of the beacon transmitter 122, since only mobile devices 130 within range of the mobile beacon transmitter are able to receive and respond to the beacon transmitter's signal. More than one such mobile beacon transmitter 122 may be employed within the venue 100 and carried by a person or mobile object or vehicle to broadcast light show command signals and other signals used to generate light shows as described above to generate a plurality of mobile localized light show effects during live activities.
In particular embodiments, as shown in FIG. 11, venue 100 may be a venue where multiple shows are occurring. Such a performance may occur simultaneously across multiple stages 105A, 105B, and 105C (individually and collectively, stages 105). Each of these shows may be associated with a different light show display 103A, 103B, and 103C (individually and collectively, light show displays 103). The boundaries of each display 103 are defined by geo-fenced areas, where pixels within a particular geo-fenced area are controllable to display a different image than images displayed in other geo-fenced areas. Beacon transmitters 122A, 122B, and 122C (individually and collectively, beacon transmitters 122) located within each of displays 103A, 103B, and 103C, respectively, broadcast signals to control pixels 102 to produce particular images for display. For example, beacon transmitter 122A broadcasts a signal to instruct pixels 102 within a geo-fenced area of display 103A to produce an image on display 103A. Likewise, beacon transmitter 122B broadcasts a signal to instruct pixels within the geo-fenced area of display 103B to produce an image on display 103B.
At times, a mobile device 130 providing a pixel 102 may be within range of, and receive signals from, beacon transmitters 122 of different geo-fenced areas. To enable the mobile device 130 to determine which signal to use, the signal emitted by each beacon transmitter 122 may be encoded with a particular display ID identifying the one of the displays 103 with which the transmitter is associated. In the example of fig. 11, pixel 102A receives signals both from beacon transmitter 122A, encoded with a display ID corresponding to display 103A, and from beacon transmitter 122B, encoded with a display ID corresponding to display 103B. A light show mobile application loaded on the mobile device 130 providing pixel 102A may instruct the device to perform a method whereby it determines the location of pixel 102A and uses this location to determine which beacon signals the device should respond to. In the example shown in fig. 11, the determined location of pixel 102A is compared to the predetermined boundaries of each display 103 to determine that pixel 102A is located within display 103A. Instructions provided by the light show mobile application on the mobile device 130 may therefore instruct it to ignore any beacon signals having a display ID that does not match display 103A. Pixel 102A thus ignores the beacon signal from beacon transmitter 122B and responds only to the beacon signal from beacon transmitter 122A, so that pixel 102A participates in generating light show display 103A rather than light show display 103B.
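The display-ID filtering described above might be sketched as follows (the geofence predicates and packet field names are illustrative assumptions):

```python
def my_display_id(location, displays):
    """displays: mapping of display_id -> contains(location) predicate for
    the predetermined boundaries of each geo-fenced display."""
    for display_id, contains in displays.items():
        if contains(location):
            return display_id
    return None

def should_respond(packet, location, displays):
    # respond only to beacon signals whose display ID matches the display
    # containing this device's determined location
    return packet["display_id"] == my_display_id(location, displays)
```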
The location of the pixel 102 may be determined using one or more of the methods described elsewhere herein. For example, where the venue 100 is provided with multiple speaker nodes as described herein, the mobile devices 130 providing the pixels 102 may determine their own locations using the location-awareness method 250 of fig. 10. In other embodiments, one or more other methods may be used to determine the location of the pixel 102. For example, where the mobile device 130 providing the pixel 102 is a GPS (global positioning system) enabled device, the determination of latitude and longitude GPS coordinates of the device can be used to select which beacon signals the pixel 102 should respond to, based on the predefined geofence boundaries of each display 103. Where the venue 100 is an outdoor venue within range of signals from GPS satellites, for example, it may be appropriate to use GPS for location determination.
The geo-fenced area may also be used for events with a single show (e.g., a single show occurring in a large indoor or outdoor venue). In such embodiments, the venue space may be divided into two or more displays 103 (such as geo-fence displays 103A, 103B, 103C, etc., as shown in fig. 11), each displaying a different image. Similar to the method described above, the signals broadcast by beacon transmitters 122 within each geo-fenced area can be encoded with a display ID identifying the display associated with the transmitter to enable mobile devices 130 receiving signals from beacon transmitters 122 of different geo-fenced areas to select which signals to respond to.
In a particular embodiment, as shown in FIG. 12, a mobile device 130 to be used as a pixel 102 in a display 103 may be instructed to perform a sequence of light shows featuring a varying rectangular area 300. The rectangular area 300 may vary between each frame or each scene of the light show (e.g., across the display 103 in its size, position, and/or rotation angle). The rectangular area 300 may be defined and transmitted to the mobile device 130 via broadcast beacon signals using the systems and methods described herein. In a particular embodiment, the rectangular area 300 may be defined by coordinates of a first corner 310, coordinates of a diagonally opposite second corner 320, and a rotation value. The rotation value defines the shape and size of the rectangular area 300 containing the diagonally opposite corners 310, 320. For example, as shown in FIG. 12, rectangular regions 300A, 300B, 300C have the same corners 310, 320, but have different rotation values. If the mobile device 130 providing the pixel 102 determines that its location is within the rectangular area 300, the instructions provided to the mobile device 130 (via a mobile application on the mobile device 130) may instruct it to respond to the light show command encoded on the beacon signal (e.g., by turning on its display screen). Otherwise, if the mobile device 130 determines that its location is not within the rectangular area 300, the instructions provided to the mobile device 130 may instruct it to perform another action (e.g., turn off its display screen). The site 100 shown in the embodiment of fig. 12 may be an outdoor site. Where the mobile devices 130 use GPS signals to determine their location, the corners 310, 320 of the rectangular area 300 may be defined by latitude, longitude GPS coordinates. One advantage of representing the area 300 as two corners and rotation values is that it reduces the amount of data that needs to be relayed to and processed by each mobile device 130 (e.g., as compared to transmitting all four corners of the area 300 in latitude, longitude GPS coordinates).
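One plausible implementation of the containment test for a rectangular area 300 defined by two diagonally opposite corners and a rotation value is sketched below: the point and both corners are rotated by the negative of the rotation about the diagonal's midpoint, in which frame the rectangle becomes axis-aligned. This interpretation of the rotation value is an assumption for illustration.

```python
import math

def in_rotated_rect(point, corner1, corner2, rotation_rad):
    """Test whether `point` lies inside the rectangle whose diagonally opposite
    corners are corner1/corner2 and whose sides are rotated by rotation_rad."""
    cx = (corner1[0] + corner2[0]) / 2.0
    cy = (corner1[1] + corner2[1]) / 2.0

    def unrotate(p):
        # rotate p by -rotation_rad about the diagonal's midpoint
        dx, dy = p[0] - cx, p[1] - cy
        cos_t, sin_t = math.cos(-rotation_rad), math.sin(-rotation_rad)
        return (cx + dx * cos_t - dy * sin_t, cy + dx * sin_t + dy * cos_t)

    (x1, y1), (x2, y2) = unrotate(corner1), unrotate(corner2)
    px, py = unrotate(point)
    # in the unrotated frame the rectangle is the axis-aligned box spanned
    # by the two transformed corners
    return (min(x1, x2) <= px <= max(x1, x2)
            and min(y1, y2) <= py <= max(y1, y2))
```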
In some embodiments, the display screen 131 of the mobile device 130 may be used to view live events. In some embodiments, the display screen 131 may display augmented reality media on a live view. Augmented reality media may include, for example, images, spirals, swirls, moving lines, three-dimensional animations and/or other effects, or any other animation or image. In some embodiments, augmented reality media may be transmitted to mobile device 130 using beacon transmitter 122. In some embodiments, augmented reality media may be preloaded onto the mobile device 130. In some embodiments, the augmented reality media displayed on the display screen 131 depends on the location of the mobile device 130. For example, mobile devices 130 at different locations within venue 100 may view augmented reality media having different media, perspectives, sequences, and so forth. Any of the methods described above may be used to determine the location of the mobile device 130.
In some embodiments, the mobile device 130 enables the collection of demographic information related to attendees of the live event. For example, when an active participant logs into the mobile device 130 using a social media account to use the mobile device 130 as the pixel 102, the mobile device 130 may anonymously collect demographic data such as age, gender, etc. using data from the active participant's social media. In some embodiments, the demographic data may be transmitted to a central database for storage and analysis. In some embodiments, the demographic data may be communicated to the campaign organizer.
Embodiments of the techniques described herein may be adapted to enable a remote audience member to remotely participate in a live event that is streamed or broadcast to the remote audience member. For example, remote audience members may be located at one or more different locations located a distance from the venue 100 of the live event, such as at a restaurant, bar, home, community center, indoor or outdoor stadium, arena, bowling gym, concert hall, theater, amphitheater, stand, venue, beach or other open area or any other location remote from the venue 100 where the live event occurs. These remote audience members may be watching a broadcast or live stream of an event on a display screen such as, for example, a television screen, a projection screen, a computer screen, or other display. The signals transmitted by the beacon transmitters 122 to the mobile devices 130 of audience members at the live event 100 may also be adapted to be transmitted to the mobile devices 130 of remote audience members at remote locations. For example, certain information contained in light show command signals and heartbeat signals broadcast by beacon transmitters 122 to mobile devices 130 at venue 100 may also be transmitted to remote audience member's mobile devices 130 via suitable communication means. This may include internet communications, such as over a WiFi or cellular network, or any other suitable communication network for communicating with the mobile device 130. The remote mobile device 130 runs a mobile application that enables the mobile device 130 to receive, interpret, and process signals to perform certain actions, including flashing or turning on/off the display screen and displaying a particular color when turned on, similar to the light show participation actions performed by the mobile device 130 actually present at the live event 100. Similar heartbeat signal information may be communicated to the remote mobile devices 130 to enable the timing of their light show actions to be synchronized with other remote mobile devices as well as those devices located at the live event 100. In this manner, even people who are not physically present at the live action site 100 may remotely participate in audience participation in the light show and use their mobile devices 130 to help enhance the experience of themselves and people around them. In the case where live activity is pre-recorded and the broadcast or live stream is time delayed and/or provided to different areas at different times, the transmission of the light show command signal and the heartbeat signal may account for the time delay or time difference by ensuring that the signal stream is transmitted to the mobile device 130 at the appropriate time to match the timing of the active broadcast or live stream. This determination can be based on, for example, a known geographic location of the mobile device 130, which can be used to identify a time zone for the mobile device 130 and an expected timing of the broadcast/live stream being observed by an operator of the mobile device 130.
Interpretation of terms
In the specification and claims, unless the context clearly requires otherwise:
"including", "comprising", and the like are to be understood in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, the meaning of "including, but not limited to";
the words "herein," "above," "below," and words of similar import, when used in this specification, shall refer to this specification as a whole and not to any particular portions of this specification;
"or," which refers to a list of two or more items, encompasses all of the following interpretations of the word: any item in the list, all items in the list, and any combination of items in the list;
the singular forms "a", "an" and "the" also include any appropriate plural reference.
Embodiments of the invention may be implemented using specially designed hardware, configurable hardware, a programmable data processor configured by providing software (which may optionally include "firmware") capable of being executed on a data processor, a special purpose computer or a data processor that is specifically programmed, configured or constructed to perform one or more steps and/or a combination of two or more of the steps of the methods explained in detail herein. Examples of specially designed hardware are: logic circuits, application specific integrated circuits ("ASIC"), large scale integrated circuits ("LSI"), very large scale integrated circuits ("VLSI"), etc. Examples of configurable hardware are: one or more programmable logic devices such as programmable array logic ("PAL"), programmable logic arrays ("PLA"), and field programmable gate arrays ("FPGA"). Examples of programmable data processors are: microprocessors, digital signal processors ("DSPs"), embedded processors, graphics processors, math coprocessors, general purpose computers, server computers, cloud computers, mainframe computers, computer workstations, and the like. For example, one or more data processors in a computer system for a device may implement the methods described herein by executing software instructions in a program memory accessible to the processors.
The processing may be centralized or distributed. In the case of a distributed process, information including software and/or data may be centralized or distributed. Such information may be exchanged between different functional units over a communication network, such as a Local Area Network (LAN), a Wide Area Network (WAN) or the internet, wired or wireless data links, electromagnetic signals or other data communication channels.
For example, while processes or blocks are presented in a given order, alternative examples may perform routines having steps, or use systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternative combinations or sub-combinations. Each of these processes or blocks may be implemented in a variety of different ways. Further, while processes or blocks are sometimes shown as being performed in series, these processes or blocks may instead be performed in parallel, or may be performed at different times.
Further, while elements are sometimes shown as being performed sequentially, they may instead be performed simultaneously or in a different order. It is therefore intended that the following claims be interpreted to embrace all such variations as fall within their intended scope.
Embodiments of the invention may also be provided as a program product. The program product may comprise any non-transitory medium carrying a set of computer readable instructions which, when executed by a data processor, cause the data processor to perform the method of the invention. The program product according to the invention may be in any of a variety of forms. The program product may include, for example, non-transitory media such as magnetic data storage media including floppy disks, hard disk drives, optical data storage media including CD ROMs, DVDs, electronic data storage media including ROMs, flash RAMs, EPROMs, hardwired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, and so forth. The computer readable signal on the program product may optionally be compressed or encrypted.
The present invention may be implemented in software. For greater clarity, "software" includes any instructions executed on a processor and may include, but is not limited to, firmware, resident software, microcode, etc. Both processing hardware and software may be centralized or distributed, in whole or in part (or a combination thereof), as will be appreciated by those skilled in the art. For example, software and other modules may be accessed via local memory, via a network, via a browser or other application in a distributed computing environment, or via other means suitable for the purposes described above.
When reference is made above to a component (e.g., a software module, a processor, a server, a client, a mobile device, a pixel device, a speaker, a transmitter, a receiver, a beacon, etc.), unless otherwise indicated, reference to such component (including a reference to a "means") should be interpreted as including as an equivalent of such component any component which performs the function of the described component (i.e., that is functionally equivalent), including components which are not structurally equivalent to the disclosed structure which performs the function in the illustrated exemplary embodiments of the invention.
While a number of exemplary aspects and embodiments have been discussed above, those of skill in the art will recognize certain modifications, permutations, additions and sub-combinations thereof. It is therefore intended that the following appended claims and claims hereafter introduced are interpreted to include all such modifications, permutations, additions and sub-combinations as are consistent with the broadest interpretation of the specification as a whole.
Claims (48)
1. A light show control system for generating a light show across a plurality of pixel devices, the light show control system comprising:
a light show controller configured to receive input from a light show operator and to generate a plurality of light show parameters based on such input;
a beacon transmitter in communication with the light show controller and configured to: receive the plurality of light show parameters from the light show controller, encode the plurality of light show parameters on a beacon signal, and broadcast the beacon signal to the plurality of pixel devices, wherein each of the plurality of pixel devices is configured to receive and decode the beacon signal to perform one or more display actions of the light show based on the decoded beacon signal.
2. The system of claim 1, wherein, based on the plurality of light show parameters, the beacon transmitter is configured to encode a timing reference on the beacon signal to synchronize performance of display actions across the plurality of pixel devices.
3. The system of claim 2, wherein the timing reference comprises a time since a starting reference point.
4. The system of claim 3, wherein the starting reference point comprises a first beat of the light show or a first beat of a current light show scene.
5. A system as claimed in any one of claims 2 to 4, wherein the beacon transmitter is configured to broadcast the beacon signal as one or more repeated batches of data packets.
6. A system according to claim 5, wherein the beacon transmitter is configured to broadcast the beacon signal such that the time between transmission of successive data packets in each batch is tm, wherein tm is, for example, between 15 ms and 30 ms, and the timing reference encoded in each data packet following the first data packet is incremented by tm from the timing reference encoded in the preceding data packet.
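As an illustrative aside (not part of the claimed subject matter), the repeated-batch broadcasting of claims 5 to 9 can be sketched in Python. All names here are hypothetical; tm and the batch size are example values taken from claims 6 and 7:

```python
# Minimal sketch, assuming the packet layout below; the actual encoding and
# the BLE transport are not specified at this level of the claims.
from dataclasses import dataclass

T_M_MS = 20             # inter-packet interval tm, e.g. between 15 and 30 ms (claim 6)
PACKETS_PER_BATCH = 20  # e.g. 15 to 25 packets per batch (claim 7)

@dataclass
class BeaconPacket:
    batch_id: int        # new identification number per batch (claim 9)
    timing_ref_ms: int   # time since the starting reference point (claim 3)
    payload: bytes       # encoded light show parameters

def build_batch(batch_id: int, base_timing_ms: int, payload: bytes) -> list:
    # Each packet after the first carries a timing reference incremented by
    # tm, so a receiver that hears any one packet of the batch can still
    # recover the shared show clock.
    return [
        BeaconPacket(batch_id, base_timing_ms + i * T_M_MS, payload)
        for i in range(PACKETS_PER_BATCH)
    ]
```

Repetition trades airtime for robustness: a pixel device that misses most of a batch still synchronizes from the single packet it does receive.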
7. The system of either of claims 5 and 6, wherein each batch of repeated data packets comprises 15 to 25 data packets.
8. A system as claimed in any one of claims 5 to 7, wherein the beacon transmitter is configured to update a Media Access Control (MAC) address of the transmitter to encode a new MAC address for each batch of data packets.
9. A system according to any one of claims 5 to 8, wherein the beacon transmitter is configured to encode a new identification number for each batch of data packets on the beacon signal.
10. The system according to any one of claims 1 to 9, wherein, based on the plurality of light show parameters, the beacon transmitter is configured to encode a play scene command on the beacon signal, wherein the play scene command is defined by one or more of: scene type, set of color IDs, fade rate, scene transition, and beats per minute (bpm).
11. A system according to any one of claims 1 to 10, wherein, based on the plurality of light show parameters, the beacon transmitter is configured to encode a heartbeat message on the beacon signal, wherein the heartbeat message is defined by the timing reference and one or more of: bpm, beat pattern type, beat number, and speed of sound.
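By way of illustration only, the message contents recited in claims 10 and 11 might be held in structures like the following; every field name, type, and default is an assumption rather than something taken from the specification:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PlaySceneCommand:          # claim 10
    scene_type: int                        # e.g. solid, pulse, wave
    color_ids: List[int] = field(default_factory=list)
    fade_rate: float = 0.0                 # assumed units: transitions per second
    scene_transition: int = 0
    bpm: float = 120.0

@dataclass
class HeartbeatMessage:          # claim 11
    timing_ref_ms: int                     # time since first beat of show or scene
    bpm: float = 120.0
    beat_pattern_type: int = 0
    beat_number: int = 0
    speed_of_sound_mps: float = 343.0      # lets devices correct acoustic delays
```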
12. The system of any one of claims 1 to 11, comprising a positioning signal transmitter and a plurality of speaker nodes in communication with the positioning signal transmitter, wherein the positioning signal transmitter transmits a tone-generating signal to each of the plurality of speaker nodes and, in response to receiving the tone-generating signal, the speaker nodes emit audio signals for trilateration and/or multilateration by the plurality of pixel devices.
13. The system of claim 12, wherein the audio signal is an ultrasonic audio signal.
14. A system according to either of claims 12 and 13, wherein the audio signal is characterized by a frequency in the range of 16 kHz to 24 kHz.
15. The system of either of claims 13 and 14, wherein each speaker node of the plurality of speaker nodes is configured to emit the audio signal simultaneously with the other speaker nodes.
16. The system of claim 15, wherein each speaker node of the plurality of speaker nodes comprises a tone generator, an amplifier, and an omnidirectional speaker array configured to produce the audio signal, wherein the audio signal comprises a tone characterized by a frequency different from those of the other speaker nodes.
17. The system of claim 15, wherein each speaker node of the plurality of speaker nodes comprises a tone generator, an amplifier, and an omnidirectional speaker array configured to produce the audio signal, wherein the audio signal comprises a chirp characterized by a frequency that increases and/or decreases over time and that differs from those of the other speaker nodes.
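For illustration, and assuming NumPy, the per-node tones of claim 16 and chirps of claim 17 could be generated as below; the sample rate, duration, and frequencies are placeholders chosen inside the 16 kHz to 24 kHz band of claim 14:

```python
import numpy as np

FS = 48_000  # sample rate in Hz; must exceed twice the highest tone frequency

def make_tone(freq_hz: float, duration_s: float = 0.1) -> np.ndarray:
    # Fixed-frequency tone unique to one speaker node (claim 16).
    t = np.arange(int(FS * duration_s)) / FS
    return np.sin(2 * np.pi * freq_hz * t)

def make_chirp(f0_hz: float, f1_hz: float, duration_s: float = 0.1) -> np.ndarray:
    # Linear chirp sweeping from f0 to f1, distinct per node (claim 17);
    # the phase is the integral of the instantaneous frequency f0 + (f1-f0)*t/T.
    t = np.arange(int(FS * duration_s)) / FS
    phase = 2 * np.pi * (f0_hz * t + (f1_hz - f0_hz) * t**2 / (2 * duration_s))
    return np.sin(phase)

# usage: one node might emit make_chirp(18_000, 19_000) while another emits
# make_chirp(20_000, 21_000), keeping the nodes separable by filtering
```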
18. The system of any one of claims 12 to 17, wherein the positioning signal transmitter comprises a Radio Frequency (RF) transmitter configured to transmit the tone-generating signal as an RF signal, and each of the plurality of speaker nodes comprises an RF receiver for receiving the RF signal from the positioning signal transmitter.
19. The system of claim 18, wherein the positioning signal transmitter is configured to transmit the tone-generating signal as a plurality of RF signals to the plurality of speaker nodes at equally spaced time intervals and, upon receipt of each RF signal, each speaker node is configured to time the elapsed time since the start of the previous time interval and to determine a signal generation period based on the resulting set of timed times.
20. The system of claim 19, wherein each speaker node is configured to take the lowest time of the set of timed times as the signal generation period, wherein the speaker node generates and emits the audio signal once the signal generation period has elapsed after the start of the next time interval.
21. The system of either of claims 19 and 20, wherein each speaker node is configured to refrain from emitting an audio signal if the times within the set of timed times span a range greater than a predetermined threshold.
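A minimal sketch of the speaker-node scheduling logic of claims 19 to 21 follows; the claims leave the disagreement threshold unspecified, so the value below is an assumption:

```python
RANGE_THRESHOLD_MS = 5.0  # assumed jitter tolerance (claim 21)

def signal_generation_period(timed_times_ms: list) -> float | None:
    # timed_times_ms holds, for each received RF pulse, the elapsed time
    # from the start of the previous interval to its reception (claim 19).
    if not timed_times_ms:
        return None
    if max(timed_times_ms) - min(timed_times_ms) > RANGE_THRESHOLD_MS:
        return None  # measurements disagree: refrain from emitting (claim 21)
    # Claim 20: take the lowest timed time as the generation period, on the
    # view that larger values are delayed receptions of the same pulse.
    return min(timed_times_ms)

# usage: the node waits signal_generation_period(...) milliseconds after the
# start of the next interval, then emits its tone, so all nodes fire together
```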
22. The system of any one of claims 12 to 21, wherein the plurality of speaker nodes comprises four or more speaker nodes.
23. The system of claim 22, wherein the plurality of speaker nodes comprises six speaker nodes.
24. The system of either of claims 22 and 23, wherein at least one of the plurality of speaker nodes is located at a different elevation from the other speaker nodes.
25. A system according to any one of claims 12 to 24, wherein, based on the plurality of light show parameters, the beacon transmitter is configured to encode a venue configuration message on the beacon signal, wherein the venue configuration message is defined by one or more of: a set of speaker node positions, a set of speaker node tone IDs, and an origin offset.
26. The system of any of claims 12 to 25, wherein, based on the plurality of light show parameters, the beacon transmitter is configured to encode a locate pixel message on the beacon signal, wherein, upon receipt of the locate pixel message, each of the plurality of pixel devices begins recording the audio signals emitted by the plurality of speaker nodes.
27. The system according to any one of claims 1 to 26, wherein the beacon transmitter comprises a Bluetooth Low Energy (BLE) beacon transmitter configured to transmit the beacon signal as a BLE signal.
28. A system according to any one of claims 1 to 27, comprising a plurality of beacon transmitters, each having the features of the beacon transmitter of any one of claims 1 to 27.
29. The system according to any one of claims 1 to 28, wherein the controller comprises a display and is configured to provide a graphical user interface via the display to receive input from the light show operator and to enable the plurality of light show parameters to be generated and modified dynamically and in real time based on such input.
30. A method performed by a pixel device for facilitating display of a light show across a plurality of such pixel devices, the method comprising:
scanning for and receiving at the pixel device a beacon signal broadcast from a beacon transmitter;
decoding the beacon signal to determine a plurality of light show parameters; and
performing one or more display actions of the light show based on the plurality of light show parameters.
31. The method of claim 30, wherein the one or more display actions include one or more of: displaying at least one image or a series of images on a display screen of the pixel device; flashing a light source on the pixel device; and vibrating the pixel device.
32. The method of claim 31, wherein the one or more display actions comprise displaying a scene of the light show, wherein the scene comprises a sequential display of colors displayed on a display screen of the pixel device.
33. The method according to claim 32, wherein the scene is characterized by one or more of: scene type, set of color IDs, fade rate, scene transition, and bpm.
34. The method according to any one of claims 30 to 33, wherein the pixel device comprises a Bluetooth receiver and scanning for the beacon signal comprises scanning for a BLE beacon signal.
35. The method of claim 34, comprising: scanning for and receiving at the pixel device a heartbeat signal broadcast from the beacon transmitter; and, in response to not receiving the heartbeat signal within a heartbeat timeout period, ceasing the one or more display actions and/or restarting the Bluetooth receiver.
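A minimal watchdog sketch corresponding to claim 35; the timeout value and the two callbacks are assumptions:

```python
import time

HEARTBEAT_TIMEOUT_S = 5.0  # assumed heartbeat timeout period

def heartbeat_watchdog(last_heartbeat_monotonic_s: float,
                       stop_actions, restart_receiver) -> None:
    # If no heartbeat has been decoded within the timeout, cease the
    # display actions and/or restart the Bluetooth receiver (claim 35).
    if time.monotonic() - last_heartbeat_monotonic_s > HEARTBEAT_TIMEOUT_S:
        stop_actions()
        restart_receiver()
```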
36. The method of claim 34, comprising: scanning for and receiving at the pixel device a heartbeat signal broadcast from the beacon transmitter; decoding a timing reference from the heartbeat signal; and performing the one or more display actions at a starting time based on the timing reference.
37. The method of claim 36, wherein the timing reference comprises time since a starting reference point.
38. The method of claim 37, wherein the starting reference point comprises a first beat of the light show or a first beat of a current light show scene.
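To illustrate claims 36 to 38, one plausible (but assumed) use of the decoded timing reference is to align the next display action to a beat boundary; the sketch below uses only the Python standard library:

```python
import time

def next_beat_start(timing_ref_ms: float, bpm: float,
                    rx_monotonic_s: float) -> float:
    # timing_ref_ms: show time decoded from the heartbeat, in ms since the
    # first beat of the show or scene (claims 37-38); rx_monotonic_s: local
    # monotonic clock reading at the instant the heartbeat was received.
    beat_ms = 60_000.0 / bpm
    ms_into_beat = timing_ref_ms % beat_ms
    return rx_monotonic_s + (beat_ms - ms_into_beat) / 1000.0

# usage: sleep until the next beat boundary, then perform the display action
start = next_beat_start(timing_ref_ms=12_340.0, bpm=120.0,
                        rx_monotonic_s=time.monotonic())
time.sleep(max(0.0, start - time.monotonic()))
# ... display the scene color / flash the light source here ...
```

Because every device derives the same beat boundary from the same broadcast timing reference, the devices stay synchronized regardless of which packet of a batch each one happened to hear.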
39. A method according to any of claims 30 to 38, wherein the pixel device comprises a handheld mobile device such as a smartphone or tablet computer.
40. The method of any one of claims 30 to 39, comprising:
receiving a start recording signal at the pixel device;
recording a plurality of audio signals simultaneously emitted from a plurality of speaker nodes in response to receiving the start recording signal, wherein each speaker node emits an audio signal at a different frequency than the other speaker nodes;
filtering and processing the audio signals based on their different frequencies to determine the time difference of arrival (TDOA) of each audio signal;
receiving location information for each of the plurality of speaker nodes; and
determining the location of the pixel device using trilateration and/or multilateration based at least in part on the TDOAs and the location information.
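The pixel-side localization of claim 40 can be sketched as follows, assuming NumPy and SciPy are available: each node's known tone or chirp is matched-filtered out of the recording, arrival times are taken from correlation peaks, TDOAs are formed against a reference node, and the position is recovered by nonlinear least squares. All constants and names are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

FS = 48_000  # sample rate in Hz
C = 343.0    # speed of sound in m/s (may itself be decoded from a heartbeat)

def arrival_time_s(recording: np.ndarray, template: np.ndarray) -> float:
    # Matched filter: the peak of the cross-correlation with the known
    # per-node signal marks that node's arrival in the recording.
    corr = np.correlate(recording, template, mode="valid")
    return np.argmax(np.abs(corr)) / FS

def locate(recording: np.ndarray, templates: list,
           node_positions: np.ndarray) -> np.ndarray:
    # node_positions: (N, 3) array of speaker positions from the venue
    # configuration message; templates: the N known per-node tones/chirps.
    t = np.array([arrival_time_s(recording, tpl) for tpl in templates])
    tdoa = t[1:] - t[0]  # TDOAs relative to node 0

    def residuals(p):
        d = np.linalg.norm(node_positions - p, axis=1)
        return (d[1:] - d[0]) / C - tdoa  # predicted minus measured TDOA

    guess = node_positions.mean(axis=0)  # start from the venue centroid
    return least_squares(residuals, guess).x
```

With four nodes the three TDOA residuals just determine the three position coordinates; additional nodes (claim 43) overdetermine the system, and placing one node at a different elevation (claim 24) keeps the vertical coordinate observable.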
41. The method of claim 40, wherein the location information is decoded from a beacon signal transmitted by the beacon transmitter.
42. The method of either of claims 40 and 41, wherein the plurality of audio signals are emitted by at least four speaker nodes.
43. The method of claim 42, wherein the plurality of audio signals are emitted by six speaker nodes.
44. A method according to any one of claims 40 to 43, comprising performing the one or more display actions based at least in part on a location of the pixel device.
45. The method of claim 44, wherein performing the one or more display actions includes receiving a display command for an animated scene and identifying the one or more display actions to be performed by the pixel device based on a corresponding location of the pixel device in a display representation of the animated scene.
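As a sketch of claim 45, a pixel device might select its own display action by sampling a frame of the animated scene at its mapped position; the frame format and the venue-to-grid mapping are assumptions:

```python
import numpy as np

def color_for_device(frame: np.ndarray, device_xy, venue_wh) -> np.ndarray:
    # frame: (rows, cols, 3) RGB image representing the crowd area;
    # device_xy: (x, y) device position in metres from the venue origin;
    # venue_wh: (width, height) of the crowd area in metres.
    rows, cols, _ = frame.shape
    col = int(np.clip(device_xy[0] / venue_wh[0] * cols, 0, cols - 1))
    row = int(np.clip(device_xy[1] / venue_wh[1] * rows, 0, rows - 1))
    return frame[row, col]  # this device becomes one pixel of the scene
```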
46. A pixel localization system comprising a positioning signal transmitter and a plurality of speaker nodes in communication with the positioning signal transmitter, the positioning signal transmitter and the plurality of speaker nodes comprising any feature or combination of features of any one of claims 12 to 24.
47. An apparatus having any new and inventive feature, combination of features, or sub-combination of features described herein.
48. Methods having any novel and inventive step, act, combination of steps and/or acts, or sub-combination of steps and/or acts, described herein.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US62/436,652 | 2016-12-20 | 2016-12-20 | Systems and methods for displaying images across multiple devices |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK40009818A true HK40009818A (en) | 2020-06-26 |
Similar Documents
| Publication | Title |
|---|---|
| US20240073655A1 (en) | Systems and methods for displaying images across multiple devices |
| US10277813B1 (en) | Remote immersive user experience from panoramic video |
| JP5971768B2 (en) | Speech positioning using speech signal encoding and recognition |
| US9485603B2 (en) | Customizable beacons to activate content on a user device |
| US9722649B2 (en) | Methods and apparatus for communicating with a receiving unit |
| US20160192308A1 (en) | Mobile Device Synchronization of Screen Content and Audio |
| JP6688995B2 (en) | Content display system |
| JP2015073182A (en) | Content synchronization system, event production system, synchronization device, and recording medium |
| US20160165690A1 (en) | Customized audio display system |
| HK40009818A (en) | Systems and methods for displaying images across multiple devices |
| JP2021023517A (en) | Performance control system, method and program |
| CN113658544A (en) | Interactive induction system of LED floor tile screen and control method thereof |
| US20160150034A1 (en) | System, method and computer program product for locating members of a crowd and controlling interactions therewith |
| WO2019036392A1 (en) | System for, and method of, changing objects in an environment based upon detected aspects of that environment |
| JP7247069B2 (en) | Space production system, space production method and space production program |
| WO2014075128A1 (en) | Content presentation method and apparatus |
| JP7303780B2 (en) | Production control system, method and program |
| JP7801076B2 (en) | Automatic control signal transmission system based on fixed repeaters |
| JP7810778B2 (en) | Production control system, method, and program |
| US20130320885A1 (en) | Audience Participatory Effect System and Method |
| US20240298126A1 (en) | Special effect production system and methods useful in conjunction therewith |
| US20190327358A1 (en) | Method of controlling mobile devices in concert during a mass spectators event with a beacon based network |