Detailed Description
The technical means of the present invention will be described in further detail with reference to specific embodiments. It should be understood that the detailed description and specific examples, while indicating the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
In this application, those skilled in the art will appreciate from the context that a "connection" between some components referred to herein may refer to either a wired connection or a wireless connection.
FIG. 1 shows a system for at least two computers to interact with a screen according to some embodiments of the invention. As shown in FIG. 1, the interactive system comprises: at least two computers (three computers 21, 22, and 23 are shown in the figure); a screen 10; a projector 20 for projecting content to be displayed by a computer onto the screen; a server 30; and a sensor 9 for capturing gesture operations on the screen 10 and transmitting the captured signals to the server 30. After the sensor 9 captures a user's gesture operation on the screen 10, it transmits the captured signal to the server 30. The server 30 recognizes the gesture operation through an image recognition algorithm, generates a corresponding control signal according to the recognition result, and activates the corresponding computer, so that the content of that computer is output to the projector and displayed on the screen. For example, in some embodiments, a correspondence table between gesture operations and control signals is stored in advance in the server (for example, gesture operations 1, 2, and 3 may respectively indicate that computers 1, 2, and 3 are to be activated), and the server queries the correspondence table to generate the corresponding control signal. The control signal may also operate on content displayed on the screen, including paging, zooming in, zooming out, deleting, and annotating the content.
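The table lookup described above can be sketched as follows. This is a minimal illustration only; the gesture identifiers, computer names, and the `handle_gesture` function are assumptions for illustration, not the invention's actual implementation.

```python
# Sketch of the server-side correspondence table between gesture
# operations and the computers they activate. All names are illustrative.
GESTURE_TO_COMPUTER = {
    "gesture_1": "computer_1",
    "gesture_2": "computer_2",
    "gesture_3": "computer_3",
}

def handle_gesture(gesture):
    """Query the correspondence table and return which computer to activate."""
    computer = GESTURE_TO_COMPUTER.get(gesture)
    if computer is None:
        return None  # unrecognized gesture: no control signal is generated
    # In the described system, the server would now emit a control signal
    # activating `computer` and routing its output to the projector.
    return computer
```

An unrecognized gesture simply produces no control signal, which matches the table-driven design: adding a new gesture is a table entry, not new logic.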
In the above embodiment, the operation position and operation gesture of the user on the screen are captured by the sensor, and the gesture may also be performed off the screen. In contrast, in some embodiments, the user's touch location and gesture on the screen may be captured by the screen itself, for example when the screen is a capacitive, resistive, infrared, or surface acoustic wave touch screen. In this case, the content of the activated one of the three computers can likewise be displayed on the screen by a single projector. In some embodiments, the screen itself may serve as a display that accepts input signals; in this case, the activated computer's content is input directly to the display device through the server, without a projector.
Accordingly, some embodiments of the present invention provide methods for at least two computers to interact with a screen. The method comprises the following steps: after the sensor 9 captures a user's gesture operation on the screen 10, it transmits the captured signal to the server 30; the server 30 recognizes the gesture operation through an image recognition algorithm, generates a corresponding control signal according to the recognition result, and activates the corresponding computer, so that the content of that computer is output to the projector and displayed on the screen. In some embodiments, a correspondence table between gesture operations and control signals is stored in advance in the server (for example, gesture operations 1, 2, and 3 may respectively indicate that computers 1, 2, and 3 are to be activated), and the server queries the correspondence table to generate the corresponding control signal.
In the above embodiment, the operation position and operation gesture of the user on the screen are captured by the sensor. In contrast, in some embodiments, the user's touch location and gesture operations on the screen may be captured by the screen itself, for example when the screen is a capacitive, resistive, infrared, or surface acoustic wave touch screen. In this case, the content of the activated one of the three computers can likewise be displayed on the screen by the projector. In some embodiments, the screen itself may serve as a display that accepts input signals, in which case the activated computer's content is input to the display device for display without the need for a projector.
FIG. 2 shows a system for at least two computers to interact with a screen according to some embodiments of the present invention. The interactive system comprises: at least two computers (three computers 201, 202, and 203 are shown in the figure); a screen 100; projectors 200 (three are shown in the figure) for projecting content to be displayed by the computers onto the screen; a sensor 109 for capturing gesture operations on the screen 100 and transmitting the captured signals to the server 301; a multi-screen splicing processor 300; and a server 301. The multi-screen splicing processor 300 is connected to the computers 201, 202, and 203, the server, and the projectors 200; it can receive the content of the computers 201, 202, and 203 and project that content onto the screen 100 through the projectors for display. The server controls the multi-screen splicing processor according to the control signal, so that the screen is displayed as a whole or divided into a plurality of windows for projecting display content. For example, the multi-screen splicing processor 300 can divide the screen 100 into a plurality of windows, such as the windows 101, 102, and 103, so that the window 101 displays the content of the computer 201, the window 102 displays the content of the computer 202, and the window 103 displays the content of the computer 203, or a single window displays the content of all three computers. Since the window adjustment and layout capabilities of multi-screen splicing processors are well known to those skilled in the art, they are not described in detail here.
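The window-to-source mapping that the splicing processor maintains can be sketched as follows. The class and method names are illustrative assumptions; a real splicing processor is a hardware device with its own control protocol.

```python
# Minimal sketch of the state a multi-screen splicing processor manages:
# which window of the screen shows which computer's content.
# Window and computer identifiers mirror the figure but are illustrative.
class SplicingProcessor:
    def __init__(self):
        # Initial layout: screen 100 divided into three windows,
        # each fed by one computer.
        self.layout = {
            "window_101": "computer_201",
            "window_102": "computer_202",
            "window_103": "computer_203",
        }

    def show_full_screen(self, computer):
        """Display a single computer's content across the whole screen."""
        self.layout = {"full_screen": computer}

    def split(self, mapping):
        """Divide the screen into several windows again."""
        self.layout = dict(mapping)
```

Under this model, "displayed as a whole screen" and "divided into a plurality of windows" are just two values of the same layout state, which is what lets the server switch between them with a single control signal.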
In some embodiments of the present invention, the sensor 109 captures a user's operation gesture on the screen 100 and the window in which the operation occurs, and transmits the captured signal to the server 301. The server 301 recognizes the gesture signal through an image recognition algorithm to obtain the gesture operation input by the user and the window in which it is located, and controls the multi-screen splicing processor accordingly. A correspondence table between gesture operations and control signals is pre-stored in the server, which queries the table to generate the corresponding control signal. For example, gesture operation W may indicate that the multi-screen splicing processor is to enter a window layout adjustment mode, gesture operation > indicates enlarging a window, gesture operation < indicates reducing a window, a leftward gesture indicates moving the current window to the left, and a rightward gesture indicates moving the current window to the right.
For example, when a gesture W acts on the window 101 on the screen, the sensor 109 captures the position and motion of the gesture and transmits the captured signal to the server. The server recognizes the signal, determines that the gesture W acted on the window 101, and accordingly controls the multi-screen splicing processor to enter the window layout adjustment mode. The user can then continue to perform gesture operations (>, <, move left, move right, etc.); these are captured by the sensor and sent to the server again, so that the server controls the multi-screen splicing processor to enlarge, reduce, move left, move right, or close the window 101, respectively. When the window 101 is enlarged to occupy the entire screen, only the content of the computer 201 is displayed; when the window 102 is enlarged to occupy the entire screen, only the content of the computer 202 is displayed. Accordingly, the foregoing embodiments of the present invention allow 3, 2, or 1 computer to display content on the screen 100 simultaneously or alternately. In some embodiments, when the user's gesture is performed off the screen, it is not possible to determine which window the gesture operation is located in, and the window to be adjusted may instead be designated by other, slightly more complex gestures. In some embodiments, after the server controls the multi-screen splicing processor to enter the window layout adjustment mode according to the recognition result, the server may also adjust the three windows originally divided by the multi-screen splicing processor together in response to a certain gesture, for example when the gesture input at this time is M.
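The two-phase interaction above (gesture W selects a window and enters adjustment mode; subsequent gestures adjust that window) can be sketched as a small state machine. The gesture tokens and class structure are illustrative assumptions.

```python
# Sketch of the server-side state machine: "W" enters window-layout-
# adjustment mode for the window it was drawn in; later gestures then
# act on that target window. Gesture tokens are illustrative.
class LayoutController:
    COMMANDS = {">": "enlarge", "<": "reduce",
                "left": "move_left", "right": "move_right"}

    def __init__(self):
        self.mode = "idle"
        self.target = None
        self.actions = []  # commands issued to the splicing processor

    def on_gesture(self, gesture, window=None):
        if gesture == "W":
            # Enter adjustment mode, remembering which window was selected.
            self.mode = "adjust"
            self.target = window
        elif self.mode == "adjust" and gesture in self.COMMANDS:
            self.actions.append((self.COMMANDS[gesture], self.target))
        # Gestures arriving outside adjustment mode are ignored.
```

Requiring the W gesture first means a stray > or < drawn on the screen cannot accidentally resize a window, which matches the mode-entry behavior described above.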
In the above embodiment, the operation position and operation gesture of the user on the screen are captured by the sensor, and the gesture may also be performed off the screen. In some embodiments, the screen is a projection screen with an infrared light curtain arranged on it; when a hand acts on the screen, the sensor detects a change in the light curtain, a control signal corresponding to the gesture operation is generated, and the display of the computer content on the screen is controlled according to the control signal. In contrast, in some embodiments, the user's touch location and gesture on the screen may be captured by the screen itself, for example when the screen is a capacitive, resistive, infrared, or surface acoustic wave touch screen. In this case, the server controls the multi-screen splicing processor according to the control signal as described above, so that the screen is displayed as a whole or divided into a plurality of windows, projecting and displaying the content of three of the computers, or of one of them, through one or more projectors.
According to one embodiment of the invention, the screen is a projection screen, the sensor is an infrared sensor positioned in front of or behind the screen, and an infrared light curtain is arranged on the screen. When a gesture acts on the screen, the infrared sensor collects the light curtain signals before and after the gesture acts, and the processor generates the control signal according to the signals collected by the infrared sensor.
According to one embodiment of the invention, the screen is a projection screen, the sensor is an infrared sensor located behind the screen, and an infrared light curtain is arranged on the screen. When a gesture acts on the screen, the infrared sensor collects the infrared light transmitted through the screen as a result of the gesture, and the processor generates the control signal according to the signal collected by the infrared sensor.
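One common way to realize "collects the light before and after the gesture acts" is frame differencing: compare the sensed intensity field before and during the touch and report cells whose change exceeds a threshold. This is a sketch under assumed data formats (a flat list of intensity readings) and an assumed threshold, not the patent's specified algorithm.

```python
# Illustrative frame-differencing sketch for the infrared light-curtain
# sensor: a hand on the screen changes the sensed intensity at its
# location. Frame format and threshold are assumptions for illustration.
def detect_touch(frame_before, frame_after, threshold=50):
    """Return the indices of sensor cells whose intensity changed by more
    than `threshold`, i.e. candidate touch locations on the screen."""
    return [i for i, (a, b) in enumerate(zip(frame_before, frame_after))
            if abs(b - a) > threshold]
```

The returned indices localize the gesture on the screen; tracking how they move across successive frames would then give the gesture's motion (e.g. leftward or rightward movement).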
Here, a projection screen is any screen capable of meeting projection imaging requirements, including screens with a certain transmittance.
In some embodiments, the screen may be composed of a plurality of spliced screens. The system then further comprises a multi-screen splicing processor connected to the server, the computers, and the spliced screens, and the server controls the multi-screen splicing processor according to the control signal so that the screen is displayed as a whole or divided into a plurality of windows for projecting display content.
Methods for at least two computers to interact with a screen according to some embodiments of the present invention are described below with reference to FIG. 2 and FIG. 3. As shown in FIG. 3: in step S1, the user performs a gesture operation on the screen 100; in step S2, the sensor captures the gesture operation and sends the captured signal to the server 301; in step S3, the server 301 performs image processing on the gesture signal, analyzes the screen position of the gesture operation and the corresponding gesture operation, and queries the correspondence table between gesture operations and control signals to generate a corresponding control signal; in step S4, the server controls the multi-screen splicing processor to enter a window layout adjustment mode according to the control signal corresponding to the gesture operation; in step S5, the user adjusts one of the windows, for example the window in which the gesture operation was performed. In some embodiments, after the server controls the multi-screen splicing processor to enter the window layout adjustment mode, the server may also adjust the three windows originally divided by the multi-screen splicing processor together in response to a certain gesture, for example when the gesture input at this time is M, instead of adjusting only the window in which the operation was performed.
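The steps S1–S5 can be sketched end to end as a small pipeline. Recognition (S3) is stubbed out as a pluggable function, and the gesture token, table contents, and function names are illustrative assumptions.

```python
# End-to-end sketch of steps S1-S5: a raw sensor signal is recognized into
# a (gesture, window) pair, the correspondence table yields the control
# signal, and the server would then drive the splicing processor.
# Table contents and names are illustrative assumptions.
CONTROL_TABLE = {"W": "enter_layout_adjustment"}

def process(sensor_signal, recognize):
    # S2/S3: recognize the gesture and the window in which it occurred
    # (`recognize` stands in for the image recognition algorithm).
    gesture, window = recognize(sensor_signal)
    # S3: query the correspondence table for the control signal.
    control = CONTROL_TABLE.get(gesture)
    if control is None:
        return None  # unrecognized gesture: nothing to do
    # S4: the server would now control the splicing processor accordingly.
    return (control, window)
```

Keeping recognition behind a function boundary reflects the architecture in the text: the sensor and recognition algorithm can change (camera, light curtain, touch screen) without touching the table-driven control logic.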
In the embodiments described above with reference to FIGS. 1-3, after the computer content is displayed on the screen, the user may further operate on the displayed content. For example, the user may perform a page-turning operation on the displayed content; the operation is captured by the sensor and transmitted to the server, and the server analyzes the window (or computer) in which the operation is located and the corresponding control signal, and then controls or edits the corresponding content in the corresponding computer.
The invention enables the content of a plurality of computers to be viewed, edited, or controlled through a single screen, thereby solving the technical problem in the prior art that at least two computers require at least two displays, which wastes resources.
The above embodiments are preferred embodiments of the present invention and therefore do not limit its scope. Any equivalent structural or procedural changes made based on the present disclosure without departing from its spirit and scope fall within the scope of the invention as claimed.