Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the disclosed embodiments, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. In the following description of the technology, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawing.
The terms "first," "second," and the like in the description and in the claims, and the above-described drawings of embodiments of the present disclosure, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the present disclosure described herein may be made. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions.
The term "plurality" means two or more unless otherwise specified.
In the embodiments of the present disclosure, the character "/" indicates an "or" relationship between the objects it connects. For example, A/B represents A or B.
The term "and/or" describes an association between objects and indicates that three relationships may exist. For example, A and/or B represents: A alone, B alone, or both A and B.
With reference to Fig. 1, an embodiment of the present disclosure provides a method for scene control, including:
S101: setting a device response action corresponding to a scene;
S102: when the scene is triggered, controlling the device corresponding to the scene to execute the device response action.
By adopting this method for scene control, the device response action corresponding to a scene can be set, and the devices under the scene can be triggered to execute the corresponding actions according to user requirements. This achieves the effect of customizing a scene template, allows the user to use the template scene more flexibly, and improves the user experience.
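To make steps S101 and S102 concrete, the following is a minimal sketch of a scene that holds device response actions and executes them when triggered. The Scene and DeviceAction names, and the use of callables standing in for real device commands, are illustrative assumptions rather than part of the claimed method.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class DeviceAction:
    device_id: str
    command: str                                # e.g. "power_on"
    execute: Callable[[], None] = lambda: None  # stand-in for real device control


@dataclass
class Scene:
    name: str
    actions: List[DeviceAction] = field(default_factory=list)

    def set_action(self, action: DeviceAction) -> None:
        """S101: set a device response action corresponding to the scene."""
        self.actions.append(action)

    def trigger(self) -> None:
        """S102: when the scene is triggered, execute each device response action."""
        for action in self.actions:
            action.execute()


# Usage: a "home" scene that turns on the living-room air conditioner when triggered.
home = Scene("home")
home.set_action(DeviceAction("living_room_ac", "power_on",
                             execute=lambda: print("air conditioner on")))
home.trigger()
```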
Optionally, setting a device response action corresponding to the scene includes: modifying, adding, or deleting the device response action corresponding to the scene, and updating the device response action corresponding to the scene in the scene library. In this way, the device response actions can be updated more conveniently.
Optionally, when there are two or more device response actions corresponding to the scene, the method further comprises sorting the device response actions; when the scene is triggered, the devices corresponding to the scene are controlled to execute the device response actions in the set order. For example, the device response actions corresponding to the scene are turning on the air conditioner and turning on the air purifier in the living room; when the scene is triggered, the air conditioner is set to be turned on first and then the air purifier.
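As a small illustration of this ordered execution (not part of the claims), the sketch below pairs each action with an order value and runs the actions sorted by that value, so the air conditioner is switched on before the air purifier; the pairing scheme and the print stand-ins are assumptions.

```python
# Actions for the scene, each paired with its set order; order 1 runs before order 2.
actions_with_order = [
    (2, lambda: print("air purifier on")),
    (1, lambda: print("air conditioner on")),
]

# When the scene is triggered, execute the device response actions in the set
# order: the air conditioner is turned on first, then the air purifier.
for _, run in sorted(actions_with_order, key=lambda pair: pair[0]):
    run()
```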
Optionally, the device response action is set on a floating (pop-up) layer.
Optionally, the method for scene control further comprises: setting a trigger condition of the scene; when the trigger condition is met, the scene is triggered.
Optionally, the trigger condition comprises one or more of: a set time, a set environment, entering a geofence, and exiting a geofence. For example, at 19:00, the air conditioner corresponding to the living room in the home scene is triggered to execute its device response action, such as turning on.
Optionally, the geofence is obtained by: acquiring location information; and setting the area within a set distance of the location indicated by the location information as the geofence. Optionally, the set distance is 100 meters. For example, the location information is the location of a home, and the geofence is the area within 100 meters of the home. When the user comes within 100 meters of the home, the air conditioner corresponding to the living room of the home scene is triggered to start; if the air purifier corresponding to the living room of the home scene is also triggered to start, the air conditioner is started first and then the air purifier.
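The following sketch illustrates one way such a geofence could be evaluated, under the assumption that the fence is a 100-meter circle around the acquired home location and that entry is detected with a great-circle (haversine) distance check; the coordinates and helper names are hypothetical.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000
FENCE_RADIUS_M = 100  # the "set distance" of 100 meters


def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two latitude/longitude points, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))


def inside_geofence(user: tuple, home: tuple) -> bool:
    """True if the user's position lies within the 100 m geofence around home."""
    return haversine_m(*user, *home) <= FENCE_RADIUS_M


home = (31.2304, 121.4737)   # acquired location information (illustrative coordinates)
user = (31.2308, 121.4740)   # current position of the user's phone (illustrative)
if inside_geofence(user, home):
    print("enter geofence: trigger air conditioner, then air purifier")
```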
Optionally, obtaining the location information includes: obtaining the location information by searching for an address; or performing a fuzzy search with keywords and selecting the location information from the matched addresses; or acquiring GPS positioning information and obtaining the location information from it; or obtaining the location information through map interaction. In this way, the location information can be obtained more conveniently and accurately.
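For illustration only, the sketch below models these alternative ways of obtaining location information as a chain of providers tried in order; the provider functions are placeholders and do not correspond to any real geocoding API.

```python
from typing import Callable, List, Optional, Tuple

Location = Tuple[float, float]  # (latitude, longitude)


def by_address_search(query: str) -> Optional[Location]:
    return None                     # placeholder: exact address lookup


def by_keyword_fuzzy(query: str) -> Optional[Location]:
    return None                     # placeholder: pick from fuzzy-matched addresses


def by_gps(_query: str) -> Optional[Location]:
    return (31.2304, 121.4737)      # placeholder: last known GPS fix


def by_map_tap(_query: str) -> Optional[Location]:
    return None                     # placeholder: point chosen on the map


def obtain_location(query: str) -> Optional[Location]:
    providers: List[Callable[[str], Optional[Location]]] = [
        by_address_search, by_keyword_fuzzy, by_gps, by_map_tap]
    for provider in providers:
        location = provider(query)
        if location is not None:
            return location
    return None


print(obtain_location("my home"))   # -> (31.2304, 121.4737) from the GPS placeholder
```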
In practical application, the flow for executing a manual template includes: entering the template scene detail page in the scene store; entering the template scene editing page after the scene is successfully enabled; selecting "editing mode" or "delete condition" with a long press, where the flow for deleting a condition or adding a new condition is consistent with the template operation; and, if the manual execution condition is deleted, saving the change. A new condition, that is, a new scene trigger condition, is then added: tapping "add new condition" brings up a floating layer, the new condition is added through the floating layer and a trigger condition is selected, the template scene editing page is entered and saved, and tapping "add new action" adds a new action, namely a device response action corresponding to the scene, through the floating layer.
Optionally, after the scene is successfully enabled: once the template of the manual scene is enabled, the "add condition" button for the manual condition is grayed out and cannot be clicked, and other conditions cannot be added; the logic of "if multiple devices meet the condition, one of them can be selected" is retained, but if the user deletes the condition, only one device can be added according to the custom condition-adding function; and when there are multiple devices, the user taps the action text to adjust the parameters.
Optionally, adding a new condition comprises a classification into manual and automatic scenes, and further comprises adding a set time, a set environment, entering a geofence, exiting a geofence, a device, an external environment, and the like.
Optionally, adding a new action comprises: the device response actions corresponding to the scene can be added, deleted, and reordered, and delay, message, and device functions can be added during the addition; when a device response action is added, the logic is consistent with custom scene creation, and device actions, delays, and messages can be bound; the logic of "if several devices match the device response action, more than one device can be selected" is retained, but if the user deletes the action, a device can only be added according to the custom device-adding function and device.
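As an illustration of the three kinds of action mentioned above (device function, delay, and message), the following sketch models them as a small tagged union and executes a list of them in order; the class names and the print/sleep stand-ins for real device control and notifications are assumptions.

```python
import time
from dataclasses import dataclass
from typing import List, Union


@dataclass
class DeviceFunction:
    device_id: str
    command: str


@dataclass
class Delay:
    seconds: float


@dataclass
class Message:
    text: str


Action = Union[DeviceFunction, Delay, Message]


def run(actions: List[Action]) -> None:
    for action in actions:
        if isinstance(action, DeviceFunction):
            print(f"{action.device_id}: {action.command}")   # stand-in for device control
        elif isinstance(action, Delay):
            time.sleep(action.seconds)                        # wait before the next action
        else:
            print(f"notify: {action.text}")                   # stand-in for a message push


# Turn on the air conditioner, wait briefly, then send a message.
run([DeviceFunction("living_room_ac", "power_on"),
     Delay(0.1),
     Message("home scene executed")])
```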
The custom flow for manual execution comprises the following steps: tapping on the home page to enter custom scene editing for a manually triggered scene, tapping "add new action" to bring up a floating layer, and adding the device response action corresponding to the scene on the floating layer.
In some embodiments, after the template of the manual scene is enabled, the "add condition" button for the manual condition is hidden so that other conditions cannot be added.
The condition in a manual scene can be deleted, and after deletion a button for "add new condition" is displayed; tapping "add new condition" pops up a scene condition selection box at the bottom of the page, which is classified into manual and automatic scenes and allows adding a set time, a set environment, entering a geofence, exiting a geofence, a device, an external environment, and the like. A scene whose condition is blank adds a condition for the first time.
In an automatic scene, the "add condition" and "add action" buttons can be displayed directly for the conditions (that is, the trigger conditions) and the actions (that is, the device response actions corresponding to the scene). Conditions support repeated combination across categories: timing, external environment, device triggering, and geofencing functions can be combined arbitrarily. If a function conflicts with an existing condition, it is grayed out, and tapping it prompts that it is affected by the already selected function and cannot be selected.
Where multiple (two or more) geofence conditions are added, the condition logic must be "any condition is satisfied". If the initial logic is "all conditions are satisfied": when a second geofence, or a timing or sunrise/sunset condition, is to be added, it is displayed grayed out with a small lock, and tapping the lock gives the unified prompt that, while a geofence exists under the "all conditions" logic, a second geofence cannot be added and timing or sunrise/sunset conditions cannot be added. If the initial logic is "any condition is satisfied": a second or further geofence can be added, but the logic of the scene can then no longer be changed; the logic can only be modified after one geofence is deleted.
If the user changes the condition of an automatic scene to manual, the effective time is deleted and hidden. The effective time is only displayed when the scene has an automatic condition.
At most two sunrise/sunset conditions can be added, that is, only one sunrise and one sunset in a scene, and when two such conditions exist the condition logic must be "any condition is satisfied". If the initial logic is "all conditions are satisfied": when a second sunrise/sunset, or a timing or geofence condition, is to be added, it is displayed grayed out with a small lock, and tapping the lock gives the unified prompt that, while a sunrise/sunset condition exists under the "all conditions" logic, a second sunrise/sunset cannot be added and timing or geofence conditions cannot be added. If the initial logic is "any condition is satisfied": a second sunrise/sunset can be added, but the logic of the scene can then no longer be changed; the logic can only be modified after one sunrise/sunset condition is deleted.
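The mutual-exclusion rules in the two preceding paragraphs can be summarized, purely as an illustrative sketch, by a validation function: under "all conditions" logic an existing geofence or sunrise/sunset condition blocks adding a duplicate (and the conflicting timing condition), while under "any condition" logic the addition is allowed but the logic becomes locked until the duplicate is deleted. The enum and function names are hypothetical.

```python
from enum import Enum
from typing import List


class Logic(Enum):
    ALL = "all conditions are satisfied"
    ANY = "any condition is satisfied"


# Under ALL logic, an existing geofence blocks adding another geofence, a timing
# condition, or a sunrise/sunset condition; an existing sunrise/sunset condition
# blocks the symmetric set.
BLOCKED_BY = {
    "geofence": {"geofence", "timing", "sunrise_sunset"},
    "sunrise_sunset": {"sunrise_sunset", "timing", "geofence"},
}


def can_add(condition_type: str, existing: List[str], logic: Logic) -> bool:
    """True if a condition of condition_type may be added to the scene."""
    if logic is Logic.ALL:
        for present, blocked in BLOCKED_BY.items():
            if present in existing and condition_type in blocked:
                return False        # shown grayed out with a small lock in the UI
    return True


def can_change_logic(existing: List[str]) -> bool:
    """Under ANY logic, the scene logic stays locked while two geofences or two
    sunrise/sunset conditions remain; delete one to modify the logic again."""
    return existing.count("geofence") < 2 and existing.count("sunrise_sunset") < 2


print(can_add("geofence", ["geofence"], Logic.ALL))    # False: blocked, lock shown
print(can_add("geofence", ["geofence"], Logic.ANY))    # True: allowed under "any"
print(can_change_logic(["geofence", "geofence"]))      # False: logic cannot change
```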
A scene whose condition includes a geofence takes effect only for the scene creator. Family members can only see the scene and cannot tap into its configuration; when they tap into it or switch it on, a toast message prompt pops up indicating that the scene can only be edited by its creator and cannot be modified. When editing is attempted with a long press, another person's personal scene is grayed out, and tapping it prompts that the scene can only be edited by its creator and cannot be modified, deleted, or renamed.
For the same external condition, two or more identical "above" or "below" conditions cannot be created. For example, for outdoor humidity: if an "above" condition has already been created, then when a second condition is created the "above" option is hidden and only "below" can be selected; if both directions of the condition are already configured, for example the temperature is already configured as both above and below selected values, the condition is displayed grayed out with a small lock, and tapping the lock gives the unified prompt.
After the scene is enabled, actions with no configured device are hidden. What is shown includes: actions whose device was configured when the user enabled the scene; and actions displayed as to be configured, that is, actions for which the user has a supporting device. If a device action is hidden, the delay action before it is also hidden and the delay is not executed, but message actions are not hidden.
If the user deletes an action, the action is permanently no longer displayed; take the action of turning on the air conditioner as an example. If the user has no air conditioner to begin with, the action is not displayed after the scene is enabled. If an action exists in the original template but its device has been shared away, the action is not hidden: the action entry still keeps the device's category icon, but the display changes to "device moved out". Due to the design of the engine, after the user enters and saves a second time, if there is still no available device, the action is hidden the next time after saving; if the device is shared back, it cannot be configured automatically and the user can only be prompted that it is to be configured. If the device's picture cannot be obtained after custom sharing, the picture changes to a missing-device picture, the text changes to "device moved out of the scene", and the action is hidden the next time the page is entered. If all devices under a certain action or condition in a custom scene have been moved out, the user is prompted, when tapping save, to delete the condition or action that has no device, and the scene is then saved. If the user later binds an air conditioner, the action is displayed as to be configured; if a displayed action is deleted by the user, it is not displayed later, whether or not the device exists.
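A hedged sketch of this visibility rule follows: device actions without an available device are hidden, the delay immediately before such a hidden device action is hidden with it, message actions stay visible, and actions deleted by the user never reappear. The ActionRecord fields are illustrative assumptions, not the actual engine data model.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ActionRecord:
    kind: str                 # "device", "delay", or "message"
    has_device: bool = True   # a device supporting the action is available
    deleted_by_user: bool = False


def visible_actions(actions: List[ActionRecord]) -> List[ActionRecord]:
    shown: List[ActionRecord] = []
    for i, action in enumerate(actions):
        if action.deleted_by_user:
            continue                              # deleted actions never reappear
        if action.kind == "device" and not action.has_device:
            continue                              # hide device actions without a device
        if action.kind == "delay":
            nxt = actions[i + 1] if i + 1 < len(actions) else None
            if nxt and nxt.kind == "device" and not nxt.has_device:
                continue                          # hide the delay before a hidden device action
        shown.append(action)                      # messages are never hidden by this rule
    return shown


result = visible_actions([ActionRecord("delay"),
                          ActionRecord("device", has_device=False),
                          ActionRecord("message")])
print([a.kind for a in result])                   # ['message']
```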
The default ordering of hidden or newly added actions in a scene is as follows: when the scene template is enabled, template actions for which devices exist are arranged in the template order; actions newly added by the user are arranged in the order they were added; actions that the user has but that are still to be configured are placed at the bottom, in the scene-template order, and once configuration is finished the detail page is updated the next time it is entered. For example, with template actions 1 and 2 initially configured, custom actions A and B added, and actions 3, 4, and 5 waiting to be configured, re-entering the detail page after configuring action 4 gives the order 1, 2, A, B, 4, 3, 5.
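This default ordering can be sketched as follows, under the assumption that each action carries its template position (if any) and the sequence number at which it was configured or added; this is one possible reading of the rule, reproduced against the 1, 2, A, B, 4, 3, 5 example, and the field names are illustrative.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SceneAction:
    name: str
    template_index: Optional[int]   # position in the template, None for custom actions
    configured_at: Optional[int]    # sequence number when configured/added, None if pending


def default_order(actions: List[SceneAction]) -> List[str]:
    configured = [a for a in actions if a.configured_at is not None]
    pending = [a for a in actions if a.configured_at is None]
    configured.sort(key=lambda a: a.configured_at)    # by configuration/addition time
    pending.sort(key=lambda a: a.template_index)      # still-to-configure: template order, at the bottom
    return [a.name for a in configured + pending]


# Template actions 1 and 2 configured at enable time; custom A and B added later;
# template actions 3, 4, 5 initially waiting to be configured, then 4 is configured.
actions = [SceneAction("1", 0, 0), SceneAction("2", 1, 1),
           SceneAction("A", None, 2), SceneAction("B", None, 3),
           SceneAction("3", 2, None), SceneAction("4", 3, 4),
           SceneAction("5", 4, None)]
print(default_order(actions))    # ['1', '2', 'A', 'B', '4', '3', '5']
```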
Therefore, through the free combination of automatic scene conditions, the customization of scene templates, and the mutual exclusion relationships among conditions, the user can use the template scene more flexibly.
As shown in Fig. 2, an apparatus for scene control according to an embodiment of the present disclosure includes a processor (processor) 100 and a memory (memory) 101 storing program instructions. Optionally, the apparatus may also include a communication interface (Communication Interface) 102 and a bus 103. The processor 100, the communication interface 102, and the memory 101 may communicate with each other via the bus 103. The communication interface 102 may be used for information transfer. The processor 100 may call the program instructions in the memory 101 to perform the method for scene control of the above-described embodiments.
Further, the program instructions in the memory 101 may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.
The memory 101, which is a computer-readable storage medium, may be used for storing software programs, computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 100 executes functional applications and data processing, i.e., implements the method for scene control in the above-described embodiments, by executing program instructions/modules stored in the memory 101.
The memory 101 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. In addition, the memory 101 may include a high-speed random access memory, and may also include a nonvolatile memory.
By adopting the apparatus for scene control provided by the embodiments of the present disclosure, the device response action corresponding to a scene can be set, and the devices under the scene can be triggered to execute the corresponding actions according to user requirements. This achieves the effect of customizing a scene template, allows the user to use the template scene more flexibly, and improves the user experience.
An embodiment of the present disclosure provides a mobile phone comprising the above apparatus for scene control. By setting the device response action corresponding to a scene, the mobile phone can trigger the devices under the scene to execute the corresponding actions according to user requirements, thereby achieving the effect of customizing a scene template, allowing the user to use the template scene more flexibly, and improving the user experience.
Embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-described method for scene control.
Embodiments of the present disclosure provide a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the above-described method for scene control.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, where the computer software product is stored in a storage medium and includes one or more instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code, and may also be a transitory storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed items. Furthermore, the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other like elements in a process, method, or apparatus that comprises the element. In this document, each embodiment may be described with emphasis on its differences from other embodiments, and the same and similar parts of the respective embodiments may be referred to for one another. For the methods, products, and the like disclosed in the embodiments, where they correspond to the method sections disclosed herein, reference may be made to the description of the method sections for the relevant parts.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It can be clearly understood by the skilled person that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be merely a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.