Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal and the devices serving the network each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, which include both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
An embodiment of the application provides a multi-modal human-computer interaction test system comprising a control device, a bionic output device, and a bionic input device. The test capability of the device under test is extended through the control device and the associated bionic devices, and interaction under multiple modalities is simulated through the bionic devices, so that a complete multi-modal automated test can be achieved.
Fig. 1 shows the structure of a multi-modal human-computer interaction test system provided by an embodiment of the present application. The test system includes a control device 110, a bionic output device 120, and a bionic input device 130, where the bionic output device and the bionic input device each correspond to a respective modality. In the testing process, the control device 110 is configured to send a corresponding control instruction to the bionic output device according to a first test script, obtain the interaction feedback of the device under test 140, and determine a test result according to the interaction feedback; the bionic output device 120 is configured to output interaction information of the corresponding modality to the device under test according to the control instruction from the control device, so that the device under test generates interaction feedback according to the interaction information; and the bionic input device 130 is configured to collect the interaction feedback of the device under test and provide it to the control device.
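For illustration, the script-driven control flow on the control device may be sketched as follows. This is a minimal sketch, assuming hypothetical bionicOutput and bionicInput driver objects and a step-based script format; none of these names are taken from the application itself:
// A hedged sketch of the control loop: for each step of the first test script,
// send a control instruction to the bionic output device, collect the interaction
// feedback via the bionic input device, and determine the test result.
async function runFirstTestScript(script) {
  for (const step of script.steps) {
    await bionicOutput.send(step.controlInstruction);           // e.g. bionic mouth or bionic hand
    const feedback = await bionicInput.collect(step.timeoutMs); // e.g. bionic eye or bionic ear
    recordResult(step.name, step.expect(feedback));             // compare feedback with expectation
  }
}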
The interaction information is information corresponding to interactive operations in different modalities, for example, specific voice information corresponding to a voice interaction, the clicked object or click position information corresponding to a click operation, hand action information corresponding to a gesture operation, and the like. The interaction feedback is the feedback made by the device under test after acquiring the interaction information, for example, the device under test switching the user interface to a specific interface, playing specific feedback voice, and the like.
Different bionic output devices are used for simulating the operation of the device under test in a corresponding modality, and different bionic input devices are used for acquiring the interaction feedback of the device under test in a corresponding modality. For example, the bionic output device may include a bionic hand 121 and a bionic mouth 122. The bionic hand may simulate the hand actions of a user, so as to simulate the user operating the device under test through various gestures and hand actions in an actual scenario, and the bionic mouth may simulate the voice of the user, so as to simulate the user operating the device under test through sound. In an actual scenario, the bionic hand can be any manipulator simulating the shape of a human hand, with servos (steering engines) as actuators, so that different postures and actions of the human hand can be simulated; the bionic mouth can be any audio device, such as a loudspeaker box, that can simulate the voice of a user by emitting different sounds.
The bionic input device 130 may include a bionic eye 131 and a bionic ear 132. The bionic eye may collect visual interaction feedback generated by the device under test, which may be any visual feedback information, such as the feedback content displayed on the display screen of the device under test after it receives the interaction information and performs corresponding processing, or a feedback action performed by a movable component of the device under test after it receives the interaction information and performs corresponding processing. The bionic ear 132 may collect auditory interaction feedback generated by the device under test, which may be any auditory feedback information, such as a sound output by an audio device of the device under test after it receives the interaction information and performs corresponding processing. In an actual scenario, the bionic eye 131 may employ a video capture device such as a camera to capture relevant images to obtain the visual interaction feedback, and the bionic ear may employ an audio capture device such as a microphone to capture relevant sounds to obtain the auditory interaction feedback.
Fig. 2 shows the structure of another multi-modal human-computer interaction test system in an embodiment of the present application, in which a bionic hand 121 and a bionic mouth 122 are adopted as the bionic output devices, and a bionic eye 131 and a bionic ear 132 are adopted as the bionic input devices. The control device 110 may be any computing device with data processing capability, for example a locally deployed computer or a server deployed on the network side. If a locally deployed computer is used, a user may run the multi-modal human-computer interaction test through a user interface on the computer; if a server deployed on the network side is used, the user may use a client or a browser to interact remotely with the server and thereby run the multi-modal human-computer interaction test. For example, the control device 110 in Fig. 2 may provide a Socket interface and a User Interface (UI) to the outside. The client 150 used by the user may exchange data with the control device 110 through the Socket interface, or the browser 160 may connect to the control device and exchange data with it in a displayed user interface, such as starting a test, editing a test script, obtaining a test result, and viewing a test report.
The first test script is a test script for controlling the bionic output device to output the interaction information of the corresponding modality. Taking a first test script for controlling the bionic mouth as an example, its content may be as follows:
*test_MusicPlay()
{
    yield robotMouth.speak("I want to listen to Liu Dehua's songs");
    Assert(uiDevice.getCurrentPageUri() == "page://music.xxx.cn", "open music failed");
}
the "i want to listen to the song of liu de hua" represents that the interactive information is sent through the artificial mouth connected with the control device, the interactive information is a voice instruction, and the specific content of the interactive information is "i want to listen to the song of liu de hua" so as to control the tested device to play specific music. In an actual scene, the control device may generate an audio file from the text in a TTS manner based on the text "i want to listen to songs of liudeluxe", and then play the audio file through a connected manual mouth to implement voice interaction.
In the test script, whether the device under test generates correct interaction feedback is judged by means of an assertion operation, namely whether the device under test opens the URI (Uniform Resource Identifier) for playing music, page://music.xxx.cn. If the device under test opens this URI, no operation is executed; if it does not, the message "open music failed" is generated. In this way, the interaction feedback corresponding to the current interaction information can be confirmed, and whether the human-computer interaction is abnormal can be judged.
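For illustration, the assertion operation may be sketched as follows; this is a minimal sketch, and reportToControlDevice is an assumed name rather than the application's actual API:
// If the condition holds, no operation is executed; otherwise the failure
// message (e.g. "open music failed") is generated and sent to the control device.
function Assert(condition, failureMessage) {
  if (condition) {
    return;
  }
  reportToControlDevice({ passed: false, message: failureMessage });
}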
The result of the assertion operation may be sent to the control device, so that the control device can determine whether the interaction feedback generated by the device under test is correct. In addition, the interaction feedback of the device under test can also be acquired through a bionic input device such as the bionic eye 131 or the bionic ear 132. Taking the bionic eye as an example, if the device under test performs the correct processing based on the voice interaction information "I want to listen to Liu Dehua's songs", it opens page://music.xxx.cn and enters the corresponding user interface for playback. Therefore, if the device under test enters this user interface, it has generated correct interaction feedback; otherwise, if it does not enter this user interface, it has not generated correct interaction feedback.
Fig. 3 shows an interaction process when testing a voice interaction function of music playing, which includes the following steps:
Step S301, the control device 110 generates a control instruction a1 according to the first test script and sends the control instruction a1 to the bionic mouth 122, so as to emit the voice "I want to listen to Liu Dehua's songs".
Step S302, the bionic mouth 122 sends the voice "I want to listen to Liu Dehua's songs" to the device under test 140 according to the control instruction a1.
Step S303, when the device under test 140 recognizes the voice "I want to listen to Liu Dehua's songs", it processes the interaction information to generate the corresponding interaction feedback. If the whole processing is correct, the interaction feedback is entering the corresponding playback interface a2 and starting to play Liu Dehua's songs. If the processing of the interaction information is incorrect, or the interaction information is not correctly recognized, the corresponding playback interface is not entered.
Step S304, the bionic eye 131 photographs the user interface of the device under test 140 and sends the captured image to the control device for judgment. If the user interface of the device under test entered the playback interface a2 in step S303, the bionic eye 131 can capture the playback interface a2 and send it to the control device for analysis.
Step S305, the control device 110 analyzes the user interface captured by the bionic eye 131 to obtain the test result. If the user interface is found to have entered the playback interface a2 correctly within a preset time, it can be determined that the voice interaction function for music playing is normal.
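The timed check in step S305 may be sketched as follows, assuming a hypothetical matchesInterface() image-comparison helper on the control device:
// Poll the frames captured by the bionic eye until the target interface appears
// or the preset time elapses.
async function waitForPlaybackInterface(bionicEye, targetTemplate, presetTimeMs) {
  const deadline = Date.now() + presetTimeMs;
  while (Date.now() < deadline) {
    const frame = await bionicEye.captureFrame();    // photograph of the user interface
    if (matchesInterface(frame, targetTemplate)) {
      return true;                                   // playback interface a2 reached: test passes
    }
  }
  return false; // preset time elapsed: the voice interaction function is judged abnormal
}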
The control device is further configured to generate a test report according to the test result. When the test report is generated, after the execution of a single test script is completed, the control device may generate a test report for that single test script, or it may continue to control the execution of other test scripts and generate a test report for all test scripts after they have all been executed. In addition, a page related to the test report can be output in combination with other third-party test platforms, so that a user can conveniently view the test result.
In an actual scenario, the interaction information during the human-computer interaction of the device under test can be simulated through the bionic output device, or through an Application Programming Interface (API) encapsulated in the device under test. For example, various APIs for interaction are encapsulated in the device under test through a preset test framework such as UiAutomator, and interactive operations input by a user in the device under test, such as clicking, dragging, sliding, and text input, are simulated by calling these encapsulated APIs. In the human-computer interaction test, the test script used for controlling the device under test to simulate interactive operations is a second test script, while the first test script is the test script used for controlling the bionic output device to output the interaction information of the corresponding modality.
Therefore, in some embodiments of the present application, the control device is further configured to import a second test script into the device under test, so that the device under test runs the second test script and generates the corresponding interaction feedback after simulating the corresponding interactive operation. A user can simulate multi-modal interactive operations both through the various bionic output devices and by directly importing a test script into the device under test to run, obtain the corresponding interaction feedback, and verify the test result, so that a developer can construct multi-modal test scenarios more flexibly and realize the multi-modal human-computer interaction test more conveniently.
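For illustration, a second test script might look as follows, written in the same script style as the first test script above; the click() call and its selector are assumptions rather than an actual API of the application:
*test_ClickPlayButton()
{
    yield uiDevice.click({ text: "Play" });   // simulate a user tap via the encapsulated API
    Assert(uiDevice.getCurrentPageUri() == "page://music.xxx.cn", "play click failed");
}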
If an auxiliary manual test (amt) tool is integrated in the device under test, corresponding auxiliary manual test action instructions can be generated. The control device can therefore obtain the auxiliary manual test action instructions corresponding to the second test script and import them into the device under test through the socket interface, so that the auxiliary manual test tool in the device under test executes the auxiliary manual test action instructions to generate the interaction feedback. Because the auxiliary manual test tool can repeatedly execute the interactive operations a user would input, both the efficiency and the stability of the test can be improved.
In an actual scenario, a test case set may be preconfigured in the control device. The test case set includes test scripts for performing various human-computer interaction tests on the device under test, including the first test script or the second test script. The test scripts in the test case set can be written or imported by users such as developers and testers through the socket interface or the user interface provided by the control device, using a client or a browser. Alternatively, the test scripts may also come from the device under test; in this case, the control device may obtain the first test script or the second test script from the device under test through the socket interface, so that the test scripts used for the human-computer interaction test are managed uniformly on the control device.
Fig. 4 shows the processing flow of a human-computer interaction test according to an embodiment of the present application, where the control device is a PC (Personal Computer). When the human-computer interaction test is performed on the device under test, the interaction process between the PC end and the device-under-test end is as follows:
Step S401, start the test component on the PC end. The test component on the PC end includes a test script management function, so that the test script set can be conveniently configured and the execution of test scripts can be controlled; meanwhile, test results can be determined and a test report can be generated.
Step S402, after the test component on the PC end is started, control of the test scripts can be realized through the socket interface. The control process is as follows: interaction with the device under test can be realized through adb (Android Debug Bridge), for example by sending adb shell commands; a socket port is monitored on the PC end through adb port mapping to receive instruction messages from the device under test, where the messages may be amt instruction character strings. In addition, the PC end can send amt action instructions to the amt tool in the device under test through the socket port. The amt action instructions are used for simulating interactive operations input by a user in the device under test, such as clicking, dragging, sliding, and text input; the amt tool in the device under test executes them, controls the hardware in the device under test to perform the corresponding processing, and returns a response to the PC end.
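The PC-end socket channel may be sketched as follows using the Node.js net module; the port number and the amt instruction string format are assumptions, and in practice the port would first be mapped, for example with adb forward tcp:9008 tcp:9008:
// Connect to the mapped socket port, send a hypothetical amt action instruction,
// and print the response returned by the device under test.
const net = require('net');
const socket = net.createConnection({ port: 9008 }, () => {
  socket.write('amt:click:100,200\n');   // hypothetical amt instruction character string
});
socket.on('data', (data) => {
  console.log('response from the device under test:', data.toString());
});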
Step S403, start control of the bionic devices. Under the control of the PC end, bionic output devices such as the manipulator and the bionic mouth can perform various actions or emit various voices to simulate the interactive operations to be tested, and bionic input devices such as the camera can perceive the interaction feedback from the device under test.
Step S404, generate a test report. After the execution of a single test script is finished, the PC end can continue to control the execution of other test cases in the case set, and after all test scripts have been executed, the PC end can generate a test report.
The control of the bionic devices relies on three core components: an image analysis component, a gesture control component, and a voice control component.
The image analysis component is used for perceiving the interaction feedback on the user interface of the device under test. The PC end can be connected with a camera to photograph changes in the user interface of the device under test. When interaction feedback occurs on the user interface of the device under test, it can be captured as continuous image frames; the time at which the test action is sent is taken as the start time of the interaction feedback, and the time at which the user interface image stops changing is taken as the end time of the interaction feedback, so that the interaction feedback corresponding to the interaction information is obtained.
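The delimiting of the interaction feedback may be sketched as follows; frameDiff() is a hypothetical pixel-difference helper, and the stability test shown is one possible way to detect that the user interface has stopped changing:
// The send time of the test action marks the start of the interaction feedback;
// the first frame that no longer differs from its predecessor marks the end.
async function captureFeedbackWindow(camera, actionTime, stableThreshold) {
  const frames = [];
  let previous = await camera.captureFrame();
  let endTime = actionTime;
  for (;;) {
    const current = await camera.captureFrame();
    frames.push(current);
    if (frameDiff(previous, current) < stableThreshold) {
      break;                       // the user interface image has stopped changing
    }
    endTime = Date.now();
    previous = current;
  }
  return { startTime: actionTime, endTime, frames };
}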
The gesture control component is used for simulating the hand-action interaction of the user. The mechanical hardware of the bionic hand can be manufactured by 3D printing, and the manipulator is driven by connected servos (steering engines) to realize motion control. The PC end generates control instructions according to the test script, controls the servos accordingly, and drives the bionic manipulator to perform actions, forming the gesture operation to be tested.
For example, in the embodiment of the present application, a test script corresponding to a static gesture may be as follows:
yield MultiMode.sendGestureToServer(MultiMode.GestureEvent.GESTURECODE_PALM);
the test script corresponding to the dynamic gesture may be as follows:
yield MultiMode.sendGestureToServer([{thumb:180,index:0,middle:180,ring:180,pinky:150,wrist:180,bicep:60,rotator:40,shoulder:30,omoplate:10,interval:2000},{thumb:180,index:0,middle:180,ring:180,pinky:150,wrist:180,bicep:60,rotator:120,shoulder:30,omoplate:10,interval:0}]);
the gesture made by the manipulator can be recognized by a gesture recognition module at the tested device end, and a gesture event is sent to the system application, so that the event is responded. After the event responds, the state of the tested equipment end can be changed correspondingly, and the test script of the equipment end can judge whether the state change meets the expectation through an image analysis component or UIAutomator and the like.
The voice control component is used for simulating the voice interaction of the user. The voice test script is executed on the device-under-test end or the PC end; if it is executed on the device-under-test end, the voice operation instruction text in the voice test script is sent to the PC end. After receiving the voice operation instruction text, the PC end calls a TTS interface to generate an audio file from the text, the system then automatically processes the audio file to remove invalid data, and the audio file is automatically played through a device such as the bionic mouth connected to the PC end.
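The PC-end voice pipeline may be sketched as follows; tts(), trimInvalidData(), and bionicMouth.play() are illustrative names only:
// Synthesize the voice operation instruction text into an audio file, remove
// invalid data, and play the result through the connected bionic mouth.
async function speakInstruction(instructionText) {
  const audio = await tts(instructionText);   // call the TTS interface
  const trimmed = trimInvalidData(audio);     // remove invalid (e.g. silent) data
  await bionicMouth.play(trimmed);            // play through the bionic mouth
}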
Therefore, the scheme provided by the embodiments of the application can extend the test capability of the device under test through the control device and the associated bionic devices, and the bionic devices are used for simulating interaction under multiple modalities, so that a complete multi-modal automated test can be realized.
In addition, embodiments of the application can also provide a human-computer interaction test system applicable to other scenarios. The system comprises a control device, an output device, and an input device, where the output device and the input device are not limited to the various bionic devices and can be output and input devices of any other form; the human-computer interaction function of the device under test is tested through the interaction among the control device, the output device, the input device, and the device under test.
In the human-computer interaction test system of this embodiment, the control device is configured to send a corresponding control instruction to the output device according to the first test script. The output device is configured to output interaction information to the device under test according to the control instruction from the control device, so that the device under test generates interaction feedback according to the interaction information. The input device is configured to collect the interaction feedback of the device under test and provide it to the control device. After the control device acquires the interaction feedback provided by the input device, it can determine the test result according to the interaction feedback, completing the test of the human-computer interaction function.
Based on the same inventive concept, an embodiment of the application further provides a multi-modal human-computer interaction test method. The system corresponding to the method is the multi-modal human-computer interaction test system in the foregoing embodiments, and its problem-solving principle is similar.
Fig. 5 shows the processing flow of a multi-modal human-computer interaction test method provided by an embodiment of the application, which relies on a test system comprising a control device, a bionic output device, and a bionic input device. The method comprises the following processing steps:
step S501, the control device sends a corresponding control instruction to the bionic output device according to the first test script.
Step S502, the bionic output device outputs interaction information of a corresponding mode to the tested device according to a control instruction from the control device, so that the tested device generates interaction feedback according to the interaction information.
Step S503, the bionic input device collects the interactive feedback of the tested device and provides the interactive feedback to the control device.
Step S504, the control device obtains the interactive feedback of the tested device, and determines the test result according to the interactive feedback.
The interaction information is information corresponding to interactive operations in different modalities, for example, specific voice information corresponding to a voice interaction, the clicked object or click position information corresponding to a click operation, hand action information corresponding to a gesture operation, and the like. The interaction feedback is the feedback made by the device under test after acquiring the interaction information, for example, the device under test switching the user interface to a specific interface, playing specific feedback voice, and the like.
Different bionic output devices are used for simulating the operation of the device under test in a corresponding modality, and different bionic input devices are used for acquiring the interaction feedback of the device under test in a corresponding modality. For example, the bionic output device may include a bionic hand 121 and a bionic mouth 122. The bionic hand may simulate the hand actions of a user, so as to simulate the user operating the device under test through various gestures and hand actions in an actual scenario, and the bionic mouth may simulate the voice of the user, so as to simulate the user operating the device under test through sound. In an actual scenario, the bionic hand can be any manipulator simulating the shape of a human hand, with servos (steering engines) as actuators, so that different postures and actions of the human hand can be simulated; the bionic mouth can be any audio device, such as a loudspeaker box, that can simulate the voice of a user by emitting different sounds.
The bionic input device 130 may include a bionic eye 131 and a bionic ear 132. The bionic eye may collect visual interaction feedback generated by the device under test, which may be any visual feedback information, such as the feedback content displayed on the display screen of the device under test after it receives the interaction information and performs corresponding processing, or a feedback action performed by a movable component of the device under test after it receives the interaction information and performs corresponding processing. The bionic ear 132 may collect auditory interaction feedback generated by the device under test, which may be any auditory feedback information, such as a sound output by an audio device of the device under test after it receives the interaction information and performs corresponding processing. In an actual scenario, the bionic eye 131 may employ a video capture device such as a camera to capture relevant images to obtain the visual interaction feedback, and the bionic ear may employ an audio capture device such as a microphone to capture relevant sounds to obtain the auditory interaction feedback.
Fig. 2 shows the structure of another multi-modal human-computer interaction test system in an embodiment of the present application, in which a bionic hand 121 and a bionic mouth 122 are adopted as the bionic output devices, and a bionic eye 131 and a bionic ear 132 are adopted as the bionic input devices. The control device 110 may be any computing device with data processing capability, for example a locally deployed computer or a server deployed on the network side. If a locally deployed computer is used, a user may run the multi-modal human-computer interaction test through a user interface on the computer; if a server deployed on the network side is used, the user may use a client or a browser to interact remotely with the server and thereby run the multi-modal human-computer interaction test. For example, the control device 110 in Fig. 2 may provide a Socket interface and a User Interface (UI) to the outside. The client 150 used by the user may exchange data with the control device 110 through the Socket interface, or the browser 160 may connect to the control device and exchange data with it in a displayed user interface, such as starting a test, editing a test script, obtaining a test result, and viewing a test report.
The first test script is a test script for controlling the bionic output device to output the interaction information of the corresponding modality. Taking a first test script for controlling the bionic mouth as an example, its content may be as follows:
*test_MusicPlay()
{
    yield robotMouth.speak("I want to listen to Liu Dehua's songs");
    Assert(uiDevice.getCurrentPageUri() == "page://music.xxx.cn", "open music failed");
}
the "i want to listen to the song of liu de hua" represents that the interactive information is sent through the artificial mouth connected with the control device, the interactive information is a voice instruction, and the specific content of the interactive information is "i want to listen to the song of liu de hua" so as to control the tested device to play specific music. In an actual scene, the control device may generate an audio file from the text in a TTS manner based on the text "i want to listen to songs of liudeluxe", and then play the audio file through a connected manual mouth to implement voice interaction.
In the test script, whether the device under test generates correct interaction feedback is judged by means of an assertion operation, namely whether the device under test opens the URI (Uniform Resource Identifier) for playing music, page://music.xxx.cn. If the device under test opens this URI, no operation is executed; if it does not, the message "open music failed" is generated. In this way, the interaction feedback corresponding to the current interaction information can be confirmed, and whether the human-computer interaction is abnormal can be judged.
The result of the assertion operation may be sent to the control device, so that the control device can determine whether the interaction feedback generated by the device under test is correct. In addition, the interaction feedback of the device under test can also be acquired through a bionic input device such as the bionic eye 131 or the bionic ear 132. Taking the bionic eye as an example, if the device under test performs the correct processing based on the voice interaction information "I want to listen to Liu Dehua's songs", it opens page://music.xxx.cn and enters the corresponding user interface for playback. Therefore, if the device under test enters this user interface, it has generated correct interaction feedback; otherwise, if it does not enter this user interface, it has not generated correct interaction feedback.
Fig. 3 shows an interaction process when testing a voice interaction function of music playing, which includes the following steps:
Step S301, the control device 110 generates a control instruction a1 according to the first test script and sends the control instruction a1 to the bionic mouth 122, so as to emit the voice "I want to listen to Liu Dehua's songs".
Step S302, the bionic mouth 122 sends the voice "I want to listen to Liu Dehua's songs" to the device under test 140 according to the control instruction a1.
Step S303, when the device under test 140 recognizes the voice "I want to listen to Liu Dehua's songs", it processes the interaction information to generate the corresponding interaction feedback. If the whole processing is correct, the interaction feedback is entering the corresponding playback interface a2 and starting to play Liu Dehua's songs. If the processing of the interaction information is incorrect, or the interaction information is not correctly recognized, the corresponding playback interface is not entered.
Step S304, the bionic eye 131 photographs the user interface of the device under test 140 and sends the captured image to the control device for judgment. If the user interface of the device under test entered the playback interface a2 in step S303, the bionic eye 131 can capture the playback interface a2 and send it to the control device for analysis.
Step S305, the control device 110 analyzes the user interface captured by the bionic eye 131 to obtain the test result. If the user interface is found to have entered the playback interface a2 correctly within a preset time, it can be determined that the voice interaction function for music playing is normal.
The multi-modal human-computer interaction test method may further generate a test report according to the test result. When the test report is generated, after the execution of a single test script is completed, the control device may generate a test report for that single test script, or it may continue to control the execution of other test scripts and generate a test report for all test scripts after they have all been executed. In addition, a page related to the test report can be output in combination with other third-party test platforms, so that a user can conveniently view the test result.
In an actual scenario, the interaction information during the human-computer interaction of the device under test can be simulated through the bionic output device, or through an Application Programming Interface (API) encapsulated in the device under test. For example, various APIs for interaction are encapsulated in the device under test through a preset test framework such as UiAutomator, and interactive operations input by a user in the device under test, such as clicking, dragging, sliding, and text input, are simulated by calling these encapsulated APIs. In the human-computer interaction test, the test script used for controlling the device under test to simulate interactive operations is a second test script, while the first test script is the test script used for controlling the bionic output device to output the interaction information of the corresponding modality.
Therefore, in some embodiments of the application, the multi-modal human-computer interaction test method may further import a second test script into the device under test, so that the device under test runs the second test script and generates the corresponding interaction feedback after simulating the corresponding interactive operation. A user can simulate multi-modal interactive operations both through the various bionic output devices and by directly importing a test script into the device under test to run, obtain the corresponding interaction feedback, and verify the test result, so that a developer can construct multi-modal test scenarios more flexibly and realize the multi-modal human-computer interaction test more conveniently.
If an auxiliary manual test (amt) tool is integrated in the device under test, corresponding auxiliary manual test action instructions can be generated. The control device can therefore obtain the auxiliary manual test action instructions corresponding to the second test script and import them into the device under test through the socket interface, so that the auxiliary manual test tool in the device under test executes the auxiliary manual test action instructions to generate the interaction feedback. Because the auxiliary manual test tool can repeatedly execute the interactive operations a user would input, both the efficiency and the stability of the test can be improved.
In an actual scenario, a test case set may be preconfigured in the control device. The test case set includes test scripts for performing various human-computer interaction tests on the device under test, including the first test script or the second test script. The test scripts in the test case set can be written or imported by users such as developers and testers through the socket interface or the user interface provided by the control device, using a client or a browser. Alternatively, the test scripts may also come from the device under test; in this case, the control device may obtain the first test script or the second test script from the device under test through the socket interface, so that the test scripts used for the human-computer interaction test are managed uniformly on the control device.
In addition, embodiments of the application can also provide a human-computer interaction test method applicable to other scenarios. The method adopts a system comprising a control device, an output device, and an input device, where the output device and the input device are not limited to the various bionic devices and can be output and input devices of any other form; the human-computer interaction function of the device under test is tested through the interaction among the control device, the output device, the input device, and the device under test.
In the human-computer interaction test method of this embodiment, the control device first sends a corresponding control instruction to the output device according to a first test script. The output device may output interaction information to the device under test according to the control instruction from the control device, so that the device under test generates interaction feedback according to the interaction information. The input device collects the interaction feedback of the device under test and provides it to the control device. After the control device acquires the interaction feedback provided by the input device, it can determine the test result according to the interaction feedback, completing the test of the human-computer interaction function.
In addition, a portion of the present application may be implemented as a computer program product, such as computer program instructions, which, when executed by a computer, may invoke or provide methods and/or aspects according to the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium, and/or transmitted via a data stream on a broadcast or other signal-bearing medium, and/or stored within a working memory of a computer device operating according to the program instructions. Some embodiments according to the present application include a computing device as shown in Fig. 6, which includes one or more memories 610 storing computer-readable instructions and a processor 620 for executing the computer-readable instructions, wherein the computer-readable instructions, when executed by the processor, cause the device to perform the methods and/or aspects based on the embodiments of the present application.
Furthermore, some embodiments of the present application also provide a computer-readable medium on which computer program instructions are stored, the computer program instructions being executable by a processor to implement the methods and/or aspects of the foregoing embodiments of the present application.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In some embodiments, the software programs of the present application may be executed by a processor to implement the above steps or functions. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.