
CN109344055B - Test method and test device - Google Patents

Test method and test device

Info

Publication number
CN109344055B
CN109344055B (application CN201811044818.8A)
Authority
CN
China
Prior art keywords
test
database
tested
test case
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811044818.8A
Other languages
Chinese (zh)
Other versions
CN109344055A (en)
Inventor
丁普升
付铨
冯源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Dream Database Co ltd
Original Assignee
Wuhan Dameng Database Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Dameng Database Co Ltd filed Critical Wuhan Dameng Database Co Ltd
Priority to CN201811044818.8A priority Critical patent/CN109344055B/en
Publication of CN109344055A publication Critical patent/CN109344055A/en
Application granted granted Critical
Publication of CN109344055B publication Critical patent/CN109344055B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Prevention of errors by analysis, debugging or testing of software
    • G06F 11/3668 Testing of software
    • G06F 11/3672 Test management
    • G06F 11/3676 Test management for coverage analysis
    • G06F 11/3684 Test management for test design, e.g. generating new test cases

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract


The invention discloses a test method and a test device. The test method includes: acquiring benchmark test data and a log file of a production database; creating a test case from the benchmark test data and the log file; determining whether the test case is applicable to the database to be tested; and, if it is, testing the database to be tested with the test case and outputting the test result. Because the log file truly reflects the actual usage scenarios of the application service, creating the test case from it improves the coverage and practicability of the test case. The database to be tested is tested with the test case only when the case is applicable to it, which improves test accuracy and reduces misjudgment.


Description

Test method and test device
Technical Field
The invention belongs to the technical field of databases, and particularly relates to a test method and a test device.
Background
A database is an organized collection of data that provides data services, such as queries, records, and updates, to various computer applications. As information technology has developed, computer applications have proliferated, and to keep database performance stable, the performance, functionality, and other aspects of a database must be tested.
To improve the accuracy of database system testing and ensure that a database meets the functional and performance requirements of a specific application scenario, simulating a specific computer application to exercise the database to be tested has become a widely used test method. At present, there are two traditional methods for simulating computer application services to test a database system: (1) build a computer application service and system-test the database service to be tested through that application service; or (2) build a computer application service connected to an agent program, where the agent is connected to both the production database service and the database service to be tested and forwards each database request from the application service to both simultaneously for a comparison test.
Both methods depend on the program set of a computer application service to test the database to be tested. As the functions a database provides grow more numerous and complex, the Structured Query Language (SQL) statement types and data scales involved vary widely, so the scope of database system testing keeps expanding and more and more application program sets are needed. For a database manufacturer, it is often difficult to deploy the program set of an application service required by a third party; moreover, if the program set does not match the database to be tested, the test results become inaccurate, the performance and functions of the database are misjudged, and the authenticity and comprehensiveness of the test results suffer.
In view of the above, overcoming the drawbacks of the prior art is an urgent problem in the art.
Disclosure of Invention
In view of the above drawbacks or needs for improvement in the prior art, the present invention provides a test method and a test apparatus that create a test case from a log file of a production database. Because the log file truly reflects the actual usage scenario of an application service, this improves the coverage and practicability of the test case. The database to be tested is only tested with the test case when the case is applicable to it, which improves test accuracy and reduces misjudgment. This addresses the technical problems that the program set of an application service required by a third party is difficult to deploy, and that a program set mismatched with the database to be tested yields inaccurate test results, misjudged database performance and function, and defective authenticity and comprehensiveness of the test results.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
in a first aspect, the present invention provides a test method, comprising:
acquiring benchmark test data and a log file of a production database;
creating a test case according to the benchmark test data and the log file;
determining whether the test case is applicable to the database to be tested;
and if the test case is suitable for the database to be tested, testing the database to be tested according to the test case, and outputting a test result.
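The four steps above can be sketched as a minimal driver. All class and method names here are invented for illustration and are not part of the patent; the log format assumed (an `SQL:` prefix) is likewise hypothetical.

```java
import java.util.List;

// Hypothetical sketch of the claimed flow: acquire data, build a test case,
// check applicability, then run the test and report.
public class TestFlowSketch {
    record TestCase(List<String> sqlCommands) {}

    // Stubbed stage: a real implementation would parse the production
    // database's log file; here we just keep lines tagged "SQL:".
    static TestCase createTestCase(String benchmarkData, List<String> logLines) {
        return new TestCase(logLines.stream()
                .filter(l -> l.startsWith("SQL:"))
                .map(l -> l.substring(4).trim())
                .toList());
    }

    // Placeholder applicability check; the patent proposes environment
    // similarity or weighted-result comparisons instead.
    static boolean isApplicable(TestCase tc) {
        return !tc.sqlCommands().isEmpty();
    }

    static String runTest(TestCase tc) {
        return "executed " + tc.sqlCommands().size() + " commands";
    }

    public static void main(String[] args) {
        TestCase tc = createTestCase("benchmark.dat",
                List.of("SQL: SELECT 1", "COMMIT"));
        if (isApplicable(tc)) {
            System.out.println(runTest(tc));
        }
    }
}
```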
Preferably, the determining whether the test case is applicable to the database under test includes:
acquiring a reference test environment of the production database and a target test environment of the database to be tested;
carrying out similarity matching on the reference test environment and the target test environment to obtain the similarity between the reference test environment and the target test environment;
determining whether the similarity between the reference test environment and the target test environment is greater than a preset similarity threshold;
and if the similarity between the reference test environment and the target test environment is greater than a preset similarity threshold, the test case is applicable to the database to be tested.
Preferably, the benchmark test environment and the target test environment include one or more of an operating system platform, an operating system version, a data size, a client concurrency size, and a hardware resource.
Preferably, the determining whether the test case is applicable to the database under test includes:
acquiring a test weight between the production database and the database to be tested according to the association relation between the production database and the database to be tested;
acquiring a reference test result of the production database running the test case and an actual test result of the to-be-tested database running the test case;
performing weighting calculation on the actual test result and the test weight to obtain a weighted test result, and acquiring a difference result between the weighted test result and the reference test result;
determining whether the difference result is smaller than a preset difference result threshold;
and if the difference result is smaller than a preset difference result threshold value, the test case is applicable to the database to be tested.
Preferably, the test method further comprises:
if the difference result is larger than a preset difference result threshold value, the test case is not applicable to the database to be tested;
and recreating the test case according to the association relation between the production database and the database to be tested.
Preferably, the creating a test case according to the benchmark test data and the log file includes:
analyzing the log file to obtain an operation command set;
and creating a test case according to the benchmark test data and the operation command set.
Preferably, the creating a test case according to the benchmark test data and the log file further includes:
filtering the operation command set according to a preset processing condition to obtain a first operation command set;
executing the first operation command set, and removing operation commands which fail to be executed to obtain a target operation command set;
creating a test case according to the benchmark test data and the operation command set comprises:
and creating a test case according to the benchmark test data and the target operation command set.
Preferably, the test method further comprises:
re-acquiring the benchmark test data and the log file of the production database according to a preset time interval;
creating a new test case according to the newly acquired benchmark test data and the log file;
determining whether the difference between the new test case and a historical test case is greater than a preset difference threshold, wherein the historical test case is the test case currently used to test the database to be tested;
if the difference between the new test case and the historical test case is larger than a preset difference threshold value, obtaining a difference test case according to the difference between the new test case and the historical test case, and testing the database to be tested according to the new test case and/or the difference test case.
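The "difference test case" described above can be approximated as a set difference between the new and historical operation command sets. The sketch below makes that assumption; the class and method names are invented, not taken from the patent.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: the difference test case keeps only commands that appear in the
// new case but not in the historical one; the difference ratio could then
// be compared against the preset difference threshold.
public class CaseDiff {
    static List<String> differenceCase(List<String> newCase, List<String> historicalCase) {
        Set<String> seen = new HashSet<>(historicalCase);
        List<String> diff = new ArrayList<>();
        for (String sql : newCase) {
            if (!seen.contains(sql)) diff.add(sql);
        }
        return diff;
    }

    // Fraction of new-case commands absent from the historical case.
    static double differenceRatio(List<String> newCase, List<String> historicalCase) {
        if (newCase.isEmpty()) return 0.0;
        return (double) differenceCase(newCase, historicalCase).size() / newCase.size();
    }
}
```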
Preferably, the test method further comprises:
running the test case on the production database according to a preset time interval to obtain a simulation test result;
determining whether the difference between the simulation test result and the expected test result is greater than a preset difference threshold;
if the difference between the simulation test result and the expected test result is larger than a preset difference threshold value, discarding the corresponding test case, and recreating a new test case;
and if the difference between the simulation test result and the expected test result is smaller than the preset difference threshold, proceeding to determine whether the test case is applicable to the database to be tested.
In a second aspect, the present invention provides a test apparatus comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being programmed to perform the test method of the first aspect.
In a third aspect, the present invention also provides a non-transitory computer storage medium having stored thereon computer-executable instructions for execution by one or more processors for performing the method of testing of the first aspect.
Generally, compared with the prior art, the technical scheme of the invention has the following beneficial effects. The test method acquires benchmark test data and a log file of a production database; creates a test case from them; determines whether the test case is applicable to the database to be tested; and, if it is, tests the database to be tested with the case and outputs a test result. On the one hand, because the log file truly reflects the actual usage scenario of the application service, a test case created from it simulates that scenario well, improving the coverage and practicability of the test case. On the other hand, because the production database and the database to be tested may differ, the method checks that the test case is applicable before testing, which improves test accuracy and reduces misjudgment.
Drawings
FIG. 1 is a schematic flow chart of a testing method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of one embodiment of step 12 of FIG. 1;
FIG. 3 is a schematic flow chart of another embodiment of step 12 of FIG. 1;
FIG. 4 is a schematic flow chart of another testing method provided by the embodiment of the invention;
FIG. 5 is a schematic flow chart of another testing method provided by the embodiment of the present invention;
fig. 6 is a schematic structural diagram of a testing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1:
referring to fig. 1, the present embodiment provides a testing method for testing performance, function, and the like of a database to be tested, the testing method includes the following steps:
step 10: and acquiring benchmark test data and a log file of the production database.
In this embodiment, first, benchmark test data and a log file of a production database are acquired. The benchmark test data file can be obtained by copying or backing up the production database, or can be directly derived from the production database.
In an actual application scenario, when the production database runs, a corresponding log file is generated, and the log file is stored on a local or remote storage device. The log file mainly records the operation behavior of the application program on the database, and the log file can be used for knowing which operations and specific operation processes are performed on the database by the application program. For example, the application logs in the database and performs query, modification, and the like on the data of the database based on the business logic, wherein the business logic may be a transaction behavior involving goods query, shopping, payment, order processing, and the like.
Step 11: and creating a test case according to the benchmark test data and the log file.
In this embodiment, the test apparatus generates a test case from the benchmark test data and the log file. The log file mainly records operation logic and related data and contains no control logic, so the test apparatus needs to combine it with the benchmark test data; the resulting test case is in effect an operation script for testing the database to be tested. Specifically, the log file is analyzed to obtain an operation command set: executable Structured Query Language (SQL) statements are parsed out according to the structure of the log file, and the execution parameters are associated with the corresponding SQL statements to form the operation command set. A test case is then created from the benchmark test data and the operation command set.
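As a rough illustration of this parsing step, one might assume a log format in which each statement appears after an `SQL:` tag and its bound values after a `PARAMS:` tag. Both tags are assumptions for this sketch; the real log structure of a production database is vendor-specific.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of turning log lines into an operation command set: an "SQL:" line
// gives a statement with '?' placeholders, and a following "PARAMS:" line
// supplies the values bound into those placeholders.
public class LogToCommands {
    static List<String> toCommandSet(List<String> logLines) {
        List<String> commands = new ArrayList<>();
        String pending = null;
        for (String line : logLines) {
            if (line.startsWith("SQL:")) {
                if (pending != null) commands.add(pending); // previous statement had no params
                pending = line.substring(4).trim();
            } else if (line.startsWith("PARAMS:") && pending != null) {
                for (String p : line.substring(7).trim().split(",")) {
                    pending = pending.replaceFirst("\\?", p.trim());
                }
                commands.add(pending);
                pending = null;
            }
        }
        if (pending != null) commands.add(pending);
        return commands;
    }
}
```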
In an actual application scenario, the log file may record repeated operations or operations that do not meet the test conditions. To improve test efficiency, the test case may therefore be created after the operation command set acquired from the log file has been filtered according to preset conditions.
In a preferred embodiment, the operation command set is filtered according to a preset processing condition to obtain a first operation command set, the first operation command set is executed, and operation commands which fail to be executed are removed to obtain a target operation command set. The preset processing conditions comprise deduplication processing and operation object screening processing.
In an actual application scenario, the operation command set is first subjected to deduplication processing, and the deduplication processing may be performed on the operation command set according to the following function.
import java.util.ArrayList;
import java.util.HashMap;
/**
 * @author Administrator
 * Description: removes duplicate and similar sql to reduce the number of test sql.
 * (SqlLog is a logging helper class defined elsewhere in the source.)
 */
public class ItemFilter {
    HashMap<Integer, ArrayList<String>> mapFilter = new HashMap<Integer, ArrayList<String>>();
    SqlLog log = null;   // records exact duplicates that were dropped
    SqlLog log2 = null;  // records similar statements that were dropped
    public void setDelLog(String filename) {
        if (log != null) {
            log.Close();
            log2.Close();
        }
        log = new SqlLog(filename);
        log2 = new SqlLog(filename + ".2.log");
    }
    public void finish() {
        if (log != null) {
            log.Close();
            log2.Close();
        }
    }
After the operation command set has been deduplicated, it is filtered using filters. Each filter contains an SQL feature (e.g., a specific SQL statement type, or SQL that operates on a specific object) and a processing operation (e.g., delete, or keep one or more of a set of identical or similar statements); the first operation command set is obtained by repeatedly applying the filters to the operation command set. In a practical application scenario, the filtering process may be performed as follows.
    /**
     * Determines whether to retain the given sql statement.
     * @param sql the statement to check
     * @return boolean - true if the statement should be kept
     */
    public boolean isKeep(String sql) {
        boolean flag = true;
        if (sql == null || sql.trim().length() == 0) {
            return false;
        }
        if (isDeleteSql(sql)) { // sql is marked for deletion, so it is not retained
            log.logMessageNotPrint(sql + "\r\n");
            return false;
        }
        int len = sql.trim().split("\\s+").length; // split the sql string on whitespace
        if (!mapFilter.containsKey(new Integer(len))) { // if no bucket exists for this length, keep the sql
            ArrayList<String> tmp = new ArrayList<String>();
            tmp.add(sql);
            mapFilter.put(new Integer(len), tmp);
            return true;
        }
        ArrayList<String> values = mapFilter.get(new Integer(len));
        Iterator<String> it = values.iterator();
        while (it.hasNext()) {
            String sqltmp = it.next();
            if (sql.trim().equalsIgnoreCase(sqltmp)) { // identical sql already seen, so it is not retained
                log.logMessageNotPrint(sql + "\r\n");
                flag = false;
                break;
            }
            // if a similar sql is found, it is not retained; the dropped sql is recorded in the temporary file
            if (isSimilarSql(sql, sqltmp)) {
                log2.logMessageNotPrint(sql + "\r\n");
                flag = false;
                break;
            }
        }
        if (flag) { // sql needs to be kept
            values.add(sql);
            mapFilter.put(new Integer(len), values);
        }
        return flag;
    }
In this embodiment, after the first operation command set is obtained, it is also executed in simulation, and the target operation command set is obtained by removing the operation commands that fail to execute. A test case is then created from the benchmark test data and the target operation command set.
In this embodiment, the test case is created based on the log file, and is an operation script for the database. The test case can simulate the operation behavior of the application program on the database, which is equivalent to the playback of the operation behavior of the application program on the database, so that the database can be truly and comprehensively tested on the premise of not depending on the application program.
Step 12: determining whether the test case is applicable to the database to be tested.
In an actual application scenario, because a difference may exist between the production database and the database to be tested, a test case created from the reference test data and log file of the production database may not match the database to be tested, so the test result would not reflect the actual performance of that database. This embodiment therefore determines whether the test case is applicable to the database to be tested and only tests that database with the case when it is, which improves test accuracy and reduces misjudgment.
Whether the test case is applicable to the database to be tested can be determined along different dimensions. In an alternative embodiment, the determination is made from the reference test environment of the production database and the target test environment of the database to be tested. Referring to fig. 2, step 12 (determining whether the test case is applicable to the database to be tested) specifically includes the following steps:
step 1211: and acquiring a reference test environment of the production database and a target test environment of the database to be tested.
In this embodiment, a reference test environment of the production database and a target test environment of the database to be tested are obtained. The reference test environment is the actual working environment of the production database at the time the benchmark test data and the corresponding log file were obtained; the target test environment is the working environment in which the database to be tested is tested. Each environment comprises one or more of an operating system platform, an operating system version, a data scale, a client concurrency scale, and hardware resources.
Step 1212: and performing similarity matching on the reference test environment and the target test environment to acquire the similarity between the reference test environment and the target test environment.
In this embodiment, similarity matching is performed between the benchmark test environment and the target test environment, so as to obtain a similarity between the benchmark test environment and the target test environment. In order to obtain more accurate similarity, corresponding weights may be configured according to the type of the test environment, for example, the operating system platform and the operating system version are configured as a first similarity weight value, and the client concurrency scale and the hardware resources of the database are configured as a second similarity weight value.
For example, the operating system platform and version of the production database are compared with those of the database to be tested, and a first similarity is determined from how alike they are. The client concurrency scales and hardware resources of the two databases are compared next, yielding a second similarity. The first similarity is multiplied by the first similarity weight, the second similarity by the second similarity weight, and the two weighted values are summed to give the similarity between the production database and the database to be tested.
The first similarity weight and the second similarity weight are determined according to the actual situation. For example, if the operating system platforms and versions of the production database and the database to be tested are the same, the first similarity is 1; if their client concurrency scales and hardware resources differ, the second similarity may be determined to be 0.9. Because the operating system platform and version have a greater influence on the test result, the preset first similarity weight is 60% and the second similarity weight is 40%, so the similarity between the production database and the database to be tested is 1 × 60% + 0.9 × 40% = 0.96.
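The worked example (1 × 60% + 0.9 × 40% = 0.96) is a plain weighted sum. A small sketch with the weights from the text follows; the class and method names are invented for illustration.

```java
// Weighted environment similarity from the example: OS platform/version
// similarity weighted 60%, concurrency/hardware similarity weighted 40%.
public class EnvSimilarity {
    static double weightedSimilarity(double osSimilarity, double resourceSimilarity,
                                     double osWeight, double resourceWeight) {
        return osSimilarity * osWeight + resourceSimilarity * resourceWeight;
    }

    // The test case is applicable when the similarity exceeds the threshold.
    static boolean applicable(double similarity, double threshold) {
        return similarity > threshold;
    }

    public static void main(String[] args) {
        double sim = weightedSimilarity(1.0, 0.9, 0.6, 0.4); // example from the text
        System.out.println(sim + " applicable: " + applicable(sim, 0.9));
    }
}
```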
Step 1213: determining whether the similarity between the reference test environment and the target test environment is greater than a preset similarity threshold.
The preset similarity threshold is designed according to actual conditions, and may be 80% or 90%, and is not specifically limited herein.
In this embodiment, it is determined whether the similarity between the reference test environment and the target test environment is greater than the preset similarity threshold. For example, if the preset similarity threshold is 90% and the similarity between the production database and the database to be tested is 0.96, the similarity exceeds the threshold. If the similarity is greater than the preset similarity threshold, step 1214 is executed; if it is smaller, step 1215 is executed.
Step 1214: and if the similarity between the reference test environment and the target test environment is greater than a preset similarity threshold, the test case is applicable to the database to be tested.
Step 1215: and if the similarity between the reference test environment and the target test environment is smaller than a preset similarity threshold, the test case is not suitable for the database to be tested, and the test case is created again.
In another alternative embodiment, whether the test case is applicable to the database to be tested may be determined from the association relation between the production database and the database to be tested. Referring to fig. 3, step 12 (determining whether the test case is applicable to the database to be tested) specifically includes the following steps:
step 1221: and acquiring the test weight between the production database and the database to be tested according to the incidence relation between the production database and the database to be tested.
In this embodiment, the test weight between the production database and the database to be tested is obtained according to the association relationship between the production database and the database to be tested. The association relationship may be determined by the data size or the operation configuration.
Step 1222: and acquiring a reference test result of the production database running the test case and an actual test result of the to-be-tested database running the test case.
The reference test result can be a reference duration for the production database to run the test case; correspondingly, the actual test result is the actual duration for the database to be tested to run the test case. Alternatively, other indexes may serve as the test result.
Step 1223: and performing weighting calculation on the actual test result and the test weight to obtain a weighted test result, and acquiring a difference result between the weighted test result and the reference test result.
Step 1224: and judging and determining whether the difference result is smaller than a preset difference result threshold value.
In this embodiment, it is determined whether the difference result is smaller than a preset difference result threshold, and if the difference result is smaller than the preset difference result threshold, step 1225 is executed; if the difference result is greater than the preset difference result threshold, go to step 1226.
For example, for the same operation commands, when the configuration of the production database and that of the database to be tested are basically consistent and the data scale of the database to be tested is A times that of the production database, the test weight between them is 1/A. If the reference duration for the production database to execute the test case is T1 and the actual duration for the database to be tested is T2, the weighted duration is the actual duration multiplied by the test weight, T2 × 1/A. The difference between the weighted test result and the reference test result is T2 × 1/A − T1.
For example, when the data scale of the database to be tested is 2 times that of the production database, the test weight between the two databases is 1/2. With a reference duration of 4 s for the production database to execute the test case and an actual duration of 9 s for the database to be tested, the weighted duration is the actual duration multiplied by the test weight, 9 s × 1/2 = 4.5 s. The difference between the weighted test result and the reference test result is 4.5 s − 4 s = 0.5 s.
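The arithmetic of this example (weighted duration 9 s × 1/2 = 4.5 s, difference 0.5 s) can be spelled out directly; the names below are invented for illustration.

```java
// Difference between the weighted actual duration and the reference
// duration: diff = actual * (1 / scaleFactor) - reference, where the
// scale factor is the data-scale ratio between the two databases.
public class WeightedResult {
    static double differenceSeconds(double actualSeconds, double scaleFactor,
                                    double referenceSeconds) {
        return actualSeconds / scaleFactor - referenceSeconds;
    }

    public static void main(String[] args) {
        double diff = differenceSeconds(9.0, 2.0, 4.0); // example from the text
        System.out.println(diff + " s; applicable if below the preset threshold (e.g. 1 s)");
    }
}
```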
The difference result threshold depends on the actual situation; suppose, for example, that it is 1s. Since 0.5s is smaller than 1s, the test case is applicable to the database to be tested.
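The arithmetic of steps 1223 to 1226 can be sketched as follows. The function name and the bare-number durations are illustrative assumptions; the simple multiplicative weighting is taken from the worked example above.

```python
# Weighted-comparison check from steps 1223-1226 (illustrative sketch).
def is_case_applicable(actual_duration, baseline_duration,
                       test_weight, diff_threshold):
    """True when the weighted result of the database under test is
    close enough to the production (baseline) result."""
    weighted = actual_duration * test_weight    # e.g. 9s * 1/2 = 4.5s
    difference = weighted - baseline_duration   # e.g. 4.5s - 4s = 0.5s
    return difference < diff_threshold          # e.g. 0.5s < 1s -> applicable

print(is_case_applicable(9.0, 4.0, 0.5, 1.0))  # True, matching the example
```

With the example values above the weighted duration is 4.5s, the difference 0.5s, and the case is judged applicable because 0.5s is below the 1s threshold.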
Step 1225: and if the difference result is smaller than a preset difference result threshold value, the test case is applicable to the database to be tested.
Step 1226: If the difference result is greater than the preset difference result threshold, the test case is not applicable to the database to be tested, and the test case is re-created.
In this embodiment, if the difference result is greater than the preset difference result threshold, the test case is not applicable to the to-be-tested database, and the test case is re-created according to the association relationship between the production database and the to-be-tested database.
In yet another alternative embodiment, the similarity between the reference test environment of the production database and the target test environment of the database to be tested is first obtained by comparing the two environments. When this similarity reaches a preset similarity threshold, the test weight between the production database and the database to be tested is obtained according to the association relationship between them, and the similarity and the test weight are combined to obtain a target test weight. Whether the test case is applicable to the database to be tested is then determined in the manner of steps 1223 to 1226, with the value corresponding to the target test weight taking the place of the value corresponding to the test weight.
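The patent does not give the formula for combining the similarity with the test weight, so the sketch below assumes a simple multiplicative combination; the function name, the default threshold, and the gating behaviour are likewise assumptions for illustration.

```python
# Hypothetical "target test weight" computation for the alternative embodiment.
def target_test_weight(similarity, test_weight, similarity_threshold=0.8):
    """Combine environment similarity with the data-scale test weight,
    but only once the environments are similar enough to compare at all."""
    if similarity < similarity_threshold:
        return None  # environments too different; skip the weighted check
    return similarity * test_weight

print(target_test_weight(0.9, 0.5))  # combined weight when similar enough
print(target_test_weight(0.5, 0.5))  # None: below the similarity threshold
```

The returned value would then be used in place of the plain test weight in steps 1223 to 1226.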
Step 13: and if the test case is suitable for the database to be tested, testing the database to be tested according to the test case, and outputting a test result.
In this embodiment, if the test case is applicable to the database to be tested, the database to be tested is tested according to the test case, and a test result is output.
Example 2:
Different from embodiment 1, in order to further ensure the accuracy of the test case, in this embodiment the test case is periodically replayed on the production database for a simulation test after it is created. The simulation test result is evaluated against the expected test result to determine whether the test case can truly simulate the operation behavior of the user. If the simulation test result differs greatly from the expected test result, the test case cannot truly simulate the operation behavior of the user and is re-created. Please refer to fig. 4 for the specific steps of the testing method of this embodiment.
Step 20: and acquiring benchmark test data and a log file of the production database.
In this embodiment, first, benchmark test data and a log file of the production database are acquired. The benchmark test data can be obtained by copying or backing up the production database, or can be exported directly from the production database.
Step 20 of this embodiment is the same as step 10 of embodiment 1, and the detailed process refers to step 10 and the related text.
Step 21: and creating a test case according to the benchmark test data and the log file.
In this embodiment, the test apparatus generates a test case from the benchmark test data and the log file. The log file mainly records operation logic and related data and contains no control logic, so the test apparatus needs to generate the test case from both the benchmark test data and the log file; the test case is essentially an operation script for testing the database to be tested.
Step 21 of this embodiment is the same as step 11 of embodiment 1; for the detailed process, refer to step 11 and the related text.
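Steps 20 and 21 amount to replaying the log as an operation script. A minimal sketch, assuming a line-oriented SQL log and ad-hoc filtering rules (a real implementation would parse the database's actual log format):

```python
# Illustrative sketch: turn raw log lines into an operation command set.
def build_test_case(log_lines, skip_prefixes=("--", "SET ")):
    """Keep only the operation commands; the log carries no control
    logic, so comments and session-setup noise are dropped."""
    commands = []
    for line in log_lines:
        line = line.strip()
        if not line or line.startswith(skip_prefixes):
            continue
        commands.append(line)
    return commands

log = [
    "-- session 42 connected",
    "SET autocommit = 1;",
    "INSERT INTO orders VALUES (1, 'open');",
    "SELECT * FROM orders WHERE id = 1;",
]
print(build_test_case(log))  # only the two data-operation commands survive
```

The surviving command list, paired with the benchmark test data, plays the role of the test case described above.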
Step 22: and running the test case on the production database according to a preset time interval to obtain a simulation test result.
Considering that application updates or other factors can change the database operation command set (and therefore the database log file), the test case changes accordingly, and the log needs to be re-analyzed to generate the test case. In this embodiment, the test case is run on the production database at preset time intervals to obtain a simulation test result. The preset time interval can be determined according to the actual situation.
Step 23: Determine whether the difference between the simulation test result and the expected test result is greater than a preset difference threshold.
Wherein the expected test result can be determined according to the actual result output by the current production database.
In this embodiment, the latest output result of the production database when executing the operation matched with the test case is acquired and configured as the expected test result. It is then determined whether the difference between the simulation test result and the expected test result is greater than a preset difference threshold. If the difference is greater than the preset difference threshold, step 24 is executed; if the difference is smaller than the preset difference threshold, step 25 is executed.
The difference threshold may be determined according to the actual situation and is not specifically limited here.
Step 24: If the difference between the simulation test result and the expected test result is greater than a preset difference threshold, discard the corresponding test case and re-create a new test case.
In this embodiment, a new test case is created again according to steps 20 and 21.
Step 25: and if the difference between the simulation test result and the expected test result is smaller than a preset difference threshold value, judging and determining whether the test case is suitable for the database to be tested.
In this embodiment, if the difference between the simulation test result and the expected test result is smaller than a preset difference threshold, it is determined whether the test case is applicable to the to-be-tested database.
For the process of determining whether the test case is applicable to the database to be tested, please refer to fig. 1, fig. 2, fig. 3 and the related text description in embodiment 1; it is not specifically limited here.
In an actual application scenario, an application update changes the database operation command set and therefore the test case. If the update does not change the data structure, the database to be tested can still output the expected result when tested with the historical test case, and a new test case does not need to be created. If the update changes the data structure, an exception may occur when the database to be tested is tested with the historical test case, indicating that a new test case needs to be created.
Step 26: and if the test case is suitable for the database to be tested, testing the database to be tested according to the test case, and outputting a test result.
In this embodiment, if the test case is applicable to the database to be tested, the database to be tested is tested according to the test case, and a test result is output.
In another embodiment, the benchmark test data and the log file of the production database are obtained again according to a preset time interval, and a new test case is created according to the obtained benchmark test data and the log file. And judging whether the test case needs to be replaced or not based on the difference between the new test case and the historical test case. Please refer to fig. 5 for a detailed method flow.
Step 30: and re-acquiring the benchmark test data and the log file of the production database according to a preset time interval.
Considering that application updates or other factors can change the database operation command set (and therefore the database log file), the test case changes accordingly, and the log needs to be re-analyzed to generate the test case. In this embodiment, the benchmark test data and the log file of the production database are re-acquired at preset time intervals to create a new test case. The preset time interval can be determined according to the actual situation.
Step 31: and creating a new test case according to the reference test data and the log file which are obtained again.
Step 32: and judging and determining whether the difference between the new test case and a historical test case is greater than a preset difference threshold value, wherein the historical test case is the test case for testing the database to be tested at present.
In this embodiment, it is determined whether the difference between the new test case and the historical test case is greater than a preset difference threshold. If so, the test case needs to be replaced and step 33 is executed; if not, the test case does not need to be replaced and step 34 is executed. The preset difference threshold is determined according to the actual situation.
Step 33: if the difference between the new test case and the historical test case is larger than a preset difference threshold value, obtaining a difference test case according to the difference between the new test case and the historical test case, and testing the database to be tested according to the new test case and/or the difference test case.
In this embodiment, if the difference between the new test case and the historical test case is greater than the preset difference threshold, it indicates that the historical test case cannot truly and comprehensively test the database to be tested, and the test case needs to be replaced.
For example, when the test result has been output according to the historical test case, in order to improve the testing efficiency, the database to be tested may be tested only according to the differential test case, and the test result corresponding to the differential test case is analyzed with emphasis. Optionally, in order to ensure the accuracy of the test, the database to be tested may also be directly retested with a new test case. Or, the database to be tested may be tested by using the differential test case and the new test case, and the test result corresponding to the new test case and the differential test result corresponding to the differential test case may be output. The manner of replacing the test cases may be determined according to actual situations, and is not particularly limited herein.
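Steps 32 and 33 can be sketched as a set comparison between the two cases. Representing a test case as a list of command strings and measuring drift as the share of changed commands are assumptions for illustration; the patent leaves the difference metric open.

```python
# Illustrative sketch of steps 32-33: detect drift and extract the
# differential test case from the new and historical command sets.
def diff_test_case(new_case, historical_case, diff_threshold=0.2):
    """Return the commands present only in the new case when the drift
    ratio exceeds the threshold, else None (no replacement needed)."""
    new_set, old_set = set(new_case), set(historical_case)
    changed = new_set ^ old_set                      # symmetric difference
    drift = len(changed) / max(len(new_set | old_set), 1)
    if drift > diff_threshold:
        return sorted(new_set - old_set)             # the differential case
    return None

old = ["SELECT 1;", "INSERT INTO t VALUES (1);"]
new = ["SELECT 1;", "INSERT INTO t VALUES (1);", "UPDATE t SET a = 2;"]
print(diff_test_case(new, old))  # the single command added by the update
```

Here the `UPDATE` command is the differential test case; per step 33, the database to be tested could then be re-tested with only that command, with the full new case, or with both.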
Step 34: If the difference between the new test case and the historical test case is smaller than the preset difference threshold, the test case does not need to be replaced.
It should be noted that the method for creating the test case is the same as that of steps 10 and 11 in embodiment 1 and steps 20 and 21 above, and is not described here again. After the new test case or the differential test case is obtained, whether it is applicable to the database to be tested may be determined according to step 12 and step 13 in embodiment 1, which is likewise not repeated here.
Different from the prior art, the test method comprises the steps of obtaining benchmark test data and log files of a production database; creating a test case according to the benchmark test data and the log file; judging and determining whether the test case is suitable for the database to be tested; and if the test case is suitable for the database to be tested, testing the database to be tested according to the test case, and outputting a test result. According to the test method, on one hand, the test case is created based on the log file of the production database, and the log file can truly reflect the actual use scene of the application service, so that the test case created based on the log file can well simulate the actual use scene of the application service, and the coverage rate and the practicability of the test case are improved. On the other hand, because the production database and the database to be tested may have differences, the method judges whether the test case is suitable for the database to be tested, and tests the database to be tested based on the test case when the test case is suitable for the database to be tested, so that the test accuracy is improved, and the occurrence of misjudgment is reduced.
Example 3:
referring to fig. 6, fig. 6 is a schematic structural diagram of a testing apparatus according to an embodiment of the present invention. The test apparatus of the present embodiment includes one or more processors 61 and a memory 62. In fig. 6, one processor 61 is taken as an example.
The processor 61 and the memory 62 may be connected by a bus or other means, such as the bus connection in fig. 6.
The memory 62, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions corresponding to the test method in embodiment 1. By running the non-volatile software programs, instructions and modules stored in the memory 62, the processor 61 executes the various functional applications and data processing of the test apparatus, thereby implementing the test method of embodiment 1 or embodiment 2.
The memory 62 may include, among other things, high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 62 may optionally include memory located remotely from the processor 61, and these remote memories may be connected to the processor 61 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
For the testing method, please refer to fig. 1 to fig. 6 and the related text description, which are not repeated herein.
It should be noted that, for the information interaction, execution process and other contents between the modules and units in the apparatus and system, the specific contents may refer to the description in the embodiment of the method of the present invention because the same concept is used as the embodiment of the processing method of the present invention, and are not described herein again.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the embodiments may be implemented by associated hardware as instructed by a program, which may be stored on a computer-readable storage medium, which may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. A test method, characterized in that the test method comprises: acquiring benchmark test data and a log file of a production database; creating a test case according to the benchmark test data and the log file; determining whether the test case is applicable to a database to be tested; and if the test case is applicable to the database to be tested, testing the database to be tested according to the test case, and outputting a test result; wherein the determining whether the test case is applicable to the database to be tested comprises: acquiring a benchmark test environment of the production database and a target test environment of the database to be tested; performing similarity matching between the benchmark test environment and the target test environment to acquire the similarity between the benchmark test environment and the target test environment; determining whether the similarity between the benchmark test environment and the target test environment is greater than a preset similarity threshold; and if the similarity between the benchmark test environment and the target test environment is greater than the preset similarity threshold, the test case is applicable to the database to be tested.

2. The test method according to claim 1, characterized in that the benchmark test environment and the target test environment comprise one or more of an operating system platform, an operating system version, a data scale, a client concurrency scale and hardware resources.

3. The test method according to claim 1, characterized in that the determining whether the test case is applicable to the database to be tested comprises: acquiring a test weight between the production database and the database to be tested according to an association relationship between the production database and the database to be tested; acquiring a benchmark test result of the production database running the test case and an actual test result of the database to be tested running the test case; performing a weighted calculation on the actual test result and the test weight to obtain a weighted test result, and acquiring a difference result between the weighted test result and the benchmark test result; determining whether the difference result is smaller than a preset difference result threshold; and if the difference result is smaller than the preset difference result threshold, the test case is applicable to the database to be tested.

4. The test method according to claim 3, characterized in that the test method further comprises: if the difference result is greater than the preset difference result threshold, the test case is not applicable to the database to be tested; and re-creating the test case according to the association relationship between the production database and the database to be tested.

5. The test method according to claim 1, characterized in that the creating a test case according to the benchmark test data and the log file comprises: parsing the log file to acquire an operation command set; and creating the test case according to the benchmark test data and the operation command set.

6. The test method according to claim 5, characterized in that the creating a test case according to the benchmark test data and the log file further comprises: filtering the operation command set according to preset processing conditions to obtain a first operation command set; and executing the first operation command set and removing operation commands that fail to execute, to obtain a target operation command set; wherein the creating the test case according to the benchmark test data and the operation command set comprises: creating the test case according to the benchmark test data and the target operation command set.

7. The test method according to claim 1, characterized in that the test method further comprises: re-acquiring the benchmark test data and the log file of the production database at a preset time interval; creating a new test case according to the re-acquired benchmark test data and log file; determining whether the difference between the new test case and a historical test case is greater than a preset difference threshold, wherein the historical test case is the test case currently used to test the database to be tested; and if the difference between the new test case and the historical test case is greater than the preset difference threshold, acquiring a differential test case according to the difference between the new test case and the historical test case, and testing the database to be tested according to the new test case and/or the differential test case.

8. The test method according to claim 1, characterized in that the test method further comprises: running the test case on the production database at a preset time interval to obtain a simulation test result; determining whether the difference between the simulation test result and an expected test result is greater than a preset difference threshold; if the difference between the simulation test result and the expected test result is greater than the preset difference threshold, discarding the corresponding test case and re-creating a new test case; and if the difference between the simulation test result and the expected test result is smaller than the preset difference threshold, determining whether the test case is applicable to the database to be tested.

9. A test apparatus, characterized in that the test apparatus comprises at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being configured by a program to execute the test method according to any one of claims 1 to 8.
CN201811044818.8A 2018-09-07 2018-09-07 Test method and test device Active CN109344055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811044818.8A CN109344055B (en) 2018-09-07 2018-09-07 Test method and test device


Publications (2)

Publication Number Publication Date
CN109344055A CN109344055A (en) 2019-02-15
CN109344055B true CN109344055B (en) 2020-05-19

Family

ID=65304976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811044818.8A Active CN109344055B (en) 2018-09-07 2018-09-07 Test method and test device

Country Status (1)

Country Link
CN (1) CN109344055B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110262961A (en) * 2019-05-21 2019-09-20 深圳壹账通智能科技有限公司 Test method, device, storage medium and the terminal device of Workflow Management System
CN110908906B (en) * 2019-11-18 2023-03-28 中国民航信息网络股份有限公司 Regression testing method and system
CN111579962A (en) * 2020-05-07 2020-08-25 济南浪潮高新科技投资发展有限公司 Autonomous controllability detection system and detection method for measurement and control equipment
CN111767222A (en) * 2020-06-28 2020-10-13 杭州数梦工场科技有限公司 Data model verification method, device, electronic device, storage medium
CN112559316A (en) * 2020-09-03 2021-03-26 中国银联股份有限公司 Software testing method and device, computer storage medium and server
CN112788640B (en) * 2021-03-04 2022-08-05 惠州Tcl移动通信有限公司 Communication equipment testing method and device, storage medium and terminal
CN113778835A (en) * 2021-11-11 2021-12-10 广州粤芯半导体技术有限公司 Pressure testing method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992017853A2 (en) * 1991-04-05 1992-10-15 Pattern Recognition, L.P. Direct data base analysis, forecasting and diagnosis method
CN102831052A (en) * 2011-06-16 2012-12-19 中国银联股份有限公司 Automatic generating device and method for test case
CN104461863A (en) * 2014-10-29 2015-03-25 中国建设银行股份有限公司 Service system testing method, device and system
CN104951399A (en) * 2015-06-19 2015-09-30 北京齐尔布莱特科技有限公司 Software test system and method
CN107908549A (en) * 2017-10-24 2018-04-13 北京小米移动软件有限公司 Method for generating test case, device and computer-readable recording medium


Also Published As

Publication number Publication date
CN109344055A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN109344055B (en) Test method and test device
CN109344056B (en) Test method and test device
US10554771B2 (en) Parallelized replay of captured database workload
US11829360B2 (en) Database workload capture and replay
CN109460349B (en) Test case generation method and device based on log
US20200192900A1 (en) Order-independent multi-record hash generation and data filtering
D’Ambros et al. Evaluating defect prediction approaches: a benchmark and an extensive comparison
KR102356771B1 (en) Data-driven testing framework
JP2017525039A (en) System information management
CN105808438B (en) A kind of Reuse of Test Cases method based on function call path
WO2022227454A1 (en) Automated testing system and method for continuous integration, and electronic device and storage medium
CN112148614B (en) Regression testing method and device
CN111221721A (en) A method and device for automatic recording and execution of unit test cases
CN116010452A (en) Industrial data processing system and method based on stream type calculation engine and medium
Helal et al. Online correlation for unlabeled process events: A flexible CEP-based approach
CN111258876B (en) A precise regression testing method and device under microservice architecture
CN111831545A (en) Test case generation method, generation device, computer equipment and storage medium
CN111552648A (en) Automatic verification method and system for application
CN117573492A (en) An application performance detection method and device in a database migration scenario
CN106681910B (en) A kind of Intrusion Detection based on host code analysis generates the method and device of test cases
CN113553320B (en) Data quality monitoring method and device
CN115437953A (en) Test data generation method and device
CN112416727B (en) Batch processing job verification method, device, equipment and medium
Maplesden et al. Performance analysis using subsuming methods: An industrial case study
CN113868283A (en) Data testing method, device, equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 430000 High-tech Avenue 999, Donghu New Technology Development Zone, Wuhan City, Hubei Province

Patentee after: Wuhan dream database Co.,Ltd.

Address before: 430000 High-tech Avenue 999, Donghu New Technology Development Zone, Wuhan City, Hubei Province

Patentee before: WUHAN DAMENG DATABASE Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20220914

Address after: 430073 16-19 / F, building C3, future science and technology building, 999 Gaoxin Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province

Patentee after: Wuhan dream database Co.,Ltd.

Patentee after: HUAZHONG University OF SCIENCE AND TECHNOLOGY

Address before: 430000 16-19 / F, building C3, future technology building, 999 Gaoxin Avenue, Donghu New Technology Development Zone, Wuhan, Hubei Province

Patentee before: Wuhan dream database Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230802

Address after: 16-19/F, Building C3, Future Science and Technology Building, No. 999 Gaoxin Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430206

Patentee after: Wuhan dream database Co.,Ltd.

Address before: 430073 16-19 / F, building C3, future science and technology building, 999 Gaoxin Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province

Patentee before: Wuhan dream database Co.,Ltd.

Patentee before: HUAZHONG University OF SCIENCE AND TECHNOLOGY
