Test method and test device
Technical Field
The invention belongs to the technical field of databases, and particularly relates to a test method and a test device.
Background
A database is an organized collection of data that provides data services, such as queries, records, updates, etc., for various computer applications. With the development of information technology, the number of computer applications keeps growing, and in order to ensure stable database performance, the performance, functions and other aspects of a database need to be tested.
In order to improve the test accuracy of a database system and ensure that the database meets the functional and performance requirements of a specific application scenario, simulating a specific computer application to test the database to be tested has become a widely used test method. At present, there are two traditional ways of simulating computer application services to test a database system: (1) building a computer application service, and performing a system test on the database service to be tested through that application service; (2) building a computer application service, connecting the application service to an agent program, connecting the agent program to both the production database service and the database service to be tested, and using the agent program to send the database requests of the application service to the production database service and the database service to be tested simultaneously for a comparison test.
In the implementation of these methods, the database to be tested has to be tested by relying on a program set of computer application services. However, the functions provided by the database to be tested are becoming more numerous and more complex, and the data scale and the types of Structured Query Language (SQL) statements vary from application to application, so the test scope of the database system keeps growing, as does the number of application-service program sets that are required. For a database manufacturer, it is often difficult to deploy the program set of an application service required by a third party; moreover, if the program set of the application service does not match the database to be tested, the test result becomes inaccurate, leading to misjudgment of the performance and functions of the database to be tested, so that the authenticity and comprehensiveness of the test result are impaired.
In view of the above, overcoming the drawbacks of the prior art is an urgent problem in the art.
Disclosure of Invention
In view of the above drawbacks or needs for improvement in the prior art, the present invention provides a test method and a test apparatus, which aim to create a test case based on a log file of a production database; because the log file truly reflects the actual usage scenario of an application service, the coverage and practicability of the test case are improved. When the test case is applicable to the database to be tested, the database to be tested is tested based on the test case, which improves the test accuracy and reduces misjudgment. This solves the technical problems that the program set of the application service required by a third party is difficult to deploy and that, if the program set of the application service does not match the database to be tested, the test result is inaccurate, causing misjudgment of the performance and functions of the database to be tested and impairing the authenticity and comprehensiveness of the test result.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
in a first aspect, the present invention provides a test method, comprising:
acquiring benchmark test data and a log file of a production database;
creating a test case according to the benchmark test data and the log file;
judging and determining whether the test case is suitable for the database to be tested;
and if the test case is suitable for the database to be tested, testing the database to be tested according to the test case, and outputting a test result.
Preferably, the determining whether the test case is applicable to the database under test includes:
acquiring a reference test environment of the production database and a target test environment of the database to be tested;
carrying out similarity matching on the reference test environment and the target test environment to obtain the similarity between the reference test environment and the target test environment;
judging and determining whether the similarity between the reference test environment and the target test environment is greater than a preset similarity threshold;
and if the similarity between the reference test environment and the target test environment is greater than a preset similarity threshold, the test case is applicable to the database to be tested.
Preferably, the benchmark test environment and the target test environment include one or more of an operating system platform, an operating system version, a data size, a client concurrency size, and a hardware resource.
Preferably, the determining whether the test case is applicable to the database under test includes:
acquiring the test weight between the production database and the database to be tested according to the association relation between the production database and the database to be tested;
acquiring a reference test result of the production database running the test case and an actual test result of the to-be-tested database running the test case;
performing weighting calculation on the actual test result and the test weight to obtain a weighted test result, and acquiring a difference result between the weighted test result and the reference test result;
judging and determining whether the difference result is smaller than a preset difference result threshold value;
and if the difference result is smaller than a preset difference result threshold value, the test case is applicable to the database to be tested.
Preferably, the test method further comprises:
if the difference result is larger than a preset difference result threshold value, the test case is not applicable to the database to be tested;
and recreating the test case according to the association relation between the production database and the database to be tested.
Preferably, the creating a test case according to the benchmark test data and the log file includes:
analyzing the log file to obtain an operation command set;
and creating a test case according to the benchmark test data and the operation command set.
Preferably, the creating a test case according to the benchmark test data and the log file further includes:
filtering the operation command set according to a preset processing condition to obtain a first operation command set;
executing the first operation command set, and removing operation commands which fail to be executed to obtain a target operation command set;
creating a test case according to the benchmark test data and the operation command set comprises:
and creating a test case according to the benchmark test data and the target operation command set.
Preferably, the test method further comprises:
re-acquiring the benchmark test data and the log file of the production database according to a preset time interval;
creating a new test case according to the newly acquired benchmark test data and the log file;
judging and determining whether the difference between the new test case and a historical test case is greater than a preset difference threshold value, wherein the historical test case is a test case for testing the database to be tested at present;
if the difference between the new test case and the historical test case is larger than a preset difference threshold value, obtaining a difference test case according to the difference between the new test case and the historical test case, and testing the database to be tested according to the new test case and/or the difference test case.
Preferably, the test method further comprises:
running the test case on the production database according to a preset time interval to obtain a simulation test result;
judging and determining whether the difference between the simulation test result and the expected test result is greater than a preset difference threshold value;
if the difference between the simulation test result and the expected test result is larger than a preset difference threshold value, discarding the corresponding test case, and recreating a new test case;
and if the difference between the simulation test result and the expected test result is smaller than a preset difference threshold value, executing judgment to determine whether the test case is suitable for the database to be tested.
In a second aspect, the present invention provides a test apparatus comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions, when executed by the at least one processor, causing the test apparatus to perform the test method of the first aspect.
In a third aspect, the present invention also provides a non-transitory computer storage medium having stored thereon computer-executable instructions for execution by one or more processors for performing the method of testing of the first aspect.
Generally, compared with the prior art, the technical scheme of the invention has the following beneficial effects: the testing method comprises the steps of obtaining benchmark test data and a log file of a production database; creating a test case according to the benchmark test data and the log file; judging and determining whether the test case is suitable for the database to be tested; and if the test case is suitable for the database to be tested, testing the database to be tested according to the test case, and outputting a test result. According to the test method, on one hand, the test case is created based on the log file of the production database, and the log file can truly reflect the actual use scene of the application service, so that the test case created based on the log file can well simulate the actual use scene of the application service, and the coverage rate and the practicability of the test case are improved. On the other hand, because the production database and the database to be tested may have differences, the method judges whether the test case is suitable for the database to be tested, and tests the database to be tested based on the test case when the test case is suitable for the database to be tested, so that the test accuracy is improved, and the occurrence of misjudgment is reduced.
Drawings
FIG. 1 is a schematic flow chart of a testing method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of one embodiment of step 12 of FIG. 1;
FIG. 3 is a schematic flow chart of another embodiment of step 12 of FIG. 1;
FIG. 4 is a schematic flow chart of another testing method provided by the embodiment of the invention;
FIG. 5 is a schematic flow chart of another testing method provided by the embodiment of the present invention;
fig. 6 is a schematic structural diagram of a testing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1:
referring to fig. 1, the present embodiment provides a testing method for testing performance, function, and the like of a database to be tested, the testing method includes the following steps:
step 10: and acquiring benchmark test data and a log file of the production database.
In this embodiment, first, benchmark test data and a log file of a production database are acquired. The benchmark test data file can be obtained by copying or backing up the production database, or can be directly derived from the production database.
In an actual application scenario, when the production database runs, a corresponding log file is generated, and the log file is stored on a local or remote storage device. The log file mainly records the operation behavior of the application program on the database, and the log file can be used for knowing which operations and specific operation processes are performed on the database by the application program. For example, the application logs in the database and performs query, modification, and the like on the data of the database based on the business logic, wherein the business logic may be a transaction behavior involving goods query, shopping, payment, order processing, and the like.
Step 11: and creating a test case according to the benchmark test data and the log file.
In this embodiment, the test apparatus generates a test case from the benchmark test data and the log file. The log file mainly records operation logic and related data but contains no control logic, so the test apparatus needs to generate a test case from the benchmark test data and the log file; the test case is in effect an operation script for testing the database to be tested. Specifically, the log file is parsed to obtain an operation command set: executable Structured Query Language (SQL) statements are extracted according to the structure of the log file, and the execution parameters are associated with the corresponding SQL statements to form the operation command set. A test case is then created from the benchmark test data and the operation command set.
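By way of illustration, this parsing step might look as follows. The log line layout, the LogParser class and its parse method are assumptions made only for this sketch; the actual log format of the production database is not prescribed by the method.
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hypothetical parser: turns one log line such as
 *  "2021-06-01 10:00:00 [session 12] EXEC: SELECT * FROM orders WHERE id = ? | params: 1001"
 *  into an executable SQL statement with its parameters bound. */
public class LogParser {
    // assumed log layout: timestamp, session, "EXEC:", sql text, optional "| params:" list
    private static final Pattern LINE =
            Pattern.compile("EXEC:\\s*(.+?)(\\|\\s*params:\\s*(.+))?$");

    public List<String> parse(List<String> logLines) {
        List<String> operationCommands = new ArrayList<>();
        for (String line : logLines) {
            Matcher m = LINE.matcher(line);
            if (!m.find()) {
                continue; // not an execution record, skip it
            }
            String sql = m.group(1).trim();
            String params = m.group(3);
            if (params != null) {
                // naive positional substitution of '?' placeholders, for illustration only
                for (String p : params.split(",")) {
                    sql = sql.replaceFirst("\\?", p.trim());
                }
            }
            operationCommands.add(sql);
        }
        return operationCommands;
    }
}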
In an actual application scenario, the log file may record repeated operations or operations that do not meet the test conditions. To improve test efficiency, the test case may therefore be created after the operation command set obtained from the log file has been filtered according to a preset condition.
In a preferred embodiment, the operation command set is filtered according to a preset processing condition to obtain a first operation command set; the first operation command set is then executed, and the operation commands that fail to execute are removed to obtain a target operation command set. The preset processing condition includes deduplication processing and operation-object screening processing.
In an actual application scenario, the operation command set is first subjected to deduplication processing, and the deduplication processing may be performed on the operation command set according to the following function.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;

/**
 * @author Administrator
 * Description: removes duplicate and similar sql to reduce the number of test sql statements.
 */
public class ItemFilter {
    // sql statements grouped by their token count, used to look up possible duplicates
    HashMap<Integer, ArrayList<String>> mapFilter = new HashMap<Integer, ArrayList<String>>();
    SqlLog log = null;   // log of removed duplicate sql (SqlLog is a helper class of the test apparatus)
    SqlLog log2 = null;  // log of removed similar sql

    public void setDelLog(String filename) {
        if (log != null) {
            log.Close();
            log2.Close();
        }
        log = new SqlLog(filename);
        log2 = new SqlLog(filename + ".2.log");
    }

    public void finish() {
        if (log != null) {
            log.Close();
            log2.Close();
        }
    }
After the operation command set has been deduplicated, it is further processed with filters: each filter contains an SQL feature (e.g., a specific SQL statement type, or SQL operating on a specific object) and a processing operation (e.g., deleting matching statements, or retaining only one of several identical or similar statements). The first operation command set is obtained by repeatedly applying the filters to the operation command set. In a practical application scenario, the filtering may be performed as follows.
    /**
     * Determines whether the given sql should be retained.
     * @param sql the sql statement to check
     * @return boolean true if the sql is kept, false if it is filtered out
     */
    public boolean isKeep(String sql) {
        boolean flag = true;
        if (sql == null || sql.trim().length() == 0) {
            return false;
        }
        if (isDeleteSql(sql)) { // the sql is marked for deletion by a filter, so it is not retained
            log.logMessageNotPrint(sql + "\r\n");
            return false;
        }
        int len = sql.trim().split("\\s+").length; // split the sql string by whitespace and use the token count as a key
        if (!mapFilter.containsKey(new Integer(len))) { // no sql with this token count exists yet, keep it
            ArrayList<String> tmp = new ArrayList<String>();
            tmp.add(sql);
            mapFilter.put(new Integer(len), tmp);
            return true;
        }
        ArrayList<String> values = mapFilter.get(new Integer(len));
        Iterator<String> it = values.iterator();
        while (it.hasNext()) {
            String sqltmp = it.next();
            if (sql.trim().equalsIgnoreCase(sqltmp)) { // an identical sql already exists, so this one is not retained
                log.logMessageNotPrint(sql + "\r\n");
                flag = false;
                break;
            }
            // a similar sql already exists: do not retain it and record the removed sql in the temporary file
            if (isSimilarSql(sql, sqltmp)) {
                log2.logMessageNotPrint(sql + "\r\n");
                flag = false;
                break;
            }
        }
        if (flag) { // the sql needs to be kept
            values.add(sql);
            mapFilter.put(new Integer(len), values);
        }
        return flag;
    }
}
// isDeleteSql(sql), isSimilarSql(sql, sqltmp) and the SqlLog helper are implemented elsewhere in the test apparatus and are not shown in this excerpt.
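By way of illustration, the filter above might be driven over the parsed operation command set as sketched below; the surrounding driver code (the FilterDriver class and the file name passed to setDelLog) is an assumption, since only the filter itself is shown above.
import java.util.ArrayList;
import java.util.List;

public class FilterDriver {
    /** Hypothetical driver: feed every parsed command through ItemFilter and keep the survivors. */
    public List<String> keepFilteredCommands(List<String> operationCommands) {
        ItemFilter filter = new ItemFilter();
        filter.setDelLog("deleted_sql.log");   // removed statements are written to this file
        List<String> kept = new ArrayList<>();
        for (String sql : operationCommands) {
            if (filter.isKeep(sql)) {          // isKeep returns true when the sql is not a duplicate
                kept.add(sql);
            }
        }
        filter.finish();                       // close the deletion log files
        return kept;
    }
}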
In this embodiment, after the first operation command set is obtained, a trial execution of the first operation command set is also performed, and the operation commands that fail to execute are removed to obtain the target operation command set. A test case is then created from the benchmark test data and the target operation command set.
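A minimal sketch of this trial execution is given below, using plain JDBC; the TrialExecutor class, the connection parameters and the single-statement execute-and-drop logic are illustrative assumptions rather than a prescribed implementation.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class TrialExecutor {
    /** Executes each command of the first operation command set against a staging copy of the
     *  database and keeps only the commands that run successfully. */
    public List<String> buildTargetCommandSet(List<String> firstCommandSet,
                                              String jdbcUrl, String user, String password) throws SQLException {
        List<String> targetCommandSet = new ArrayList<>();
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             Statement stmt = conn.createStatement()) {
            for (String sql : firstCommandSet) {
                try {
                    stmt.execute(sql);            // trial run
                    targetCommandSet.add(sql);    // executed successfully: keep it
                } catch (SQLException e) {
                    // execution failed: drop the command from the target operation command set
                }
            }
        }
        return targetCommandSet;
    }
}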
In this embodiment, the test case is created based on the log file, and is an operation script for the database. The test case can simulate the operation behavior of the application program on the database, which is equivalent to the playback of the operation behavior of the application program on the database, so that the database can be truly and comprehensively tested on the premise of not depending on the application program.
Step 12: and judging and determining whether the test case is suitable for the database to be tested.
In an actual application scenario, because there may be differences between the production database and the database to be tested, a test case created from the benchmark test data and the log file of the production database may not match the database to be tested, so that the test result does not reflect the actual performance of the database to be tested and the test accuracy is reduced. Therefore, this embodiment determines whether the test case is applicable to the database to be tested, and tests the database to be tested based on the test case only when the test case is applicable, thereby improving the test accuracy and reducing misjudgment.
Whether the test case is applicable to the database to be tested can be determined from different dimensions. In an alternative embodiment, the determination may be made according to the reference test environment of the production database and the target test environment of the database to be tested. Referring to fig. 2, step 12: judging and determining whether the test case is suitable for the database to be tested specifically comprises the following steps:
step 1211: and acquiring a reference test environment of the production database and a target test environment of the database to be tested.
In this embodiment, a reference test environment of the production database and a target test environment of the database to be tested are acquired. The reference test environment is the actual working environment of the production database at the time the benchmark test data and the corresponding log file were obtained, and it includes one or more of an operating system platform, an operating system version, a data scale, a client concurrency scale and hardware resources. The target test environment is the working environment in which the database to be tested is tested, and it likewise includes one or more of an operating system platform, an operating system version, a data scale, a client concurrency scale and hardware resources.
Step 1212: and performing similarity matching on the reference test environment and the target test environment to acquire the similarity between the reference test environment and the target test environment.
In this embodiment, similarity matching is performed between the benchmark test environment and the target test environment, so as to obtain a similarity between the benchmark test environment and the target test environment. In order to obtain more accurate similarity, corresponding weights may be configured according to the type of the test environment, for example, the operating system platform and the operating system version are configured as a first similarity weight value, and the client concurrency scale and the hardware resources of the database are configured as a second similarity weight value.
For example, the operating system platform and operating system version of the production database and of the database to be tested are compared, and a first similarity is determined from their degree of agreement; the client concurrency scale and hardware resources of the production database and of the database to be tested are then compared, and a second similarity is determined from their degree of agreement. The first similarity is multiplied by the first similarity weight to obtain a first weighted similarity, the second similarity is multiplied by the second similarity weight to obtain a second weighted similarity, and the two weighted similarities are added to obtain the similarity between the production database and the database to be tested.
The first similarity weight and the second similarity weight are determined according to the actual situation. For example, if the operating system platforms and operating system versions of the production database and the database to be tested are the same, the first similarity is 1; if the client concurrency scales and hardware resources of the production database and the database to be tested differ, the second similarity is determined to be, say, 0.9. Because the operating system platform and operating system version have a stronger influence on the test result, the preset first similarity weight is 60% and the second similarity weight is 40%, and the similarity between the production database and the database to be tested is 1 × 60% + 0.9 × 40% = 0.96.
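The weighted similarity calculation of this example can be written compactly as follows; the figures (similarities of 1 and 0.9, weights of 60% and 40%, threshold of 90%) are the illustrative values used above, not values fixed by the method.
public class EnvironmentSimilarity {
    /** Weighted similarity between the reference test environment and the target test environment. */
    public static double weightedSimilarity(double osSimilarity, double resourceSimilarity,
                                            double osWeight, double resourceWeight) {
        return osSimilarity * osWeight + resourceSimilarity * resourceWeight;
    }

    public static void main(String[] args) {
        // same OS platform and version -> 1.0; different concurrency scale / hardware -> 0.9
        double similarity = weightedSimilarity(1.0, 0.9, 0.60, 0.40);
        System.out.println(similarity);            // approximately 0.96
        System.out.println(similarity > 0.90);     // above the preset 90% threshold: the case applies
    }
}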
Step 1213: and judging and determining whether the similarity between the reference test environment and the target test environment is greater than a preset similarity threshold.
The preset similarity threshold is designed according to actual conditions, and may be 80% or 90%, and is not specifically limited herein.
In this embodiment, it is determined whether the similarity between the reference test environment and the target test environment is greater than a preset similarity threshold. For example, if the preset similarity threshold is 90% and the similarity between the production database and the database to be tested is 0.96, the similarity between the production database and the database to be tested is greater than the preset similarity threshold. If the similarity between the production database and the database to be tested is greater than the preset similarity threshold, step 1214 is executed, and if the similarity between the production database and the database to be tested is less than the preset similarity threshold, step 1215 is executed.
Step 1214: and if the similarity between the reference test environment and the target test environment is greater than a preset similarity threshold, the test case is applicable to the database to be tested.
Step 1215: and if the similarity between the reference test environment and the target test environment is smaller than a preset similarity threshold, the test case is not suitable for the database to be tested, and the test case is created again.
In another alternative embodiment, whether the test case is suitable for the database to be tested may be determined according to the association relationship between the production database and the database to be tested. Referring to fig. 3, step 12: judging and determining whether the test case is suitable for the database to be tested specifically comprises the following steps:
step 1221: and acquiring the test weight between the production database and the database to be tested according to the incidence relation between the production database and the database to be tested.
In this embodiment, the test weight between the production database and the database to be tested is obtained according to the association relationship between the production database and the database to be tested. The association relationship may be determined by the data size or the operation configuration.
Step 1222: and acquiring a reference test result of the production database running the test case and an actual test result of the to-be-tested database running the test case.
The reference test result can be a reference time length for producing the database running test case, and correspondingly, the actual test result is an actual time length for the database to be tested to run the test case. Alternatively, the test result may be other indexes.
Step 1223: and performing weighting calculation on the actual test result and the test weight to obtain a weighted test result, and acquiring a difference result between the weighted test result and the reference test result.
Step 1224: and judging and determining whether the difference result is smaller than a preset difference result threshold value.
In this embodiment, it is determined whether the difference result is smaller than a preset difference result threshold, and if the difference result is smaller than the preset difference result threshold, step 1225 is executed; if the difference result is greater than the preset difference result threshold, go to step 1226.
For example, for the same operation command, when the configuration of the production database and that of the database to be tested are basically identical and the data scale of the database to be tested is A times the data scale of the production database, the test weight between the production database and the database to be tested is 1/A. If the reference duration for the production database to execute the test case is T1 and the actual duration for the database to be tested to execute the test case is T2, the weighted duration of the database to be tested is the actual duration multiplied by the test weight, i.e., T2 × 1/A. The difference between the weighted test result and the reference test result is T2 × 1/A - T1.
For instance, when the data scale of the database to be tested is 2 times the data scale of the production database, the test weight between the production database and the database to be tested is 1/2. If the reference duration for the production database to execute the test case is 4 s and the actual duration for the database to be tested to execute the test case is 9 s, the weighted duration of the database to be tested is 9 s × 1/2 = 4.5 s, and the difference between the weighted test result and the reference test result is 4.5 s - 4 s = 0.5 s.
The preset difference result threshold depends on the actual situation; for example, it may be 1 s. Since 0.5 s is less than 1 s, the test case is applicable to the database to be tested.
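The numeric example above can be expressed as a short check as follows; the WeightedResultCheck class and the use of an absolute difference are illustrative assumptions.
public class WeightedResultCheck {
    /** Returns true when the weighted actual duration is close enough to the reference duration. */
    public static boolean isApplicable(double referenceSeconds, double actualSeconds,
                                       double testWeight, double thresholdSeconds) {
        double weighted = actualSeconds * testWeight;              // T2 * 1/A
        double difference = Math.abs(weighted - referenceSeconds); // |T2 * 1/A - T1|
        return difference < thresholdSeconds;
    }

    public static void main(String[] args) {
        // data scale of the database to be tested is 2x the production data scale -> weight 1/2
        boolean applicable = isApplicable(4.0, 9.0, 0.5, 1.0);
        System.out.println(applicable); // true: 9 * 0.5 - 4 = 0.5 s, below the 1 s threshold
    }
}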
Step 1225: and if the difference result is smaller than a preset difference result threshold value, the test case is applicable to the database to be tested.
Step 1226: and if the difference result is larger than a preset difference result threshold value, the test case is not suitable for the database to be tested, and the test case is created again.
In this embodiment, if the difference result is greater than the preset difference result threshold, the test case is not applicable to the to-be-tested database, and the test case is re-created according to the association relationship between the production database and the to-be-tested database.
In yet another alternative embodiment, a judgment is first made according to the reference test environment of the production database and the target test environment of the database to be tested, and the similarity between the two environments is obtained. When this similarity reaches the preset similarity threshold, the test weight between the production database and the database to be tested is obtained according to the association relation between them, and the similarity between the two test environments is combined with the test weight to obtain a target test weight. Whether the test case is applicable to the database to be tested is then determined with the target test weight in the manner of steps 1223 to 1226 (with the value corresponding to the test weight replaced by the value corresponding to the target test weight).
Step 13: and if the test case is suitable for the database to be tested, testing the database to be tested according to the test case, and outputting a test result.
In this embodiment, if the test case is applicable to the database to be tested, the database to be tested is tested according to the test case, and a test result is output.
Example 2:
Different from embodiment 1, in order to further ensure the accuracy of the test case, in this embodiment the created test case is periodically run back on the production database for a simulation test, the simulation test result is evaluated against the expected test result, and it is determined whether the test case can truly simulate the operation behavior of the user; if the difference between the simulation test result and the expected test result is large, the test case cannot truly simulate the operation behavior of the user, and the test case is recreated. Please refer to fig. 4 for the specific steps of the testing method of this embodiment.
Step 20: and acquiring benchmark test data and a log file of the production database.
In this embodiment, first, benchmark test data and a log file of a production database are acquired. The benchmark test data file can be obtained by copying or backing up the production database, or can be directly derived from the production database.
Step 20 of this embodiment is the same as step 10 of embodiment 1, and the detailed process refers to step 10 and the related text.
Step 21: and creating a test case according to the benchmark test data and the log file.
In this embodiment, the test apparatus generates a test case from the benchmark test data and the log file. The log file is mainly used for recording operation logic and related data, and has no control logic, so that the test device needs to generate a test case according to the reference test data and the log file, and the test case is actually an operation script for testing the database to be tested.
Step 21 of this embodiment is the same as step 11 of embodiment 1, and the detailed process refers to step 11 and the related text.
Step 22: and running the test case on the production database according to a preset time interval to obtain a simulation test result.
Considering that an update of the application program or other factors may change the database operation command set (and the database log file accordingly), the test case also changes, and the log needs to be analyzed again to generate the test case. In this embodiment, the test case is run on the production database at a preset time interval to obtain a simulation test result. The preset time interval can be determined according to the actual situation.
Step 23: And judging and determining whether the difference between the simulation test result and the expected test result is greater than a preset difference threshold value.
Wherein the expected test result can be determined according to the actual result output by the current production database.
In this embodiment, the current latest output result of the production database when the operation matched with the test case is executed is obtained, and the current latest output result is configured as the expected test result. Then, whether the difference between the simulation test result and the expected test result is larger than a preset difference threshold value is judged and determined. If the difference between the simulated test result and the expected test result is greater than a preset difference threshold, executing step 24; if the difference between the simulated test result and the expected test result is less than the preset difference threshold, step 25 is executed.
The difference threshold may be determined according to actual situations, and is not specifically limited herein.
Step 24: And if the difference between the simulation test result and the expected test result is greater than a preset difference threshold value, discarding the corresponding test case and recreating a new test case.
In this embodiment, a new test case is created again according to steps 20 and 21.
Step 25: and if the difference between the simulation test result and the expected test result is smaller than a preset difference threshold value, judging and determining whether the test case is suitable for the database to be tested.
In this embodiment, if the difference between the simulation test result and the expected test result is smaller than a preset difference threshold, it is determined whether the test case is applicable to the to-be-tested database.
Please refer to fig. 1, fig. 2, fig. 3 and the related text description in embodiment 1, which are not limited herein specifically for the determination process of whether the test case is applicable to the database to be tested.
In an actual application scenario, an update of the application program causes a change in the database operation command set, resulting in a change in the test case. If the update of the application program does not change the data structure, the expected result can still be output when the database to be tested is tested with the historical test case, and a new test case does not need to be created. If the update of the application program changes the data structure, an exception may occur when the production database is tested with the historical test case, which indicates that a new test case needs to be created.
Step 26: and if the test case is suitable for the database to be tested, testing the database to be tested according to the test case, and outputting a test result.
In this embodiment, if the test case is applicable to the database to be tested, the database to be tested is tested according to the test case, and a test result is output.
In another embodiment, the benchmark test data and the log file of the production database are obtained again according to a preset time interval, and a new test case is created according to the obtained benchmark test data and the log file. And judging whether the test case needs to be replaced or not based on the difference between the new test case and the historical test case. Please refer to fig. 5 for a detailed method flow.
Step 30: and re-acquiring the benchmark test data and the log file of the production database according to a preset time interval.
Considering that an update of the application program or other factors may change the database operation command set (and the database log file accordingly), the test case also changes, and the log needs to be analyzed again to generate the test case. In this embodiment, the benchmark test data and the log file of the production database are re-acquired at a preset time interval to create a new test case. The preset time interval can be determined according to the actual situation.
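A minimal sketch of how such periodic re-acquisition might be scheduled is given below; the PeriodicRebuild class and its placeholder methods fetchBenchmarkData, fetchLogFile and createTestCase are assumptions standing in for steps 30 and 31.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicRebuild {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    /** Re-acquires benchmark test data and the log file every intervalHours and rebuilds the test case. */
    public void start(long intervalHours) {
        scheduler.scheduleAtFixedRate(() -> {
            Object benchmarkData = fetchBenchmarkData();  // copy, backup or export of the production database
            Object logFile = fetchLogFile();              // latest production log file
            createTestCase(benchmarkData, logFile);       // corresponds to steps 30 and 31
        }, 0, intervalHours, TimeUnit.HOURS);
    }

    // placeholders for the acquisition and creation steps described in the text
    private Object fetchBenchmarkData() { return new Object(); }
    private Object fetchLogFile() { return new Object(); }
    private void createTestCase(Object benchmarkData, Object logFile) { }
}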
Step 31: and creating a new test case according to the reference test data and the log file which are obtained again.
Step 32: and judging and determining whether the difference between the new test case and a historical test case is greater than a preset difference threshold value, wherein the historical test case is the test case for testing the database to be tested at present.
In this embodiment, it is determined whether the difference between the new test case and the historical test case is greater than a preset difference threshold, if so, it is determined that the test case needs to be replaced, step 33 is executed, and if not, it is determined that the test case does not need to be replaced, and step 34 is executed. The preset difference threshold value is determined according to the actual situation.
Step 33: if the difference between the new test case and the historical test case is larger than a preset difference threshold value, obtaining a difference test case according to the difference between the new test case and the historical test case, and testing the database to be tested according to the new test case and/or the difference test case.
In this embodiment, if the difference between the new test case and the historical test case is greater than the preset difference threshold, it indicates that the historical test case cannot truly and comprehensively test the database to be tested, and the test case needs to be replaced.
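One possible way to obtain the differential test case of step 33 is a simple set difference between the operation commands of the new test case and those of the historical test case, as sketched below; representing a test case as a set of SQL strings is an assumption made only for this illustration.
import java.util.LinkedHashSet;
import java.util.Set;

public class DiffTestCase {
    /** Commands present in the new test case but absent from the historical one. */
    public static Set<String> differential(Set<String> newCase, Set<String> historicalCase) {
        Set<String> diff = new LinkedHashSet<>(newCase);
        diff.removeAll(historicalCase);
        return diff;
    }

    /** Share of new commands; comparable against the preset difference threshold. */
    public static double differenceRatio(Set<String> newCase, Set<String> historicalCase) {
        if (newCase.isEmpty()) {
            return 0.0;
        }
        return (double) differential(newCase, historicalCase).size() / newCase.size();
    }
}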
For example, when the test result has been output according to the historical test case, in order to improve the testing efficiency, the database to be tested may be tested only according to the differential test case, and the test result corresponding to the differential test case is analyzed with emphasis. Optionally, in order to ensure the accuracy of the test, the database to be tested may also be directly retested with a new test case. Or, the database to be tested may be tested by using the differential test case and the new test case, and the test result corresponding to the new test case and the differential test result corresponding to the differential test case may be output. The manner of replacing the test cases may be determined according to actual situations, and is not particularly limited herein.
Step 33: and if the difference between the new test case and the historical test case is smaller than a preset difference threshold value, the test case does not need to be replaced.
It should be noted that the method for creating the test case is the same as step 10 and step 20 in embodiment 1, and is not described herein again. After the new test case or the differential test case is obtained, whether the new test case or the differential test case is suitable for the database to be tested may be determined according to step 12 and step 13 in embodiment 1, which is not described herein again.
Different from the prior art, the test method comprises the steps of obtaining benchmark test data and log files of a production database; creating a test case according to the benchmark test data and the log file; judging and determining whether the test case is suitable for the database to be tested; and if the test case is suitable for the database to be tested, testing the database to be tested according to the test case, and outputting a test result. According to the test method, on one hand, the test case is created based on the log file of the production database, and the log file can truly reflect the actual use scene of the application service, so that the test case created based on the log file can well simulate the actual use scene of the application service, and the coverage rate and the practicability of the test case are improved. On the other hand, because the production database and the database to be tested may have differences, the method judges whether the test case is suitable for the database to be tested, and tests the database to be tested based on the test case when the test case is suitable for the database to be tested, so that the test accuracy is improved, and the occurrence of misjudgment is reduced.
Example 3:
referring to fig. 6, fig. 6 is a schematic structural diagram of a testing apparatus according to an embodiment of the present invention. The test apparatus of the present embodiment includes one or more processors 61 and a memory 62. In fig. 6, one processor 61 is taken as an example.
The processor 61 and the memory 62 may be connected by a bus or other means, such as the bus connection in fig. 6.
The memory 62, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the test method and the corresponding program instructions in embodiment 1. The processor 61 implements the functions of the test method of embodiment 1 or embodiment 2 by running the non-volatile software programs, instructions and modules stored in the memory 62, thereby executing the various functional applications and data processing of the test method.
The memory 62 may include, among other things, high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 62 may optionally include memory located remotely from the processor 61, and these remote memories may be connected to the processor 61 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
For the testing method, please refer to fig. 1 to fig. 6 and the related text description, which are not repeated herein.
It should be noted that, for the information interaction, execution processes and other contents between the modules and units in the apparatus and system, since they are based on the same concept as the method embodiments of the present invention, the specific contents may refer to the description in the method embodiments and are not repeated herein.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the embodiments may be implemented by associated hardware as instructed by a program, which may be stored on a computer-readable storage medium, which may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.