
CN100369424C - Method and apparatus for estimating terminal to terminal service grade protocol - Google Patents

Method and apparatus for estimating terminal to terminal service grade protocol

Info

Publication number
CN100369424C
CN100369424C CNB2006101044356A CN200610104435A CN100369424C CN 100369424 C CN100369424 C CN 100369424C CN B2006101044356 A CNB2006101044356 A CN B2006101044356A CN 200610104435 A CN200610104435 A CN 200610104435A CN 100369424 C CN100369424 C CN 100369424C
Authority
CN
China
Prior art keywords
test
network performance
service level
evaluation
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2006101044356A
Other languages
Chinese (zh)
Other versions
CN1905497A (en)
Inventor
王佳玮
曹军
关福俊
李轶军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Iwncomm Co Ltd
Original Assignee
China Iwncomm Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Iwncomm Co Ltd filed Critical China Iwncomm Co Ltd
Priority to CNB2006101044356A priority Critical patent/CN100369424C/en
Publication of CN1905497A publication Critical patent/CN1905497A/en
Application granted granted Critical
Publication of CN100369424C publication Critical patent/CN100369424C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention is an end-to-end service level agreement evaluation method, comprising the steps of: setting up a database; scanning the database; generating a test task table; performing active measurement and passive measurement; and evaluating the network performance data. An evaluation device for implementing the method comprises a task management module, a service level agreement analysis module, a network performance test module, a service level agreement evaluation module and a report generation module, where the task management module communicates bidirectionally with the service level agreement analysis module, the network performance test module and the service level agreement evaluation module. The invention can accurately collect measurement samples, achieves the expected effect of SLA monitoring, and offers high testing accuracy and strong pertinence.

Description

Method and device for evaluating an end-to-end service level agreement
Technical Field
The present invention relates to an evaluation method and an evaluation apparatus for a service level agreement based on an IP network, and more particularly, to an evaluation method and an evaluation apparatus for an end-to-end service level agreement.
Background
A Service Level Agreement (hereinafter referred to as SLA) is an agreement between a service provider and a user which requires the service provider to guarantee certain service performance and reliability, for which the user pays a certain fee. Traditional service level agreements typically include guarantees of business availability and customer support services, such as: the time within which the technical support center guarantees to solve user problems, and the number and duration of service interruptions. In recent years, with the rapid development of Internet-based service applications, more and more users have placed service performance guarantee requirements on service level agreements, such as: network/traffic response time, throughput, packet loss rate, etc. Evaluating whether a service level agreement can be accurately fulfilled according to the requirements of different services is therefore a general concern of Internet service providers at home and abroad.
Referring to the definition of the Internet Engineering Task Force (IETF), a service level agreement contains two parts:
(1) Commercial part: this section defines the two signing parties, their rights and responsibilities, the charging scheme, etc.
(2) Technical part: i.e. the Service Level Specification (SLS) referred to below, which specifies the source node address, the destination node address, the reserved bandwidth, the IP packet transmission delay, the jitter, the IP packet loss rate, and other quality of service (QoS) parameters related to the network, each of which has a threshold agreed between the service provider and the user.
In order to assess whether a signed SLA between a service provider and a user is being performed well and whether the quality of service indicators in the SLS are being met, true and reliable network performance parameter measurements are required. In this regard, the information obtainable through conventional Internet Control Message Protocol (ICMP) commands, such as the Ping/Trace commands, is very limited, and their use is cumbersome and inefficient.
Existing SLA assessment methods are mainly implemented based on passive measurement, typically Simple Network Management Protocol (SNMP) queries of the device (e.g. a router) at the other end of the link to collect Management Information Base (MIB) data. This method of evaluating network performance by monitoring data at test points has the advantages that no extra network traffic is introduced and the topology of the network is not changed, so normal operation of the network is not disturbed; its drawbacks are that it is not flexible enough, it generally only monitors problems of a particular device or network segment, and it is difficult to monitor end-to-end network performance.
Active measurement, by contrast, injects a certain amount of traffic into the network under test and evaluates network performance from the test data of that traffic collected at the test points. Its advantages are that the test is targeted, i.e. a particular performance index can easily be tested in a focused way with relatively high efficiency; its disadvantages are that network traffic is increased to a certain extent and normal operation of the network may be disturbed. Active measurement is therefore typically used during network device performance testing, network construction, acceptance and commissioning phases, but not for SLA evaluation during network operation.
Disclosure of Invention
One objective of the present invention is to provide an end-to-end service level agreement evaluation method that solves the technical problem that existing evaluation methods can only monitor a particular device or network segment and have difficulty monitoring end-to-end network performance, and that overcomes the drawback that existing methods analyze and evaluate single performance indexes in isolation instead of evaluating the IP network service as a whole.
The invention also provides an end-to-end service level agreement evaluation device, which solves the technical problems that existing evaluation devices have low testing precision and cannot efficiently and specifically test the various network performance indexes.
The technical solution of the invention is as follows:
a method for evaluating an end-to-end service level agreement, comprising the steps of:
step 1] setting a database: setting a database for storing the entered SLA information, the business-parameter weight matrix, the intermediate data involved in the SLA evaluation and the final evaluation result;
step 2] scanning the database: the task management module controls the service level agreement analysis module to scan the database regularly and check whether a new SLA is input;
step 3] generating a test task table: when a new SLA is input, the task management module controls the service level agreement analysis module to read and analyze the SLA, extracts an SLS part and stores the SLS part in a database as a service level target (SLO); the task management module generates a test task table and stores the test task table in a database;
step 4] simultaneously carrying out active measurement and passive measurement: within the SLA validity period, the task management module controls the network performance test module to initiate periodic active packet sending tests from the source node to the destination node according to the arrangement of the test task table, and stores the test results in the database; meanwhile, a passive measurement is performed on the egress node of the user-side network, and the test result of the passive measurement is obtained and stored in the database;
step 5] evaluating network performance data: the task management module controls the service level agreement evaluation module to evaluate the SLA according to the network performance data stored in the database by taking an evaluation period as a unit, and stores an evaluation result to the database;
the step 3] of generating the test task table comprises the following specific steps:
step 31] extracting SLA serial numbers, test index names to be tested, source node IP addresses and destination node IP addresses from the SLS;
step 32] generating a test time in a test period according to the test period requirement required in the SLS;
step 33], generating a test task table, and storing the test task table in a database, wherein the format of the test task table is as follows: test sequence number-SLA sequence number-test time-source IP address-destination IP address-test index.
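By way of illustration only, the following Python sketch derives test task rows in the format of step 33] from an SLS record. The dictionary layout, the field names and the fixed one-hour test interval are assumptions made for this example rather than part of the invention.

from datetime import datetime, timedelta

def build_test_task_table(sls, start, period_hours=24):
    # Steps 31]-33]: take the SLA serial number, test indexes and node addresses
    # from a (hypothetical) SLS record and lay out one test task row per index
    # per test time within the test period.
    tasks = []
    seq = 1
    t = start
    end = start + timedelta(hours=period_hours)
    while t < end:
        for index in sls["indices"]:
            # Row format: test serial number - SLA serial number - test time
            #             - source IP address - destination IP address - test index
            tasks.append((seq, sls["sla_id"], t.isoformat(),
                          sls["src_ip"], sls["dst_ip"], index))
            seq += 1
        t += timedelta(minutes=sls["interval_min"])
    return tasks

sls = {"sla_id": "SLA-001", "indices": ["delay", "jitter", "loss_rate"],
       "src_ip": "10.0.0.1", "dst_ip": "10.0.1.1", "interval_min": 60}
for row in build_test_task_table(sls, datetime(2006, 7, 31))[:3]:
    print(row)

In a real deployment the rows would be written to the database table scanned by the network performance test module rather than printed.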
The method for evaluating the network performance data in the step 5) comprises the following steps:
step 51, according to the minimum evaluation time period set in the service level protocol, reading all IP network performance parameter values at the moment of acquiring IP network performance parameter data in an evaluation period;
step 52] counting the number of sampling points in the evaluation period, and recording as s;
step 53] sequentially evaluating the compliance percentages p_1, p_2, ..., p_n of the IP network performance parameters in the evaluation period; counting the number of violation points for the ith IP network performance parameter in the evaluation period, and calculating the compliance percentage for the ith IP network performance parameter in the evaluation period;
step 54] performs a weighted summation of the compliance percentages of the various IP network performance parameters as the service compliance percentage during the evaluation period.
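A minimal Python sketch of steps 51] to 54], assuming the sampled values, thresholds and weights are already available as plain lists; the simple greater-than/less-than check used here is a simplification of the full comparison rules with allowed errors given below.

def compliance_percentage(samples, threshold, comparison="less_than"):
    # Step 53]: share of sampling points that do not violate one parameter.
    s = len(samples)                      # step 52]: number of sampling points
    if comparison == "less_than":         # e.g. delay, jitter, loss rate
        violations = sum(1 for a in samples if a >= threshold)
    else:                                 # e.g. reserved bandwidth
        violations = sum(1 for a in samples if a <= threshold)
    return (1 - violations / s) * 100.0   # p_i = (1 - v_i / s) * 100%

def service_compliance(percentages, weights):
    # Step 54]: weighted sum P = q_1*p_1 + ... + q_n*p_n, with the weights summing to 1.
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(q * p for q, p in zip(weights, percentages))

# Illustrative numbers only: delay samples in ms against a 100 ms threshold,
# loss-rate samples in % against a 1 % threshold, weighted 0.6 / 0.4.
p_delay = compliance_percentage([80, 95, 120, 70], 100)
p_loss = compliance_percentage([0.2, 0.5, 1.5, 0.3], 1.0)
print(p_delay, p_loss, service_compliance([p_delay, p_loss], [0.6, 0.4]))  # 75.0 75.0 75.0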
The step 4] of simultaneously performing the active measurement and the passive measurement comprises the following specific steps:
step 41] presetting parameters for testing: the parameters comprise source and destination IP addresses and test index items;
step 42] determining the distribution of the active measurement packets in time, namely determining the distribution of the packet sending time in one measurement;
step 43] carrying out active packet sending measurement between the source node and the destination node to obtain a test result of the active measurement;
step 44] when the active measurement is executed, a passive measurement is executed on the network exit node of the user terminal, and a test result of the passive measurement is obtained;
and step 45, storing the test result to a database.
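As a rough sketch of the active packet sending test in step 43], the Python fragment below sends a burst of timestamped UDP probes towards the destination node and derives delay, jitter and loss rate from the replies. The port number, the probe count and the presence of a simple UDP echo responder on the destination are assumptions for illustration; the passive measurement of step 44] would typically read interface traffic counters (for example via SNMP) at the user-side egress node and is not shown.

import socket, struct, time

def active_probe(dst_ip, dst_port=9000, count=10, timeout=1.0):
    # Send `count` timestamped UDP probes and measure round-trip delay, jitter and loss,
    # assuming the destination echoes each datagram back unchanged on dst_port.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    rtts = []
    for seq in range(count):
        sock.sendto(struct.pack("!Id", seq, time.time()), (dst_ip, dst_port))
        try:
            data, _ = sock.recvfrom(64)
            _, sent = struct.unpack("!Id", data[:12])
            rtts.append(time.time() - sent)
        except socket.timeout:
            pass                                  # probe counted as lost
    sock.close()
    return {
        "delay": sum(rtts) / len(rtts) if rtts else None,
        "jitter": (max(rtts) - min(rtts)) if rtts else None,  # max minus min delay
        "loss_rate": 1 - len(rtts) / count,
    }

The returned dictionary corresponds to one row of test results to be stored in the database in step 45].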
The distribution of the packet sending times in step 42] is determined by the Poisson arrival process:
step 421] determining the value of the average arrival interval λ;
step 422] determining the number of packets n to be sent in one test period T;
step 423] generating a sequence of independent, identically distributed pseudo-random numbers u_1, u_2, ..., u_n uniformly distributed on U[0, 1];
step 424] generating the inter-arrival time pseudo-random sequence Δ_1, Δ_2, ..., Δ_n;
step 425] determining the packet sending times t_1, t_2, ..., t_n as t_i = t_0 + Δ_1 + Δ_2 + ... + Δ_i.
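A small Python sketch of steps 421] to 425]. Following the worked example later in the description (λ = 3 seconds, n = ⌊T/λ⌋), λ is treated here as the mean inter-arrival interval, and the inter-arrival times are produced by inverse-transform sampling of the uniform pseudo-random sequence; these interpretations are assumptions of the sketch.

import math, random

def poisson_send_times(lam, period_T, t0=0.0, seed=None):
    # lam      : average inter-arrival interval in seconds (step 421])
    # period_T : length of one test period in seconds
    # t0       : start time of the test period
    rng = random.Random(seed)
    n = int(period_T // lam)                          # step 422]: n = floor(T / lambda)
    u = [rng.random() for _ in range(n)]              # step 423]: i.i.d. U[0, 1) pseudo-random numbers
    deltas = [-lam * math.log(1.0 - ui) for ui in u]  # step 424]: exponential inter-arrival times
    times, t = [], t0
    for d in deltas:                                  # step 425]: t_i = t_0 + delta_1 + ... + delta_i
        t += d
        times.append(t)
    return times

print(poisson_send_times(3.0, 60.0, seed=1))          # 20 Poisson-distributed send times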
The basis in step 53] for judging whether the nth sampling point violates the ith IP network performance parameter is as follows:
step 531] from the set of IP network performance parameters collected at sampling point n, A_n = {a_n1, a_n2, ..., a_ni}, obtaining the true measured value a_ni of the ith IP network performance parameter at that sampling point;
step 532] from the set of IP network performance parameter thresholds agreed in the service level agreement, B = {b_1, b_2, ..., b_i}, obtaining the threshold b_i corresponding to the ith IP network performance parameter;
step 533] from the set of allowed errors for the IP network performance parameters agreed in the service level agreement, L = {l_1, l_2, ..., l_i}, obtaining the allowed error l_i corresponding to the ith IP network performance parameter;
step 534] from the set of IP network performance parameter comparison methods agreed in the service level agreement, E = {e_1, e_2, ..., e_i}, obtaining the comparison method e_i corresponding to the ith IP network performance parameter;
step 535] comparing a_ni with b_i; if they do not satisfy the condition e_i, sampling point n is regarded as a violation point of that IP network performance parameter.
The method in step 535] above for comparing a_ni with b_i is as follows:
step 5351] if e_i is 'greater than', first judge whether a_ni is greater than b_i; if a_ni is greater than b_i, then a_ni and b_i satisfy the condition e_i; if a_ni is less than b_i, then judge whether a_ni minus b_i is less than l_i; if so, a_ni and b_i are determined to satisfy the condition e_i, otherwise they do not satisfy e_i;
step 5352] if e_i is 'less than', first judge whether a_ni is less than b_i; if a_ni is less than b_i, then a_ni and b_i are determined to satisfy the condition e_i; if a_ni is greater than b_i, then judge whether b_i minus a_ni is less than l_i; if so, a_ni and b_i are determined to satisfy the condition e_i; otherwise they do not satisfy e_i.
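The comparison rules of steps 5351] and 5352] can be sketched in Python as follows. The sketch interprets the allowed error l_i as a tolerance on how far the measured value may fall on the wrong side of the threshold before the condition e_i is considered unmet, and the string encoding of the comparison methods is an assumption made for illustration.

def satisfies(a_ni, b_i, l_i, e_i):
    # Does measured value a_ni meet threshold b_i under comparison method e_i,
    # treating l_i as an allowed margin on the wrong side of the threshold?
    if e_i == "greater_than":           # step 5351]: the parameter should exceed b_i
        return a_ni > b_i or (b_i - a_ni) < l_i
    if e_i == "less_than":              # step 5352]: the parameter should stay below b_i
        return a_ni < b_i or (a_ni - b_i) < l_i
    raise ValueError("unknown comparison method: %r" % e_i)

# Step 535]: a sampling point is a violation point for parameter i when satisfies(...) is False.
print(satisfies(104.0, 100.0, 5.0, "less_than"))   # 104 ms against a 100 ms cap, 5 ms margin -> True
print(satisfies(110.0, 100.0, 5.0, "less_than"))   # 110 ms exceeds the margin -> False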
An evaluation device for realizing an evaluation method of an end-to-end service level protocol comprises a task management module 1, a service level protocol analysis module 2, a network performance test module 3, a service level protocol evaluation module 4 and a report generation module 5; the task management module 1 is in bidirectional communication with the service level protocol analysis module 2, the network performance test module 3 and the service level protocol evaluation module 4 respectively;
the task management module 1 can control all the evaluation processes and generate a network performance test task;
the service level agreement analysis module 2 can periodically scan the database, read and analyze SLA, extract SLS part to generate a service level target table and store the service level target table in the database;
the network performance testing module 3 performs end-to-end performance testing on the network between the source node and the destination node in an optimized manner that combines improved active measurement with passive measurement, according to the requirements of the test task table, and stores the test result, i.e. the network performance data, in the database;
the service level agreement evaluation module 4 may evaluate the performance of a given SLA over its lifetime based on the test results and service level target requirements stored in the database.
The specific steps of the task management module 1 for generating the test task table and storing the test task table into the database are as follows:
extracting SLA serial numbers, test index names to be tested, and source and destination node IP addresses from the SLS; generating a test moment in a test period according to the test period requirement required in the SLS; generating a test task table and storing the test task table in a database; the format is as follows: testing serial number-SLA serial number-testing time-source IP address-destination IP address-testing index;
the specific steps of the service level agreement evaluation module 4 for evaluating the SLA according to the test results and the service level objective requirements stored in the database are as follows:
reading all IP network performance parameter values at the moments at which IP network performance parameter data were acquired in an evaluation period, according to the minimum evaluation period set in the service level agreement; counting the number of sampling points in the evaluation period and recording it as s; sequentially evaluating the compliance percentages p_1, p_2, ..., p_n of the IP network performance parameters in the evaluation period; counting the number of violation points for the ith IP network performance parameter in the evaluation period, and calculating the compliance percentage for the ith IP network performance parameter in the evaluation period; and performing a weighted summation of the compliance percentages of the IP network performance parameters to serve as the service compliance percentage in the evaluation period.
The above apparatus for evaluating an end-to-end service level agreement further comprises a report generating module 5; the report generating module 5 is connected with the task management module 1, and can generate an evaluation report according to a set format for an evaluation result.
The invention has the following advantages:
1. The current dynamic network performance can be determined from the current bandwidth utilization. Active and passive measurements are combined because: network performance from the service provider end to the user-side network egress node is difficult to monitor with passive measurement alone, which naturally cannot collect sufficient performance data to evaluate how the SLA is being performed; and the bandwidth utilization at the user-side network egress node cannot be obtained with active measurement alone, so the execution of the SLA cannot be comprehensively evaluated either. The above method therefore considers the results of both: active measurement is primary and measures parameters such as IP packet transmission delay, jitter and IP packet loss rate, while the bandwidth utilization calculated from the passive measurement results (usually traffic parameter values) describes the current use of network resources. If the current bandwidth utilization is above a given percentage, the requirements on the performance indicator thresholds may be relaxed in the SLA evaluation module.
2. Measurement samples of periodic behavior can be accurately collected. The Poisson arrival process is used to determine the distribution of the packet sending times because the purpose of the test is to detect SLA violations, and the counting process of SLA violations conforms to the characteristics of a Poisson process: the random events of SLA violation occur independently in disjoint time intervals and occur at most once in a sufficiently small time interval. Active measurement packets sent according to the Poisson arrival process correspond to an asymptotically unbiased Poisson sampling of the network state, which does not tend to induce synchronization and can therefore accurately collect measurement samples of periodic behavior.
3. An appropriate balance can be found between accurately detecting network performance and injecting little probe traffic, achieving the expected effect of SLA monitoring. End-to-end network performance can also be sampled with periodic (i.e. fixed-interval) active probe packets, but detecting SLA violations then requires a short measurement interval, i.e. a high sampling frequency. This means injecting a large amount of probe traffic into the network, which affects the operating network (aggravating congestion if the network is already congested) and the service performance perceived by users, and also introduces errors into the active measurement results. The injected traffic can be reduced by lowering the packet sending frequency, but this reduces the probability of detecting SLA violations and defeats the purpose of SLA monitoring. Active probe packets following the Poisson arrival process, in contrast, need only inject relatively little traffic to monitor the actual network performance, finding an appropriate balance between accurate detection of network performance and injection of little probe traffic, and thus achieving the expected effect of SLA monitoring.
4. The testing precision is high and the testing is highly targeted. The device obtains end-to-end network performance data by an optimized, improved active measurement method combined with a passive measurement method, can efficiently and specifically test each network performance index, and accurately evaluates the end-to-end SLA based on these indexes. The invention generates only a small amount of network probe traffic, does not significantly affect the operating network, overcomes the inaccuracy of traditional active measurement caused by its impact on the operating network, and therefore has higher testing precision. The SLA evaluation method can set the weights of the IP network performance parameters within a service according to the requirements of different services and evaluate the service as a whole, breaking through the limitation of traditional methods that analyze and evaluate each performance index only in isolation.
Drawings
FIG. 1 is a flow chart of an evaluation method of the present invention;
FIG. 2 is a flow chart of rules for determining SLA violations during an evaluation period;
FIG. 3 is a schematic view of an evaluation apparatus of the present invention;
wherein: the system comprises a task management module 1, a service level protocol analysis module 2, a network performance test module 3, a service level protocol evaluation module 4 and a report generation module 5.
Detailed Description
The flow of an end-to-end service level agreement evaluation method is shown in fig. 1:
A database is set up to store the entered SLA information, the intermediate data involved in the SLA evaluation (such as test return values of the network performance parameters, test task tables and the like) and the final evaluation results, and a service-parameter weight matrix is pre-stored in the database (step 1). The task management module controls the SLA analysis module to periodically scan the database and check for newly entered SLAs (step 2). When a new SLA is entered, the task management module controls the SLA analysis module to read and analyze the SLA, extract the SLS part and store it in the database as a service level objective (SLO); the task management module then generates a test task table and stores it in the database (step 3). Within the SLA validity period, the task management module controls the network performance test module to initiate periodic active packet sending tests from the source node to the destination node according to the arrangement of the test task table. The test objects are the network performance parameters involved in the SLS, and the test results are returned and stored in the database. While the active measurement is executed, a passive measurement is performed on the egress node of the user-side network, so that the test result of the passive measurement, such as traffic parameters like throughput, is obtained and stored in the database (step 4). The task management module controls the evaluation module to evaluate the SLA according to the network performance data stored in the database in units of an evaluation period (the minimum evaluation period is one day); the final result of the evaluation is whether a given SLA was violated in a given evaluation period. The evaluation result is output to the database (step 5). The above steps are repeated for the next evaluation.
For the ith IP network performance parameter, the basis for judging whether the nth sampling point violates it is as follows:
From the set of IP network performance parameters collected at sampling point n, A_n = {a_n1, a_n2, ..., a_ni}, the true measured value a_ni of the ith IP network performance parameter at that sampling point is obtained. From the set of IP network performance parameter thresholds agreed in the service level agreement, B = {b_1, b_2, ..., b_i}, the threshold b_i corresponding to the ith IP network performance parameter is obtained. From the set of allowed errors for the IP network performance parameters agreed in the service level agreement, L = {l_1, l_2, ..., l_i}, the allowed error l_i corresponding to the ith IP network performance parameter is obtained. From the set of IP network performance parameter comparison methods agreed in the service level agreement, E = {e_1, e_2, ..., e_i}, the comparison method e_i corresponding to the ith IP network performance parameter is obtained. a_ni is compared with b_i to determine whether the condition e_i is satisfied. If it is not, sampling point n is regarded as a violation point of that IP network performance parameter.
The method of comparing a_ni with b_i to determine whether the condition e_i is satisfied is as follows:
If e_i is 'greater than', first judge whether a_ni is greater than b_i; if a_ni is greater than b_i, the condition e_i is determined to be satisfied. Otherwise, judge whether a_ni minus b_i is less than l_i; if it is, a_ni and b_i are determined to satisfy the condition e_i, otherwise e_i is not satisfied.
If e_i is 'less than', first judge whether a_ni is less than b_i; if a_ni is less than b_i, the condition e_i is determined to be satisfied. Otherwise, judge whether b_i minus a_ni is less than l_i; if it is, a_ni and b_i are determined to satisfy the condition e_i, otherwise e_i is not satisfied.
The specific steps of the task management module generating the test task table and storing the test task table into the database are as follows:
extracting the SLA serial number, the names of the test indexes to be tested, and the source and destination node IP addresses from the SLS; generating the test times within one test period (24 hours) according to the test period requirements in the SLS; and generating a test task table and storing it in the database. The format is: test serial number-SLA serial number-test time-source IP address-destination IP address-test index.
The network performance test module executes one-time end-to-end network measurement at a preset test time in the test task table according to the requirements in the test task table, and the specific steps are as follows:
Parameters for the test, such as the source and destination IP addresses and the test index items, are preset. The distribution of the active measurement packets in time is determined, i.e. the distribution of the packet sending times within one measurement is determined. Active packet sending measurement is carried out between the source node and the destination node (IP addresses) to obtain the test results of the active measurement, such as the IP packet transmission delay, jitter and IP packet loss rate. While the active measurement is executed, a passive measurement is performed on the egress node of the user-side network to obtain the test result of the passive measurement, mainly traffic parameters such as throughput. The test results (both active and passive) are saved to the database.
The distribution of the packet sending times is typically determined using the Poisson arrival process, i.e. each inter-packet interval obeys the exponential distribution G(t) = 1 - e^(-λt), where λ is the average arrival interval. The packet sending times t_1, t_2, ..., t_n may be determined as follows:
The value of the average arrival interval λ is determined, for example λ = 3 seconds. The number of packets n in one test period T is determined by taking the integer part of T/λ (rounded down). A sequence of independent, identically distributed pseudo-random numbers u_1, u_2, ..., u_n uniformly distributed on U[0, 1] is generated. From Δ_i = -ln(u_i)/λ the inter-arrival time pseudo-random sequence Δ_1, Δ_2, ..., Δ_n is generated; each Δ_i then independently obeys the exponential distribution F_Δi(t) = 1 - e^(-λt). The packet sending times t_1, t_2, ..., t_n are determined by t_i = t_0 + Δ_1 + Δ_2 + ... + Δ_i, where t_0 is the starting time.
The SLA evaluation method of the evaluation module for service X is shown in FIG. 2:
According to the minimum evaluation time period set in the service level agreement (called an evaluation period; generally the minimum evaluation period is one day), the IP network performance parameter values at all moments (called sampling points) at which IP network performance parameter data were collected within one evaluation period are read (step 51). The number of sampling points in the evaluation period is counted and recorded as s (step 52). The compliance percentages p_1, p_2, ..., p_n of the IP network performance parameters in the evaluation period are evaluated in turn. If an IP network performance parameter exceeds the threshold set in the service level agreement at a certain sampling point, which is called a violation, that sampling point is regarded as a violation point of the IP network performance parameter. The number of violation points for the ith IP network performance parameter in the evaluation period is counted and recorded as v_i. The compliance percentage of the ith IP network performance parameter in the evaluation period is p_i = (1 - v_i/s) × 100% (step 53). The compliance percentages of the IP network performance parameters are weighted and summed to give the service compliance percentage in the evaluation period, P = q_1·p_1 + q_2·p_2 + ... + q_n·p_n, where q_i is the weight corresponding to the ith IP network performance parameter and q_1 + q_2 + ... + q_n = 1 (step 54). For the same IP network performance parameter, the weight it carries in different services can be set according to the service requirements. The task management module controls the evaluation module to evaluate the SLA according to the network performance data saved in the database in units of an evaluation period (the minimum evaluation period is one day); if P < P' in the evaluation period (where P' is a preset service compliance percentage threshold), the service level agreement is considered to have been violated in that evaluation period, otherwise it has not been violated; the evaluation result is output and saved in the database (step 5).
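As a purely illustrative numerical example with assumed figures: suppose one evaluation period of a day contains s = 288 sampling points (one every five minutes), the delay parameter has v_1 = 15 violation points and the packet loss rate has v_2 = 30, so p_1 = (1 - 15/288) × 100% ≈ 94.8% and p_2 = (1 - 30/288) × 100% ≈ 89.6%. With weights q_1 = 0.7 and q_2 = 0.3, the service compliance percentage is P ≈ 0.7 × 94.8% + 0.3 × 89.6% ≈ 93.2%; if the preset threshold is P' = 95%, the SLA is judged to have been violated in this evaluation period.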
An evaluation device for implementing the method of the invention is shown in FIG. 3, and comprises a task management module 1, a service level protocol analysis module 2, a network performance test module 3, a service level protocol evaluation module 4 and a report generation module 5; the task management module 1 is in bidirectional communication with the service level protocol analysis module 2, the network performance test module 3 and the service level protocol evaluation module 4 respectively; the report generation module 5 is connected with the task management module 1 and can generate an evaluation report from the evaluation result in a set format; the task management module 1 controls the whole service level agreement evaluation process, organizes the cooperative work of the other modules and generates the network performance test tasks; the service level protocol analysis module 2 periodically scans the database to check whether a new SLA has been entered and, if so, reads and analyzes the SLA and extracts the SLS part to generate a service level objective table, which is stored in the database as the basis for subsequently generating the test task table and the evaluation; the network performance test module 3 executes the test tasks according to the requirements of the test task table, performs end-to-end performance testing on the network between the source node and the destination node by combining optimized, improved active measurement with passive measurement, and stores the test results, i.e. the network performance data, in the database as the basis for the SLA evaluation; the service level protocol evaluation module 4 evaluates the performance of a given SLA over its validity period based on the test results and service level objective requirements stored in the database.
The noun explains:
1. IP packet transmission delay: defined as the time it takes to transmit an IP packet across one or more network segments (regardless of whether the transmission was successful or not).
2. Jitter: the difference between the maximum and minimum IP packet transmission delay within a short measurement interval.
3. IP packet loss rate: the ratio of lost IP packets to all transmitted IP packets.
4. Throughput: the amount of data transmitted per unit time through a given measurement point in the network.
5. SNMP (Simple Network Management Protocol): a standard protocol for managing nodes on an IP network.

Claims (7)

1. A method for evaluating an end-to-end service level agreement, the method comprising the following steps:
step 1] setting a database: setting a database for storing the entered SLA information, the business-parameter weight matrix, the intermediate data involved in the SLA evaluation and the final evaluation result;
step 2] scanning the database: the task management module (1) controls the service level agreement analysis module (2) to scan the database periodically and check whether a new SLA is input;
step 3] generating a test task table: when a new SLA is input, the task management module (1) controls the service level agreement analysis module (2) to read and analyze the SLA, extracts an SLS part and stores the SLS part in a database as a service level target (SLO); the task management module (1) generates a test task table and stores the test task table into a database;
and 4] simultaneously carrying out active measurement and passive measurement: in the SLA validity period range, the task management module (1) controls the network performance test module (3) to initiate a periodic active packet sending test from a source node to a destination node according to the arrangement of a test task table, and stores a test result in a database; meanwhile, one passive measurement is executed on the exit node of the user end network, and the test result of the passive measurement is obtained and stored in a database;
step 5] evaluating network performance data: the task management module (1) controls the service level agreement evaluation module (4) to evaluate the SLA according to the network performance data stored in the database by taking an evaluation period as a unit, and stores an evaluation result in the database;
the step 3] comprises the following specific steps of generating a test task table:
step 31] extracting SLA serial numbers, test index names to be tested, source node IP addresses and destination node IP addresses from the SLS;
step 32] generating a test time in a test period according to the test period requirement required in the SLS;
step 33], generating a test task table, and storing the test task table in a database, wherein the format of the test task table is as follows: test sequence number-SLA sequence number-test time-source IP address-destination IP address-test index.
The method for evaluating the network performance data in the step 5) comprises the following steps:
step 51, reading all IP network performance parameter values at the moment of acquiring IP network performance parameter data in an evaluation period according to the minimum evaluation time period set in the service level protocol;
step 52, counting the number of sampling points in the evaluation period and recording as s;
step 53] sequentially evaluating the compliance percentages p_1, p_2, ..., p_n of the IP network performance parameters in the evaluation period; counting the number of violation points for the ith IP network performance parameter in the evaluation period, and calculating the compliance percentage for the ith IP network performance parameter in the evaluation period;
step 54] performs a weighted summation of the compliance percentages of the various IP network performance parameters as the service compliance percentage during the evaluation period.
2. The method of evaluating an end-to-end service level agreement according to claim 1, wherein: the step 4] of simultaneously performing active measurement and passive measurement comprises the following specific steps:
step 41] presetting parameters for testing: the parameters comprise source and destination IP addresses and test index items;
step 42] determining the distribution of the active measurement packets in time, namely determining the distribution of the packet sending time in one measurement;
step 43] carrying out active packet sending measurement between the source node and the destination node to obtain a test result of the active measurement;
step 44] when the active measurement is executed, a passive measurement is executed on the network exit node of the user terminal, and a test result of the passive measurement is obtained;
and step 45, storing the test result to a database.
3. The method of evaluating an end-to-end service level agreement according to claim 2, wherein: the distribution of the packet sending times in step 42] is determined by the Poisson arrival process:
step 421] first determining the value of the average arrival interval λ;
step 422] determining the number of packets n to be sent in the test period T;
step 423] generating a sequence of independent, identically distributed pseudo-random numbers u_1, u_2, ..., u_n uniformly distributed on U[0, 1];
step 424] generating the inter-arrival time pseudo-random sequence Δ_1, Δ_2, ..., Δ_n;
step 425] determining the packet sending times t_1, t_2, ..., t_n as t_i = t_0 + Δ_1 + Δ_2 + ... + Δ_i.
4. The method of evaluating an end-to-end service level agreement according to claim 1, wherein: the basis in step 53] for judging whether the nth sampling point violates the ith IP network performance parameter is as follows:
step 531] from the set of IP network performance parameters collected at sampling point n, A_n = {a_n1, a_n2, ..., a_ni}, obtaining the true measured value a_ni of the ith IP network performance parameter at that sampling point;
step 532] from the set of IP network performance parameter thresholds agreed in the service level agreement, B = {b_1, b_2, ..., b_i}, obtaining the threshold b_i corresponding to the ith IP network performance parameter;
step 533] from the set of allowed errors for the IP network performance parameters agreed in the service level agreement, L = {l_1, l_2, ..., l_i}, obtaining the allowed error l_i corresponding to the ith IP network performance parameter;
step 534] from the set of IP network performance parameter comparison methods agreed in the service level agreement, E = {e_1, e_2, ..., e_i}, obtaining the comparison method e_i corresponding to the ith IP network performance parameter;
step 535] comparing a_ni with b_i; if they do not satisfy the condition e_i, sampling point n is regarded as a violation point of that IP network performance parameter.
5. The method of evaluating an end-to-end service level agreement according to claim 4, wherein: the method of comparing a_ni with b_i in step 535] is as follows:
step 5351] if e_i is 'greater than', first judge whether a_ni is greater than b_i; if a_ni is greater than b_i, then a_ni and b_i satisfy the condition e_i; if a_ni is less than b_i, then judge whether a_ni minus b_i is less than l_i; if so, a_ni and b_i are determined to satisfy the condition e_i, otherwise they do not satisfy e_i;
step 5352] if e_i is 'less than', first judge whether a_ni is less than b_i; if a_ni is less than b_i, then a_ni and b_i are determined to satisfy the condition e_i; if a_ni is greater than b_i, then judge whether b_i minus a_ni is less than l_i; if so, a_ni and b_i are determined to satisfy the condition e_i; otherwise they do not satisfy e_i.
6. An evaluation device for implementing an evaluation method of an end-to-end service level agreement, characterized in that: the system comprises a task management module (1), a service level protocol analysis module (2), a network performance test module (3), a service level protocol evaluation module (4) and a report generation module (5); the task management module (1) is in bidirectional communication with the service level protocol analysis module (2), the network performance test module (3) and the service level protocol evaluation module (4) respectively;
the task management module (1) can control all the evaluation processes and generate a network performance test task;
the service level agreement analysis module (2) can periodically scan the database, read and analyze SLA, extract SLS part and generate a service level target table and store the service level target table in the database;
the network performance testing module (3) performs end-to-end performance testing on the network between the source node and the destination node in an optimized manner that combines improved active measurement with passive measurement, according to the requirements of the test task table, and stores the test result, i.e. the network performance data, in the database;
the service level agreement evaluation module (4) may evaluate the performance of a given SLA over its lifetime based on the test results and service level objective requirements stored in the database.
The specific steps of the task management module (1) for generating the test task table and storing the test task table into the database are as follows:
extracting SLA serial numbers, test index names to be tested and source and destination node IP addresses from the SLS; generating a test time in a test period according to the test period requirement required in the SLS; generating a test task table and storing the test task table in a database; the format is as follows: testing serial number-SLA serial number-testing time-source IP address-destination IP address-testing index;
the specific steps of the service level agreement evaluation module (4) for evaluating the SLA according to the test results and the service level target requirements stored in the database are as follows:
reading all IP network performance parameter values at the moments at which IP network performance parameter data were acquired in an evaluation period, according to the minimum evaluation period set in the service level protocol; counting the number of sampling points in the evaluation period and recording it as s; sequentially evaluating the compliance percentages p_1, p_2, ..., p_n of the IP network performance parameters in the evaluation period; counting the number of violation points for the ith IP network performance parameter in the evaluation period, and calculating the compliance percentage for the ith IP network performance parameter in the evaluation period; and performing a weighted summation of the compliance percentages of the IP network performance parameters to serve as the service compliance percentage in the evaluation period.
7. The apparatus for evaluating an end-to-end service level agreement according to claim 6, wherein: it comprises a report generation module (5); the report generation module (5) is connected with the task management module (1) and can generate an evaluation report according to a set format of an evaluation result.
CNB2006101044356A 2006-07-31 2006-07-31 Method and apparatus for estimating terminal to terminal service grade protocol Expired - Fee Related CN100369424C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101044356A CN100369424C (en) 2006-07-31 2006-07-31 Method and apparatus for estimating terminal to terminal service grade protocol

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006101044356A CN100369424C (en) 2006-07-31 2006-07-31 Method and apparatus for estimating terminal to terminal service grade protocol

Publications (2)

Publication Number Publication Date
CN1905497A CN1905497A (en) 2007-01-31
CN100369424C true CN100369424C (en) 2008-02-13

Family

ID=37674632

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101044356A Expired - Fee Related CN100369424C (en) 2006-07-31 2006-07-31 Method and apparatus for estimating terminal to terminal service grade protocol

Country Status (1)

Country Link
CN (1) CN100369424C (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101247282B (en) * 2008-01-29 2010-10-27 杭州华三通信技术有限公司 Network test method, system and network managing station based on service level protocol
CN101237358B (en) * 2008-02-01 2010-08-25 北京工业大学 Analysis method on available bandwidth of network parameter measurement system and point-to-point access time sequence
CN103098417B (en) * 2010-06-08 2016-04-06 工业和信息化部电信研究院 A kind of comprehensive evaluation system for soft-switch usability
EP2834962A1 (en) * 2012-04-06 2015-02-11 Interdigital Patent Holdings, Inc. Optimization of peer-to-peer content delivery service
CN107579862B (en) * 2017-10-17 2021-05-18 北京安控科技股份有限公司 Method for measuring network communication capability of equipment
CN107948021A (en) * 2018-01-03 2018-04-20 深圳市国电科技通信有限公司 A kind of detection method and device of end-to-end communication performance
CN110874352B (en) * 2018-08-29 2023-04-11 阿里巴巴集团控股有限公司 Database management method and system
CN109245949B (en) * 2018-10-31 2022-03-01 新华三技术有限公司 Information processing method and device
CN109587011A (en) * 2019-01-10 2019-04-05 北京新宇航星科技有限公司 Multifunctional network ability meter and test method
CN110825643A (en) * 2019-11-11 2020-02-21 广东电网有限责任公司 Method for monitoring execution condition of test task
CN113965475B (en) * 2020-07-02 2023-09-05 中国移动通信集团设计院有限公司 Network slicing project acceptance method and system
CN113098708B (en) * 2021-03-23 2022-07-05 北京首都在线科技股份有限公司 Public network quality evaluation method and device, electronic equipment and storage medium
CN113472567B (en) * 2021-06-16 2022-08-19 中国联合网络通信集团有限公司 Network SLA calculation method and device
CN115473871B (en) * 2022-09-19 2023-08-04 广州市百果园网络科技有限公司 Domain name local resolution method and device, equipment, medium and product thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005008402A2 (en) * 2003-07-11 2005-01-27 International Business Machines Corporation Systems and methods for monitoring and controlling business level service level agreements
CN1601976A (en) * 2003-09-26 2005-03-30 国际商业机器公司 Real-time service level agreement (SLA) impact analysis method and system
WO2005096192A1 (en) * 2004-03-31 2005-10-13 International Business Machines Corporation Method enabling real-time testing of on-demand infrastructure to predict service level agreement compliance

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005008402A2 (en) * 2003-07-11 2005-01-27 International Business Machines Corporation Systems and methods for monitoring and controlling business level service level agreements
CN1601976A (en) * 2003-09-26 2005-03-30 国际商业机器公司 Real-time service level agreement (SLA) impact analysis method and system
WO2005096192A1 (en) * 2004-03-31 2005-10-13 International Business Machines Corporation Method enabling real-time testing of on-demand infrastructure to predict service level agreement compliance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
End-to-end SLA performance measurement and analysis. Cheng Weiqing, Gong Jian, Ge Liqing, Zhao Jun. Journal of Southeast University (Natural Science Edition), Vol. 34, No. 5. 2004 *

Also Published As

Publication number Publication date
CN1905497A (en) 2007-01-31

Similar Documents

Publication Publication Date Title
CN100369424C (en) Method and apparatus for estimating terminal to terminal service grade protocol
Jain et al. Ten fallacies and pitfalls on end-to-end available bandwidth estimation
JP5204295B2 (en) Estimating available end-to-end bandwidth in packet-switched communication networks
US10003506B2 (en) Automatic discovery and enforcement of service level agreement settings
US20040153563A1 (en) Forward looking infrastructure re-provisioning
EP2590081A2 (en) Method, computer program, and information processing apparatus for analyzing performance of computer system
CN104348678B (en) Ethernet performance measurement method, apparatus and system
Zarifzadeh et al. Range tomography: combining the practicality of boolean tomography with the resolution of analog tomography
JP2004289791A (en) A measurement architecture for obtaining hop-by-hop one-way packet loss and delay in multi-class service networks
Liu et al. Multi-hop probing asymptotics in available bandwidth estimation: Stochastic analysis
CN110740065B (en) Method, device and system for identifying deterioration fault point
JP2008283621A (en) Apparatus and method for monitoring network congestion state, and program
US8437264B1 (en) Link microbenchmarking with idle link correction
JP4935635B2 (en) Network bandwidth estimation program, network bandwidth estimation device, network bandwidth estimation method, and measurement device
Choi et al. Practical delay monitoring for ISPs
Angrisani et al. Testing communication and computer networks: an overview
CN115643192A (en) Detection system, method, equipment and medium for no-missing packet grabbing index
Serral-Gracia et al. Network performance assessment using adaptive traffic sampling
Serral-Gracià et al. An efficient and lightweight method for Service Level Agreement assessment
CN112583658A (en) Available bandwidth measuring method, storage medium and equipment
JP2001119396A (en) Method for evaluating service quality in frequency area on packet network
Qiu et al. Packet doppler: Network monitoring using packet shift detection
Homayouni et al. CMPT: A methodology of comparing performance measurement tools
Nguyen et al. On the correlation of internet packet losses
Tan et al. Abshoot: A reliable and efficient scheme for end-to-end available bandwidth measurement

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee

Owner name: XI'AN IWNCOMM CO., LTD.

Free format text: FORMER NAME: XIDIAN JIETONG WIRELESS NETWORK COMMUNICATION CO LTD, XI'AN

CP01 Change in the name or title of a patent holder

Address after: A201, Xi'an Software Park, No. 68 Technology 2nd Road, High-tech Zone, Xi'an, Shaanxi 710075

Patentee after: CHINA IWNCOMM Co.,Ltd.

Address before: A201, Xi'an Software Park, No. 68 Technology 2nd Road, High-tech Zone, Xi'an, Shaanxi 710075

Patentee before: CHINA IWNCOMM Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080213

Termination date: 20210731