CN115955447B - Data transmission method, switch and switch system - Google Patents
- Publication number
- CN115955447B (application CN202310231807.5A)
- Authority
- CN
- China
- Prior art keywords
- queuing
- data packets
- data
- queue
- priority
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/50—Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate
Landscapes
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The application relates to a data transmission method, a switch, and a switch system. The method comprises: in response to acquiring a data packet, parsing the data packet to obtain its address; constructing a plurality of queuing queues according to the addresses, where the priority of the data packets in each queuing queue is the same; sending the data packets on each queuing queue according to priority, and adjusting the bandwidth of each queuing queue according to the volume of data packets buffered on it; and configuring a first dynamic cache pool for the low-priority queuing queues, with data packets sent to the first dynamic cache pool returned to their corresponding queuing queues by sequential alternate queuing. By managing received data uniformly and transmitting it according to priority level, aided by dynamic cache pools that hold temporarily stored data, the data transmission method, switch, and switch system process high-priority data quickly, allowing it to circulate rapidly through the network.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a data transmission method, a switch, and a switch system.
Background
A switch has three transmission modes: cut-through, store-and-forward, and fragment-free. In cut-through mode, forwarding begins as soon as the destination address has been received; its delay is small, but damaged data is forwarded along with good data. In store-and-forward mode, the switch receives the complete data packet, checks its quality, forwards it if it is intact, and requests retransmission if it is not; transmission is reliable, but the delay is long. In fragment-free mode, packets of 64 bytes or more are forwarded after reception and packets shorter than 64 bytes are discarded; its quality lies between the other two modes.
These modes achieve high data throughput, but in a prioritized usage scenario they are no longer sufficient, because users have different expectations for the data generated by different applications: for voice and video, the user expects the network to react quickly, whereas for downloads no such craving for speed exists. With limited switch resources, how to use bandwidth reasonably so as to meet user requirements calls for further work.
Disclosure of Invention
The application provides a data transmission method, a switch, and a switch system that process high-priority data quickly by managing received data uniformly and transmitting it according to priority level, aided by dynamic cache pools that store temporarily held data, so that high-priority data can circulate rapidly through the network.
The above object of the present application is achieved by the following technical solutions:
in a first aspect, the present application provides a data transmission method, including:
in response to acquiring a data packet, parsing the data packet to obtain its address, where the address comprises a MAC address and a public network address;
constructing a plurality of queuing queues according to the addresses, where the priority of the data packets in each queuing queue is the same;
sending the data packets on each queuing queue according to priority, and adjusting the bandwidth of each queuing queue according to the volume of data packets buffered on it;
configuring a first dynamic cache pool for the low-priority queuing queues, where data packets sent to the first dynamic cache pool are returned to their corresponding queuing queues by sequential alternate queuing; and
when, in unit time, the number of data packets on a queuing queue of a certain priority is smaller than a set number, merging that queuing queue into the queuing queue of a higher or lower level.
In a possible implementation of the first aspect, the acquired data packets are screened, and data packets shorter than 64 bytes are removed.
In a possible implementation of the first aspect, for the data packets on a high-priority queuing queue, the sending time of each data packet is the same, and higher-level queuing queues take, in order, the bandwidth of lower-level queuing queues.
In a possible implementation of the first aspect, when bandwidth runs short, a second dynamic cache pool is allocated to any queuing queue left without bandwidth, and the second dynamic cache pool stores the squeezed-out data packets;
after bandwidth is restored, the data packets in the second dynamic cache pool are returned to the queuing queue by sequential alternate queuing.
In a possible implementation manner of the first aspect, the method further includes:
extracting the data packets on a queuing queue and copying them into a check cache pool;
performing an integrity check on the data packets in the check cache pool; and
implanting the integrity check result into an as-yet-unsent data packet on the queuing queue;
where a correction data packet is requested for any data packet that fails the integrity check.
In a possible implementation of the first aspect, a data packet containing an integrity check result is not copied to the check cache pool.
In a possible implementation of the first aspect, the correction data packets enter the corresponding queuing queues by cutting into the queue;
or a correction data packet first enters a cut-in cache pool and is then placed at the head of the queuing queue.
In a second aspect, the present application provides a switch, comprising:
a parsing unit, configured to parse a data packet in response to acquiring it and to obtain the address of the data packet, where the address comprises a MAC address and a public network address;
a queue unit, configured to construct a plurality of queuing queues according to the addresses, where the priority of the data packets in each queuing queue is the same;
a sending unit, configured to send the data packets on each queuing queue according to priority and to adjust the bandwidth of each queuing queue according to the volume of data packets buffered on it;
a configuration unit, configured to configure the first dynamic cache pool for the low-priority queuing queues, where data packets sent to the first dynamic cache pool are returned to their corresponding queuing queues by sequential alternate queuing; and
a queue adjusting unit, configured to merge a queuing queue into the queuing queue of a higher or lower level when, in unit time, the number of data packets on the queuing queue of a certain priority is smaller than a set number.
In a third aspect, the present application provides a switch system, the system comprising:
one or more memories for storing instructions; and
one or more processors configured to invoke and execute the instructions from the memory to perform the method as described in the first aspect and any possible implementation manner of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium comprising:
a program which, when executed by a processor, performs a method as described in the first aspect and any possible implementation of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising program instructions which, when executed by a computing device, perform a method as described in the first aspect and any possible implementation manner of the first aspect.
In a sixth aspect, the present application provides a chip system comprising a processor for implementing the functions involved in the above aspects, e.g. generating, receiving, transmitting, or processing data and/or information involved in the above methods.
The chip system may consist of chips alone, or may comprise chips together with other discrete devices.
In one possible design, the chip system also includes memory for holding the necessary program instructions and data. The processor and the memory may be decoupled and provided on different devices, connected by wire or wirelessly, or they may be coupled on the same device.
Drawings
Fig. 1 is a schematic block diagram of a data transmission method according to the present application.
Fig. 2 is a schematic diagram of a queue arrangement processing procedure of a data packet provided in the present application.
Fig. 3 is a schematic diagram of a process for encapsulating a data packet provided in the present application.
Fig. 4 is a schematic diagram of a data packet provided in the present application.
Fig. 5 is a schematic diagram of a process of entering a first dynamic buffer pool by a data packet provided in the present application.
Fig. 6 is a schematic diagram of a process of entering a second dynamic buffer pool for a data packet provided in the present application.
Fig. 7 is a schematic block diagram of a step flow for inspecting a data packet provided in the present application.
Fig. 8 is a schematic diagram of inspecting a data packet and transmitting inspection results provided in the present application.
Detailed Description
Working principle of switch data transmission: after any node of the switch receives a data-sending instruction, the switch quickly searches the address table stored in memory, confirms the port to which the network card with that MAC address is connected, and then sends the data to that node. If the corresponding entry is found in the address table, the data is sent to the corresponding port; otherwise, the switch records the address for the next search. In general, a switch only needs to deliver frames to the corresponding port, rather than to all nodes as a hub does, which saves resources and time and improves the data transmission rate.
A hub transmits data by sharing and cannot guarantee communication speed. The hub-sharing method, also called a shared network, uses hubs as the connecting devices and has only one data flow direction, so the efficiency of network sharing is very low.
In contrast, a switch can identify each computer connected to it, storing and recognizing the physical address (commonly called the MAC address) of each computer's network card. Without needing a broadcast search, the switch can directly find the stored MAC address of the corresponding port and complete the data transmission between two nodes over a temporary dedicated data channel, free of outside interference. Since a switch also supports full-duplex transmission, it can establish temporary dedicated channels between several pairs of nodes simultaneously, forming a crisscrossing structure of data transmission channels.
The three transmission modes mentioned in the foregoing are as follows:
straight-Through (Cut Through): when an input port detects a data packet, the header of the packet is checked and the data packet is passed through to the corresponding port according to the destination address in the packet.
The advantages are that: the method starts forwarding without waiting for the completion of the data packet receiving, and has the advantages of high switching speed and very small delay.
Disadvantages: without providing error detection services, it is possible to forward out erroneous data packets. No buffer is provided, ports with different rates cannot be directly connected, and packet loss is easy.
Storage-forwarding (Store and Forward): the method firstly receives the data packet completely, and if the data packet has no error after CRC check, the data packet is forwarded according to the address.
The advantages are that: providing error detection services improves network performance. The forwarding service of ports with different speeds is supported, and the cooperation between the high-speed port and the low-speed port can be ensured.
Disadvantages: the transmission delay is large and a large buffer capacity is required.
Fragment Free forwarding (Fragment Free): it checks if the packet is 64bytes long, if it is less than 64bytes, it indicates a waste packet, it is discarded, and if it is more than 64bytes, it sends the packet.
This approach ensures that collision fragments do not propagate in the network, improving network efficiency, and its data processing speed is between that of the through-type and the storage-forwarding type.
The low-end switch products generally have only one switching mode, while some high-end switch products have two switching modes, and the switching modes can be automatically selected according to network environments.
The technical solutions in the present application are described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a data transmission method disclosed in the present application includes the following steps:
S101, in response to acquiring a data packet, parsing the data packet to obtain its address, where the address comprises a MAC address and a public network address;
S102, constructing a plurality of queuing queues according to the addresses, where the priority of the data packets in each queuing queue is the same;
S103, sending the data packets on each queuing queue according to priority, and adjusting the bandwidth of each queuing queue according to the volume of data packets buffered on it; and
S104, configuring a first dynamic cache pool for the low-priority queuing queues, where data packets sent to the first dynamic cache pool are returned to their corresponding queuing queues by sequential alternate queuing;
and, when in unit time the number of data packets on a queuing queue of a certain priority is smaller than a set number, merging that queuing queue into the queuing queue of a higher or lower level.
Specifically, in step S101, the switch receives a data packet and, in response, parses it to obtain its address. The address comprises a MAC address and a public network address; both are addresses of the terminal to which the data packet is to be sent, the difference being that the switch knows the MAC address but does not know the public network address.
It will be appreciated that the switch maintains a MAC address table recording the MAC address of every connected device together with the corresponding port, i.e., which port of the switch each device is plugged into.
Then, using the receiver's MAC address carried in the transmitted data packet, the corresponding record is looked up in the table, revealing the switch port from which the packet should be forwarded. A public network address is not recorded in the MAC address table and therefore must be handled separately.
After the public network address has been broadcast on the network to obtain the corresponding MAC address, the switch stores that MAC address in its own MAC address table and processes such data packets in the manner of step S101.
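The lookup-then-learn behaviour described above can be sketched as follows; the class and function names are illustrative assumptions, not taken from the application.

```python
# Minimal sketch of a switch MAC table: forward on a known destination MAC,
# flood (broadcast) an unknown one, and learn the source MAC for next time.

class MacTable:
    def __init__(self):
        self._table = {}  # MAC address -> switch port

    def learn(self, mac, port):
        # Record (or refresh) the port on which this MAC was last seen.
        self._table[mac] = port

    def lookup(self, mac):
        # Return the known egress port, or None to signal a flood.
        return self._table.get(mac)

def forward(table, src_mac, dst_mac, in_port):
    table.learn(src_mac, in_port)   # remember the sender for later lookups
    out_port = table.lookup(dst_mac)
    if out_port is None:
        return "flood"              # unknown destination: send to all ports
    return out_port
```

After the first exchange in each direction, both endpoints are in the table and all further frames go only to the correct port, which is the resource saving the description attributes to switches over hubs.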
Referring to fig. 2, in step S102 a plurality of queuing queues is constructed according to the addresses, with all data packets in a given queuing queue sharing the same priority. When there are multiple addresses, there are correspondingly multiple groups of queuing queues, and each group contains multiple queuing queues.
The purpose of a queuing queue is to classify data packets by priority; for example, the following scheme may be used: priorities 6 and 7 are typically reserved for network control data; priority 5 is recommended for voice data; priority 4 is used by video conferences and video streams; priority 3 is used for voice control data; priorities 1 and 2 are used for data traffic; priority 0 is the default marking value.
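The priority scheme listed above can be expressed as a simple table; the mapping below follows the description's 802.1p-style values, but the dictionary and function names are assumptions for illustration.

```python
# Priority values -> traffic classes, as enumerated in the description.
PRIORITY_CLASSES = {
    7: "network control",
    6: "network control",
    5: "voice",
    4: "video conference / video stream",
    3: "voice control",
    2: "data traffic",
    1: "data traffic",
    0: "default (best effort)",
}

def classify(priority):
    # Unknown or out-of-range values fall back to the default class, priority 0.
    return PRIORITY_CLASSES.get(priority, PRIORITY_CLASSES[0])
```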
Referring to figs. 3 and 4, the header in a data packet is explained taking IP as an example:
IP header length (IHL): 4 bits. This field describes the length of the IP header, which is necessary because the header contains a variable-length options part. The field counts in units of 32 bits (4 bytes), i.e., field value = IP header length in bits / 32, so the maximum header length is "1111", i.e., 15 x 4 = 60 bytes. The minimum length of an IP header is 20 bytes.
Type of Service: 8 bits, originally defined as PPP D T R C 0 (three precedence bits, the Delay, Throughput, Reliability, and Cost flags, and a reserved zero bit).
The TOS field has since been redefined as part of the Differentiated Services (DiffServ) architecture: its first 6 bits form the Differentiated Services Code Point (DSCP), with which 64 different service classes can be defined, and its last 2 bits carry ECN (Explicit Congestion Notification).
Total Length: 16 bits. The length of the IP packet (header plus data) counted in bytes, so the maximum length of an IP packet is 65535 bytes.
Identification (datagram ID): 16 bits. This field is used together with the Flags and Fragment Offset fields to fragment large upper-layer packets. After a router splits a packet, all fragments are marked with the same value so that the destination device can determine which arriving fragments belong to the same original packet.
Flags: 3 bits. The first bit is unused. The second bit is the DF (Don't Fragment) bit: when set to 1, the router may not fragment the upper-layer packet, and if the packet cannot be forwarded without fragmentation, the router discards it and returns an error message. The third bit is the MF (More Fragments) bit: when a router fragments an upper-layer packet, it sets the MF bit to 1 in the header of every fragment except the last.
Fragment Offset: 13 bits. Indicates the position of this fragment within the original IP packet, from which the receiving end reassembles the packet.
Time To Live (TTL): 8 bits. A specific value is assigned to this field when an IP packet is first sent, and each router along the way decreases the packet's TTL by 1. When the TTL reaches 0, the packet is dropped. This field prevents IP packets from circulating endlessly around routing loops.
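The fields walked through above can be extracted from the fixed 20-byte part of an IPv4 header with a few lines of code; this is a generic sketch of standard IPv4 parsing, not code from the application.

```python
import struct

def parse_ipv4_header(raw):
    # Unpack the fixed 20-byte IPv4 header (network byte order).
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    ihl = ver_ihl & 0x0F                  # header length in 32-bit words
    return {
        "version": ver_ihl >> 4,
        "header_bytes": ihl * 4,          # 20..60 bytes, as noted above
        "dscp": tos >> 2,                 # first 6 bits of the old TOS field
        "ecn": tos & 0x3,                 # last 2 bits of the old TOS field
        "total_length": total_len,
        "identification": ident,
        "df": bool(flags_frag & 0x4000),  # Don't Fragment bit
        "mf": bool(flags_frag & 0x2000),  # More Fragments bit
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl": ttl,
    }
```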
Each time a data packet passes through a layer, it is encapsulated once, and the encapsulated information includes path-related and similar information. A router may be regarded as a transfer station with several input channels and several output channels; data optimization aims to make the packets flowing over those channels move in the best, or at least most appropriate, way.
Centralized processing of packets on the input channels serves to distinguish which packets need priority handling and which can be handled later; the packets are then queued uniformly and, once queued, transmitted. During uniform queuing the packet addresses are also resolved, i.e., all packets on the same group of queuing queues are destined for the same address.
In step S103, the data packets on each queuing queue are sent according to priority, and the bandwidth of each queuing queue is adjusted according to the volume of data packets buffered on it. When packets are sent, high-priority packets are sent first, following the priority scheme given above: high-priority packets are kept continuously in a sending state and the bandwidth of the high-priority queuing queue is guaranteed preferentially, while low-priority packets are sent intermittently or occupy only small bandwidth resources.
In addition, because the processing speeds of sending and receiving data packets differ, the bandwidth of a queuing queue is adjusted according to the volume of packets buffered on it. Specifically, when the buffered volume is large, bandwidth is allocated preferentially to that queuing queue so that its packets can be sent out as quickly as possible; since the total buffer capacity is limited, this avoids packet loss.
Bandwidth is likewise allocated according to priority level: while bandwidth is being allocated, a queuing queue may only squeeze the bandwidth of queuing queues with lower priority. When no bandwidth margin remains, subsequent packets are discarded and retransmission is requested.
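The rule that a queue may only squeeze bandwidth from lower-priority queues can be sketched as below; all names and the unit-based bandwidth model are assumptions for illustration.

```python
def squeeze_bandwidth(allocations, priorities, needy, demand):
    """Move up to `demand` bandwidth units to queue `needy`.

    allocations: dict queue -> bandwidth units currently held.
    priorities:  dict queue -> level (higher number = higher priority).
    Only queues of strictly LOWER priority may be drained, never equal or
    higher ones, matching the allocation rule in the description.
    """
    taken = 0
    # Visit lower-priority queues, lowest level first, draining as needed.
    donors = sorted((q for q in allocations
                     if priorities[q] < priorities[needy]),
                    key=lambda q: priorities[q])
    for q in donors:
        if taken >= demand:
            break
        grab = min(allocations[q], demand - taken)
        allocations[q] -= grab
        taken += grab
    allocations[needy] += taken
    return taken  # may be less than demand; remaining packets wait or drop
```

When `taken` comes back short of `demand`, no margin remains anywhere below the needy queue, which is the point at which the description says subsequent packets are discarded and retransmission is requested.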
Referring to fig. 5, in step S104 a first dynamic cache pool is configured for the low-priority queuing queues; packets sent to the first dynamic cache pool are returned to their corresponding queuing queues by sequential alternate insertion. The first dynamic cache pool stores packets that temporarily cannot be sent.
The purpose of alternation is to keep the data packets as ordered as possible: if concentrated batch transmission were used, new packets would inevitably arrive in the first dynamic cache pool during the batch. Alternating transmission keeps both the packets on the queuing queue and the packets in the first dynamic cache pool flowing, while still occupying bandwidth.
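One reading of "sequential alternate queuing" is a one-for-one interleave of cache-pool packets with packets already waiting on the queue, so both streams keep flowing; the sketch below encodes that interpretation (an assumption, since the application does not give pseudocode).

```python
from collections import deque

def alternate_requeue(queue, cache_pool):
    """Interleave cache-pool packets back into the queue, one by one."""
    merged = deque()
    q, pool = deque(queue), deque(cache_pool)
    while q or pool:
        if q:
            merged.append(q.popleft())   # one packet from the live queue
        if pool:
            merged.append(pool.popleft())  # then one returned from the pool
    return merged
```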
In the foregoing process, a queuing queue of some priority may come to hold too few packets. That situation is handled as follows: when, in unit time, the number of packets on a queuing queue of a certain priority is smaller than a set number, the queuing queue is merged into a queuing queue of a higher or lower level.
Preferably, the queuing queue is merged into the queuing queue of a higher level.
The purpose is to reduce the number of queuing queues: managing them also consumes a certain amount of processor resources, and once their number is optimized, those resources can be freed for receiving and sending packets.
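The merge-upward rule can be sketched as follows; the data layout (dicts keyed by priority) and the function name are assumptions for illustration.

```python
def merge_sparse_queues(queues, counts, threshold):
    """Fold sparse priority levels into the nearest higher level.

    queues: dict priority -> list of packets.
    counts: dict priority -> packets observed per unit time.
    A level with fewer than `threshold` packets per unit time is merged
    upward (the preferred direction in the description).
    """
    for prio in sorted(queues):
        if counts.get(prio, 0) >= threshold:
            continue
        higher = [p for p in queues if p > prio]
        if not higher:
            continue  # nothing above: leave the top queue alone
        target = min(higher)  # nearest higher-priority queue
        queues[target].extend(queues.pop(prio))
    return queues
```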
Before data packets enter the queuing queues, the acquired packets are screened and those shorter than 64 bytes are removed. Packets shorter than 64 bytes are all invalid; keeping them out of the queuing queues improves queue utilization.
For invalid packets, retransmission must be requested from the sender, and the retransmitted packets are processed in the manner of steps S101 to S104.
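The screening step above in miniature: frames shorter than 64 bytes (the Ethernet minimum frame size) are treated as invalid and never enter a queuing queue. The function name is illustrative.

```python
MIN_FRAME = 64  # bytes; runts below this are collision fragments / invalid

def screen(packets):
    """Split packets into (valid, invalid) by the 64-byte rule."""
    valid = [p for p in packets if len(p) >= MIN_FRAME]
    invalid = [p for p in packets if len(p) < MIN_FRAME]
    return valid, invalid
```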
For packets on a high-priority queuing queue, the processing strategy gives every packet the same sending time, allowing those packets to be received and sent at a stable speed. Compared with transmission over equal bandwidth, constant-speed transmission does not cause a backlog. This requires occupied bandwidth, and the higher-level queuing queues take, in order, the bandwidth of the lower-level ones.
When bandwidth is insufficient, some queuing queues may be left with no bandwidth at all; this is handled as follows:
Referring to fig. 6, when bandwidth runs short, a second dynamic cache pool is allocated to any queuing queue without bandwidth; it stores the squeezed-out packets. After bandwidth is restored, the packets in the second dynamic cache pool are returned to the queuing queue by sequential alternate queuing.
Referring to fig. 7, the present application handles the integrity of data packets as follows:
S201, extracting the data packets on a queuing queue and copying them into a check cache pool;
S202, performing an integrity check on the data packets in the check cache pool; and
S203, implanting the integrity check result into an as-yet-unsent data packet on the queuing queue;
where a correction data packet is requested for any data packet that fails the integrity check.
Specifically, the packets to be checked are copied into the check cache pool, where their integrity is verified. A hash algorithm may be used for the check: hashing is an irreversible mapping, so a hash value can be computed from the data, but the original data cannot be recovered from the hash value.
In general, different data produce different hash values; collisions are possible but so unlikely that the very small probability is not considered here. Hash algorithms commonly used for network data integrity verification include MD5 and SHA.
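The check can be illustrated with the standard library's `hashlib`: the sender's digest travels with the packet and the receiver recomputes it. MD5 and SHA are the families the description names; SHA-256 is used here as one member of the SHA family.

```python
import hashlib

def digest(payload):
    # Compute the one-way hash of the packet payload.
    return hashlib.sha256(payload).hexdigest()

def integrity_check(payload, expected_digest):
    # Hashing is irreversible, so the only option is recompute-and-compare.
    return digest(payload) == expected_digest
```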
Referring to fig. 8, a data packet that passes the integrity check is deleted from the check cache pool, and the check result is embedded in an as-yet-unsent data packet on the queuing queue, which is then sent to the using terminal. By parsing that packet, the using terminal learns which data packets can be used and which cannot; this applies only to packets that underwent the integrity check in the check cache pool.
For data packets that did not undergo the integrity check in the check cache pool, the using terminal must perform the check itself and then request a new data packet from the sending end.
The advantage of this arrangement is that the integrity check is shared between the switch and the using terminal: when the switch's load is small, the switch performs most of the packet integrity checks; when its load is large, the switch performs only a small portion of them.
That is, when the switch's load is large, the capacity of the check cache pool is also reallocated to the first and second dynamic cache pools. This dynamic allocation of cache pools gives the switch, under heavy load, more cache space for storing packets on the queuing queues that temporarily cannot be sent.
In some possible implementations, a data packet containing an integrity check result is not copied to the check cache pool. Such a packet remains usable in the usage mode of the present application even if it is damaged, so no integrity check need be performed on it, and the limited computing resources can be allocated to other processes.
Correction data packets requested by the switch or by the using terminal enter the corresponding queuing queue by cutting into the queue; alternatively, a correction packet first enters a cut-in cache pool and is then placed at the head of the queuing queue. The two methods differ in when the terminal receives the packet: the second is slower than the first.
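The two delivery modes for a correction packet, as interpreted here (an assumption, since the application only names them): either cut directly into the head of the queue, or stage the packet in a cut-in cache pool that is later drained to the head of the queue. All names are illustrative.

```python
from collections import deque

def cut_in(queue, packet):
    # Mode 1: immediate queue-jump to the first position.
    queue.appendleft(packet)

def buffered_cut_in(queue, buffer_pool, packet, drain=False):
    # Mode 2: stage the packet first; `drain` moves all staged packets to
    # the head of the queue, preserving their staging order.
    buffer_pool.append(packet)
    if drain:
        while buffer_pool:
            queue.appendleft(buffer_pool.pop())
```

Mode 1 reaches the terminal sooner; mode 2 trades that latency for batching, consistent with the description's note that the second method is slower.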
The application also provides a switch, comprising:
a parsing unit, configured to parse a data packet in response to acquiring it and to obtain the address of the data packet, where the address comprises a MAC address and a public network address;
a queue unit, configured to construct a plurality of queuing queues according to the addresses, where the priority of the data packets in each queuing queue is the same;
a sending unit, configured to send the data packets on each queuing queue according to priority and to adjust the bandwidth of each queuing queue according to the volume of data packets buffered on it;
a configuration unit, configured to configure the first dynamic cache pool for the low-priority queuing queues, where data packets sent to the first dynamic cache pool are returned to their corresponding queuing queues by sequential alternate queuing; and
a queue adjusting unit, configured to merge a queuing queue into the queuing queue of a higher or lower level when, in unit time, the number of data packets on the queuing queue of a certain priority is smaller than a set number.
Further, the acquired data packets are screened, and data packets shorter than 64 bytes are removed.
Further, for the data packets on a high-priority queuing queue, the sending time of each data packet is the same, and higher-level queuing queues take, in order, the bandwidth of lower-level queuing queues.
Further, when bandwidth runs short, a second dynamic cache pool is allocated to any queuing queue left without bandwidth, and the second dynamic cache pool stores the squeezed-out data packets;
after bandwidth is restored, the data packets in the second dynamic cache pool are returned to the queuing queue by sequential alternate queuing.
Further, the method further comprises the following steps:
the copying unit is used for extracting the data packets on the queuing queue and copying the data packets into the check cache pool;
the checking unit is used for checking the integrity of the data packet in the checking cache pool;
an implanting unit, configured to implant the integrity check result into an unsent data packet on the queuing queue; and
and the request unit is used for requesting to send the corrected data packet when the data packet which does not pass the integrity check is found.
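The copy / check / implant / request pipeline of these four units might be sketched as below. The packet layout (a dict carrying an MD5 checksum) and the choice of implanting the result into the last unsent packet are assumptions for illustration; the patent does not fix the check algorithm.

```python
import hashlib
from collections import deque

def check_and_request(queue: deque) -> list:
    """Copy queued packets into a check cache pool, verify each copy's
    payload against its carried checksum, implant the overall result into
    an unsent packet still on the queue, and collect correction requests
    for every packet that fails. A packet here is a dict:
    {"payload": bytes, "checksum": str}."""
    # copying unit: packets already carrying a check result are not copied
    check_pool = [dict(p) for p in queue if "check_result" not in p]
    corrections = []
    for packet in check_pool:  # checking unit
        ok = hashlib.md5(packet["payload"]).hexdigest() == packet["checksum"]
        if not ok:
            corrections.append(packet)  # request unit: ask for a correction
    if queue:  # implanting unit: tag an unsent packet with the result
        queue[-1]["check_result"] = not corrections
    return corrections
```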
Further, the data packet containing the integrity check result is not copied to the check cache pool.
Further, the correction data packet enters the corresponding queuing queue by queue insertion;
or the correction data packet enters the queuing cache pool and is then enqueued at the head of the queue.
In one example, the units in any of the above apparatuses may be one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (application specific integrated circuit, ASIC), one or more digital signal processors (digital signal processor, DSP), one or more field programmable gate arrays (field programmable gate array, FPGA), or a combination of at least two of these integrated circuit forms.
For another example, when the units in the apparatus are implemented in the form of a processing element scheduling a program, the processing element may be a general-purpose processor, such as a central processing unit (central processing unit, CPU) or another processor capable of invoking the program. For yet another example, the units may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Various objects, such as messages, information, devices, network elements, systems, apparatuses, actions, operations, processes, and concepts, may be named in the present application. It should be understood that these specific names do not constitute limitations on the related objects; the names may change with the scenario, context, or usage habit, and the technical meaning of technical terms in the present application should be determined mainly from the functions and technical effects embodied in the technical solution.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It should also be understood that in the various embodiments of the present application, the terms first, second, etc. are merely intended to distinguish multiple objects. For example, the first time window and the second time window are only intended to denote different time windows, without any effect on the time windows themselves; accordingly, the terms first, second, etc. should not impose any limitation on the embodiments of the present application.
It is also to be understood that in the various embodiments of the present application, unless otherwise specified or in the case of a logical conflict, the terms and/or descriptions of different embodiments are consistent and may be mutually referenced, and the technical features of different embodiments may be combined to form new embodiments according to their inherent logical relationships.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a computer-readable storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned computer-readable storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The present application also provides a switch system, the system comprising:
one or more memories for storing instructions; and
one or more processors configured to invoke and execute the instructions from the memory to perform the method as described above.
The present application also provides a computer program product comprising instructions that, when executed, cause the switch and the switch system to perform the operations corresponding to the above-described method.
The present application also provides a chip system comprising a processor for implementing the functions involved in the above, e.g. generating, receiving, transmitting, or processing data and/or information involved in the above method.
The chip system can be composed of chips, and can also comprise chips and other discrete devices.
The processor referred to in any of the foregoing may be a CPU, microprocessor, ASIC, or integrated circuit that performs one or more of the procedures for controlling the transmission of feedback information described above.
In one possible design, the chip system also includes a memory to hold the necessary program instructions and data. The processor and the memory may be decoupled, disposed on different devices, and connected by wired or wireless means, so as to support the chip system in implementing the various functions of the foregoing embodiments. Alternatively, the processor and the memory may be coupled on the same device.
Optionally, the computer instructions are stored in a memory.
Alternatively, the memory may be a storage unit in the chip, such as a register, a cache, etc., and the memory may also be a storage unit in the terminal located outside the chip, such as a ROM or other type of static storage device, a RAM, etc., that may store static information and instructions.
It is to be understood that the memory in this application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory.
The nonvolatile memory may be a ROM, a Programmable ROM (PROM), an Erasable Programmable ROM (EPROM), an electrically erasable programmable EPROM (EEPROM), or a flash memory.
The volatile memory may be RAM, which acts as an external cache. There are many different types of RAM, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
The above embodiments are preferred embodiments of the present application and are not intended to limit the protection scope of the present application; therefore, all equivalent changes made according to the structure, shape, and principle of the present application shall be covered by the protection scope of the present application.
Claims (10)
1. A data transmission method, comprising:
responding to the acquired data packet, analyzing the data packet, and acquiring the address of the data packet, wherein the address comprises an MAC address and a public network address;
constructing a plurality of queuing queues according to the addresses, wherein the priority of the data packets in each queuing queue is the same;
transmitting the data packets on the queuing queue according to the priority, and adjusting the bandwidth of the queuing queue according to the buffer capacity of the data packets on the queuing queue, wherein, when transmitting the data packets, the data packets are transmitted according to the priority, the high-priority data packets are always in a transmitting state, and the bandwidth of the high-priority queuing queue is preferentially ensured, while the low-priority data packets are transmitted intermittently or occupy few bandwidth resources; and
configuring a first dynamic cache pool to a low-priority queuing queue, and returning the data packets sent to the first dynamic cache pool to the corresponding queuing queue in a sequential alternate queuing mode;
and merging a queuing queue into a queuing queue of a higher or lower level when the number of data packets on the queuing queue of a certain priority within a unit time is smaller than a set number.
2. The data transmission method according to claim 1, wherein the acquired data packets are filtered to remove data packets having a length of less than 64 bytes.
3. The data transmission method according to claim 1, wherein, for the data packets on the high-priority queuing queue, the transmission time of each data packet is the same, and the high-priority queuing queue preferentially occupies the bandwidth of the low-priority queuing queue.
4. A data transmission method according to claim 3, wherein, when a bandwidth shortage occurs, a second dynamic cache pool is allocated to the queuing queue left without bandwidth, the second dynamic cache pool being used for storing the squeezed-out data packets;
after the bandwidth is recovered, the data packets in the second dynamic cache pool are returned to the queuing queue in a mode of sequential alternate queuing.
5. The data transmission method according to any one of claims 1 to 4, characterized by further comprising:
extracting the data packets on the queuing queue and copying the data packets into a check cache pool;
carrying out integrity check on the data packet in the checking cache pool; and
implanting the integrity check result into an unsent data packet on the queuing queue;
wherein, for a data packet that does not pass the integrity check, transmission of a correction data packet is requested.
6. The data transmission method of claim 5, wherein the data packet containing the integrity check result is not copied to the check buffer pool.
7. The method of claim 5, wherein the correction data packet enters the corresponding queuing queue by queue insertion;
or the correction data packet enters the queuing cache pool and is then enqueued at the head of the queue.
8. A switch, comprising:
the analyzing unit is used for responding to the acquired data packet, analyzing the data packet and acquiring the address of the data packet, wherein the address comprises a MAC address and a public network address;
the queue unit is used for constructing a plurality of queuing queues according to the addresses, and the priority of the data packets in each queuing queue is the same;
the transmitting unit is used for transmitting the data packets on the queuing queue according to the priority and adjusting the bandwidth of the queuing queue according to the buffer capacity of the data packets on the queuing queue, wherein, when transmitting the data packets, the data packets are transmitted according to the priority, the high-priority data packets are always in a transmitting state, and the bandwidth of the high-priority queuing queue is preferentially ensured, while the low-priority data packets are transmitted intermittently or occupy few bandwidth resources;
the configuration unit is used for configuring the first dynamic cache pool to the low-priority queuing queue, and the data packets sent to the first dynamic cache pool are returned to the corresponding queuing queue in a sequential alternate queuing mode; and
and the queue adjusting unit is used for merging a queuing queue into a queuing queue of a higher or lower level when the number of data packets on the queuing queue of a certain priority within a unit time is smaller than a set number.
9. A switch system, the system comprising:
one or more memories for storing instructions; and
one or more processors to invoke and execute the instructions from the memory to perform the method of any of claims 1 to 7.
10. A computer-readable storage medium, the computer-readable storage medium comprising:
program which, when executed by a processor, performs a method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310231807.5A CN115955447B (en) | 2023-03-13 | 2023-03-13 | Data transmission method, switch and switch system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310231807.5A CN115955447B (en) | 2023-03-13 | 2023-03-13 | Data transmission method, switch and switch system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115955447A CN115955447A (en) | 2023-04-11 |
CN115955447B true CN115955447B (en) | 2023-06-27 |
Family
ID=85896313
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310231807.5A Active CN115955447B (en) | 2023-03-13 | 2023-03-13 | Data transmission method, switch and switch system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115955447B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118118434B * | 2023-08-05 | 2025-03-07 | Harbin University of Commerce | A method for managing a data processing device and reducing processing delay |
CN118714105B (en) * | 2024-08-27 | 2024-12-20 | 苏州元脑智能科技有限公司 | Data caching method, device, switch equipment, system and program product |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1798092A (en) * | 2004-12-29 | 2006-07-05 | 中兴通讯股份有限公司 | Fast weighted polling dispatching method, and fast weighted polling despatcher and device |
CN112134813A (en) * | 2020-09-22 | 2020-12-25 | 上海商米科技集团股份有限公司 | A bandwidth allocation method and electronic device based on application process priority |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6977940B1 (en) * | 2000-04-28 | 2005-12-20 | Switchcore, Ab | Method and arrangement for managing packet queues in switches |
JP3726741B2 (en) * | 2001-11-16 | 2005-12-14 | 日本電気株式会社 | Packet transfer apparatus, method and program |
CN106685853B (en) * | 2016-11-23 | 2020-05-12 | 泰康保险集团股份有限公司 | Method and apparatus for processing data |
WO2019232694A1 (en) * | 2018-06-05 | 2019-12-12 | 华为技术有限公司 | Queue control method, device and storage medium |
CN109246031A (en) * | 2018-11-01 | 2019-01-18 | 郑州云海信息技术有限公司 | A kind of switch port queues traffic method and apparatus |
CN110099000B (en) * | 2019-03-27 | 2021-11-19 | 华为技术有限公司 | Method for forwarding message, network equipment and computer readable medium |
CN114489952A (en) * | 2022-01-28 | 2022-05-13 | 深圳云豹智能有限公司 | Queue distribution method and device |
- 2023-03-13: CN application CN202310231807.5A filed; patent CN115955447B granted (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1798092A (en) * | 2004-12-29 | 2006-07-05 | 中兴通讯股份有限公司 | Fast weighted polling dispatching method, and fast weighted polling despatcher and device |
CN112134813A (en) * | 2020-09-22 | 2020-12-25 | 上海商米科技集团股份有限公司 | A bandwidth allocation method and electronic device based on application process priority |
Also Published As
Publication number | Publication date |
---|---|
CN115955447A (en) | 2023-04-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113382442B (en) | Message transmission method, device, network node and storage medium | |
CN115955447B (en) | Data transmission method, switch and switch system | |
US11968111B2 (en) | Packet scheduling method, scheduler, network device, and network system | |
US7355971B2 (en) | Determining packet size in networking | |
JP3898965B2 (en) | Radio resource allocation method and base station | |
EP1060598B1 (en) | Reduced packet header in wireless communications network | |
US11722407B2 (en) | Packet processing method and apparatus | |
CN104956637B (en) | Method, apparatus and system for prioritizing encapsulation of data packets in multiple logical network connections | |
US7602809B2 (en) | Reducing transmission time for data packets controlled by a link layer protocol comprising a fragmenting/defragmenting capability | |
CN107258076B (en) | Data transmission in a communication network | |
CN102132535A (en) | Method and switching device for transmitting data packets in a communication network | |
CN107770085B (en) | Network load balancing method, equipment and system | |
CN110300431A (en) | A kind of data traffic processing method and related network device | |
CN112313911B (en) | Method and computer program product for transmitting a data packet, method and computer program product for receiving a data packet, communication unit and motor vehicle having a communication unit | |
KR20220006606A (en) | Message processing method and related device | |
CN107231269B (en) | Accurate cluster speed limiting method and device | |
JP4772553B2 (en) | Data transmitting / receiving apparatus and data transmitting / receiving method | |
US9143448B1 (en) | Methods for reassembling fragmented data units | |
CN112838992B (en) | Message scheduling method and network equipment | |
CN105763375B (en) | A kind of data packet sending method, method of reseptance and microwave station | |
CN111740919B (en) | Load reporting and sharing method and network equipment | |
CN118075811A (en) | Message processing method, device, relay node, core network equipment and storage medium | |
US20030091067A1 (en) | Computing system and method to select data packet | |
US8682996B2 (en) | Apparatus for handling message reception | |
US20020085525A1 (en) | Wireless transmission system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||