CN119544823A - Message transmission method, device, equipment and automatic driving vehicle - Google Patents
- Publication number
- CN119544823A (application CN202311103116.3A)
- Authority
- CN
- China
- Prior art keywords
- message
- queue
- transmission
- queues
- network card
- Prior art date
- Legal status
- Pending
Landscapes
- Small-Scale Networks (AREA)
Abstract
The disclosure provides a message transmission method, apparatus, and device, and an autonomous driving vehicle, and relates to the technical field of autonomous driving and communication, in particular to the technical field of vehicle-mounted sensor communication. The method includes: obtaining a first message and a second message to be transmitted and a plurality of queues for transmitting the first message and the second message, wherein the second message has higher delay sensitivity than the first message; determining at least one first queue among the plurality of queues, the at least one first queue being set to transmit via a kernel protocol stack; determining at least one second queue among the plurality of queues, the at least one second queue being different from the at least one first queue and set to skip the kernel protocol stack for transmission; transmitting the first message through the at least one first queue; and transmitting the second message through the at least one second queue.
Description
Technical Field
The present disclosure relates to the field of autonomous driving and communication technology, in particular to the field of vehicle-mounted sensor communication technology, and more particularly to a message transmission method, apparatus, electronic device, computer-readable storage medium, computer program product, and autonomous vehicle.
Background
With the development of autonomous driving technology, the amount of sensor data in the autopilot system of an autonomous vehicle keeps increasing. The delay in receiving and processing sensor data directly affects the central processing unit (CPU) load of the autopilot system, the end-to-end delay, and the maximum travel speed the autopilot system can support. Therefore, how to reduce the delay of sensor data reception and processing while ensuring system performance has been a research focus in the art.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a method, apparatus, electronic device, computer-readable storage medium, computer program product, and autonomous vehicle for message transmission.
According to one aspect of the disclosure, a message transmission method is provided, which includes: obtaining a first message and a second message to be transmitted and a plurality of queues for transmitting the first message and the second message, wherein the second message has higher delay sensitivity than the first message; determining at least one first queue of the plurality of queues, wherein the at least one first queue is set to transmit via a kernel protocol stack; determining at least one second queue of the plurality of queues, wherein the at least one second queue is different from the at least one first queue and is set to skip the kernel protocol stack for transmission; transmitting the first message through the at least one first queue; and transmitting the second message through the at least one second queue.
According to another aspect of the disclosure, there is provided a message transmission apparatus, including: an acquisition module configured to acquire a first message and a second message to be transmitted and a plurality of queues for transmitting the first message and the second message, wherein the second message has higher latency sensitivity than the first message; a first queue determination module configured to determine at least one first queue of the plurality of queues, wherein the at least one first queue is configured to transmit via a kernel protocol stack; a second queue determination module configured to determine at least one second queue of the plurality of queues, wherein the at least one second queue is different from the at least one first queue and is configured to skip the kernel protocol stack for transmission; a first transmission module configured to transmit the first message via the at least one first queue; and a second transmission module configured to transmit the second message via the at least one second queue.
According to another aspect of the present disclosure, there is provided an electronic device comprising at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the present disclosure as provided above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of the present disclosure as provided above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method as provided above.
According to another aspect of the present disclosure, there is provided an autonomous vehicle comprising a controller, wherein the controller is configured to perform the method of the present disclosure as provided above.
According to one or more embodiments of the present disclosure, the reception processing delay of sensor data can be reduced while ensuring system performance.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 shows a flow chart of a message transmission method according to an embodiment of the present disclosure;
FIG. 3 illustrates a flow chart of a process of acquiring a message according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a system using a message transmission method according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of a message transmission apparatus according to an embodiment of the present disclosure;
FIG. 6 is a block diagram illustrating a message transmission apparatus according to another embodiment of the present disclosure;
FIG. 7 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another element. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
With the development of autopilot technology, the amount of sensor data of an autopilot system of an autopilot vehicle is increasing. The time delay of the receiving and processing of the sensor data directly influences the CPU load of the automatic driving system, the time delay from end to end and the maximum driving speed supported by the automatic driving system.
In the related art, interconnection and interworking of the entire in-vehicle Ethernet network is usually handled by a switching chip. Each Ethernet device in the vehicle, such as a sensor, a gateway, a positioning device, or the central control unit, may be connected to the autopilot system through the switching chip. These devices can send out messages, which are transmitted to the autopilot system through the port connecting the switching chip and the controller, so as to realize interaction with the autopilot system running on the controller. Multiple DMA (Direct Memory Access) queues may be supported inside these ports. Messages sent by the devices may be mixed together and randomly transferred to the controller via one of the multiple DMA queues. The autopilot system then responds to the message interrupt, reads the message from the queue, performs preliminary processing on the message via a TCP/IP (Transmission Control Protocol/Internet Protocol) protocol stack, and finally sends the message through the socket interface to software running on the controller for subsequent processing.
However, some of the messages sent by the above devices may carry data of delay-sensitive services, such as point cloud data from a lidar and position data from a positioning system. When these messages are mixed with a large number of other messages for transmission, their transmission delay becomes larger, which greatly affects the accuracy of the related services. Because all messages are transmitted through the TCP/IP protocol stack, the CPU occupancy of the autopilot system increases, which further increases the transmission delay of the messages. Meanwhile, because the messages are randomly delivered to different CPUs through different DMA queues for processing, the cache is likely to be frequently invalidated, which degrades the performance of the autopilot system.
Therefore, a method capable of reducing the delay of sensor data reception and processing while ensuring system performance is needed.
In view of the above technical problems, according to one aspect of the present disclosure, a method for transmitting a message is provided.
Before describing in detail the method of message transmission according to embodiments of the present disclosure, a schematic diagram of an exemplary system in which the various methods and apparatus described herein may be implemented is first described in connection with fig. 1.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes a motor vehicle 110, a server 120, and one or more communication networks 130 coupling the motor vehicle 110 to the server 120.
In an embodiment of the present disclosure, motor vehicle 110 may include a computing device in accordance with an embodiment of the present disclosure and/or be configured to perform a method in accordance with an embodiment of the present disclosure.
The server 120 may run one or more services or software applications that enable the method of message transmission. In some embodiments, server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user of motor vehicle 110 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture that involves virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from motor vehicle 110. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of motor vehicle 110.
Network 130 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, the one or more networks 130 may be a satellite communications network, a Local Area Network (LAN), an Ethernet-based network, a token ring, a Wide Area Network (WAN), the Internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (including, for example, Bluetooth and WiFi), and/or any combination of these with other networks.
The system 100 may also include one or more databases 150. In some embodiments, these databases may be used to store data and other information. For example, one or more of databases 150 may be used to store information such as audio files and video files. The data store 150 may reside in various locations. For example, the data store used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The data store 150 may be of different types. In some embodiments, the data store used by server 120 may be a database, such as a relational database. One or more of these databases may store, update, and retrieve data to and from the databases in response to commands.
In some embodiments, one or more of databases 150 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
Motor vehicle 110 may include a sensor 111 for sensing the surrounding environment. The sensor 111 may include one or more of a vision camera, an infrared camera, an ultrasonic sensor, a millimeter wave radar, and a laser radar (LiDAR). Different sensors may provide different detection accuracy and range. The camera may be mounted in front of, behind or other locations on the vehicle. The vision cameras can capture the conditions inside and outside the vehicle in real time and present them to the driver and/or passengers. In addition, by analyzing the captured images of the visual camera, information such as traffic light indication, intersection situation, other vehicle running state, etc. can be acquired. The infrared camera can capture objects under night vision. The ultrasonic sensor can be arranged around the vehicle and is used for measuring the distance between an object outside the vehicle and the vehicle by utilizing the characteristics of strong ultrasonic directivity and the like. The millimeter wave radar may be installed in front of, behind, or other locations of the vehicle for measuring the distance of an object outside the vehicle from the vehicle using the characteristics of electromagnetic waves. Lidar may be mounted in front of, behind, or other locations on the vehicle for detecting object edges, shape information for object identification and tracking. The radar apparatus may also measure a change in the speed of the vehicle and the moving object due to the doppler effect.
Motor vehicle 110 may also include a communication device 112. The communication device 112 may include a satellite positioning module capable of receiving satellite positioning signals (e.g., BeiDou, GPS, GLONASS, and Galileo) from satellites 141 and generating coordinates based on these signals. The communication device 112 may also include a module for communicating with the mobile communication base station 142, and the mobile communication network may implement any suitable communication technology, such as GSM/GPRS, CDMA, LTE, or other current or evolving wireless communication technologies (e.g., 5G). The communication device 112 may also have a Vehicle-to-Everything (V2X) module configured to enable, for example, Vehicle-to-Vehicle (V2V) communication with other vehicles 143 and Vehicle-to-Infrastructure (V2I) communication with infrastructure 144. In addition, the communication device 112 may also have a module configured to communicate with a user terminal 145 (including but not limited to a smart phone, tablet computer, or wearable device such as a watch), for example, by using a wireless local area network conforming to the IEEE 802.11 standard or Bluetooth. With the communication device 112, the motor vehicle 110 can also access the server 120 via the network 130.
Motor vehicle 110 may also include a control device 113. The control device 113 may include a processor, such as a Central Processing Unit (CPU) or a Graphics Processing Unit (GPU), or other special purpose processor, etc., in communication with various types of computer readable storage devices or mediums. The control device 113 may include an autopilot system for automatically controlling various actuators in the vehicle. The autopilot system is configured to control a powertrain, steering system, braking system, etc. of a motor vehicle 110 (not shown) via a plurality of actuators in response to inputs from a plurality of sensors 111 or other input devices to control acceleration, steering, and braking, respectively, without human intervention or limited human intervention. Part of the processing functions of the control device 113 may be implemented by cloud computing. For example, some of the processing may be performed using an onboard processor while other processing may be performed using cloud computing resources. The control device 113 may be configured to perform a method according to the present disclosure. Furthermore, the control means 113 may be implemented as one example of a computing device on the motor vehicle side (client) according to the present disclosure.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
The message transmission method according to the embodiment of the present disclosure is described in detail below.
Fig. 2 shows a flow chart of a message transmission method 200 according to an embodiment of the disclosure. As shown in fig. 2, the method 200 includes steps S210, S220, S230, S240, and S250. The method 200 can be applied to various scenarios with high requirements on performance and delay, such as the transmission of sensor data and control data of an autonomous vehicle, and the reception of positioning data and/or lidar point cloud data over an in-vehicle network, where data is received over Ethernet.
In step S210, a first message and a second message to be transmitted and a plurality of queues for transmitting the first message and the second message are acquired, wherein the second message has higher delay sensitivity than the first message.
At step S220, at least one first queue of the plurality of queues is determined, the first queue being configured for transmission via the kernel protocol stack.
At step S230, at least one second queue of the plurality of queues is determined, the second queue being different from the first queue and configured to skip the kernel protocol stack for transmission.
In step S240, the first message is transmitted through at least one first queue.
In step S250, a second message is transmitted through at least one second queue.
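A minimal conceptual sketch of the flow of steps S210 to S250 is given below. It is not the patented implementation; the types and helpers (msg_t, queue_t, enqueue, delay_sensitive) are hypothetical placeholders standing in for whatever message and queue abstractions a real driver would use.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical message and queue descriptors (placeholders, not from the disclosure). */
typedef struct { const void *data; size_t len; bool delay_sensitive; } msg_t;
typedef struct { int id; bool bypass_kernel_stack; } queue_t;

/* Stand-in for handing one message descriptor to a DMA queue. */
static void enqueue(queue_t *q, const msg_t *m)
{
    printf("message of %zu bytes -> queue %d (%s kernel stack)\n",
           m->len, q->id, q->bypass_kernel_stack ? "bypassing" : "via");
}

void transmit_messages(msg_t *msgs, size_t n, queue_t *first_q, queue_t *second_q)
{
    /* S220: first_q is served by the kernel protocol stack.          */
    /* S230: second_q skips the kernel protocol stack (e.g. AF_XDP).  */
    for (size_t i = 0; i < n; i++) {
        if (msgs[i].delay_sensitive)
            enqueue(second_q, &msgs[i]);   /* S250: second (delay-sensitive) message */
        else
            enqueue(first_q, &msgs[i]);    /* S240: first (ordinary) message */
    }
}
```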
In an example, the first message and the second message may be sent from Ethernet devices such as a sensor, a gateway, a positioning device, or the central control unit. These devices send out the first message and the second message, which are transmitted through the switching chip and the port connected to the controller to the autopilot system, so as to interact with the autopilot system running on the controller. In general, a vehicle-mounted switching chip is a hardware device that integrates a switch and a physical network card, and the switching chip can configure on which queue a message is transmitted. The first message and the second message may be two messages sent from different Ethernet devices, or two messages sent from the same Ethernet device.
In an example, there may be multiple queues for transmitting messages. A queue may be, for example, a DMA queue, which can be understood as a channel for transmitting messages. In some embodiments, as the vehicle moves, some messages, such as lidar point cloud data messages and positioning data messages, are strongly time-sensitive. If the transmission delay of such messages is too large, the data acquired by the upper-layer application may deviate significantly from the current actual data, or may even be completely unusable. Such a message can therefore be considered to have higher delay sensitivity and may be determined to be a second message. Other messages, for example those related to device control, have lower delay sensitivity because a certain amount of delay has less impact on the accuracy of the control operation or the related service; such a message may then be determined to be a first message.
In an example, the first message and the second message may be transferred via different DMA queues. Since the second message has higher delay sensitivity than the first message, the DMA queue for transmitting the second message, i.e., the above-mentioned second queue, can provide a higher message transmission speed than the first queue.
In an example, the switch chip may randomly determine or designate one or more queues of the plurality of queues as the first queue and determine other queues of the plurality of queues as the second queue, or may randomly determine or designate one or more queues of the plurality of queues other than the first queue as the second queue.
In the prior art, a message sent by each device in the in-vehicle network generally undergoes preliminary processing in the kernel protocol stack before being sent to the software running on the controller for subsequent processing. The kernel protocol stack may be, for example, the kernel TCP/IP protocol stack. In an embodiment of the present disclosure, to provide a higher message transmission speed, the second queue may skip the kernel protocol stack through a special protocol family, such as the AF_XDP protocol family, and deliver the second message directly to the application program. Because such a transmission does not go through the preliminary processing of the kernel protocol stack but is delivered directly to the software running on the controller, the transmission delay is small and the performance is high; at the same time, the processing load of the kernel is reduced, the kernel overhead can be lowered, and the transmission speed of various messages can be improved.
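As an illustration of what such a bypass could look like on Linux, the sketch below opens an AF_XDP socket bound to one specific receive queue using the libxdp helper API; frames steered to that queue are delivered to user space without traversing the kernel TCP/IP stack. The interface name, queue index, and buffer sizes are assumptions for the example, not values taken from the disclosure.

```c
/* Hedged sketch of AF_XDP socket setup with libxdp (link with -lxdp -lbpf). */
#include <sys/mman.h>
#include <xdp/xsk.h>

#define NUM_FRAMES 4096
#define FRAME_SIZE XSK_UMEM__DEFAULT_FRAME_SIZE

int open_xdp_socket(const char *ifname, int queue_id,
                    struct xsk_socket **xsk, struct xsk_umem **umem,
                    struct xsk_ring_prod *fq, struct xsk_ring_cons *cq,
                    struct xsk_ring_cons *rx, struct xsk_ring_prod *tx)
{
    size_t umem_size = (size_t)NUM_FRAMES * FRAME_SIZE;

    /* UMEM: user-space packet buffer area shared with the network card driver. */
    void *buf = mmap(NULL, umem_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return -1;
    if (xsk_umem__create(umem, buf, umem_size, fq, cq, NULL))
        return -1;

    /* Bind the XDP socket to one RX queue of the interface; traffic steered
     * to this queue is delivered here instead of to the kernel TCP/IP stack. */
    return xsk_socket__create(xsk, ifname, queue_id, *umem, rx, tx, NULL);
}
```

In this sketch, queue_id would correspond to the second queue described above, while the remaining queues continue to be handled by the kernel protocol stack.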
Generally, an approach in which all messages are delivered directly to the application layer for decision and processing without passing through the kernel TCP/IP protocol stack, such as the Data Plane Development Kit (DPDK) technology, can be applied in cloud computing to optimize the transmit/receive performance of Ethernet traffic. However, such an approach has extremely high CPU occupancy and poor compatibility. In addition, it requires an additional upper-layer application protocol stack to be configured to process the various types of messages, and any message for which no configuration has been added cannot be identified or processed, so the flexibility is very poor.
Therefore, such a transmission scheme is not suitable for an in-vehicle network whose messages are diverse and complex. In this case, a message with lower delay sensitivity, i.e., the first message described above, may be transmitted via a first queue different from the second queue. The first message may include, for example, messages related to complex services such as control, which need to be transmitted through the kernel TCP/IP protocol stack.
In an example, to ensure stable transmission of the messages of the various complex services, the first queue may deliver the first message to the application program via the kernel TCP/IP protocol stack using a protocol family other than the AF_XDP protocol family, for example the AF_INET protocol family. Because a large portion of the messages with high delay sensitivity are transmitted bypassing the kernel TCP/IP protocol stack, the traffic passing through the kernel TCP/IP protocol stack is greatly reduced, and the transmission speed of the first message can be improved to a certain extent.
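By contrast, the first-queue path can use an ordinary AF_INET socket, as in the brief sketch below; the UDP transport and the port number are assumptions chosen only to make the example concrete.

```c
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hedged sketch: receive a less delay-sensitive "first message" through the
 * kernel TCP/IP stack via an AF_INET socket. Port 5000 is illustrative. */
int receive_first_message(void)
{
    char buf[2048];
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(5000),
        .sin_addr   = { .s_addr = htonl(INADDR_ANY) },
    };

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return -1;

    /* Each datagram arriving here has already passed through the kernel
     * protocol stack (IP reassembly, UDP checksum, socket demultiplexing). */
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    printf("received %zd bytes via the kernel path\n", n);
    close(fd);
    return 0;
}
```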
According to the message transmission method disclosed by the embodiment of the invention, the more complex message with lower delay sensitivity is transmitted through the kernel TCP/IP protocol stack, so that multiple complex services and functions can be supported without adding excessive message processing burden to an upper application program. Meanwhile, messages with simple functions and higher time delay sensitivity are skipped over the kernel TCP/IP protocol stack and are directly transmitted to an upper application program, so that the time delay of the message data can be greatly reduced, and the timeliness of the data and the accuracy of control are improved. By combining the two transmission modes, the time delay of receiving the sensor data forwarded based on the Ethernet link is greatly reduced, the receiving processing capacity and the overall performance of the data of the automatic driving system are improved, and the resource load of the automatic driving system is reduced.
Various aspects of the message transmission methods according to embodiments of the present disclosure are described further below.
According to some embodiments, the message transmission method 200 may further include creating at least one virtual network card based on the physical network card, assigning at least one first queue to the physical network card, and assigning at least one second queue to the at least one virtual network card.
In an example, the physical network card may be regarded as the network card hardware integrated into the switching chip, and it may need to be registered in the kernel through a network card driver in order to operate. The physical network card exchanges data between the kernel network protocol stack and the external network, and the user may configure network card interface attributes, such as IP addresses, for the physical network card; these attributes are all configured in the kernel network protocol stack. A virtual network card is a virtual network adapter constructed by simulating a network environment in software, and local area network communication among virtual network cards can be realized through VPN (Virtual Private Network) technology. Therefore, the virtual network card can be used so that the second message is transmitted without passing through the kernel protocol stack.
In an example, one or more virtual network cards may be created based on a physical network card by means of SR-IOV (Single Root I/O Virtualization) technology. For the one physical network card and one or more virtual network cards, each of the network cards may be assigned a particular DMA queue and interrupt.
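For illustration, one common way to create such virtual network cards on a Linux host is to write the desired number of virtual functions to the SR-IOV sysfs attribute of the physical network card, as sketched below; the interface name and count are assumptions, and a vendor SDK could equally be used.

```c
#include <stdio.h>

/* Hedged sketch: request num_vfs SR-IOV virtual functions (virtual network
 * cards) from the physical network card named ifname, via sysfs. */
int create_virtual_network_cards(const char *ifname, int num_vfs)
{
    char path[256];
    snprintf(path, sizeof(path),
             "/sys/class/net/%s/device/sriov_numvfs", ifname);

    FILE *f = fopen(path, "w");
    if (!f)
        return -1;                /* no SR-IOV support or insufficient rights */
    fprintf(f, "%d\n", num_vfs);  /* e.g. one VF per delay-sensitive service  */
    fclose(f);
    return 0;
}
```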
In an example, a first message with relatively complex and low latency sensitivity may be transmitted by the physical network card and its corresponding DMA queue. Corresponding to the first message, the DMA queue allocated for the physical network card may be one or more queues in the first queue.
Correspondingly, the virtual network card and the corresponding DMA queue can be used for transmitting the second message with simple function and higher time delay sensitivity. Corresponding to the second message, the DMA queue allocated for the virtual network card may be one or more queues in the second queue.
In an example, the virtual network card may be multiple. Each virtual network card may be assigned a specific second queue and interrupts, through which second messages may be randomly transmitted. The second message may be further divided into messages with different priority transmission levels, and transmitted through different virtual network cards and the second queue according to the priority transmission levels.
According to the embodiment of the disclosure, the virtual network card is created based on the physical network card, so that messages can be conveniently and independently transmitted among various queues applying different protocol families, and the stability of message transmission is ensured.
According to some embodiments, in the case where there are a plurality of virtual network cards and a plurality of second queues, the second queues of the plurality of second queues allocated to each of the plurality of virtual network cards may not coincide with one another; that is, no second queue is shared by two virtual network cards.
In one possible embodiment, there may be two virtual network cards and four second queues may be acquired, and these queues may be respectively assigned numbers 1 to 4, for example, so as to be distinguished. The second queues numbered 1 and 2 may be allocated to one of the virtual network cards, while the second queues numbered 3 and 4 may be allocated to the other virtual network card. Or the second queue numbered 1 may be assigned to one of the virtual network cards while the second queues numbered 2, 3, and 4 are assigned to the other virtual network card. It can be appreciated that, according to actual requirements, the number of virtual network cards and the number of second queues may be different from the above embodiments, and the specific correspondence between the virtual network cards and the second queues may be appropriately adjusted.
According to the embodiment of the disclosure, by distributing different second queues for each virtual network card, the message can be conveniently and independently transmitted among the plurality of virtual network cards, so that the message transmission is more controllable.
According to some embodiments, allocating the at least one first queue and the at least one second queue may be performed by configuring RSS (Receive Side Scaling) rules on the physical network card.
In an example, RSS is a network card driver technology that can be used to efficiently distribute the processing of received network packets. Because the virtual network card is established on the basis of the physical network card, RSS rules can be configured on the physical network card so that, based on the characteristics of the messages, different types of messages are transmitted through the designated network card and the designated DMA queue.
According to the embodiment of the disclosure, the first queue and the second queue for message transmission are allocated by utilizing the RSS rule, so that message transmission is more orderly and controllable, and messages with higher delay sensitivity can obtain higher transmission priority.
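The disclosure does not specify how the RSS rule is installed. One Linux mechanism with matching semantics is ethtool ntuple flow steering, which directs packets whose header fields match a rule to a designated receive queue; the sketch below is written under that assumption, with the interface name, UDP port, and queue index chosen only for illustration.

```c
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hedged sketch: steer IPv4/UDP packets with a given destination port to a
 * chosen RX queue using ethtool's flow-steering ioctl (driver support needed). */
int steer_udp_port_to_queue(const char *ifname, unsigned short port, int queue)
{
    struct ethtool_rxnfc nfc;
    struct ifreq ifr;

    memset(&nfc, 0, sizeof(nfc));
    nfc.cmd = ETHTOOL_SRXCLSRLINS;              /* insert an RX classification rule */
    nfc.fs.flow_type = UDP_V4_FLOW;
    nfc.fs.h_u.udp_ip4_spec.pdst = htons(port); /* match this destination UDP port  */
    nfc.fs.m_u.udp_ip4_spec.pdst = 0xffff;      /* compare the full 16-bit field    */
    nfc.fs.ring_cookie = queue;                 /* deliver matches to this RX queue */
    nfc.fs.location = RX_CLS_LOC_ANY;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&nfc;

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;
    int ret = ioctl(fd, SIOCETHTOOL, &ifr);
    close(fd);
    return ret;
}
```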
Fig. 3 illustrates a flow chart of a process 300 of acquiring a message according to an embodiment of the present disclosure. As shown in fig. 3, process 300 may include steps S310, S320, and S330.
In step S310, a plurality of messages to be transmitted may be acquired.
In step S320, transmission information of each of the plurality of messages may be determined. The transmission information may indicate a delay sensitivity of the message.
In step S330, the message may be determined as the first message or the second message based on the transmission information of the message.
In an example, a message may include the data information to be transmitted and transmission information related to the address, protocol, and port of the transmission. The data information may be received and processed so that the receiving end obtains information such as instruction content, request content, and sensed data. The transmission information indicates where the message originated, how it is transmitted, and where it is sent. Each kind of message sent by the different devices has its own characteristic transmission information, so a message can be determined to be the first message or the second message based on its transmission information. Because the data volume of the transmission information is small, it can be identified with almost no processing, and the first message and the second message can therefore be distinguished efficiently and simply based on the transmission information of the message.
According to the embodiment of the disclosure, by determining the transmission information of the message, whether the message has higher delay sensitivity and needs to be allocated with higher transmission priority can be determined, so that whether the message is the first message or the second message and to which queue the message should be allocated for transmission can be determined.
According to some embodiments, the transmission information may include at least one of an address, a protocol number, and a port number of the message.
In an example, different messages sent by different Ethernet devices may have their own specific addresses, protocol numbers, and port numbers. The network card may then determine the type of a message by means of any one of the address, the protocol number, and the port number in the message. In some embodiments, a network card with a higher specification can identify more fields of the message transmission information, so the identification and classification of messages can be more accurate.
According to the embodiment of the disclosure, the messages can be accurately identified and classified by means of the address, the protocol number and the port number in the messages.
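As a purely illustrative sketch, the classification described above could look like the following, where a frame's IP protocol number and UDP destination port decide whether it is a delay-sensitive second message; the port values are hypothetical and are not taken from the disclosure.

```c
#include <netinet/ip.h>
#include <netinet/udp.h>
#include <arpa/inet.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define LIDAR_POINT_CLOUD_PORT 2368   /* hypothetical lidar data port       */
#define POSITIONING_DATA_PORT  7777   /* hypothetical positioning data port */

/* Returns true if the packet (starting at its IPv4 header) should be treated
 * as a delay-sensitive second message and take the kernel-bypass path. */
bool is_second_message(const uint8_t *l3, size_t len)
{
    if (len < sizeof(struct iphdr))
        return false;
    const struct iphdr *ip = (const struct iphdr *)l3;
    if (ip->protocol != IPPROTO_UDP)                    /* protocol number */
        return false;

    size_t ip_hlen = (size_t)ip->ihl * 4;
    if (len < ip_hlen + sizeof(struct udphdr))
        return false;
    const struct udphdr *udp = (const struct udphdr *)(l3 + ip_hlen);

    uint16_t dport = ntohs(udp->dest);                  /* port number */
    return dport == LIDAR_POINT_CLOUD_PORT || dport == POSITIONING_DATA_PORT;
}
```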
According to some embodiments, the second message may be given a higher transmission priority than the first message.
According to the embodiment of the disclosure, the second message is given higher transmission priority than the first message, so that the message with higher time delay sensitivity can be transmitted through the queue with faster transmission, the time delay of the message data is preferentially reduced, and the timeliness of the data and the accuracy of control are improved.
Fig. 4 shows a schematic diagram of a system using a message transmission method according to an embodiment of the present disclosure.
Fig. 4 shows an example of processing a lidar point cloud with a virtual network card. The figure includes a lidar 430 and devices 440, 450, and 460. Devices 440, 450, and 460 may be, for example, a gateway device, a positioning device, and an in-vehicle console, respectively. The devices 440, 450, and 460 may send various messages to the autopilot system 411 through the port by which the switching chip 420 is connected to the controller 410, so as to interact with the autopilot system 411 running on the controller 410.
In an example, referring to fig. 4, the lidar 430 may send out two different kinds of messages, including a point cloud data message 431 and a control and status message 432. The point cloud data message 431 has a large traffic volume and a very strict delay requirement, whereas the control and status message 432 has a relatively small traffic volume and a looser delay requirement.
In an example, a virtual network card 422 may be created by means of the SR-IOV function of the network card; the corresponding physical network card is the physical network card 421. It will be appreciated that fig. 4 shows an example in which only one virtual network card is created. In practical applications, at most twenty virtual network cards can be created according to the scene requirements, so as to handle various delay-sensitive services such as lidar point cloud and positioning. These network cards can transmit various messages in parallel.
In an example, DMA queue 401 may be allocated to virtual network card 422, while DMA queues 402, 403, and 404 are allocated to physical network card 421. Virtual network card 422 may be configured to specifically receive point cloud data message 431 of lidar 430.
After the configuration is completed, the application software may be started. Two sockets may be created in the application software: one is configured to bind to the virtual network card 422 using the AF_XDP protocol family to receive the point cloud data message 431 of the lidar 430, and the other is configured to use the AF_INET protocol family to receive, from the physical network card 421, the control and status message 432 of the lidar 430, the message 441 of the device 440, the message 451 of the device 450, and the message 461 of the device 460, or to perform device configuration.
In an example, ACL (Access Control List) rules may be configured on the switching chip 420. Based on the characteristics of the point cloud data message 431 of the lidar 430, the point cloud data message 431 may be given a higher internal forwarding priority on the switching chip 420, so as to keep the internal forwarding delay of the switching chip 420 as small as possible.
In an example, a network card may be integrated on the switching chip 420, and flow rules may be configured on the switching chip 420 using the RSS function of the network card. The flow rules may be used to control different messages to be transmitted through specific channels.
As shown in fig. 4, the lidar 430 may send the point cloud data message 431 to the DMA queue 401 for transmission, which may trigger a network card receive interrupt. The controller 410 may respond to the receive interrupt and, when the point cloud data message 431 matches the receive conditions of the AF_XDP protocol family, deliver the point cloud data message 431 directly to the upper-layer application, i.e., the autopilot system 411, without passing through the TCP/IP protocol stack 412. The controller 410 then receives the point cloud data message 431 through the socket corresponding to AF_XDP and can perform subsequent processing on it.
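Sticking with the libxdp assumption used earlier, the controller-side receive path for the point cloud data message could be sketched as follows; process_point_cloud() is a hypothetical stand-in for the autopilot application's handler.

```c
#include <stdio.h>
#include <xdp/xsk.h>

/* Hypothetical upper-layer hook standing in for the autopilot application. */
static void process_point_cloud(const void *data, unsigned int len)
{
    (void)data;
    printf("point cloud frame of %u bytes received without TCP/IP processing\n", len);
}

/* Hedged sketch: drain a batch of frames from the AF_XDP RX ring created by
 * open_xdp_socket() above and hand each one directly to the application. */
void poll_point_cloud(void *umem_area, struct xsk_ring_cons *rx)
{
    __u32 idx = 0;
    unsigned int n = xsk_ring_cons__peek(rx, 64, &idx);

    for (unsigned int i = 0; i < n; i++) {
        const struct xdp_desc *desc = xsk_ring_cons__rx_desc(rx, idx + i);
        void *frame = xsk_umem__get_data(umem_area, desc->addr);
        process_point_cloud(frame, desc->len);
    }
    if (n)
        xsk_ring_cons__release(rx, n);
    /* In a full implementation the frame addresses would now be returned
     * to the fill ring so the driver can reuse them (omitted here). */
}
```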
In an example, queues may also be provided for transmission of control and status messages 432 of lidar 430, messages 441 of device 440, messages 451 of device 450, and messages 461 of device 460. As shown in fig. 4, control and status message 432 of lidar 430 may be configured to be transmitted via DMA queue 402, message 441 of device 440, message 451 of device 450, and message 461 of device 460 may all be configured to be transmitted via DMA queue 403, and DMA queue 404 may be configured to not transmit any messages. The transfer of messages in DMA queue 402, DMA queue 403, and DMA queue 404 may all be performed via TCP/IP protocol stack 412. It is understood that this is only one configuration example. In practical application, a correspondence relationship different from that shown in fig. 4 may be set for the message and the queue.
According to another aspect of the disclosure, a message transmission device is also provided.
Fig. 5 shows a block diagram of a message transmission apparatus 500 according to an embodiment of the present disclosure.
As shown in fig. 5, the message transmission apparatus 500 includes: an acquisition module 510 configured to acquire a first message and a second message to be transmitted and a plurality of queues for transmitting the first message and the second message, wherein the second message has higher latency sensitivity than the first message; a first queue determination module 520 configured to determine at least one first queue of the plurality of queues, wherein the at least one first queue is configured to transmit via a kernel protocol stack; a second queue determination module 530 configured to determine at least one second queue of the plurality of queues, wherein the at least one second queue is different from the at least one first queue and is configured to skip the kernel protocol stack for transmission; a first transmission module 540 configured to transmit the first message via the at least one first queue; and a second transmission module 550 configured to transmit the second message via the at least one second queue.
Since the acquisition module 510, the first queue determination module 520, the second queue determination module 530, the first transmission module 540, and the second transmission module 550 in the message transmission apparatus 500 may correspond to steps S210 to S250 in fig. 2, respectively, the details of each aspect thereof will not be repeated here.
In addition, the message transmission device 500 and the modules included therein may further include further sub-modules, which will be described in detail below in connection with fig. 6.
According to the embodiment of the disclosure, the more complex messages with lower delay sensitivity are transmitted through the kernel TCP/IP protocol stack, so that various complex services and functions can be supported without adding excessive message processing burden to an upper application program. Meanwhile, messages with simple functions and higher time delay sensitivity are skipped over the kernel TCP/IP protocol stack and are directly transmitted to an upper application program, so that the time delay of the message data can be greatly reduced, and the timeliness of the data and the accuracy of control are improved. By combining the two transmission modes, the time delay of receiving the sensor data forwarded based on the Ethernet link is greatly reduced, the receiving processing capacity and the overall performance of the data of the automatic driving system are improved, and the resource load of the automatic driving system is reduced.
Fig. 6 shows a block diagram of a message transmission apparatus 600 according to another embodiment of the present disclosure.
As shown in fig. 6, the packet transmission apparatus 600 may include an acquisition module 610, a first queue determining module 620, a second queue determining module 630, a first transmission module 640, and a second transmission module 650. The acquisition module 610, the first queue determination module 620, the second queue determination module 630, the first transmission module 640, and the second transmission module 650 may correspond to the acquisition module 510, the first queue determination module 520, the second queue determination module 530, the first transmission module 540, and the second transmission module 550 shown in fig. 5, and thus the details thereof will not be repeated here.
In an example, the message transmission apparatus 600 may further include a virtual network card creation module 660 configured to create at least one virtual network card based on the physical network card, a first queue allocation module 670 configured to allocate at least one first queue to the physical network card, and a second queue allocation module 680 configured to allocate at least one second queue to the at least one virtual network card.
In an example, in a case where there are a plurality of virtual network cards and a plurality of second queues, the second queues of the plurality of second queues to which each of the plurality of virtual network cards is allocated may not coincide with each other.
In an example, allocating the at least one first queue and the at least one second queue may be performed by configuring a receiving end scaling rule on the physical network card.
In an example, the acquisition module 610 may include a message acquisition module 611 configured to acquire a plurality of messages to be transmitted, a transmission information determination module 612 configured to determine transmission information of each of the plurality of messages, wherein the transmission information indicates a delay sensitivity of the message, and a message category determination module 613 configured to determine the message as the first message or the second message based on the transmission information of the message.
In an example, the transmission information may include at least one of an address, a protocol number, and a port number of the message.
In an example, the second message may be given a higher transmission priority than the first message.
According to embodiments of the present disclosure, there is also provided an electronic device, a readable storage medium, a computer program product and an autonomous vehicle.
Referring to fig. 7, a block diagram of an electronic device 700 that may be a server or a client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the electronic device 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the electronic device 700 are connected to the I/O interface 705, including an input unit 706, an output unit 707, a storage unit 708, and a communication unit 709. The input unit 706 may be any type of device capable of inputting information to the electronic device 700, the input unit 706 may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 707 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. Storage unit 708 may include, but is not limited to, magnetic disks, optical disks. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices through computer networks, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth (TM) devices, 802.11 devices, wiFi devices, wiMax devices, cellular communication devices, and/or the like.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the various methods and processes described above, such as a message transmission method. For example, in some embodiments, the messaging method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the message transmission method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the message transmission method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special or general purpose programmable processor, operable to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), the Internet, and a blockchain network.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements. Furthermore, the steps may be performed in an order different from that described in the present disclosure, and various elements of the embodiments or examples may be combined in various ways. It should be noted that, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.
Claims (18)
1. A message transmission method comprises the following steps:
acquiring a first message and a second message to be transmitted and a plurality of queues for transmitting the first message and the second message, wherein the second message has higher time delay sensitivity than the first message;
determining at least one first queue of the plurality of queues, wherein the at least one first queue is configured to transmit via a kernel protocol stack;
determining at least one second queue of the plurality of queues, wherein the at least one second queue is different from the at least one first queue and is configured to skip the kernel protocol stack for transmission;
transmitting the first message through the at least one first queue; and
transmitting the second message through the at least one second queue.
2. The method of claim 1, further comprising:
creating at least one virtual network card based on a physical network card;
allocating the at least one first queue to the physical network card; and
allocating the at least one second queue to the at least one virtual network card.
3. The method of claim 2, wherein there are a plurality of virtual network cards and a plurality of second queues, and wherein the second queues of the plurality of second queues allocated to each of the plurality of virtual network cards do not coincide with one another.
4. The method of claim 2 or 3, wherein allocating the at least one first queue and the at least one second queue is performed by configuring a receive-side scaling (RSS) rule on the physical network card.
5. The method of any one of claims 1 to 4, wherein acquiring the first message and the second message to be transmitted and the plurality of queues for transmitting the first message and the second message comprises:
acquiring a plurality of messages to be transmitted;
determining transmission information of each of the plurality of messages, wherein the transmission information indicates the delay sensitivity of the message; and
determining the message as the first message or the second message based on the transmission information of the message.
6. The method of claim 5, wherein the transmission information comprises at least one of an address, a protocol number, and a port number of the message.
7. The method of any of claims 1-6, wherein the second message is given a higher transmission priority than the first message.
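By way of a non-limiting illustration of claims 1 and 5 to 7, the following Python sketch shows how messages might be classified by their transmission information (protocol number and port) and mapped onto disjoint queue sets for the kernel-stack path and the kernel-bypass path. All names, port numbers, and queue indices below are assumptions introduced solely for illustration and are not taken from the disclosure.

```python
# Illustrative sketch only -- the policy, ports, and queue indices are assumptions,
# not details of the disclosed implementation.
from dataclasses import dataclass
from typing import List

@dataclass
class TransmissionInfo:
    """Per-message transmission information (cf. claim 6): address, protocol number, port."""
    src_address: str
    protocol: int          # e.g. 17 = UDP, 6 = TCP
    dst_port: int

DELAY_SENSITIVE_PORTS = {2368, 6699}   # assumed sensor stream ports, for illustration
FIRST_QUEUES = [0, 1]                  # queues served via the kernel protocol stack
SECOND_QUEUES = [2, 3]                 # queues set up to bypass the kernel protocol stack

def is_second_message(info: TransmissionInfo) -> bool:
    """Classify a message as a 'second message' (higher delay sensitivity)."""
    return info.protocol == 17 and info.dst_port in DELAY_SENSITIVE_PORTS

def select_queues(info: TransmissionInfo) -> List[int]:
    """Return the queue set over which this message should be transmitted."""
    return SECOND_QUEUES if is_second_message(info) else FIRST_QUEUES

if __name__ == "__main__":
    lidar = TransmissionInfo("192.168.1.201", 17, 2368)
    log = TransmissionInfo("192.168.1.10", 6, 8080)
    print(select_queues(lidar))   # -> [2, 3]  (kernel-bypass path)
    print(select_queues(log))     # -> [0, 1]  (kernel protocol stack path)
```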
8. A message transmission apparatus, comprising:
an acquisition module configured to acquire a first message and a second message to be transmitted and a plurality of queues for transmitting the first message and the second message, wherein the second message has higher time delay sensitivity than the first message;
a first queue determination module configured to determine at least one first queue of the plurality of queues, wherein the at least one first queue is configured to transmit via a kernel protocol stack;
a second queue determination module configured to determine at least one second queue of the plurality of queues, wherein the at least one second queue is different from the at least one first queue and is configured to skip the kernel protocol stack for transmission;
a first transmission module configured to transmit the first message through the at least one first queue; and
a second transmission module configured to transmit the second message through the at least one second queue.
9. The apparatus of claim 8, further comprising:
a virtual network card creation module configured to create at least one virtual network card based on a physical network card;
a first queue allocation module configured to allocate the at least one first queue to the physical network card; and
a second queue allocation module configured to allocate the at least one second queue to the at least one virtual network card.
10. The apparatus of claim 9, wherein there are a plurality of virtual network cards and a plurality of second queues, and wherein the second queues of the plurality of second queues allocated to each of the plurality of virtual network cards do not coincide with one another.
11. The apparatus of claim 9 or 10, wherein allocating the at least one first queue and the at least one second queue is performed by configuring a receive-side scaling (RSS) rule on the physical network card.
12. The apparatus of any of claims 8 to 11, wherein the acquisition module comprises:
a message acquisition module configured to acquire a plurality of messages to be transmitted;
a transmission information determining module configured to determine transmission information of each of the plurality of messages, wherein the transmission information indicates the delay sensitivity of the message; and
a message category determining module configured to determine the message as the first message or the second message based on the transmission information of the message.
13. The apparatus of claim 12, wherein the transmission information comprises at least one of an address, a protocol number, and a port number of the message.
14. The apparatus of any of claims 8 to 13, wherein the second message is given a higher transmission priority than the first message.
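As a non-limiting illustration of the transmission-priority feature of claims 7 and 14, the sketch below marks the socket carrying delay-sensitive ("second message") traffic with a higher priority than the ordinary ("first message") socket. The use of SO_PRIORITY and an Expedited Forwarding DSCP value is a Linux-specific assumption for illustration, not a mechanism recited in the claims.

```python
import socket

# Illustrative sketch: priority values and socket options are assumptions (Linux-specific).
DSCP_EF = 0xB8   # Expedited Forwarding ToS byte (DSCP 46 << 2), a common choice for low latency

def make_udp_socket(high_priority: bool) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    if high_priority:
        # Mark the IP header so NIC schedulers and switches can prefer this traffic.
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)
        if hasattr(socket, "SO_PRIORITY"):   # available on Linux only
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, 6)
    return sock

if __name__ == "__main__":
    sensor_sock = make_udp_socket(high_priority=True)    # "second message" traffic
    logging_sock = make_udp_socket(high_priority=False)  # "first message" traffic
    print(sensor_sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
```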
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to any of claims 1-7.
18. An autonomous vehicle comprising a controller, wherein the controller is configured to perform the method of any of claims 1-7.
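To make the queue-allocation features of claims 2 to 4 and 9 to 11 more concrete, the following sketch shows one way such a configuration could be expressed on a Linux host. The interface names, the sensor port, the queue indices, the choice of macvlan for the virtual network card, and the use of ethtool for the receive-side scaling and flow-steering rules are all illustrative assumptions rather than details disclosed here.

```python
# Hedged sketch of how the queue split of claims 2-4 / 9-11 might be configured on Linux.
# Interface names, ports, and queue indices are invented for illustration; the disclosure
# does not prescribe ethtool or macvlan specifically.
import subprocess

PHYS_IF = "eth0"        # assumed physical network card
VIRT_IF = "sensor0"     # assumed virtual network card name
SENSOR_PORT = 2368      # assumed delay-sensitive UDP port
FIRST_QUEUE_COUNT = 2   # queues 0..1 -> kernel protocol stack
SECOND_QUEUE = 2        # queue 2     -> kernel-bypass consumer

def run(cmd: list) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def configure() -> None:
    # 1. Create a virtual network card on top of the physical one (claims 2 / 9).
    run(["ip", "link", "add", "link", PHYS_IF, "name", VIRT_IF,
         "type", "macvlan", "mode", "bridge"])
    run(["ip", "link", "set", VIRT_IF, "up"])

    # 2. Keep ordinary (first-message) traffic on the first queues by restricting the
    #    receive-side scaling indirection table to those queues (claims 4 / 11).
    run(["ethtool", "-X", PHYS_IF, "equal", str(FIRST_QUEUE_COUNT)])

    # 3. Steer the delay-sensitive sensor flow to the dedicated second queue with an
    #    n-tuple flow rule so a bypass consumer can poll that queue directly.
    run(["ethtool", "-N", PHYS_IF, "flow-type", "udp4",
         "dst-port", str(SENSOR_PORT), "action", str(SECOND_QUEUE)])

if __name__ == "__main__":
    configure()   # requires root privileges and a NIC that supports ntuple filters
```

In such a setup, a kernel-bypass consumer (for example an AF_XDP socket or a user-space driver) would attach directly to the dedicated second queue, while ordinary sockets continue to receive first-message traffic through the kernel protocol stack on the remaining queues.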
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311103116.3A | 2023-08-29 | 2023-08-29 | Message transmission method, device, equipment and automatic driving vehicle
Publications (1)
Publication Number | Publication Date |
---|---|
CN119544823A (en) | 2025-02-28
Family
ID=94701205
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311103116.3A (Pending) | Message transmission method, device, equipment and automatic driving vehicle | 2023-08-29 | 2023-08-29
Country Status (1)
Country | Link |
---|---|
CN (1) | CN119544823A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11403517B2 (en) | Proximity-based distributed sensor processing | |
US11210023B2 (en) | Technologies for data management in vehicle-based computing platforms | |
US20240231470A9 (en) | Application priority based power management for a computer device | |
US11582064B1 (en) | Secure ethernet and transmission control protocol | |
CN114394111B (en) | Lane changing method for automatic driving vehicle | |
EP3941008A1 (en) | Method and electronic device processing data | |
CN114047760A (en) | Path planning method and device, electronic equipment and automatic driving vehicle | |
CN119544823A (en) | Message transmission method, device, equipment and automatic driving vehicle | |
CN115412580A (en) | PHY chip working mode determining method and device and automatic driving vehicle | |
CN217435657U (en) | Electrical system of automatic driving vehicle and automatic driving vehicle | |
US20240201694A1 (en) | Mission critical data delivery booster drones for satellite data centers | |
CN116533987A (en) | Parking path determination method, device, equipment and automatic driving vehicle | |
CN114283583B (en) | Method for vehicle-road coordination, vehicle-mounted intelligent terminal, cloud control platform and system | |
CN108401003A (en) | Synchronous method, device, equipment and the computer storage media of radar data | |
CN113850909A (en) | Point cloud data processing method and device, electronic equipment and automatic driving equipment | |
CN114283604B (en) | Method for assisting in parking a vehicle | |
CN111770472A (en) | Vehicle positioning method and device, vehicle, storage medium and terminal | |
CN116311943B (en) | Method and device for estimating average delay time of intersection | |
US12179619B2 (en) | Systems and methods for proactive electronic vehicle charging | |
CN114179834B (en) | Vehicle parking method, device, electronic equipment, medium and automatic driving vehicle | |
CN114333405B (en) | Method for assisting in parking a vehicle | |
CN116456302A (en) | Control method and control device for automatic driving vehicle | |
CN115348557B (en) | Time delay testing method, time delay sending method, time delay receiving method and related devices | |
CN119806812A (en) | Memory allocation method and device for target computing system and automatic driving vehicle | |
CN116414845A (en) | Method, apparatus, electronic device and medium for updating map data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||