WO2024221465A1 - Operation control method and device for operating system, and embedded system and chip - Google Patents
Operation control method and device for operating system, and embedded system and chip
- Publication number
- WO2024221465A1 (PCT/CN2023/091864)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- operating system
- target
- service
- processor
- operating
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/4401—Bootstrapping
- G06F9/4406—Loading of operating system
- G06F9/4418—Suspend and resume; Hibernate and awake
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4812—Task transfer initiation or dispatching by interrupt, e.g. masked
- G06F9/4825—Interrupt from clock, e.g. time of day
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
Definitions
- Embodiments of the present application relate to the field of computers, and in particular, to an operating system operation control method and device, as well as an embedded system and chip.
- CPLD Complex Programmable Logic Device
- EC Embedded Controller
- the embodiments of the present application provide an operating system operation control method and device, as well as an embedded system and chip, to at least solve the problem of low operating efficiency of the operating system in the related art.
- an embedded system comprising: a chip and at least two operating systems, wherein:
- the chip includes a processor, a hardware controller, a first bus and a second bus, wherein the bandwidth of the first bus is higher than that of the second bus, and the first bus is configured as a multi-master and multi-slave mode, and the second bus is configured as a one-master and multi-slave mode; at least two operating systems run based on the processor; at least two operating systems communicate through the first bus; and at least two operating systems control the hardware controller through the second bus.
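- For illustration only, the bus topology described above can be pictured with configuration data along the following lines; this is a minimal C sketch, and every name and value in it is a hypothetical example rather than part of this application:

```c
/* Hypothetical description of the chip topology from the embodiment above:
 * a high-bandwidth multi-master/multi-slave bus for inter-OS communication
 * and a lower-bandwidth one-master/multi-slave bus for controlling the
 * hardware controller. Values are placeholders.                            */
typedef enum {
    BUS_MULTI_MASTER_MULTI_SLAVE,   /* first bus  */
    BUS_ONE_MASTER_MULTI_SLAVE      /* second bus */
} bus_mode_t;

typedef struct {
    const char *name;
    unsigned    bandwidth_mbps;     /* first bus > second bus */
    bus_mode_t  mode;
} bus_desc_t;

static const bus_desc_t first_bus  = { "first_bus",  1000, BUS_MULTI_MASTER_MULTI_SLAVE };
static const bus_desc_t second_bus = { "second_bus",  100, BUS_ONE_MASTER_MULTI_SLAVE   };
```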
- another embedded system comprising: a first operating system, a second operating system, a controller and a processor, wherein the first operating system and the second operating system are run based on the processor, and the controller is configured to detect the running state of the first operating system during operation and control the processor resources used by the first operating system according to the running state.
- a method for controlling operation of an operating system comprising: detecting the running state of a first operating system during operation, wherein the first operating system and a second operating system run based on a processor; and controlling the processor resources used by the first operating system according to the running state.
- an operation control device for an operating system comprising:
- a first detection module is configured to detect an operating state of the first operating system during operation, wherein the first operating system and the second operating system are operated based on a processor;
- the control module is configured to control the processor resources used by the first operating system according to the running state.
- a chip is also provided, wherein the chip includes at least one of a programmable logic circuit and an executable instruction, and the chip runs in an electronic device and is configured to implement the steps in any of the above method embodiments.
- a BMC chip which includes: a storage unit and a processing unit connected to the storage unit, the storage unit is configured to store a program, and the processing unit is configured to run the program to execute the steps in any of the above method embodiments.
- a mainboard which includes: at least one processor; at least one memory configured to store at least one program; when the at least one program is executed by the at least one processor, the at least one processor implements the steps in any one of the above method embodiments.
- a server which includes a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus; the memory is configured to store computer programs; and the processor is configured to implement the steps of any one of the above method embodiments when executing the program stored in the memory.
- a non-volatile readable storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps of any of the above method embodiments when running.
- an electronic device comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
- The first operating system and the second operating system run based on the processor; the running state of the first operating system during operation is detected, and the processor resources used by the first operating system are controlled according to the running state. Since the first operating system and the second operating system both run on the same processor, adding and deploying extra hardware devices is avoided and the system cost is reduced; in addition, the processor resources used by an operating system can be controlled while the operating system is running, so that processor resources are used reasonably to support inter-system operation. Therefore, the technical problem of low operating efficiency of the operating system can be solved, and the technical effect of improving the operating efficiency of the operating system is achieved.
- FIG. 1 is a schematic diagram of a hardware environment of an operating system operation control method according to an embodiment of the present application.
- FIG. 2 is a flow chart of a method for controlling the operation of an operating system according to an embodiment of the present application.
- FIG. 3 is a schematic diagram of an operation service takeover process according to an embodiment of the present application.
- FIG. 4 is a schematic diagram of a processor core occupation process according to an embodiment of the present application.
- FIG. 5 is a first schematic diagram of a processor resource control process according to an embodiment of the present application.
- FIG. 6 is a second schematic diagram of a processor resource control process according to an embodiment of the present application.
- FIG. 7 is a schematic diagram of a business data interaction process according to an embodiment of the present application.
- FIG. 8 is a first schematic diagram of a running process of a first operating system according to an embodiment of the present application.
- FIG. 9 is a second schematic diagram of a running process of a first operating system according to an embodiment of the present application.
- FIG. 10 is a schematic diagram of a system abnormality monitoring process according to an embodiment of the present application.
- FIG. 11 is a first schematic diagram of an embedded system according to an embodiment of the present application.
- FIG. 12 is a block diagram of an optional BMC chip according to an embodiment of the present application.
- FIG. 13 is a schematic diagram of a service data communication process between operating systems according to an optional implementation of the present application.
- FIG. 14 is a schematic diagram of a service management process in an embedded system according to an optional implementation of the present application.
- FIG. 15 is a schematic diagram of a task scheduling process according to an optional implementation of the present application.
- FIG. 16 is a second schematic diagram of an optional embedded system according to an embodiment of the present application.
- FIG. 17 is a structural block diagram of an operation control device of an operating system according to an embodiment of the present application.
- FIG. 1 is a hardware environment schematic diagram of an operation control method of an operating system according to an embodiment of the present application.
- the server may include one or more (only one is shown in FIG. 1) processors 102 (the processor 102 may include but is not limited to a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 configured to store data.
- The above-mentioned server may also include a transmission device 106 and an input/output device 108 configured for communication functions.
- FIG. 1 is only for illustration, and it does not limit the structure of the above-mentioned server.
- The server may also include more or fewer components than those shown in FIG. 1, or have a different configuration with functions equivalent to, or more than, those shown in FIG. 1.
- the memory 104 may be configured to store computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the operation control method of the operating system in the embodiment of the present invention.
- the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, that is, to implement the above method.
- the memory 104 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
- the memory 104 may include a memory remotely arranged relative to the processor 102, and these remote memories may be connected to the server via a network. Examples of the above-mentioned network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
- the transmission device 106 is configured to receive or send data via a network.
- the above-mentioned optional examples of the network may include a wireless network provided by the communication provider of the server.
- the transmission device 106 includes a network adapter (Network Interface Controller, referred to as NIC), which can be connected to other network devices through a base station so as to communicate with the Internet.
- the transmission device 106 can be a radio frequency (Radio Frequency, referred to as RF) module, which is configured to communicate with the Internet wirelessly.
- RF Radio Frequency
- FIG. 2 is a flow chart of the method for controlling the operation of an operating system according to an embodiment of the present application. As shown in FIG. 2 , the process includes the following steps:
- Step S202: detecting the running state of the first operating system during operation, wherein the first operating system and the second operating system run based on a processor;
- Step S204: controlling the processor resources used by the first operating system according to the running state.
- Through the above steps, the first operating system and the second operating system run based on the processor; the running state of the first operating system during operation is detected, and the processor resources used by the first operating system are controlled according to the running state. Since the first operating system and the second operating system both run on the same processor, adding and deploying extra hardware devices is avoided and the system cost is reduced; in addition, the processor resources used by an operating system can be controlled while the operating system is running, so that processor resources are used reasonably to support inter-system operation. Therefore, the technical problem of low operating efficiency of the operating system can be solved, and the technical effect of improving the operating efficiency of the operating system can be achieved.
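- As a minimal illustration of this two-step flow (detect the running state, then control the processor resources), the C sketch below uses hypothetical helper functions; it is not the implementation of this application:

```c
#include <stdbool.h>

/* Hypothetical snapshot of the running state of the first operating system. */
typedef struct {
    bool service_reached_target_state;  /* e.g. a service was interrupted or ran long enough */
    bool system_reached_target_state;   /* e.g. the core is idle or was requested            */
} running_state_t;

/* Assumed platform hooks, named here only for illustration. */
extern running_state_t detect_running_state(void);
extern void release_target_service(void);   /* second OS then runs the service     */
extern void release_target_core(void);      /* second OS adds the core to its pool */

void control_first_os_resources(void)
{
    running_state_t s = detect_running_state();     /* step S202: detect       */
    if (s.service_reached_target_state)
        release_target_service();                   /* step S204: service case */
    if (s.system_reached_target_state)
        release_target_core();                      /* step S204: core case    */
}
```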
- the execution subject of the above steps may be a server, a device, a mainboard, a chip, a processor, an embedded system, etc., but is not limited thereto.
- the first operating system and the second operating system may be, but are not limited to, two heterogeneous or homogeneous operating systems, that is, the first operating system and the second operating system may be of the same type or different types.
- the first operating system and the second operating system may be operating systems with different sensitivities to response time, for example, the first operating system is more sensitive to response time than the second operating system.
- the first operating system and the second operating system may be operating systems with different resource occupancy, for example, the first operating system occupies less resources for services than the second operating system.
- the first operating system and the second operating system mentioned above may be but are not limited to two heterogeneous operating systems deployed on the processor of the embedded system, i.e., embedded operating systems.
- Embedded operating systems can be divided into real-time operating systems (RTOS) and non-real-time operating systems according to their sensitivity to response time.
- Real-time operating systems may include but are not limited to FreeRTOS and RT Linux (Real-Time Linux).
- Non-real-time operating systems may include but are not limited to Contiki, HeliOS, and Linux.
- An embedded system is a device that is set up to control, monitor or assist in the operation of machines and equipment. It is a special computer system.
- An embedded system is application-centric, based on computer technology, with tailored software and hardware to meet the strict requirements of application systems for functions, reliability, cost, volume, power consumption, etc.
- an embedded system is a combination of software and hardware, and can also cover auxiliary devices such as machinery.
- the embedded system may include, but is not limited to, hardware devices such as processors, memories, and peripheral circuits.
- the first operating system and the second operating system are based on the processor of the embedded system.
- the embedded system may include, but is not limited to, underlying drivers, operating systems, and applications.
- the first operating system and the second operating system are the operating systems in the embedded system.
- the operation control method of the operating system may be, but is not limited to, a control logic implemented in an embedded system.
- the control logic realizes the control, allocation and scheduling of heterogeneous dual operating systems, processors, memories and other hardware and software resources in the embedded system.
- the operation control method of the above operating system may be, but is not limited to, executed by the first operating system, or by a functional module for performing resource control provided on the first operating system.
- The running state of the first operating system may, but is not limited to, be used to indicate the operating condition of the first operating system.
- the operation status may, but is not limited to, be a single dimension, or may, but is not limited to, be a comprehensive consideration of multiple dimensions.
- the operation status may, but is not limited to, include the usage of software and hardware resources, the execution of instructions, the operation status of operating services, and the like.
- the running process of the first operating system may refer to but is not limited to the entire process from power-on to power-off.
- the first operating system may be but is not limited to being awake all the time, or may have both a wake-up phase and a sleep phase.
- the processor resources used by the first operating system may include but are not limited to operating services, processor cores, storage space on the processor (such as memory, cache), timers, registers, input and output interfaces, etc.
- control of processor resources may be a separate control of one of them, or may be, but is not limited to, a coordinated control of multiple processor resources.
- control of processor resources may include but is not limited to release, occupation, allocation, recycling, etc.
- Reasonable control operations on the processor resources used by the first operating system according to its running state during operation can improve resource utilization and improve the operating efficiency of the operating system.
- the detected operating state can determine the controlled processor resources, for example, detecting the service state can control the adjustment of the operating service, and detecting the system state can control the use of the processor core.
- Different detection objects can also be set according to the processor resources to be controlled, for example, if the operating service needs to be adjusted, the service state can be detected, and if the use of the processor core needs to be controlled, the system state can be detected.
- the business state of the operating business run by the first operating system can reflect the operation status of the first operating system.
- the business state of the operating business can be detected.
- the business state of the target operating business run by the processor based on the first operating system can be detected, but is not limited to, wherein the operation state includes the business state.
- the target operation service may be, but is not limited to, an operation service that has certain requirements on the operating performance or operating environment of the system, such as: a fan control service that has certain requirements on the operating time, a log backtracking service that has certain requirements on the data storage space, an interface switching service that has certain requirements on the response speed, and a hardware interface waveform signal simulation service, etc.
- the business status of the operating business may be but is not limited to indicating the operating status of the operating business in various dimensions, such as: whether it is interrupted, whether it runs to a certain extent (for example: whether the running time reaches a threshold, whether the running result reaches a preset result), etc.
- a control operation matching the target service state can be executed on the service, thereby realizing the control adapted to the current service state, such as transferring the operating service from one operating system to another operating system, starting and stopping the operating service, suspending and resuming the operating service, etc.
- the target operating service is released, wherein the processor resources include the target operating service, and the second operating system is used to run the target operating service.
- the target operating service if the service state of the target operating service reaches the target service state, such as being interrupted, or running to a certain extent (for example, the running time reaches a threshold, the running result reaches a certain preset result), etc., the target operating service on the first operating system is released, and the second operating system continues to run the target operating service.
- the operating service is alternately run between the operating systems, so that the operating service runs on the operating system that is more suitable for its operation.
- the first operating system running the target operation business is interrupted by the second operating system, for example: when the first interrupt request sent by the second operating system to the first operating system is obtained, it is determined that the business state detected is the target business state, wherein the first interrupt request is used to request to take over the target operation business.
- the service attribute of the target operation service reaches the target service attribute. For example, when the service attribute of the target operation service reaches the target service attribute, it is determined that the detected service state is the target service state.
- The timing at which the target operation service is switched between operating systems can be determined by, but is not limited to, the second operating system.
- If the second operating system decides to take over the target operation service, it can send a first interrupt request to the first operating system to request to take over the target operation service.
- If the first interrupt request is obtained, it can be considered that the service state of the first operating system for the target operation service has reached the target service state; the target operation service can then be released in response to the first interrupt request, and the second operating system takes over the operation of the target operation service.
- the service attributes of the target operation service may include, but are not limited to, operation duration, operation result, operation load, etc.
- the operation duration reaching the target service attribute may be, but are not limited to, the operation duration reaching a preset duration
- the operation result reaching the target service attribute may be, but are not limited to, the target operation service running out a preset operation result
- the operation load reaching the target service attribute may be, but are not limited to, the operation resources occupied by the target operation service exceeding or about to exceed the range that the first operating system can bear.
- The timing at which the target operation service is switched between operating systems may also be determined by, but is not limited to, the service attributes of the target operation service itself. If the target operation service runs to the extent that its service attributes reach the target service attributes, it can be considered that the service state of the first operating system for the target operation service has reached the target service state, and the second operating system can take over the operation of the target operation service.
- A determination mechanism may be, but is not limited to being, established for the qualification of the second operating system to take over the target operating service. For example, in the case of obtaining the first interrupt request, the first interrupt request is responded to in order to determine whether the second operating system is to take over the target operating service; and in the case that the second operating system is to take over the target operating service, the target operating service is released.
- the target operation business run by the first operating system may not be released immediately. Instead, it is determined whether the second operating system takes over the target operation business, thereby determining the qualification of the second operating system to take over the target operation business. If it is determined that the second operating system takes over the target operation business, the target operation business run by the first operating system is released.
- the second operating system can refuse to take over the target operation business. For example: after determining whether the second operating system takes over the target operation business, if the second operating system does not take over the target operation business, a second interrupt request is sent to the second operating system, wherein the second interrupt request is used to indicate the refusal of the second operating system to take over the target operation business.
- the rejection of the second operating system from taking over the target operation service may be indicated or notified to the second operating system by, but not limited to, sending an interrupt request between systems.
- Alternatively, the second interrupt request may not be sent; in this case, the first operating system does not release the target operation service and continues to run it, and the second operating system cannot take over the target operation service.
- the first operating system can continue to run the target operation business until the conditions for the second operating system to take over the target operation business are met (for example, the business attributes reach the target attributes). Then, the first operating system releases the target operation business to the second operating system and notifies the second operating system to take over the operation.
- the second operating system can actively sense that the target operation service has been released and take over the target operation service. Alternatively, if the second operating system actively sends the first interrupt request to request to take over the target operation service, the second operating system can default to directly taking over the target operation service as long as it does not receive the second interrupt request for rejecting its taking over of the target operation service within a certain period of time, thereby improving the takeover efficiency of the target operation service.
- an interrupt request may be actively sent to the second operating system to notify the second operating system that the target operation service has been released.
- a third interrupt request is sent to the second operating system, wherein the third interrupt request is used to indicate that the target operation service has been released, and the second operating system is used to respond to the third interrupt request to run the target operation service.
- a third interrupt request can be sent to the second operating system to notify the second operating system that the target operation service has been released.
- the second operating system then takes over the subsequent operation of the target operating business.
- FIG. 3 is a schematic diagram of an operation service takeover process according to an embodiment of the present application.
- the second operating system sends a first interrupt request to the first operating system to request to take over the target operation service running on the first operating system. If the first operating system allows the second operating system to take over the target operation service, the target operation service is released, and the second operating system takes over the target operation service, and the target operation service runs on the second operating system. If the first operating system does not allow the second operating system to take over the target operation service, a second interrupt request is sent to the second operating system to refuse it from taking over the target operation service, and the target operation service continues to run on the first operating system.
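- For illustration only, the takeover handshake of FIG. 3 could look roughly like the sketch below; the inter-core interrupt helpers and the request codes are assumed names, not an API defined by this application:

```c
#include <stdbool.h>

/* Hypothetical interrupt codes mirroring the description:
 * IRQ_TAKEOVER_REQ  - first interrupt request (second OS asks to take over)
 * IRQ_TAKEOVER_NACK - second interrupt request (first OS refuses)
 * IRQ_SERVICE_FREED - third interrupt request (first OS reports the release) */
enum { IRQ_TAKEOVER_REQ, IRQ_TAKEOVER_NACK, IRQ_SERVICE_FREED };

extern void send_ipi_to_second_os(int code);   /* assumed inter-core interrupt */
extern bool second_os_may_take_over(void);     /* qualification check          */
extern void stop_target_service(void);         /* release the service locally  */

/* Handler on the first operating system for the first interrupt request. */
void on_takeover_request(void)
{
    if (second_os_may_take_over()) {
        stop_target_service();                     /* release the target service */
        send_ipi_to_second_os(IRQ_SERVICE_FREED);  /* third interrupt request    */
    } else {
        send_ipi_to_second_os(IRQ_TAKEOVER_NACK);  /* second interrupt request   */
    }
}
```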
- the system state of the first operating system can reflect the running state of the first operating system, and the processor core used by the first operating system can be reasonably controlled based on the system state of the first operating system, but is not limited to.
- the system state of the operating system can be detected. For example: in the above step S202, the system state of the first operating system can be detected, but is not limited to, wherein the running state includes the system state, and the first operating system runs based on the target processor core in the processor.
- the target processor core may be, but is not limited to, a processor core in the processor allocated to the first operating system and configured to run the first operating system, and the number of the target processor cores may be, but is not limited to, one or more.
- the system status of the operating system may be, but is not limited to, indicating the operating status of the operating system in various dimensions, such as: whether it is interrupted, whether it runs to a certain extent (for example: whether the running time reaches a threshold, whether the running result reaches a preset result), etc.
- step S204 when it is detected that the system state is the target system state, the target processor core is released, wherein the processor resources include the target processor core, and the second operating system is used to add the target processor core to the scheduling resource pool of the second operating system, and the scheduling resource pool includes the processor core in the processor allocated to the second operating system.
- the target processor core used by the first operating system is released, and the target processor core is used by the second operating system.
- the processor core is alternately used between the operating systems, so that the processor core is used more reasonably.
- the first operating system is interrupted by the second operating system, for example: when the fourth interrupt request sent by the second operating system to the first operating system is obtained, it is determined that the system state is detected to be the target system state, wherein the fourth interrupt request is used to request to occupy the target processor core.
- the system attribute of the first operating system reaches the target system attribute, for example: when the system attribute of the first operating system reaches the target system attribute, it is determined that the system state is detected to be the target system state.
- The timing at which the target processor core is switched between operating systems can be determined by, but is not limited to, the second operating system.
- If the second operating system decides to occupy the target processor core, it can send a fourth interrupt request to the first operating system to request to occupy the target processor core.
- If the fourth interrupt request is obtained, it can be considered that the system state of the first operating system has reached the target system state.
- the target processor core can be released in response to the fourth interrupt request, and the second operating system takes over the target processor core and adds it to the scheduling resource pool for use.
- the data currently running by the first operating system can be pushed into the stack, the first operating system enters a sleep state, and the second operating system occupies the target processor core for scheduling and use.
- the second operating system may, but is not limited to, initiate a fourth interrupt request based on its own demand for processor core resources. For example, the second operating system detects whether the resource occupancy rate of the core allocated to it is higher than a certain threshold, or detects whether the remaining amount of resources of the core allocated to it is sufficient to run the next process. If the resource occupancy rate is higher than a certain threshold or the remaining amount is insufficient to run the next process, it can be considered that the second operating system needs an additional processor core, and the second operating system can actively send a fourth interrupt request to the first operating system.
- The fourth interrupt request is used to request to occupy the target processor core, so as to reduce the operating pressure of the second operating system or to support the running of its next process.
- For example, when the second operating system (such as Linux) detects that the resource occupancy rate of the cores allocated to it is high (for example, the occupancy rate is higher than 95% of the total resources), it can send a fourth interrupt request to the first operating system (such as the RTOS).
- After receiving the fourth interrupt request, the first operating system (RTOS) saves its running business scene (for example, pushes the running data onto the stack) and releases the target processor core it is using.
- the second operating system (Linux) occupies the target processor core and allocates the threads to be run to the target processor core, or schedules the threads on other processor cores with higher occupancy rates to run on the target processor core.
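- A hedged sketch of the core-borrowing flow just described is given below; the 95% threshold comes from the example above, while the helper functions and the stack-based scene save are illustrative assumptions:

```c
#define OCCUPANCY_THRESHOLD 95                  /* percent, example value from the text */

enum { IRQ_CORE_OCCUPY_REQ = 4 };               /* fourth interrupt request             */

extern unsigned cpu_occupancy_percent(void);    /* assumed load query on the Linux side */
extern void send_ipi_to_first_os(int code);     /* assumed inter-core interrupt         */
extern void save_rtos_context(void);            /* push the running data onto the stack */
extern void rtos_sleep_and_release_core(void);  /* hand the target core back            */

/* Linux side: request the RTOS core when its own cores are overloaded. */
void maybe_request_extra_core(void)
{
    if (cpu_occupancy_percent() > OCCUPANCY_THRESHOLD)
        send_ipi_to_first_os(IRQ_CORE_OCCUPY_REQ);
}

/* RTOS side: handler for the fourth interrupt request. */
void on_core_occupy_request(void)
{
    save_rtos_context();              /* save the running business scene */
    rtos_sleep_and_release_core();    /* Linux adds the core to its pool */
}
```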
- the system attributes of the operating system may include, but are not limited to, the system's operating time, operating results, operating load, etc.
- the system's operating time reaching the target system attribute may be, but are not limited to, the system's operating time reaching a preset time
- the system's operating result reaching the target system attribute may be, but are not limited to, the operating system running a preset operating result
- the system's operating load reaching the target system attribute may be, but are not limited to, the operating system's resource occupancy rate being lower than or about to be lower than its set occupancy rate lower limit.
- The timing at which the target processor core is switched between operating systems may also be determined by, but is not limited to, the system attributes of the operating system itself. If the operating system runs to the point where its system attributes reach the target system attributes, it can be considered that the system state of the operating system has reached the target system state, and the target processor core can be occupied by the second operating system.
- A determination mechanism may be, but is not limited to being, established for the qualification of the second operating system to occupy the target processor core. For example, when the fourth interrupt request is obtained, the fourth interrupt request is responded to in order to determine whether the target processor core is to be occupied by the second operating system; and when the target processor core is to be occupied by the second operating system, the target processor core is released.
- the target processor core when the fourth interrupt request is obtained, the target processor core may not be released immediately. Instead, it is determined whether the target processor core is occupied by the second operating system, thereby determining the qualification of the second operating system to occupy the target processor core. If it is determined that the target processor core is occupied by the second operating system, the target processor core is released to be occupied by the second operating system.
- the second operating system may be denied occupation of the target processor core. For example, when the target processor core is not occupied by the second operating system, a fifth interrupt request is sent to the second operating system, wherein the fifth interrupt request is used to indicate that the second operating system is denied occupation of the target processor core.
- the second operating system may be instructed or notified of the rejection of the second operating system's occupation of the target processor core by, but not limited to, sending an interrupt request between systems.
- Alternatively, the fifth interrupt request may not be sent; in this case, the first operating system does not release the target processor core, continues to occupy it, and the second operating system cannot occupy the target processor core.
- The first operating system can continue to use the target processor core to process operating services until the conditions for the second operating system to occupy the target processor core are met (for example, the system attributes reach the target system attributes). Then, the first operating system releases the target processor core and notifies the second operating system to occupy it.
- FIG. 4 is a schematic diagram of a processor core occupation process according to an embodiment of the present application.
- the first operating system runs based on the target processor core.
- The second operating system sends a fourth interrupt request to the first operating system to request to occupy the target processor core used by it. If the second operating system is allowed to occupy the core, the target processor core is released, and the second operating system occupies the target processor core and adds it to the resource scheduling pool. If the second operating system is not allowed to occupy the core, a fifth interrupt request is sent to the second operating system to reject the occupation.
- the second operating system can actively sense that the target processor core has been released and occupy the target processor core. Alternatively, if the second operating system actively sends the fourth interrupt request to request to occupy the target processor core, the second operating system can default to directly occupying the target processor core as long as it does not receive the fifth interrupt request for rejecting its occupation of the target processor core within a certain period of time, thereby improving the occupation efficiency of the target processor core.
- the first operating system actively releases the target processor core it is using, it can actively send an interrupt request to the second operating system to notify the second operating system that the target processor core has been released.
- A sixth interrupt request is sent to the second operating system, wherein the sixth interrupt request is used to indicate that the first operating system has released the target processor core, and the second operating system is used to respond to the sixth interrupt request to add the target processor core to the scheduling resource pool.
- When the system attributes reach the target system attributes, the first operating system actively releases the target processor core and can send a sixth interrupt request to the second operating system to notify it that the target processor core has been released. After receiving the sixth interrupt request, the second operating system occupies the target processor core for resource scheduling and use.
- the active sleep of the first operating system can be triggered, and the first operating system (RTOS) sends a sixth interrupt request to the second operating system (such as Linux), and saves its running scene (for example: pushes the running data into the stack) and then sleeps.
- the second operating system (Linux) adds the target processor core to its resource scheduling pool for scheduling and use.
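- The active-release path based on the sixth interrupt request might look like the short sketch below; the attribute check and the interrupt helper are assumed names:

```c
#include <stdbool.h>

enum { IRQ_CORE_RELEASED = 6 };                       /* sixth interrupt request */

extern bool system_attributes_reached_target(void);   /* e.g. idle, run time up  */
extern void send_ipi_to_second_os(int code);          /* assumed IPI primitive   */
extern void save_rtos_context(void);                  /* push running data       */
extern void rtos_enter_sleep(void);

/* RTOS side: voluntarily hand the target processor core to the second OS. */
void maybe_release_core_actively(void)
{
    if (!system_attributes_reached_target())
        return;
    send_ipi_to_second_os(IRQ_CORE_RELEASED);   /* notify Linux the core is free */
    save_rtos_context();                        /* save the running scene        */
    rtos_enter_sleep();                         /* Linux schedules the core next */
}
```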
- a dual operating system is installed in the chip and runs based on a multi-core processor CPU.
- the first operating system may be, but not limited to, RTOS
- the second operating system may be, but not limited to, Linux.
- CPU core 0 is allocated to RTOS for use, and the remaining cores are allocated to Linux for use.
- FIG. 5 is a schematic diagram of a processor resource control process according to an embodiment of the present application. As shown in FIG. 5, the RTOS is periodically awakened and runs, and the RTOS and Linux alternately occupy and schedule CPU core 0.
- In the time slice (T4, T5) during which the RTOS schedules CPU core 0, Linux generates an interrupt to take over CPU core 0 at T4-1 (equivalent to the fourth interrupt request mentioned above), causing the RTOS to sleep. At this time, the RTOS saves its scene on the stack and sleeps, and then releases CPU core 0 for Linux to take over. After Linux scheduling is completed, an interrupt for the RTOS to preempt CPU core 0 is generated at T5-1 to wake up the RTOS, and from T5-1 the RTOS enters the round-robin mode again to occupy and schedule CPU core 0.
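- The alternating occupation of CPU core 0 shown in FIG. 5 could be expressed roughly as follows; the loop structure and helper names are illustrative only and do not come from this application:

```c
#include <stdbool.h>

extern void rtos_run_periodic_services(void);   /* e.g. fan control within the slice */
extern bool linux_requested_core(void);         /* fourth interrupt pending?         */
extern void save_rtos_context(void);
extern void sleep_until_woken_by_linux(void);   /* returns when Linux wakes the RTOS */

/* Illustrative time-slice loop for CPU core 0: the RTOS runs in its slice,
 * Linux may preempt it mid-slice (interrupt at T4-1) and wakes it again
 * when Linux scheduling is finished (interrupt at T5-1).                   */
void rtos_core0_slice(void)
{
    for (;;) {
        rtos_run_periodic_services();
        if (linux_requested_core()) {        /* preempted at T4-1       */
            save_rtos_context();
            sleep_until_woken_by_linux();    /* core 0 belongs to Linux */
            /* woken at T5-1: resume round-robin occupation of core 0   */
        }
    }
}
```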
- the taking over of the inter-system operation service and the occupation of the processor core can be but not limited to separate, for example, only the operation service is taken over, or only the processor core is occupied. They can also be occupied together, that is, both the operation service and the processor core are taken over.
- the second operating system takes over the processor resources of the first operating system.
- a startup control process of an operating system is provided, which includes the following steps:
- Step A: controlling the hardware controller of the target device via the first bus through the first operating system running on the first processor core of the processor, to control the operating state of the target device.
- some specific devices can be equipped to perform operations related to the operation of the devices.
- these specific devices usually start working after the system is powered on. However, after the system is powered on, it will take some time for the operating system running on the processor to normally take over the specific device and control the operation status of the specific device. During the startup of the operating system, the specific device is uncontrollable.
- the fan starts working after the system is powered on. Since it takes some time for the operating system running on the CPU to take over the fan normally and set the fan speed after the system is powered on, the fan is uncontrollable during the operating system startup process.
- the server adopts the control method of BMC combined with CPLD
- the personal computer adopts the control method of EC chip (the EC chip has the function of adjusting the fan speed according to the temperature)
- the industrial computer adopts the control method of customized chip.
- In a multi-core multi-system scenario (for example, a multi-core dual system), a startup control method can be adopted to run the different operating systems of the embedded system on different processor cores of the processor. Different operating systems have different response speeds. If the second operating system has not started, is restarting, or is otherwise unable to control the operating state of a specific device, the operating state of the specific device can be controlled by the first operating system, whose response speed is higher than that of the second operating system. This reduces the situations in which the operating state of the specific device is uncontrollable; and since no additional cost is required, the approach also has good scalability.
- the first operating system can control the hardware controller of the target device via the first bus to control the operating state of the target device.
- the target device here can be a fan, or other devices that need to be running when the system is started.
- the controller is a fan controller, for example, a PWM (Pulse Width Modulation) controller, a FanTach (fan speed) controller.
- The first operating system may be, for example, an RTOS system.
- The RTOS system can replace a traditional CPLD, an EC chip, or a customized chip to control the fan; that is, it takes over the control of the fan and controls the operating status of the fan at a fast enough speed.
- For example, a dual system consisting of an RTOS system and a Linux system is implemented based on a dual-core BMC, and fan control is implemented based on this multi-core dual system.
- Step B: booting a second operating system on a second processor core of the processor.
- the second operating system can be guided to start on the second processor core of the processor so that the second operating system runs on the second processor core.
- starting the second operating system on the second processor core means scheduling the second processor core to the second operating system, and the system file or image file of the operating system can be stored in a memory on the chip where the processor is located or outside the chip, for example, in an external RAM (Random Access Memory).
- Step C: after the second operating system is started, the second operating system takes over the hardware controller via the first bus to take over the control of the target device.
- the first operating system can always control the running state of the target device.
- the second operating system can also take over the control of the target device.
- the hardware controller can be taken over by the second operating system via the first bus.
- the way in which the second operating system takes over the control of the target device can be: after the second operating system is started, the second operating system sends a device takeover request to the first operating system, for example, sending an interrupt request through the second bus to request to take over the hardware controller of the target device.
- the first operating system can receive the device takeover request sent by the second operating system, transfer the control of the target device to the second operating system, and can also perform operations related to the handover of the control of the target device, for example, stop running the service (process) used to control the running state of the target device.
- the RTOS system transfers the control right of the fan to the Linux system, and the Linux system controls the fan.
- the above process can be executed after the system is powered on, that is, the multi-core dual system startup mode is adopted, and the RTOS system is started first, which is conducive to earlier intervention in fan control, and after the Linux system is fully started, the RTOS system transfers the control right of the fan to the Linux system for control.
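- As an illustration of steps A to C (the RTOS controls the fan first, Linux takes over after it has started), the sketch below uses hypothetical function names and a simple polling wait:

```c
#include <stdbool.h>

extern void rtos_start_fan_control_task(void);     /* step A                      */
extern void boot_linux_on_second_core(void);       /* step B                      */
extern bool linux_takeover_request_pending(void);  /* takeover request from Linux */
extern void rtos_stop_fan_control_task(void);      /* step C: hand over control   */

/* Hypothetical boot-time sequence: the first OS keeps the fan controllable
 * over the first bus while the second OS boots, then transfers control.    */
void startup_control(void)
{
    rtos_start_fan_control_task();       /* fan is controllable immediately */
    boot_linux_on_second_core();         /* Linux boots in parallel         */
    while (!linux_takeover_request_pending())
        ;                                /* keep controlling the fan        */
    rtos_stop_fan_control_task();        /* Linux now owns the fan          */
}
```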
- Before the first operating system running on the first processor core of the processor controls the hardware controller of the target device via the first bus, the method also includes: after the chip where the processor is located is powered on, waking up the first processor core through the processor; and running the boot loader of the first operating system through the first processor core to boot the first operating system on the first processor core.
- the entire system can be divided into two stages according to the working period, the initial startup stage and the real-time operation stage.
- the startup control method in this embodiment can be executed in the initial startup stage or the real-time operation stage.
- the initial startup stage starts when the system is powered on, that is, the chip where the processor is located is powered on.
- a core will be awakened to execute the boot action of the operating system, and the remaining cores are temporarily in a dormant state.
- the awakened core can be the first processor core.
- the system will first execute a preset core scheduling strategy (boot strategy), that is, a processor core of the processor executes the core scheduling strategy.
- The core scheduling strategy can be stored in RAM or NOR flash (non-volatile flash memory) on the SOC (System on Chip) chip.
- the scheduling strategy can be flexibly configured according to different design requirements. Its main functions include: specifying the initial processing resources (processor cores) required to run different operating systems, and determining the boot process of heterogeneous operating systems.
- Chip power-on can refer to power-on at the SOC chip level.
- The first operating system can be booted and run on the first processor core through the boot loader program; that is, the first processor core runs the boot loader program to boot the first operating system on the first processor core.
- The boot loader program may be located in a computer or another computer application; it refers to a program used to load an operating system, for example, an inherent program in the Boot ROM. The inherent program refers to the code that boots the operating system, and belongs to the boot loader.
- The Boot ROM is a small piece of mask ROM (Read-Only Memory) or write-protected flash memory embedded in the CPU (processor) chip.
- the boot loader is used to guide the operating system to start on the processor core corresponding to the operating system, which can improve the success rate of the operating system startup and prepare for the real-time operation phase.
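- The core scheduling (boot) strategy mentioned above might be captured in a small table such as the following sketch; the fields, the four-core split and the boot order are assumptions for illustration:

```c
/* Hypothetical boot-strategy table stored in on-chip RAM or NOR flash:
 * which cores each operating system initially gets and the boot order.
 * The example assumes a four-core processor, with core 0 for the RTOS.    */
typedef struct {
    const char *os_name;
    unsigned    core_mask;     /* cores initially assigned to this OS       */
    unsigned    boot_order;    /* lower value boots first                   */
} boot_strategy_t;

static const boot_strategy_t boot_strategy[] = {
    { "RTOS",  0x1u, 0 },      /* core 0 wakes first and runs the boot loader */
    { "Linux", 0xEu, 1 },      /* cores 1-3 boot Linux afterwards             */
};
```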
- a hardware controller of a target device is controlled via a first bus by a first operating system running on a first processor core of a processor, including: executing a first control task of the first operating system on the first processor core, wherein the first control task is used to control the hardware controller; reading sensor data of a designated sensor corresponding to the target device via the first processor core; and sending a device control instruction to the hardware controller via the first bus according to the sensor data of the designated sensor by the first control task, so that the hardware controller controls the operating state of the target device according to the device control instruction.
- the operating system controls the hardware controller of the target device by a control task (business) on the processor core on which the operating system runs.
- The control task here may refer to the control task corresponding to the target device.
- a first control task (first control process) of a first operating system may be executed on a first processor core, and the hardware controller is controlled by the first control task.
- the control of the hardware controller can be based on the sensor data of the sensor.
- the parameters that affect their operation may be different, and correspondingly, the sensor data required to be obtained may also be different.
- the target device it can be a device that runs after the chip is powered on, and the sensor corresponding to it is a designated sensor.
- There can be many types of designated sensors, which may include but are not limited to at least one of the following: a temperature sensor, a humidity sensor, a noise sensor, etc. Since the first control task runs on the first processor core, the sensor data of the designated sensor can be read by the first processor core.
- the sensor data of the designated sensor can be stored in the storage space within the designated sensor, and can be transmitted from the designated sensor to the designated storage space. In this embodiment, the reading position of the sensor data of the designated sensor is not limited.
- the sensor data of the designated sensor read may be sensor data within a time period, or may be all sensor data since the target device was started, or may be sensor data that meets other time constraints.
- the first control task may control the operating state of the target device according to the sensor data of the designated sensor. Controlling the operating state of the target device may be achieved in the following manner: sending a device control instruction to a hardware controller of the target device through the first control task, so that the hardware controller controls the operating state of the target device according to the device control instruction.
- the first control task can determine the expected operating state of the target device based on the sensor data of the designated sensor; when the current operating state of the target device is different from the expected operating state, the above-mentioned device control instruction can be generated, and the device control instruction can control the operating state of the target device to be adjusted to the expected operating state.
- the above-mentioned device control instruction can be sent to the hardware controller of the target device via the first bus.
- the first bus is similar to the above-mentioned embodiment and will not be described in detail here.
- the operating state of the target device is controlled, thereby improving the utilization of resources.
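- A minimal sketch of the first control task's read-decide-send loop follows; the sensor, state and bus helpers are assumed names, not interfaces defined by this application:

```c
extern int  read_designated_sensor(void);            /* e.g. temperature reading         */
extern int  current_device_state(void);              /* e.g. current fan speed           */
extern int  expected_device_state(int sensor_value); /* policy: expected operating state */
extern void send_ctrl_insn_on_first_bus(int target); /* device control instruction       */

/* First control task: keep the target device in the expected operating state. */
void first_control_task(void)
{
    for (;;) {
        int sensor   = read_designated_sensor();
        int expected = expected_device_state(sensor);
        if (current_device_state() != expected)
            send_ctrl_insn_on_first_bus(expected);   /* carries the target value */
    }
}
```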
- a device control instruction is sent to a hardware controller via a first bus according to sensor data of a designated sensor by a first control task, including: determining a target parameter value of a device operating parameter of a target device according to sensor data of a designated sensor by the first control task, wherein the device operating parameter is a parameter for controlling the operating state of the target device; and sending the device control instruction carrying the target parameter value to the hardware controller via the first bus by the first control task.
- the first control task may determine the expected operating state of the target device according to the sensor data of the designated sensor.
- the expected operating state may be represented by a parameter value of a device operating parameter, and the device operating parameter may be a parameter for controlling the operating state of the target device.
- the corresponding device operating parameters may be different.
- for some types of devices, the corresponding device operating parameter may be a rotation speed, and for other types of devices, the device operating parameter may be other operating parameters.
- the expected operating state may correspond to a target parameter value of the device operating parameter of the target device.
- the target parameter value can be carried in the above-mentioned device control instruction, that is, the device control instruction carrying the target parameter value is sent to the hardware controller through the first control task.
- the method of sending the device control instruction to the hardware controller can be similar to that in the aforementioned embodiment, and will not be repeated here.
- the accuracy of device control can be improved.
- a target parameter value of a device operating parameter of a target device is determined according to sensor data of a designated sensor through a first control task, including: when the target device is a fan, a target parameter value of a fan operating parameter of the fan is determined according to sensor data of the designated sensor through the first control task.
- the target device may be a fan, which may be a fan configured to dissipate heat for the server or other device in which it is located, that is, a cooling fan.
- in this case, the device operating parameter may be a fan operating parameter.
- there may be one or more fan operating parameters, including but not limited to at least one of the following: rotation speed, rotation period, period switching time, and other operating parameters; this is not limited in this embodiment.
- determining the target parameter value of the device operating parameter of the target device according to the sensor data of the designated sensor through the first control task may be: determining the target parameter value of the fan operating parameter of the fan according to the sensor data of the designated sensor through the first control task.
- the first control task sends the device control instruction carrying the target parameter value to the hardware controller of the fan via the first bus, thereby controlling the operating state of the fan.
- the operating state of the fan can be quickly controlled in scenarios such as system power-on and system restart, thereby improving the timeliness of fan control.
- a target parameter value of a fan operating parameter of the fan is determined according to sensor data of a designated sensor through a first control task, including: when the target device is a fan and the designated sensor is a temperature sensor, a target speed value of the fan speed is determined according to sensor data of the temperature sensor through a first control task, wherein the speed of the fan is positively correlated with the temperature detected by the temperature sensor.
- the designated sensor may be a temperature sensor, the number of which may be one or more, and the location of the temperature sensor may be configured as required, and different temperature sensors may be located at different locations.
- the sensor data of the temperature sensor is used to represent the temperature detected by the temperature sensor, and in this regard, the first control task may determine a target speed value of the fan's speed based on the sensor data of the temperature sensor, where the fan's speed is positively correlated with the temperature detected by the temperature sensor.
- the highest temperature detected by the multiple temperature sensors can be determined based on the sensor data of each temperature sensor, and the fan speed can be determined based on the highest temperature detected by the multiple temperature sensors. Compared with determining the fan speed based on the average temperature detected by the multiple temperature sensors, the safety of device operation can be ensured.
- the speed of each fan can also be determined based on the highest temperature or average temperature detected by the temperature sensor matching each fan.
- the first operating system (for example, an RTOS) can be used to replace processing units such as a CPLD, an EC chip, or a custom chip to control the fan speed, so that BMC fan control can be performed in real time.
- the first processor core may be, for example, CPU0, and can be awakened by hardware.
- the first processor core runs the boot loader (for example, the specified program in the Boot ROM), loads the first operating system to start, and then reads various temperature-related sensor data to perform fan control (for example, fan speed control), completely simulating the above processing units to complete the fan regulation function.
- the first operating system can calculate a PWM value according to the temperature sensor data and then adjust the fan speed accordingly. In this manner, the fan speed can be controlled by the first operating system during the startup of the second operating system.
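- As an illustrative, non-authoritative sketch of the fan regulation described above: the mapping below from temperature-sensor readings to a PWM duty value assumes a simple linear ramp, and the threshold constants and function names are hypothetical rather than part of the embodiment.

```c
#include <stdint.h>

/* Hypothetical limits; the real values depend on the fan and board design. */
#define TEMP_MIN_C 30   /* at or below this temperature, run at minimum duty  */
#define TEMP_MAX_C 80   /* at or above this temperature, run at full duty     */
#define PWM_MIN    20   /* minimum duty cycle (%) that keeps the fan spinning */
#define PWM_MAX    100  /* maximum duty cycle (%)                             */

/* Take the highest reading among all temperature sensors: regulating to the
 * worst case is the safety rationale given in the text above. */
static int max_temperature(const int *temps, int count)
{
    int max = temps[0];
    for (int i = 1; i < count; i++) {
        if (temps[i] > max)
            max = temps[i];
    }
    return max;
}

/* Map a temperature to a PWM duty cycle; the fan speed is positively
 * correlated with temperature via a linear ramp between the two limits. */
static int temp_to_pwm(int temp_c)
{
    if (temp_c <= TEMP_MIN_C)
        return PWM_MIN;
    if (temp_c >= TEMP_MAX_C)
        return PWM_MAX;
    return PWM_MIN + (temp_c - TEMP_MIN_C) * (PWM_MAX - PWM_MIN)
                     / (TEMP_MAX_C - TEMP_MIN_C);
}
```

- The first control task would then place the computed duty value in a device control instruction and send it to the fan's hardware controller via the first bus.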
- booting a second operating system on a second processor core of a processor includes: executing a secondary program loader through a first processor core to wake up the second processor core by the secondary program loader; and running a universal boot loader of the second operating system through the second processor core to boot the second operating system on the second processor core.
- the Secondary Program Loader (SPL) can be loaded into internal memory, for example the Static Random-Access Memory (SRAM) inside the SoC, and the SPL can be responsible for loading the Universal Boot Loader (U-Boot) into the SRAM.
- the secondary program loader can boot and load the second operating system, and can also boot and load the first operating system.
- the secondary program loader can be executed by the first processor core to wake up the second processor core; the universal boot loader of the second operating system can then be run by the second processor core, thereby booting the second operating system on the second processor core.
- the boot program of the second operating system is loaded by the secondary program loader, and the boot program of the second operating system may include the universal boot loader.
- the secondary program loader is the code executed in the first stage of the universal boot loader, which is responsible for moving the second stage code of the universal boot loader to the system memory (System RAM, also called off-chip memory) for execution.
- the universal boot loader is an open source software that complies with the GPL (General Public License) agreement and can be regarded as a bare metal integrated routine.
- the processor will first wake up the CPU0 core so that the RTOS system can run as quickly as possible; it then uses the program in the Boot ROM to boot the RTOS system; during the startup of the RTOS system, U-Boot continues to be loaded through the SPL, and U-Boot boots the second operating system on CPU1 until the Linux system starts normally.
- the Boot ROM is a program solidified in the internal ROM of the chip (for example, an SoC chip), and it is the boot code of U-Boot.
- the Boot ROM reads the startup information of the hardware (for example, the setting of a DIP switch), and reads the uboot-spl code (that is, the SPL) from the specified startup medium (for example, SD, MMC, etc.).
- SPL is mainly responsible for initializing the external RAM and environment, and loading the real U-Boot image into the external RAM for execution.
- the external RAM can be DDR (Double Data Rate Synchronous Dynamic Random-Access Memory) or other RAM.
- the second processor core is awakened by the secondary program loader, and then the second processor core runs the universal boot loader to boot the second operating system on the corresponding processor core, which can improve the convenience and success rate of operating system startup.
- the startup process of a multi-core dual system is explained below by taking an RTOS system and a Linux system as examples.
- the startup process of the multi-core dual system can include the following steps:
- Step 1: wake up CPU0 when the system is just powered on;
- Step 2: CPU0 runs the specified program in the Boot ROM and loads the RTOS system to start;
- Step 3: during the startup of the RTOS system, wake up CPU1 to boot U-Boot, and start the fan control program (FanCtrl_RTOS_APP) in the first operating system;
- Step 4: CPU1 boots U-Boot, which may include an SPL stage and a U-Boot stage, and enters the SPL stage by calling the SPL;
- Step 5: in the SPL stage, the SPL guides U-Boot to start;
- Step 6: in the U-Boot stage, the Linux kernel (on CPU1 to CPUN) is loaded, and the BMC service program and the fan control program (FanCtrl_Linux_APP) in the second operating system are started.
- the fan is controlled by first starting the RTOS system, and after the Linux system is started, the second operating system takes over the control of the fan. This ensures that the fan can be quickly controlled when the system is powered on, thereby improving the efficiency of fan control.
- after the second operating system takes over the hardware controller via the first bus, the method further includes: when the second operating system is to be restarted, waking up the first operating system by the second operating system via the second bus, where the first operating system takes over the hardware controller via the first bus to take over control of the target device; and controlling the second operating system to restart.
- the second operating system can first wake up the first operating system, and the first operating system takes over the hardware controller to take over the control of the target device. Waking up the first operating system can be performed via the second bus, and the first operating system taking over the hardware controller can be performed via the first bus.
- the reliability of device control can be improved by waking up the first operating system to take over the control of the target device.
- waking up the first operating system via the second bus by the second operating system includes: when the second operating system is to be restarted, initiating, by the second operating system, a system wake-up interrupt to the first operating system via the second bus, wherein the system wake-up interrupt is used to wake up the first operating system.
- Waking up the first operating system may be achieved through an inter-core interrupt. If the second operating system is to be restarted (for example, the system crashes, a reboot command is received), the second operating system may initiate a system wake-up interrupt to the first operating system to wake up the first operating system.
- the system wake-up interrupt may be an active wake-up interrupt.
- the second operating system may be controlled to restart the system, and after the second operating system restarts, the hardware controller may be taken over again. The process of taking over the hardware controller is similar to that in the aforementioned embodiment, and will not be described in detail here.
- the first operating system may, but is not limited to, enjoy a higher priority for the occupation of the processor core allocated to it, or which operating system currently uses the processor core allocated to the first operating system may, but is not limited to, be determined by negotiation between operating systems. If the target processor core allocated for use by the first operating system has been occupied by the second operating system, when the first operating system is awakened and operated, it can be detected whether the target processor core has been released, and if it has been released, the first operating system operates based on the target processor core. If it has not been released, a seventh interrupt request can be sent to the second operating system to request the second operating system to release the target processor core, and the second operating system responds to the seventh interrupt request and releases the target processor core.
- the first operating system operates based on the target processor core. For example: when the target processor core in the processor has been added to the scheduling resource pool of the second operating system, and when the first operating system is awakened and operated, it is detected whether the target processor core has been released, wherein the scheduling resource pool includes the processor core allocated to the second operating system in the processor; when it is detected that the second operating system has released the target processor core when the first operating system is awakened, the first operating system operates based on the target processor core.
- the target processor core has been added to the scheduling resource pool of the second operating system, which may, but is not limited to, indicate that the target processor core has been occupied by the second operating system. If the first operating system is awakened and runs in this case, the second operating system may actively release the target processor core, or continue to occupy the target processor core until the first operating system actively requests it to release the target processor core.
- the first operating system detects whether the target processor core is released. If it is detected that the target processor core is not released, the second operating system may be requested to release the target processor core through an interrupt request. For example, when it is detected that the target processor core is not released, a seventh interrupt request is sent to the second operating system, wherein the seventh interrupt request is used to request the second operating system to release the target processor core, and the second operating system is used to release the target processor core in response to the seventh interrupt request.
- the second operating system may, but is not limited to, directly release the target processor core upon receiving the seventh interrupt request, or may judge whether to release the target processor core and then decide whether to release it to the first operating system immediately or to continue running until the running result is obtained before releasing it to the first operating system.
- Figure 6 is a second schematic diagram of a processor resource control process according to an embodiment of the present application.
- within the time slice (T3, T4) in which Linux schedules CPU core 0, the RTOS is in a sleep state.
- the RTOS may be awakened by an interrupt event reported by the hardware.
- in this case, Linux saves the context of the process running on CPU core 0, and the RTOS occupies CPU core 0.
- after processing the interrupt event reported by the hardware, the RTOS enters the sleep state again at T4-1.
- the RTOS reports an interrupt releasing CPU core 0 to Linux, and Linux continues to schedule CPU core 0 according to the set cycle and restores the saved process context to resume running.
- business data can be exchanged between the first operating system and the second operating system.
- the interaction process can be implemented, but is not limited to being implemented, by using storage space and interrupt requests cooperatively for transmission.
- the operating systems transfer data through storage space and notify each other of instructions through interrupt requests. For example: obtaining business data generated by the first operating system during the operation of the processor; storing the business data in the storage space on the processor; sending the eighth interrupt request to the second operating system, wherein the eighth interrupt request is used to request the second operating system to read business data from the storage space, and the second operating system is used to respond to the eighth interrupt request to read business data from the storage space.
- the business data generated by the first operating system during the operation of the processor is stored in the storage space on the processor, and the second operating system is notified through the eighth interrupt request, and the second operating system reads the business data from the storage space, thereby realizing the interaction of business data.
- the business data exchanged between operating systems may be, but is not limited to, any data that needs to be transmitted between systems, such as business process data, business result data, etc.
- the storage space on the processor may be, but is not limited to, a dedicated storage location configured for the interaction process between operating systems, which may be called shared memory.
- the shared memory may be, but is not limited to, further allocated according to the operating system, that is, each operating system corresponds to a dedicated shared memory.
- the information of the shared memory corresponding to the first operating system can be carried in the eighth interrupt request for requesting the second operating system to read business data from the storage space.
- the second operating system responds to the eighth interrupt request to read business data from the shared memory indicated by it.
- each interrupt request can be transmitted between systems through, but not limited to, a software protocol, or can also be transmitted through a hardware module.
- a hardware module mailbox can be established between the first operating system and the second operating system, business data is read and written through the storage space, and the interrupt request is transmitted through the mailbox channel.
- a method for inter-core communication includes the following steps:
- Step a: the first operating system sends target data (which may be the above-mentioned business data) to a target virtual channel (which may be the above-mentioned storage space) in the processor memory.
- the first operating system and the second operating system can be real-time operating systems or non-real-time operating systems
- the first operating system and the second operating system can be single-core operating systems or multi-core operating systems
- the target data is the data to be sent
- the target virtual channel is a section of free storage space in the memory
- the first operating system sending the target data to the target virtual channel in the processor memory means that the CPU core of the first operating system writes the data to be sent into the target virtual channel.
- Step b: sending an interrupt notification message (which may be the eighth interrupt request) to the second operating system.
- the CPU core of the first operating system sends an interrupt notification message to the CPU core of the second operating system.
- the interrupt notification message may carry the address of the target virtual channel, which is used to notify the second operating system to obtain target data from the target virtual channel.
- the interrupt notification message may be software triggered or hardware triggered.
- Step c: the second operating system responds to the interrupt notification message and obtains target data from the target virtual channel in the memory.
- the CPU core of the second operating system responds to the interrupt notification message, parses the address of the target virtual channel from the interrupt notification message, then locates the target virtual channel in the memory according to the parsed address, obtains target data from the target virtual channel, and realizes data interaction between the first operating system and the second operating system.
- the first operating system sending data sends the target data to the target virtual channel in the processor memory, and sends an interrupt notification message to the second operating system.
- the second operating system receiving the data responds to the interrupt notification message to obtain the target data from the target virtual channel, thereby solving the problem of waste of resources and strong dependence on the operating system in the inter-core communication process, and achieving the effect of reducing the waste of resources and dependence on the operating system in the inter-core communication process.
- the memory includes a data storage area and a metadata storage area.
- the data storage area is divided into multiple storage units, each storage unit is configured to store business data, and the metadata storage area is configured to store the size and occupancy status of each storage unit in the data storage area.
- the target virtual channel is composed of one or more storage units of the data storage area
- the metadata storage area can be divided into storage slices with the same number as the storage units, each storage slice is configured to record the size and occupied status of a storage unit
- the size of the storage unit can be represented by the first address and the last address of the storage unit, or by the first address and the length of the storage unit
- the occupied status includes an occupied state and an unoccupied state, and can be represented by the value of the free flag.
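- A minimal sketch of how a metadata storage slice might describe one storage unit, following the description above (one slice per unit, size expressed as first address plus length, occupancy recorded by a free flag); the structure and field names are assumptions for illustration.

```c
#include <stdint.h>

/* One metadata slice per storage unit in the data storage area. The size of
 * a unit may be expressed by its first and last address or by its first
 * address plus length; the latter form is used here. */
typedef struct {
    uint32_t first_addr;  /* first address of the storage unit               */
    uint32_t length;      /* size of the storage unit in bytes               */
    uint32_t free_flag;   /* occupancy status: 0 = idle, non-zero = occupied */
} StorageSlice;
```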
- the first operating system sends target data to a target virtual channel in a processor memory, including: the first operating system reads records in a metadata storage area, determines at least one storage unit in the data storage area that is in an idle state and has a total space greater than or equal to the length of the target data based on the read records, and obtains the target virtual channel; sets the state of at least one storage unit corresponding to the target virtual channel in the metadata storage area to an occupied state, and stores the target data in the target virtual channel.
- the target virtual channel to be written needs to be free and have a storage space greater than or equal to the length of the target data. Since the memory is divided into a metadata storage area and a data storage area, the records in the metadata storage area can be read and the occupancy status of each storage unit checked to find storage units that are idle and can meet the data storage requirements.
- if the size of each storage unit is equal and the length of the target data is greater than the length of one storage unit, the number of storage units required is determined based on the length of the target data, and multiple storage units that are idle, continuous, and meet the data storage requirements are found to form the target virtual channel.
- alternatively, the size of each storage unit is equal, and the data storage area has pre-combined the storage units into multiple virtual channels of different sizes.
- Each virtual channel is composed of one or more storage units.
- the occupancy status of each virtual channel recorded in the metadata storage area can be read to find the virtual channel that is in an idle state and has a length greater than the length of the target data, that is, the target virtual channel.
- when the system software needs to apply for shared memory space, it determines whether the length of the data to be sent is greater than the maximum length of data a virtual channel can store. If it is, the system software can send the data in multiple parts, ensuring that the length of each part is less than or equal to the maximum length a virtual channel can store, thereby ensuring smooth communication.
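- A minimal sketch of the chunked sending just described, under the assumption that some primitive (passed in as send_one) writes a single chunk into a free virtual channel and raises the interrupt; MAX_CHANNEL_SIZE is a hypothetical value.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_CHANNEL_SIZE 4096  /* assumed maximum data length of a virtual channel */

/* Send data of arbitrary length by splitting it into chunks no larger than
 * the maximum virtual channel size, so that every transfer fits one channel. */
int chunked_send(const uint8_t *data, size_t len,
                 int (*send_one)(const uint8_t *chunk, size_t chunk_len))
{
    while (len > 0) {
        size_t chunk = len > MAX_CHANNEL_SIZE ? MAX_CHANNEL_SIZE : len;
        if (send_one(data, chunk) != 0)
            return -1;         /* propagate the failure to the caller */
        data += chunk;
        len  -= chunk;
    }
    return 0;
}
```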
- the second operating system responds to an interrupt notification message and obtains target data from a target virtual channel in the memory, including: the second operating system reads a record in a metadata storage area and determines the target virtual channel based on the read record; obtains the target data from at least one storage unit corresponding to the target virtual channel, and sets the state of the at least one storage unit to an idle state.
- the state of the storage unit corresponding to the target virtual channel is set to an idle state.
- the first operating system sends target data to a target virtual channel in a processor memory, including: a driver layer of the first operating system receives the target data, determines a virtual channel in an idle state in the memory, and obtains the target virtual channel; sets the state of the target virtual channel to an occupied state, and stores the target data in the target virtual channel.
- both the real-time operating system and the non-real-time operating system have a driver layer.
- after the driver layer receives the target data to be sent, it calls an interface to search for the target virtual channel in the memory.
- the state of the target virtual channel is set to an occupied state, and then the target data is written to the target virtual channel.
- when the first operating system includes an application layer, the application layer is provided with a human-computer interaction interface. Before the driver layer of the first operating system determines a virtual channel in an idle state in the memory, the method further includes: the application layer of the first operating system receives, through the human-computer interaction interface, the data to be sent that is input by the user, encapsulates the data to be sent in a preset format to obtain the target data, and calls a data write function to pass the target data to the driver layer through a preset communication interface, wherein the preset communication interface is set on the driver layer.
- the application layer fills the data to be sent according to the preset format to obtain the target data, and then generates a device file ipidev in the system's /dev path.
- when the application layer needs to read and write data from the driver layer, it can first use the system's built-in open function to open the device file /dev/ipidev, and then use the system's built-in write function to send the target data from the application layer to the driver layer.
- the driver layer then puts the data in the target virtual channel in the shared memory, and then triggers an interrupt to notify the second operating system to fetch the data.
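- A minimal user-space sketch of this write path, assuming the driver layer has created the device file /dev/ipidev as described; error handling is reduced to the essentials and the function name is hypothetical.

```c
#include <fcntl.h>
#include <unistd.h>
#include <stddef.h>

/* Pass already-encapsulated target data from the application layer to the
 * driver layer through the device file created by the driver. */
int ipi_send(const void *target_data, size_t len)
{
    int fd = open("/dev/ipidev", O_WRONLY);
    if (fd < 0)
        return -1;                        /* device file not available */

    ssize_t written = write(fd, target_data, len);
    close(fd);

    return (written == (ssize_t)len) ? 0 : -1;
}
```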
- the second operating system responds to the interrupt notification message and obtains target data from the target virtual channel in the memory, including: the second operating system triggers an interrupt processing function based on the interrupt notification message, determines the target virtual channel from the memory through the interrupt processing function, and obtains the target data from the target virtual channel.
- determining the target virtual channel from the memory through the interrupt processing function and obtaining target data from the target virtual channel includes: calling the target task through the interrupt processing function, and the target task determining the target virtual channel from the memory and obtaining target data from the target virtual channel.
- the interrupt processing function sends a task notification to wake up the target task responsible for data extraction.
- the target task first searches for the target virtual channel in the shared memory by calling the interface, and then reads the target data from the target virtual channel and performs data parsing.
- a function identifier is stored in the memory, and the function identifier indicates a target function. Determining the target virtual channel from the memory through the interrupt processing function and obtaining the target data from the target virtual channel includes: determining the function identifier and the target virtual channel from the memory through the interrupt processing function, and sending the address information of the target virtual channel to a target application matching the function identifier, wherein the target application is a target application in the application layer; the target application calls a data reading function to pass the address information to the driver layer through a preset communication interface, and the driver layer obtains the target data from the target virtual channel and passes the target data to the target application.
- the preset communication interface is set at the driver layer, and the target application processes the target data according to a processing function matched with the function identifier to execute the target function.
- the application layer calls the corresponding interrupt processing function to search the target virtual channel from the memory, obtains the address information of the target virtual channel, and then generates a device file ipidev in the system's /dev path.
- when the application layer needs to read data from the driver layer, it can first use the system's built-in open function to open the device file /dev/ipidev, and then use the system's built-in read function to read the target data in the target virtual channel. That is, the driver layer finds the corresponding target data in the shared memory according to the address information of the target virtual channel, and returns the target data and its length to the application layer.
- the state of the target virtual channel is set to idle.
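- Correspondingly, a minimal sketch of the read path on the receiving side, again assuming the /dev/ipidev device file; the driver locates the target data via the address information of the target virtual channel and returns it together with its length.

```c
#include <fcntl.h>
#include <unistd.h>
#include <stddef.h>
#include <sys/types.h>

/* Read the target data that the driver layer has located in the target
 * virtual channel of the shared memory; returns the number of bytes read,
 * or -1 on error. */
ssize_t ipi_receive(void *buf, size_t buf_len)
{
    int fd = open("/dev/ipidev", O_RDONLY);
    if (fd < 0)
        return -1;

    ssize_t n = read(fd, buf, buf_len);
    close(fd);
    return n;
}
```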
- Function identifiers are stored in the memory to indicate the target functions implemented by the application through the target data.
- the function identifier can be a NetFn or a Cmd.
- the driver layer can find the PID of the application based on the received NetFn and Cmd, and send the data to the corresponding application based on the PID.
- an array will be initialized.
- the array has three columns: the first column is NetFn, the second column is Cmd, and the third column is the processing function corresponding to that NetFn and Cmd, recorded as xxCmdHandler.
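- A minimal sketch of such a three-column dispatch array; the handler signature, the stub handlers, and the table contents are illustrative assumptions standing in for the xxCmdHandler entries mentioned above.

```c
#include <stdint.h>
#include <stddef.h>

typedef void (*CmdHandler)(const uint8_t *data, size_t len);

/* Hypothetical handler stubs; each xxCmdHandler would process one NetFn/Cmd pair. */
static void SensorCmdHandler(const uint8_t *data, size_t len) { (void)data; (void)len; }
static void FanCmdHandler(const uint8_t *data, size_t len)    { (void)data; (void)len; }

/* Three columns: NetFn, Cmd, and the processing function bound to the pair. */
typedef struct {
    uint8_t    net_fn;
    uint8_t    cmd;
    CmdHandler handler;
} CmdEntry;

static const CmdEntry cmd_table[] = {
    { 0x01, 0x01, SensorCmdHandler },  /* example binding */
    { 0x02, 0x10, FanCmdHandler    },  /* example binding */
};

/* Look up and invoke the handler matching the received NetFn/Cmd pair. */
void dispatch(uint8_t net_fn, uint8_t cmd, const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < sizeof(cmd_table) / sizeof(cmd_table[0]); i++) {
        if (cmd_table[i].net_fn == net_fn && cmd_table[i].cmd == cmd) {
            cmd_table[i].handler(data, len);
            return;
        }
    }
    /* unknown NetFn/Cmd pair: ignored in this sketch */
}
```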
- a data storage area includes multiple memory channels, each memory channel is composed of one or more storage units, a metadata storage area stores multiple records, each record is used to record metadata of a memory channel, and the metadata of each memory channel at least includes a channel ID of the memory channel, a size of the memory channel, and an occupied state of the memory channel.
- a first operating system reads the records in the metadata storage area, and determines at least one storage unit in the data storage area that is in an idle state and has a total space greater than or equal to the length of target data based on the read records.
- Obtaining the target virtual channel includes: traversing the records stored in the metadata storage area, and determining whether there is a first target record indicating that the memory channel is in an idle state and the size of the memory channel is greater than or equal to the length of the target data; if the first target record exists, determining the memory channel indicated by the channel ID recorded in the first target record as the target virtual channel.
- the data storage area can be divided into n virtual memory channels, and the size of each memory channel can be different, that is, the sizes of the n virtual channels are 2^0*m, 2^1*m, 2^2*m, 2^3*m, ..., 2^(n-1)*m, where m is the size of a storage unit, and the following structure is set as metadata to manage the memory channels:
- uint32_t Flag represents the status of the memory channel (for example, 0xA5A5A5A5 indicates that the channel is not empty, otherwise it is empty); uint16_t ChannelId indicates the channel ID; uint8_t SrcId indicates the ID of the source CPU, where the source CPU is the CPU that writes data to the memory channel; uint8_t NetFn and uint8_t Cmd are function parameters; uint32_t Len is the length of the data stored in the memory channel; uint32_t ChannelSize indicates the size of the memory channel; uint8_t *pData points to the first address of the memory channel; uint8_t CheckSum is the checksum. When the first operating system needs to send data, it calculates the checksum value of the data to be sent using a checksum algorithm and sends the checksum value to the second operating system.
- when the second operating system receives the data and the checksum value, it calculates a checksum value from the received data using the same checksum algorithm and compares the calculated checksum value with the received checksum value. If they are consistent, the received data is valid; if they are inconsistent, the received data is invalid.
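- A minimal C rendering of the channel metadata just listed, plus the checksum comparison; the structure name IpiHeader and the TargetId field are taken from the later description of the receive side, and the simple byte-sum checksum is an assumption, since the text does not fix a particular checksum algorithm.

```c
#include <stdint.h>
#include <stddef.h>

#define CHANNEL_BUSY 0xA5A5A5A5u  /* Flag value meaning "channel not empty" */

typedef struct {
    uint32_t Flag;        /* CHANNEL_BUSY when occupied, otherwise empty     */
    uint16_t ChannelId;   /* channel ID                                      */
    uint8_t  SrcId;       /* ID of the source CPU that wrote the data        */
    uint8_t  TargetId;    /* ID of the destination CPU (from the later text) */
    uint8_t  NetFn;       /* function parameter                              */
    uint8_t  Cmd;         /* function parameter                              */
    uint32_t Len;         /* length of the data stored in the channel        */
    uint32_t ChannelSize; /* size of the memory channel                      */
    uint8_t *pData;       /* first address of the memory channel             */
    uint8_t  CheckSum;    /* checksum of the data carried by the channel     */
} IpiHeader;

/* Both sides must use the same algorithm; a simple byte sum is assumed here. */
static uint8_t checksum(const uint8_t *data, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += data[i];
    return sum;
}

/* Receiver-side validation: recompute the checksum and compare it with the
 * value carried in the channel metadata. */
static int data_is_valid(const IpiHeader *hdr)
{
    return checksum(hdr->pData, hdr->Len) == hdr->CheckSum;
}
```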
- Each virtual memory channel corresponds to a structure record, and this structure record will be stored in the beginning position of the shared memory in ascending order according to the channel ID. After the system is powered on, these structure records will be initialized.
- the Flag is initialized to 0 to indicate that the channel is empty.
- the ChannelId is initialized to 0, 1, 2, ..., n-1 in sequence.
- the ChannelSize is initialized to the size of the corresponding virtual memory channel
- the pData is initialized to point to the first address of the corresponding virtual memory channel.
- the first operating system uses the interface GetEmptyChannel to search for a virtual channel that meets the following two conditions in all memory channels according to the size of the target data to be sent: the idle flag Flag in the channel structure IpiHeader is not equal to 0xA5A5A5A5 (that is, the channel is in an idle state), and the channel size ChannelSize in the channel structure IpiHeader is greater than or equal to the size of the target data (that is, the memory size can meet the storage requirements of the target data).
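- A minimal sketch of the GetEmptyChannel search described above, reusing the IpiHeader structure and CHANNEL_BUSY flag from the earlier sketch; the flat array layout of the channel headers is an assumption.

```c
/* Search all memory channels for one that is idle (Flag != CHANNEL_BUSY) and
 * large enough for the target data; returns its index, or -1 if none fits. */
static int GetEmptyChannel(const IpiHeader *channels, int channel_count,
                           uint32_t data_len)
{
    for (int i = 0; i < channel_count; i++) {
        if (channels[i].Flag != CHANNEL_BUSY &&     /* channel is idle       */
            channels[i].ChannelSize >= data_len) {  /* channel is big enough */
            return i;
        }
    }
    return -1;   /* no suitable channel found */
}
```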
- when a memory channel is occupied, the metadata of the memory channel also includes the ID of the source CPU core of the target data and the ID of the destination CPU core of the target data. The second operating system reading the records in the metadata storage area and determining the target virtual channel according to the read records includes: traversing the records stored in the metadata storage area and determining whether there is a second target record, wherein the second target record indicates that the memory channel is in an occupied state, the ID of the destination CPU core is the ID of the CPU core of the second operating system, and the ID of the source CPU core is not the ID of the CPU core of the second operating system; when the second target record exists, the memory channel indicated by the channel ID recorded in the second target record is determined as the target virtual channel.
- the target virtual channel is the virtual channel among all channels that meets the following three conditions: first, the idle flag Flag in the channel structure IpiHeader is equal to 0xA5A5A5A5 (that is, the channel is occupied); second, the TargetId in the channel structure is equal to the ID of the current CPU (that is, the destination CPU of the target data is the CPU of the second operating system); third, the TargetId in the channel structure is not equal to the SrcId (that is, the target data was not sent by the CPU of the second operating system).
- the idle flag Flag is set to a multi-bit special character, for example, 0xA5A5A5A5. Since the probability of multiple bits mutating to special characters at the same time is much smaller than the probability of a single bit mutation, it can prevent the storage medium bit mutation from affecting the Flag value, thereby improving the security of communication.
- a metadata storage area stores a mapping table, wherein the mapping table contains a plurality of records, each record being used to record the occupied status of a storage unit, a first operating system reads the records in the metadata storage area, and determines at least one storage unit in the data storage area that is in an idle state and whose total space is greater than or equal to the length of the target data based on the read records, and obtaining the target virtual channel comprises: determining a preset number of storage units to be occupied by the target data; scanning each record in sequence from an initial position of the mapping table; when a preset number of consecutive target records are scanned, determining the consecutive storage units indicated by the preset number of target records, wherein the target records indicate that the storage unit is in an idle state; and determining the consecutive storage units as the target virtual channel.
- the first operating system traverses the records from an index position in the mapping table, where the index position may be the starting position of the mapping table.
- each record of the mapping table is queried in turn to determine whether there are continuous records of free memory pages greater than or equal to numb. If there are records that meet the above conditions, the continuous storage unit in the processor is determined through the correspondence between the record and the memory page, and the continuous storage unit is determined as the target virtual channel to write data to the target virtual channel.
- the interrupt notification message includes the first address of the continuous storage units and the preset number.
- the second operating system reading the record in the metadata storage area and determining the target virtual channel according to the read record includes: scanning each record in sequence from the initial position of the mapping table; when a record carrying the first address of the continuous storage units is scanned, determining the storage unit indicated by the scanned address and the subsequent preset number minus one continuous storage units as the target virtual channel.
- the continuous storage units here refer to consecutive storage units whose number is equal to numb.
- Each record in the mapping table also records the first address of the corresponding storage unit.
- when the second operating system scans, in the mapping table, a record of the first address of the continuous storage units whose number is equal to numb, it indicates that the first address of the target virtual channel has been scanned.
- the storage unit indicated by the first address and numb-1 continuous storage units after the storage unit constitute the target virtual channel.
- the second operating system obtains data from the target virtual channel to complete data interaction with the first operating system.
- the scanned continuous target records are recorded by a counter.
- when a target record is scanned, the counter is increased by one; when a non-target record is scanned, the counter is cleared.
- the relationship between the counter value and the required number of storage units is used to determine whether there are a preset number of continuous target records, that is, whether there are a preset number of continuous storage units.
- the count of the counter is recorded as cntr; if a scanned storage unit is empty, cntr is incremented by 1.
- if a scanned storage unit is not empty, the accumulated count cntr of continuous idle storage units is cleared, and the search for continuous idle storage units continues from the address after that storage unit, until cntr is equal to numb, which means that continuous idle storage units meeting the memory requirements have been found. If, after scanning the entire mapping table, cntr never reaches numb, it indicates that this dynamic memory application has failed and the preset number of continuous storage units does not exist.
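- A minimal sketch of the cntr-based scan just described, assuming one byte per mapping-table record with 0 meaning "free"; it returns the index of the first unit of a run of numb consecutive free units, or -1 if the application fails.

```c
#include <stddef.h>
#include <stdint.h>

/* Scan the mapping table from its initial position for `numb` consecutive
 * free storage units; map[i] records the occupancy of unit i (0 = free). */
int find_consecutive_free(const uint8_t *map, size_t total_units, size_t numb)
{
    size_t cntr = 0;                          /* consecutive free units seen */
    for (size_t i = 0; i < total_units; i++) {
        if (map[i] == 0) {
            cntr++;
            if (cntr == numb)
                return (int)(i - numb + 1);   /* first unit of the free run */
        } else {
            cntr = 0;                         /* run broken: restart count  */
        }
    }
    return -1;   /* no run of numb free units exists: the application fails */
}
```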
- the method before the first operating system reads records in the metadata storage area and determines at least one storage unit in the data storage area that is in an idle state and has a total space greater than or equal to the length of the target data based on the read records, and obtains the target virtual channel, the method also includes: the first operating system sends a memory request instruction and performs a locking operation on the processor's memory, wherein the memory request instruction is used to apply for the use of the processor's memory; when the memory is locked successfully, the record in the mapping table is read.
- the memory request instruction is an instruction issued by the operating system running on the processor to request the use of the processor's memory.
- when the operating system sends the memory request instruction, it first performs a locking operation on the processor's memory; only when the locking is successful can the memory be applied for.
- the locking operation is an exclusive operation of memory application: after the current operating system locks successfully, as long as the lock is not released, other operating systems do not have permission to apply for the use of the processor's memory.
- performing a locking operation on the memory of a processor includes: determining whether the memory is currently in a locked state, wherein the locked state indicates that the memory is in a state where it is requested for use; if the memory is not currently in a locked state, performing a locking operation on the memory; if the memory is currently in a locked state, determining that locking of the memory has failed, and applying to lock the memory of the processor again after a preset time, until the memory is successfully locked, or until the number of lock applications is greater than a preset number.
- the processor Before the processor runs, it is necessary to initialize the metadata storage area and the data storage area in the processor.
- the records stored in the mapping table in the metadata storage area are initialized, and the memory management information is initialized.
- the member variable MemReady of the structure MallocMemInfo_T indicates whether the shared memory has been initialized; when MemReady is 0xA5A5A5A5, the initialization operation has been completed and the memory can be dynamically applied for and released normally. The member variable MemLock of the structure MallocMemInfo_T indicates whether the memory is locked.
- if the variable MemLock reads as 0, it means that no system or task is applying for memory at this time, that is, the memory is not currently locked. If the variable MemLock reads as 0xA5A5A5A5, it means that another system or task is applying for memory and it is necessary to wait until the current application is completed before applying again; the current locking application fails.
- if locking the memory fails, the memory is locked again after a preset time.
- the memory lock is applied for repeatedly until locking succeeds.
- the preset time length may be 100 microseconds.
- if the lock application fails and the number of repeated applications exceeds a preset number, indicating that the memory in the processor cannot be allocated during the current period, the application operation is stopped.
- the preset number of times may be 3 times, and if the number of lock applications is greater than 3 times, a message indicating that the current memory is unavailable may be returned to the operating system that sent the application.
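- A minimal sketch of the lock-with-retry behaviour, using the MemLock convention described above (0 = unlocked, 0xA5A5A5A5 = locked); the atomic compare-and-swap and the busy-wait delay are assumptions, since the text does not specify how the check and the set of MemLock are made indivisible.

```c
#include <stdint.h>
#include <stdbool.h>

#define MEM_UNLOCKED   0u
#define MEM_LOCKED     0xA5A5A5A5u
#define LOCK_RETRY_MAX 3     /* preset number of lock applications           */
#define LOCK_RETRY_US  100   /* preset waiting time between applications, us */

/* Hypothetical delay; a real port would use a platform timer. */
static void delay_us(uint32_t us)
{
    for (volatile uint32_t i = 0; i < us * 100u; i++)
        ;   /* crude busy-wait placeholder */
}

/* Try to lock the shared memory: succeed only if MemLock is currently 0.
 * A GCC/Clang atomic builtin is assumed so that the read-check-write cannot
 * interleave with the other operating system. */
static bool try_lock(volatile uint32_t *mem_lock)
{
    return __sync_bool_compare_and_swap(mem_lock, MEM_UNLOCKED, MEM_LOCKED);
}

/* Apply for the lock repeatedly, waiting the preset time between attempts,
 * until locking succeeds or the number of applications exceeds the preset
 * number, in which case the memory is reported as currently unavailable. */
bool lock_shared_memory(volatile uint32_t *mem_lock)
{
    for (int attempt = 0; attempt < LOCK_RETRY_MAX; attempt++) {
        if (try_lock(mem_lock))
            return true;     /* locking succeeded */
        delay_us(LOCK_RETRY_US);
    }
    return false;            /* memory currently unavailable to the caller */
}
```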
- the first operating system stores the target data to be transmitted in the corresponding target virtual channel.
- the occupancy status of the processor's memory space is updated according to the data writing status of the first operating system, that is, the target continuous memory space is changed from an unoccupied state to an occupied state.
- the lock on the memory is released.
- the method further includes: releasing the lock on the memory when a preset number of consecutive target records are not scanned.
- an interrupt notification message is sent to the second operating system by way of a software interrupt.
- sending an interrupt notification message to the second operating system via a software interrupt includes: writing the interrupt number and the ID of the CPU core of the second operating system into a preset register of the processor, and generating an interrupt notification message based on the interrupt number and the ID of the CPU core of the second operating system.
- a soft interrupt is an interrupt generated by software.
- software can send an interrupt to the CPU core on which it runs, or to other CPU cores.
- the preset register can be the GICD_SGIR register.
- the software can write the SGI (Software Generated Interrupts) interrupt number and the destination CPU ID to the GICD_SGIR register to generate a software interrupt.
- the SGI interrupt number is a soft interrupt number reserved for inter-core communication.
- interrupts 8 to 15 are used to represent the inter-core interrupt vector table.
- the first operating system is an RTOS operating system
- the second operating system is a Linux operating system
- a feasible allocation scheme of the vector table is shown in Table 1.
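- A minimal bare-metal sketch of raising an SGI by writing the interrupt number and the destination CPU into GICD_SGIR; the distributor base address and the bit layout shown follow the GICv2 convention and are assumptions that must be checked against the actual SoC and GIC version.

```c
#include <stdint.h>

/* Assumed GICv2 distributor base; the real address is SoC-specific. */
#define GIC_DIST_BASE 0x08000000u
#define GICD_SGIR     (*(volatile uint32_t *)(GIC_DIST_BASE + 0xF00u))

/* GICv2 GICD_SGIR layout (assumed):
 *   bits [3:0]   SGI interrupt ID (0..15, e.g. one of the inter-core vectors)
 *   bits [23:16] CPU target list, one bit per destination core
 *   bits [25:24] target list filter (0 = use the CPU target list) */
static void send_sgi(uint32_t sgi_id, uint32_t target_cpu)
{
    uint32_t value = (sgi_id & 0xFu) | (1u << (16u + target_cpu));
    GICD_SGIR = value;   /* generates the inter-core software interrupt */
}
```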
- an interrupt notification message is sent to the second operating system by way of a hardware interrupt.
- a hardware interrupt refers to an interrupt generated by a hardware device, which may be a private peripheral interrupt or a shared peripheral interrupt.
- a hard interrupt is an interrupt introduced by hardware outside the CPU and is random.
- a soft interrupt is an interrupt introduced by software running in the CPU executing an interrupt instruction and is pre-set. This embodiment does not limit the method of generating an interrupt notification message.
- a shared memory method which includes the following steps:
- Step 101: receiving a memory application instruction and performing a locking operation on the memory of the processor, wherein the memory application instruction is used to apply for the use of the memory of the processor.
- the memory request instruction is an instruction issued by the operating system running on the processor to apply for the use of the processor's memory.
- when the operating system sends a memory request, the processor's memory is first locked; only when the lock is successful can the memory be applied for.
- the locking operation refers to an exclusive operation of memory application: after the current operating system locks successfully, as long as the lock is not released, other operating systems do not have permission to apply for the use of the processor's memory.
- the method before performing a locking operation on the memory of the processor, the method also includes: determining whether the memory is currently in a locked state, wherein the locked state represents that the memory is in a state where it is requested for use; and if the memory is not currently in a locked state, performing a locking operation on the memory.
- the processor's memory can only be locked by one system or task in the same time period. Therefore, the current operating system can only perform a locking operation on the memory when it detects that the current memory is not in a locked state.
- whether the memory is in a locked state is determined by determining whether the preset variable stored in the memory is a preset value. If the preset variable is not a preset parameter value, it indicates that the memory is not in a locked state and no other system or task is applying for memory space, and the locking is successful. Otherwise, if the preset variable is a preset parameter, it indicates that the memory is in a locked state at the current moment, and there are other systems or tasks other than the operating system applying for memory space, and the locking fails.
- the shared memory method after determining whether the memory is currently in a locked state, it also includes: if the memory is currently in a locked state, determining that locking the memory has failed; if locking the memory has failed, applying to lock the processor's memory again after a preset time period until the memory is successfully locked, or until the number of lock applications is greater than a preset number.
- the memory lock is applied again after waiting for a preset time until the lock is successful.
- the preset time may be 100 microseconds.
- if the lock application fails and the number of repeated applications exceeds a preset number, indicating that the memory in the processor cannot be allocated during the current period, the application operation is stopped.
- the preset number of times may be 3 times, and if the number of lock applications is greater than 3 times, a message indicating that the current memory is unavailable may be returned to the operating system that sent the application.
- Step 102: when the memory is locked successfully, read the occupied state of the memory, and determine whether there is free target memory space in the memory according to the occupied state of the memory, wherein the size of the target memory space is greater than or equal to the size of the memory requested by the memory request instruction.
- the operating system applies for the memory in the processor and, optionally, scans the information used to record the memory occupation status to determine whether there is a target memory space, that is, to determine whether there is unoccupied, continuous memory space in the processor that can meet the memory usage requirements.
- Meeting the memory usage requirements means that the size of the memory space is greater than or equal to the memory size applied for by the operating system.
- discontinuous memory space can also be used.
- a pointer can be added after a non-minimum memory block to point to the next minimum memory block obtained by the application.
- data reading and writing across data blocks can be realized according to the storage address and the pointer. This embodiment does not limit the form of the target memory space.
- Step 103: when the target memory space exists in the memory, the address information of the target memory space is fed back to the sending end of the memory application instruction, the occupied state of the memory is updated, and the lock on the memory is released.
- the sending end refers to the operating system that sends the memory request instruction. It should be noted that since the operating system sends and receives data by using shared memory when communicating between cores, and uses the address returned by the requested memory to access data during the data sending and receiving process, it is necessary to determine the address information of the requested memory space.
- address information of the target continuous space is sent to the operating system, and the operating system stores the data to be transmitted in the corresponding memory space according to the address information.
- the occupancy status of the processor's memory space is updated according to the data writing status of the operating system, that is, the target memory space is changed from an unoccupied state to an occupied state, and the locking operation before the dynamic memory application is released so that other operating systems can apply to use the processor's memory space.
- the memory includes a metadata storage area and a data storage area
- the data storage area is configured to store business data
- the metadata storage area stores a mapping table
- the mapping table is configured to record the occupied state of the data storage area, reading the occupied state of the memory, and judging whether there is free target memory space in the memory according to the occupied state of the memory includes: reading records in the mapping table from the metadata storage area, and judging whether there is target memory space in the data storage area according to the records in the mapping table.
- the occupied state of the memory is queried by querying the records in the mapping table.
- the metadata storage area stored in the processor is obtained, and the mapping table in the metadata storage area is identified.
- the occupied state of the data storage area is read to determine whether there is continuous, idle memory space in the data storage area that meets the memory usage requirements.
- a data storage area is composed of multiple memory pages
- a mapping table contains multiple records
- each record is used to record the occupied status of a memory page
- the records in the mapping table are read from the metadata storage area, and it is judged whether there is a target memory space in the data storage area according to the records in the mapping table, including: determining a preset number of memory pages requested by a memory request instruction; scanning each record in sequence from an initial position of the mapping table; and determining that there is a target memory space in the memory when a continuous preset number of target records are scanned, wherein the target record indicates that the memory page is in an idle state.
- the data storage area is divided into multiple allocation units according to the same memory size, and each allocation unit is recorded as a memory page.
- assuming the memory space of the data storage area is A bytes and each divided allocation unit is B bytes, the data storage area contains a total of A/B memory pages.
- the records in the mapping table are also memory page records.
- Each memory page record is used to record the occupied status of a memory page.
- the number of memory page records in the mapping table is the same as the number of memory pages in the data storage area.
- the data storage area is a dynamically allocated memory block area
- the metadata storage area includes a dynamically allocated memory mapping table area, wherein the mapping table area is divided into the same number of records according to the number of memory pages in the data storage area, and the record is recorded as a memory page record, and all memory page records are combined into a mapping table, and all memory page records in the mapping table have a one-to-one correspondence with all memory pages in the data storage area, and each memory page record indicates the allocation status of the corresponding memory page, that is, whether the memory page is occupied.
- since the business data used for collaboration between the operating systems needs to occupy consecutive memory pages in the processor, it is necessary to first determine the preset number of memory pages from the memory request instruction. Since the memory space of each memory page is the same, the preset number of consecutive memory pages required can be calculated from the required memory space size, and this number is recorded as numb.
- the memory page records are traversed from the index position in the mapping table, and the index position may be the starting position of the mapping table.
- each memory page record of the mapping table is queried in turn to determine whether there are continuous memory page records with a number of free memory pages greater than or equal to numb. If there are memory page records that meet the above conditions, it is determined that there is a target memory space in the processor through the correspondence between the memory page records and the memory pages.
- the method after scanning each record in sequence from the initial position of the mapping table, the method also includes: after scanning all records in the mapping table and there are no continuous preset number of target records, determining that there is no target memory space in the memory.
- starting from the starting position of the mapping table, the memory page records of the mapping table are queried to determine whether there is continuous space with a number of free memory pages greater than or equal to numb. If no continuous, preset number of free memory pages is found after scanning the entire mapping table, the target memory space does not exist.
- the number of scanned target records is recorded by a counter.
- when a target record is scanned, the counter is increased by one; when a non-target record is scanned, the counter is cleared, wherein a non-target record indicates that the memory page is occupied.
- The count of the counter is recorded as cntr. If a scanned memory page is free, cntr is incremented by 1.
- If a scanned memory page is occupied, cntr is cleared to zero, and the search for continuous free memory pages continues from the address after that memory page. When cntr equals numb, continuous free memory pages that meet the memory requirement have been found; if cntr remains less than numb after the entire mapping table has been scanned, this dynamic memory application has failed and the target memory space does not exist.
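- As an illustration of the counter-based scan described above, the following sketch (in C, assuming a one-byte-per-page mapping table; the helper name find_free_pages is chosen only for this example) walks the records once, increments cntr on each free page, clears it on an occupied page, and reports the start of the first run of numb consecutive free pages:

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_FREE 0u   /* assumed encoding: 0 = free, non-zero = occupied */

/* Returns the index of the first page of a run of `numb` consecutive free
 * pages, or -1 if the whole mapping table is scanned without finding one. */
static int find_free_pages(const uint8_t *map, size_t total_pages, size_t numb)
{
    size_t cntr = 0;                         /* count of consecutive free pages */

    for (size_t i = 0; i < total_pages; i++) {
        if (map[i] == PAGE_FREE) {
            cntr++;                          /* target record: increase counter */
            if (cntr == numb)
                return (int)(i - numb + 1);  /* first page of the found run */
        } else {
            cntr = 0;                        /* non-target record: clear counter */
        }
    }
    return -1;                               /* no target memory space */
}
```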
- the address information of the target memory space is fed back to the sender of the memory application instruction, including: determining the last scanned target record among a continuous preset number of target records, and feeding back the first address of the memory page indicated by the last scanned target record to the sender.
- the scanning method can be selected to scan from the first position of the mapping table or from the last position of the mapping table.
- When the scanning method is to scan from the last position of the mapping table and the value cntr of the counter is greater than or equal to the preset number numb, the first address of the last memory page scanned is recorded, the status of these memory pages is set to non-free in the memory page records, and that first address is used as the first address of the entire continuous memory region for this memory application instruction.
- the address is fed back to an operating system that issues a memory request instruction, and the operating system performs a data write operation on the memory according to the address information.
- the initial position is the first position in the mapping table
- the address information of the target memory space is fed back to the sender of the memory application instruction, including: determining the first scanned target record among a continuous preset number of target records, and feeding back the first address of the memory page indicated by the first scanned target record to the sender.
- the scanning method is to scan from the first position in the mapping table
- the value cntr displayed by the counter is greater than or equal to the preset number numb
- the address of the first memory page record scanned is used as the first address and sent to the operating system that issued the memory request instruction.
- the operating system writes data to the memory according to the address information.
- the first target record in the scanned continuous target records is stored through a preset variable.
- the preset variable refers to the variable in the mapping table used to store the address information of the initial position, and it is recorded as offset.
- the value cntr displayed by the counter is increased by 1.
- the address information currently stored in offset is used as the address of the first target record.
- the method further includes: releasing the lock on the memory if there is no free target memory space in the memory.
- After scanning the memory page records in the mapping table, if it is detected that there is no preset number of continuous free memory pages, that is, the target memory space does not exist, it indicates that there are not enough free memory pages in the processor's memory for the operating system to use; this dynamic memory application fails and the lock on the memory is released.
- the memory includes a metadata storage area and a data storage area, the data storage area is configured to store business data, and the metadata storage area stores memory management information.
- Determining whether the memory is currently in a locked state includes: reading the memory management information stored in the metadata storage area, and determining whether the memory management information includes preset information, wherein the preset information indicates that the memory is in a locked state; if the memory management information includes the preset information, determining that the memory is currently in a locked state; and if the memory management information does not include the preset information, determining that the memory is not currently in a locked state.
- The memory management information of the metadata storage area is read to determine whether it contains the preset information, wherein the preset information is used to characterize that the memory is in a locked state; if the memory management information does not contain the preset information, the memory is currently in an unlocked state, otherwise it is in a locked state.
- memory management information includes first field information and second field information, the first field information is used to describe whether the memory is in a locked state, and the second field information is used to describe whether the memory is initialized.
- Before receiving a memory application instruction, the method further includes: initializing the first field information and the second field information stored in the metadata storage area.
- the metadata storage area and data storage area in the processor need to be initialized.
- the memory management information is composed of first field information and second field information, that is, the first field information indicates whether it is locked, and the second field information is used to indicate whether initialization is completed.
- the memory management information is configured as follows:
- the member variable MemLock (first field information) of the structure MallocMemInfo_T indicates whether the shared memory is locked
- the member variable MemReady (second field information) of the structure MallocMemInfo_T indicates whether the initialization of the shared memory has been completed.
- the variable MemLock is 0, indicating that no system or task is applying for memory at this time, that is, it is not locked.
- MemLock is 0xA5A5A5A5, indicating that a system or task is applying for memory, and other systems or tasks will apply after this application is completed;
- the variable MemReady is 0xA5A5A5A5, indicating that the initialization operation has been completed, and memory can be dynamically applied and released normally.
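- As a minimal sketch of how this memory management information could be laid out (field widths and ordering are assumptions; only the member names and the magic value follow the description above):

```c
#include <stdint.h>

#define MEM_MAGIC 0xA5A5A5A5u

/* Illustrative layout of the memory management information kept in the
 * metadata storage area. */
typedef struct {
    volatile uint32_t MemLock;   /* first field: 0 = unlocked, 0xA5A5A5A5 = locked      */
    volatile uint32_t MemReady;  /* second field: 0xA5A5A5A5 = initialization completed */
} MallocMemInfo_T;

static void mem_info_init(MallocMemInfo_T *info)
{
    info->MemLock  = 0;          /* no system or task is applying for memory */
    info->MemReady = MEM_MAGIC;  /* dynamic allocation and release may proceed */
}
```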
- updating the occupied state of the memory includes: changing the state of the memory page corresponding to the target memory space recorded in the mapping table to an occupied state.
- the memory page records in the mapping table area of the metadata storage area are updated according to the correspondence between the memory pages and the memory page records, so that the state changes from an unoccupied state to an occupied state.
- a communication method between operating systems includes the following steps:
- Step 201: receiving a memory application instruction of a first operating system, and performing a locking operation on the memory of a processor, wherein the memory application instruction is used to apply for use of the memory of the processor;
- whether the locking is successful is determined by judging whether the preset variable stored in the memory is a preset value. If the preset variable is not the preset value, it indicates that no other system or task is applying for memory space, and the locking is successful; otherwise, if the preset variable is the preset value, it indicates that at the current moment there are systems or tasks other than the operating system applying for memory space, and the locking fails.
- Step 202: when the memory is locked successfully, read the occupied state of the memory, and determine whether there is free target memory space in the memory according to the occupied state of the memory, wherein the size of the target memory space is greater than or equal to the size of the memory requested by the memory request instruction;
- the information used to record the memory occupied status is scanned to determine whether there is a target memory space, that is, to determine whether there is an unoccupied and continuous memory space in the processor.
- Step 203: if the target memory space exists in the memory, the address information of the target memory space is fed back to the first operating system, the occupied state of the memory is updated, and the lock on the memory is released;
- the address information of the target continuous space is sent to the operating system, and the operating system stores the data to be transmitted in the corresponding memory space according to the address information.
- the occupation state of the processor's memory space is updated according to the data writing status of the operating system, that is, the target memory space is changed from an unoccupied state to an occupied state, and the locking operation before the dynamic memory application is released.
- Step 204: in response to the storage operation of the first operating system, storing the target data in the target memory space, and sending the address information of the continuous memory space to the second operating system;
- the first operating system applies for a target memory space for storing the target data to be transferred, and sends the address information of the target memory space to a second operating system cooperating with the first operating system, notifying the second operating system to acquire the data.
- Step 205: receiving an acquisition instruction sent by the second operating system based on the address information, and sending the target data stored in the target memory space to the second operating system.
- After the second operating system receives the address information of the target memory space, it issues a data acquisition instruction; the embedded system receives the instruction and sends the target data stored in the target memory space to the second operating system.
- When the first operating system uses physical addresses to perform data read and write operations and the second operating system uses virtual addresses to perform data read and write operations, the second operating system converts the address information of the target memory space into a virtual address and uses the virtual address to access the memory to read the target data from the target memory space.
- When shared memory is used to send and receive data in inter-core communication, the address returned by the dynamic memory application is used.
- different systems may use different address systems.
- the real-time operating system is the first operating system and the non-real-time operating system is the second operating system.
- In the real-time operating system, the physical address can be directly used to access the shared memory.
- In the non-real-time operating system, the physical address cannot be directly used to access the shared memory; therefore, the mapped virtual address needs to be used.
- After the second operating system receives the address information of the target memory space, it converts the address through an address offset, maps it to a virtual address, and operates according to the virtual address.
- The shared memory virtual base address under the non-real-time operating system is vBase (the real physical address of the shared memory is assumed to be 0x96000000); the shared memory physical base address under the real-time operating system is pBase (i.e., 0x96000000).
- The address returned by a dynamic memory application in the non-real-time operating system is a virtual address, recorded as vData.
- The offset of the applied memory within the shared memory is offset = vData - vBase.
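- A hedged sketch of the address conversion implied by this example, assuming the shared memory is contiguous in both address spaces (the helper names virt_to_phys/phys_to_virt are chosen only for illustration):

```c
#include <stdint.h>

#define SHM_PHYS_BASE 0x96000000u   /* pBase under the real-time operating system */

/* Convert the virtual address used in the non-real-time OS into the physical
 * address used in the real-time OS via the common offset, and vice versa. */
static uint32_t virt_to_phys(uintptr_t vData, uintptr_t vBase)
{
    uint32_t offset = (uint32_t)(vData - vBase);   /* offset = vData - vBase */
    return SHM_PHYS_BASE + offset;                 /* pData = pBase + offset */
}

static uintptr_t phys_to_virt(uint32_t pData, uintptr_t vBase)
{
    uint32_t offset = pData - SHM_PHYS_BASE;       /* offset = pData - pBase */
    return vBase + offset;                         /* vData = vBase + offset */
}
```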
- the memory includes a metadata storage area and a data storage area
- the data storage area is composed of multiple memory pages
- each memory page is used to store business data
- the metadata storage area stores a mapping table
- the mapping table contains multiple records
- each record is used to record the occupied status of a memory page, reading the occupied status of the memory, and judging whether there is free target memory space in the memory according to the occupied status of the memory, including: determining a preset number of memory pages applied for by a memory application instruction; scanning each record in sequence from an initial position of the mapping table; and determining that there is target memory space in the memory when a continuous preset number of target records are scanned, wherein the target record indicates that the memory page is in an idle state.
- Obtain the metadata storage area stored in the processor, identify the mapping table in the metadata storage area, traverse each memory page record starting from the index position in the mapping table, query each memory page record in turn, and determine whether there are continuous memory page records recording a number of free memory pages greater than or equal to the preset number.
- feeding back the address information of the target memory space to the sender of the memory application instruction includes: determining the last scanned target record among a continuous preset number of target records, and feeding back the first address of the memory page indicated by the last scanned target record to the sender.
- the scanning method can be selected to scan from the first position of the mapping table or from the last position of the mapping table.
- When the scanning method is to scan from the last position of the mapping table, the first address of the last memory page scanned is recorded, these memory pages are marked as non-free, and that first address is used as the first address of the entire continuous memory region for this memory application instruction.
- the address is fed back to the operating system that issued the memory application instruction, and the operating system performs a data write operation on the memory according to the address information.
- a method for sharing memory, comprising: before the operating system issues a memory application instruction, in order to prevent application conflicts caused by multiple operating systems applying for the processor's memory space at the same time, a locking operation is applied for and it is determined whether the locking is successful; when the judgment result indicates that the lock for the dynamic memory application is obtained successfully, the number of consecutive memory pages that need to be allocated is calculated according to the memory size in the issued memory application instruction and recorded as nmemb; if the judgment result indicates that the lock application fails, the application is reissued after waiting for a period of time (which can be 100 microseconds) until it succeeds; if the number of failed lock applications is greater than a preset number (the preset number can be three times), the memory application is exited.
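- The lock-and-retry behaviour described above might look roughly like the following sketch; delay_us and the MemLock pointer are illustrative assumptions (reusing the MallocMemInfo_T layout sketched earlier), the wait time and retry limit follow the example values of 100 microseconds and three attempts, and a plain check-then-set is shown only for clarity — a real implementation would need an atomic compare-and-swap or a hardware spinlock:

```c
#include <stdbool.h>
#include <stdint.h>

#define MEM_MAGIC        0xA5A5A5A5u
#define LOCK_WAIT_US     100u
#define LOCK_MAX_RETRIES 3u

extern volatile uint32_t *mem_lock;   /* MemLock field in the metadata area */
extern void delay_us(uint32_t us);    /* hypothetical busy-wait helper      */

/* Try to take the dynamic-allocation lock, waiting and re-applying on
 * failure as described above. */
static bool shared_mem_try_lock(void)
{
    for (uint32_t tries = 0; tries < LOCK_MAX_RETRIES; tries++) {
        if (*mem_lock != MEM_MAGIC) {   /* no other system/task is applying */
            *mem_lock = MEM_MAGIC;      /* mark the memory as locked */
            return true;
        }
        delay_us(LOCK_WAIT_US);         /* wait, then re-apply */
    }
    return false;                       /* too many failures: exit the application */
}
```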
- The metadata storage area of the processor is initialized, and the last position of the mapping table is recorded as offset. The number of continuous memory pages required is calculated according to the space size of the required memory in the memory application instruction and recorded as nmemb, and a counter for recording the number of memory pages is set and recorded as cmemb. The mapping table of the metadata storage area in the processor is then obtained, the entire mapping table is scanned from the offset position, and the continuous free memory is found through the correspondence between the memory page records stored in the mapping table and the memory pages in the data storage area.
- memory pages that meet the requirements are marked as occupied in the corresponding mapping table, the first address of the last memory page found is used as the first address of the entire continuous memory page dynamically applied for, the lock of the dynamically applied memory is released, and the dynamic memory application is successful.
- the size can be dynamically adjusted.
- an updated memory application instruction can be issued again and the memory can be locked. If the locking is successful and the updated memory application instruction requires an increase in the applied memory space, it is determined whether the required additional memory space exists immediately after the applied target continuous memory; if so, the application succeeds. If the updated memory application instruction requires a decrease in the applied memory space, part of the memory space is released.
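- A rough sketch of such an in-place adjustment, reusing the page-map encoding assumed in the earlier scan example (growing succeeds only when the pages immediately after the allocated region are free; shrinking simply releases the trailing pages):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_FREE 0u   /* same assumed encoding as the scan sketch above */
#define PAGE_USED 1u

static bool resize_pages(uint8_t *map, size_t total_pages,
                         size_t start, size_t old_n, size_t new_n)
{
    if (new_n > old_n) {                    /* grow the applied region */
        if (start + new_n > total_pages)
            return false;
        for (size_t i = start + old_n; i < start + new_n; i++)
            if (map[i] != PAGE_FREE)
                return false;               /* required space is not behind the region */
        for (size_t i = start + old_n; i < start + new_n; i++)
            map[i] = PAGE_USED;             /* mark the extra pages as occupied */
    } else {                                /* shrink the applied region */
        for (size_t i = start + new_n; i < start + old_n; i++)
            map[i] = PAGE_FREE;             /* release part of the memory space */
    }
    return true;
}
```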
- This embodiment divides the storage area into multiple storage areas, uses the index position to dynamically apply for the space according to the actual required size, releases it after use, and can dynamically adjust the size when it is found that the space is insufficient after the dynamic application, so as to improve the flexibility and efficiency of shared memory.
- FIG. 7 is a schematic diagram of a business data interaction process according to an embodiment of the present application.
- the first operating system generates business data during operation and determines that the business data is required by the second operating system or needs to be sent to the second operating system.
- the first operating system stores the business data in the storage space and sends an eighth interrupt request to the second operating system.
- the second operating system responds to the eighth interrupt request to read the business data from the storage space and perform subsequent processing.
- the first operating system may, but is not limited to, have different operating mechanisms, such as: controlling the first operating system to run periodically based on the processor; or, in response to a received wake-up request, controlling the first operating system to run based on the processor; or, based on the degree of match between the operating services generated on the processor and the first operating system, controlling the first operating system to run based on the processor.
- the operation mechanism of the first operating system may include, but is not limited to, periodic operation and triggered operation.
- Periodic operation may also be called polling mode
- triggered operation may also be called trigger mode, which may include, but is not limited to, two modes: one may be request triggering, where the wake-up request triggers the wake-up operation of the first operating system; the other may be conditional triggering, where the matching degree between the operating service and the first operating system is used to trigger the wake-up operation of the first operating system.
- the duration of a single operation cycle and the interval duration between two operation cycles may be the same or different.
- the first operating system may be, but is not limited to, in a dormant state, and the processor core allocated to the first operating system is used by the second operating system. If the duration of a single operation cycle is the same as the interval duration between two operation cycles, the first operating system and the second operating system alternately occupy the same duration of the processor core allocated to the first operating system. If the duration of a single operation cycle is different from the interval duration between two operation cycles, the first operating system and the second operating system alternately occupy different durations of the processor core allocated to the first operating system.
- the duration occupied by the first operating system may be greater than the duration occupied by the second operating system, or the duration occupied by the second operating system may be greater than the duration occupied by the first operating system.
- different system functions can, but are not limited to, use different operating mechanisms to run the first operating system, so as to more flexibly find an operating mechanism that better matches the current operating scenario and system function to run the first operating system and improve the processing efficiency of operating business.
- FIG. 8 is a schematic diagram of a first operating system operation process according to an embodiment of the present application.
- the round-robin mode can be a polling scheduling mode based on a time slice, and the RTOS can be woken up periodically according to the set time.
- The time slices are divided as (T0, T1), (T1, T2), ..., (Tn, T(n+1)), where n is a positive integer; that is, the dual systems alternately occupy CPU core 0 for the same duration.
- In the (T0, T1) time slice, the RTOS schedules CPU core 0 to run its processes.
- In the (T1, T2) time slice, Linux schedules CPU core 0 to run its processes.
- During this time slice, the RTOS is in a dormant state, and subsequent time slices are divided by analogy according to the cycle.
- the wake-up request may be, but is not limited to, initiated by a device connected to the first operating system, or may be, but is not limited to, initiated by the second operating system.
- FIG. 9 is a schematic diagram 2 of the operation process of a first operating system according to an embodiment of the present application.
- the trigger mode can be started by an interrupt initiated by a device in the RTOS bus domain.
- the RTOS bus domain connects device 0 to device N.
- The RTOS is in a dormant state. Assuming that device 0 triggers an interrupt to the RTOS at a certain moment, the RTOS is immediately awakened. After awakening, the RTOS first triggers an interrupt to Linux to preempt CPU core 0.
- After receiving the interrupt, Linux first releases CPU core 0 and saves the context (pushes the running data onto the stack). Then the RTOS schedules CPU core 0 to process the operation business indicated by the interrupt triggered by device 0. If the system is currently in polling mode, the subsequent processing is the same as in the polling mode described above and is not repeated here.
- the processor core can be directly released, and the first operating system can use the processor core to process the operating business allocated by the second operating system after being awakened.
- the service running on the first operating system may include, but is not limited to, a service for generating a hardware interface signal.
- a process for generating a hardware interface signal is provided, and the process includes the following steps:
- Step 11: Obtain a request command through the first operating system.
- the request command may be a command for generating a hardware interface signal.
- the hardware interface signal may be a PECI signal
- the request command is a PECI request command based on the PECI protocol.
- the hardware interface signal may also be a hardware interface signal of other protocol types, for example, an HDMI (high definition multimedia interface) signal, an RGMII (reduced gigabit media independent interface) signal, an SGMII (serial gigabit media independent interface) signal, a GPIO (general-purpose input/output) signal, an SPI (serial peripheral interface) signal, and the like.
- the request command may also be a request command of other protocol types, for example, when the hardware interface signal is a GPIO signal, the request command is a GPIO request command. This application does not specifically limit the optional types of request commands and hardware interface signals.
- Step 12: Determine multiple logical bit information corresponding to the request command.
- the first operating system can analyze the request command to obtain a plurality of pieces of logic bit information corresponding to it, and there is a sequential order among the multiple pieces of logical bit information.
- the first operating system can generate a waveform signal (i.e., a hardware interface signal) corresponding to the request command through the multiple logical bit information corresponding to the request command, thereby transmitting the information contained in the request command to other devices through the hardware interface signal.
- the request command includes at least one field, each field can be represented by a logic bit 0 or 1, on this basis, the corresponding conversion relationship between each field and the logic bit 1 or 0 is the logic bit information corresponding to the field, and in the case where the request command corresponds to multiple fields, the request command corresponds to multiple logic bit information.
- each logic bit can be represented by a combination of a high-level signal and a low-level signal.
- for logic bit 0, a high-level signal of a first preset duration and a low-level signal of a second preset duration can be combined to represent it; for logic bit 1, a high-level signal of the second preset duration and a low-level signal of the first preset duration can be combined to represent it, wherein the first preset duration and the second preset duration are different.
- each logic bit contains both a high-level signal and a low-level signal
- each logic bit is actually represented by a waveform signal (the transformation between the high and low level signals is presented as a waveform)
- the request command corresponds to multiple logic bit information, that is, multiple logic bits
- the hardware interface signal corresponding to the request command is a waveform signal obtained by combining the waveform signals corresponding to each logic bit information.
- Step 13: Generate a hardware interface signal corresponding to the request command according to the multiple logic bit information and the timer.
- the timer in step 13 may be a timing program in the first operating system, or a register on the chip where the first operating system is located, wherein the timer may at least provide a timing function and a counting function.
- the present application uses the timing function and the counting function of the timer, and combines multiple logic bit information to generate a hardware interface signal corresponding to the request command.
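- A hedged sketch of timer-driven waveform generation is shown below; gpio_set_level and timer_delay_ns are hypothetical board-support routines, the bit-0/bit-1 duration scheme follows the description above, and the two duration values are placeholders rather than actual protocol (e.g. PECI) timing:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

extern void gpio_set_level(bool high);    /* drive the interface line (assumed) */
extern void timer_delay_ns(uint32_t ns);  /* busy-wait using the timer (assumed) */

#define DUR_FIRST_NS  750u   /* first preset duration (placeholder)  */
#define DUR_SECOND_NS 250u   /* second preset duration (placeholder) */

/* Emit one logic bit as a high-level/low-level pair:
 * bit 0 = first-duration high + second-duration low,
 * bit 1 = second-duration high + first-duration low. */
static void emit_bit(bool bit)
{
    gpio_set_level(true);
    timer_delay_ns(bit ? DUR_SECOND_NS : DUR_FIRST_NS);
    gpio_set_level(false);
    timer_delay_ns(bit ? DUR_FIRST_NS : DUR_SECOND_NS);
}

/* Turn the sequence of logic bits derived from a request command into the
 * corresponding waveform (the hardware interface signal). */
static void emit_waveform(const bool *bits, size_t count)
{
    for (size_t i = 0; i < count; i++)
        emit_bit(bits[i]);
}
```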
- In the related art, in order to realize PECI communication between the BMC chip and components such as the CPU, the BMC chip itself is required to have the hardware logic design of a PECI controller, which leads to the problem of high design cost of the BMC chip.
- In other words, in the related art, in order to generate a PECI signal on the BMC chip, the hardware logic design of the PECI controller must be implemented on the BMC chip in advance, while in the present application only the first operating system is required to generate the PECI signal on the BMC chip, and there is no need to implement the hardware logic design of the PECI controller on the BMC chip, thereby reducing the design difficulty and design cost of the BMC chip.
- the method of generating a hardware interface signal corresponding to a request command by the first operating system is adopted. First, the request command is obtained through the first operating system, and then multiple logical bit information corresponding to the request command is determined. Finally, the hardware interface signal corresponding to the request command is generated based on the multiple logical bit information and the timer.
- the hardware interface signal corresponding to the request command is generated by the first operating system, thereby realizing the technical effect of simulating the generation of the hardware interface signal by software, and further achieving the purpose of generating hardware interface signals without the chip itself having the corresponding hardware logic design, which can not only reduce the chip design difficulty but also reduce the chip design cost.
- this embodiment achieves the purpose of generating hardware interface signals by using a software system without having to perform hardware logic design of the hardware interface signals on the chip, thereby reducing the difficulty of chip design, and further solving the technical problem in related technologies that the chip itself needs to have a hardware logic design of the controller, resulting in a high chip design cost.
- request data is obtained, wherein the first operating system and the second operating system run on the same processor, the request data is generated by the second operating system, and the service response speed of the second operating system is lower than the service response speed of the first operating system.
- the first operating system parses the request data to obtain a request command.
- the requested data can be stored in the target memory (i.e., the storage space on the processor) through the second operating system, and after the requested data is stored, the first request is triggered through the second operating system, wherein the first request is used to notify the first operating system to read the requested data from the target memory, and the target memory is a memory that can be accessed by both the first operating system and the second operating system.
- the first operating system may also receive response data corresponding to the hardware interface signal, wherein the transmission form of the response data is the same as the transmission form of the hardware interface signal. Secondly, the first operating system also adjusts the data structure of the response data to a second data structure.
- a second request is triggered through the first operating system, wherein the second request is used to notify the second operating system to read the response data.
- Take the first operating system being an RTOS, the second operating system being a Linux system, and the hardware interface signal being a PECI signal as an example.
- the upper-layer application involved in PECI business in the Linux system (such as fault diagnosis, CPU temperature acquisition, etc.) first initiates PECI request commands according to needs.
- These request commands include but are not limited to basic Ping() commands, commands to obtain CPU temperature, and commands to read MSR register (Machine Specific Register) information, etc.
- the Linux system writes the target address, read/write length, command code, para parameter and other request data of each request command into the target memory according to the PECI protocol specification, and after all the request data are written into the target memory, the Linux system generates a first request to notify the RTOS system.
- the first request may be an SGI interrupt request (software generated interrupt, a communication interrupt request between processor cores).
- the second operating system stores the request data in the target memory in the form of a first data structure, wherein the first data structure includes at least a device address, a write length, a read length, a command code, and a request parameter, the device address is used to characterize the address of the target device, the target device is a device that generates response data based on a hardware interface signal, the command code is used to distinguish different request commands, the write length is used to characterize the number of bytes from the command code to the end of the request data, the read length is used to characterize the number of bytes in the request data including the completion code and the read data, and the request parameters are used to characterize the parameters of the request command.
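- Purely as an illustration, such a first data structure might be declared as follows in C; the field set follows the description above, while the widths, ordering and maximum parameter length are assumptions:

```c
#include <stdint.h>

#define MAX_REQ_PARAMS 32u   /* assumed upper bound on request parameters */

/* Illustrative first data structure used to place request data in the
 * target memory. */
typedef struct {
    uint8_t dev_addr;                /* address of the target device              */
    uint8_t write_len;               /* bytes from the command code to the end    */
    uint8_t read_len;                /* bytes of completion code plus read data   */
    uint8_t cmd_code;                /* distinguishes different request commands  */
    uint8_t params[MAX_REQ_PARAMS];  /* parameters of the request command         */
} peci_request_t;
```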
- the RTOS system receives the response data from the PECI bus, and then completes the data parsing to convert the signal form of the response data from the form of the hardware interface signal to the form of the software signal, for example, identifying the waveform changes between the high-level signal and the low-level signal in the hardware interface signal, thereby obtaining the corresponding logical bit information, and obtaining the software signal data based on the logical information.
- the parsed response data is adjusted by the command parameter structuring module and written into the target memory.
- the RTOS system triggers the second request to notify the Linux system.
- the Linux system detects the second request, actively reads the parsed response data stored in the target memory, and returns the data to the upper application after processing.
- the second request can be an SGI interrupt request.
- the target memory can also be other memories, such as random access memory (RAM), flash memory, etc.
- the first operating system may convert the voltage of the hardware interface signal to obtain a target hardware interface signal.
- the first operating system may input the hardware interface signal into the voltage conversion device to obtain a target hardware interface signal output by the voltage conversion device.
- the voltage conversion device may be a CPLD, and the CPLD may be connected to a target device, wherein the target device may be a CPU in a server.
- the first operating system and the second operating system of the combined embedded system realize the interaction of data in the embedded system through inter-core interrupts and shared memory, build a waveform generation function module for request commands in the RTOS system, and realize the communication of hardware interface signals between the embedded system and external devices through software simulation.
- the high real-time characteristics of the RTOS system are fully utilized to ensure the accuracy of the timing when simulating the request command waveform, which is flexible and efficient. It can significantly reduce the difficulty of chip design.
- the hardware interface signal is generated by software simulation, it provides more possibilities for the optimized design between the communication function and other business functions in the embedded system.
- Since the controller specially provided in the chip to realize hardware interface signal communication is omitted, the design cost and manufacturing cost of the chip can be reduced.
- the service running on the first operating system may include, but is not limited to, a serial port switching service.
- a serial port switching process is provided, which includes the following steps:
- Step 21: When it is detected that the second operating system receives the serial port switching instruction, the second operating system sends the serial port switching instruction to the first operating system.
- the second operating system can detect whether the serial port switching instruction initiated by the user is received.
- the serial port switching instruction needs to include information of the target serial port to be switched to, for example, the serial port switching instruction includes the serial port number of the target serial port to be switched to.
- the format of the serial port switching instruction can be ⁇ switch_command_app-n number-t sleep_time>, where switch_command_app represents the switching instruction program, -n represents the target serial port number for switching, number can take the value of 1, 2, or 3, -t represents how long to sleep after the instruction is initiated before executing the switching action, and sleep_time is in seconds.
- serial ports that can currently be switched can be numbered so that when switching serial ports is subsequently performed, the target serial port can be switched by the serial port number.
- the serial ports that can currently be switched include: BMC Linux system serial port, server BIOS (Basic Input Output System) serial port and SMART NIC (network interface controller) serial port.
- 1 can represent the BMC Linux system serial port
- 2 represents the server BIOS serial port
- 3 represents the SMART NIC serial port.
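- For illustration, the numbering scheme and the fields carried by a <switch_command_app -n number -t sleep_time> instruction could be modelled as follows (the type and member names are hypothetical):

```c
/* Serial port numbers used by the switching instruction, per the list above. */
typedef enum {
    SERIAL_BMC_LINUX   = 1,   /* BMC Linux system serial port */
    SERIAL_SERVER_BIOS = 2,   /* server BIOS serial port      */
    SERIAL_SMART_NIC   = 3    /* SMART NIC serial port        */
} serial_port_id_t;

/* Parsed form of <switch_command_app -n number -t sleep_time>. */
typedef struct {
    serial_port_id_t target;      /* -n: target serial port number               */
    unsigned int     sleep_time;  /* -t: seconds to sleep before switching       */
} switch_cmd_t;
```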
- Step 22: Execute serial port switching according to the serial port switching instruction through the first operating system.
- When it is detected that the second operating system has received the serial port switching instruction, the second operating system immediately sends the serial port switching instruction to the first operating system.
- the first operating system and the second operating system can be run in two processor cores respectively, and then the first operating system and the second operating system use inter-core communication, which can help improve the reliability of signal transmission.
- the response speed of the first operating system to instructions is much faster than the response speed of the second operating system to instructions, so the first operating system can quickly respond to the serial port switching instruction and complete the switching work in a very short time.
- the serial port switching function is implemented in software by the first operating system and the second operating system running in the same processor, replacing a CPLD or FPGA.
- the second operating system receives a serial port switching instruction
- the second operating system forwards the serial port switching instruction to the first operating system
- the first operating system implements serial port switching according to the serial port switching instruction, thereby avoiding the need to connect various serial ports through CPLD or FPGA in the related technology, and then use the switch structure in CPLD or FPGA to implement serial port switching, reducing hardware costs
- the technical method proposed in this solution can not only effectively reduce the serial port switching cost, but also effectively improve the efficiency of serial port switching.
- the serial port switching instruction at least includes: the serial port number of the target serial port.
- the following steps are included: obtaining the parsing rules of the serial port switching instruction from the target memory through the first operating system; parsing the serial port number of the target serial port in the serial port switching instruction according to the parsing rules, and determining the device corresponding to the serial port number, wherein the target serial port is the serial port of the device, and the target serial port is connected to the chip.
- Executing serial port switching according to the serial port switching instruction through the first operating system includes: determining the serial port address of the device through the first operating system; and mapping the target serial port to the target output interface of the chip according to the serial port address.
- the first operating system may parse the serial port switching instruction, and then obtain the device corresponding to the target serial port.
- the parsing rules for the serial port switching instructions can be customized according to the different chips or server motherboards, and the parsing rules can be stored in the target memory, which can be a storage medium such as an electrically erasable programmable read-only memory (EEPROM) or a non-volatile memory (Flash). It should be noted that the target memory can be deployed in the chip or not. By storing the parsing rules in the target memory, the security of the data is improved, and the parsing rules can be customized according to the different chips or server motherboards, so that the programmability and scalability are better.
- After the first operating system receives the serial port switching instruction, it reads the parsing rule of the serial port switching instruction from the target memory, and then uses the parsing rule to parse the serial port number of the target serial port in the serial port switching instruction to obtain the device corresponding to the serial port number.
- the first operating system can map the target serial port to the target output interface of the chip through the serial port address of the device. After mapping the serial port address of the device to the target output interface, the device can be accessed through the target output interface.
- serial port switching instruction and parsing rules can be set according to the model of the chip used and the types of the first operating system and the second operating system.
- the chip includes: a serial data bus. Before determining the serial port address of the device through the first operating system, the method also includes: determining multiple devices connected to the serial port of the serial data bus; mapping the serial port of each device to the memory of the chip through the serial data bus to obtain the serial port address of each device.
- the above chip also includes a serial data bus, and the TX and RX of the serial ports of multiple devices are currently connected to the serial data bus.
- the current serial ports include the BMC Linux system serial port (UART1), the server BIOS serial port (UART2), and the SMART NIC serial port (UART3).
- UART is short for Universal Asynchronous Receiver/Transmitter.
- the serial port data bus will map the TX and RX data of different serial ports of UART1, UART2 and UART3 to different address spaces in the BMC memory, that is, the above-mentioned mapping of the serial port of each device to the memory of the chip through the serial data bus.
- UART1 TX and RX buffer are the serial port address of serial port UART1
- UART2 TX and RX buffer are the serial port address of serial port UART2
- UART3 TX and RX buffer are the serial port address of serial port UART3.
- the first operating system selects one of the three memory segments mapped by the UARTs (one out of three) and exchanges the data of that memory segment with the user, thereby achieving the purpose of simulating the CPLD hardware serial port switching circuit.
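- A hedged sketch of this one-out-of-three software switch is given below; the buffer layout and the uart_poll/uart_push helpers are assumptions standing in for however the serial data bus actually exposes the mapped TX/RX segments:

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    volatile uint8_t *tx;   /* mapped TX buffer of the port */
    volatile uint8_t *rx;   /* mapped RX buffer of the port */
} uart_buf_t;

extern uart_buf_t uart_bufs[3];   /* index 0..2 -> UART1..UART3 (assumed)        */
extern uart_buf_t user_port;      /* target output interface of the chip (assumed) */
extern size_t uart_poll(volatile uint8_t *buf, uint8_t *out, size_t max);
extern void   uart_push(volatile uint8_t *buf, const uint8_t *in, size_t n);

/* One step of the software "switch": exchange data between the selected
 * UART memory segment and the user-facing interface. */
static void serial_mux_step(int selected /* 0..2 */)
{
    uint8_t tmp[64];
    size_t  n;

    n = uart_poll(uart_bufs[selected].rx, tmp, sizeof(tmp));
    uart_push(user_port.tx, tmp, n);            /* device -> user */

    n = uart_poll(user_port.rx, tmp, sizeof(tmp));
    uart_push(uart_bufs[selected].tx, tmp, n);  /* user -> device */
}
```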
- After the target serial port is mapped to the target output interface of the chip according to the serial port address, if the target output interface is connected to a target smart network card, the method includes: detecting, through the smart network card, whether an access request to the target serial port is received; and if an access request to the target serial port is received, forwarding the access request to the target serial port through the smart network card.
- the target output interface of the chip can also be connected to a target smart network card, and then the smart network card can detect whether a user's access request to the target serial port is received. If an access request to the target serial port is received, the serial port of the device can be directly accessed through the target smart network card to realize the SOL (Serial over LAN, a specification of a data packet format and protocol) function. Through the above steps, the efficiency of serial port access to the device is improved.
- the following steps are also included: obtaining the execution result of the serial port switching instruction through the first operating system, wherein the execution result is one of the following: switching success and switching failure; sending the execution result to the second operating system through the first operating system.
- the execution result of the serial port switching instruction is received through the second operating system, wherein the execution result is sent from the first operating system to the second operating system, and the execution result is one of the following: serial port switching success and serial port switching failure.
- After the first operating system switches the serial port, it obtains the execution result of the serial port switching instruction and then feeds the execution result back to the second operating system to inform it of the success or failure of the serial port switching.
- After the execution result of the serial port switching instruction is received through the second operating system, the method further includes: if the execution result is execution failure, repeatedly executing the step of sending the serial port switching instruction to the first operating system through the second operating system until the execution result is success or the number of serial port switching executions exceeds a preset number; if the number of serial port switching executions exceeds the preset number, triggering a prompt signal through the second operating system, wherein the prompt signal is used to prompt that the serial port switching has failed.
- the execution result of the serial port switching instruction is execution failure, then it is necessary to repeat the step of sending the serial port switching instruction to the first operating system through the second operating system until the execution result is successful, or the number of serial port switching executions exceeds the preset number of times, which can be set to 3 times. If the number of serial port switching executions exceeds the preset number of times, the corresponding second operating system triggers a prompt signal to prompt that the serial port switching has failed, so that this situation can be handled in a timely manner.
- Before it is detected that the first operating system has received a serial port switching instruction, the method further includes: after the second operating system is started, the second processor core triggers a first interrupt and sends a first signal to the first operating system; the first operating system detects the operating status of multiple serial ports in the chip according to the first signal to obtain a detection result; the first processor core triggers a second interrupt and sends the detection result to the second operating system through a second signal; and the second operating system receives the detection result to determine the number of serial ports in the chip that are operating normally.
- After the second processor core triggers the first interrupt and sends the first signal to the first operating system, it is detected whether the first operating system has received the first signal; if the first operating system receives the first signal, the first operating system detects the operating status of the plurality of serial ports in the chip and obtains the detection result.
- the second processor core triggers the first interrupt (an inter-processor interrupt, IPI) to send a first signal to the first operating system.
- the first operating system can know from the first signal that the second operating system has been started normally and can interact normally with the second operating system.
- the first operating system will detect the operating status of multiple serial ports in the chip according to the first signal to determine whether all serial ports are operating normally.
- the first processor core triggers a second interrupt to send the detection result to the second operating system through a second signal.
- the second operating system determines the number of switchable serial ports (i.e., the number of serial ports that are operating normally) through the detection result, so as to subsequently switch the serial ports.
- the first operating system starts to block and wait for the serial port switching instruction issued by the second operating system.
- the first operating system is RTOS
- the second operating system is Linux
- the first operating system runs on CPU0
- the second operating system runs on CPU1.
- the preparation steps before serial port switching include: when the Linux system on CPU1 starts to a specific stage, CPU1 will trigger an IPI interrupt to notify the RTOS system on CPU0 that Linux has started normally and can interact normally with Linux on CPU1. After the RTOS system receives the IPI interrupt from CPU1, it will start the serial port switching controller program to check whether UART1, UART2, and UART3 are normal. Then CPU0 triggers another IPI interrupt to notify the Linux operating system on CPU1 that the RTOS system has been started. At the same time, the reported information includes the number of switchable serial ports owned by the RTOS operating system on CPU0. Then the RTOS operating system on CPU0 starts to block and wait for the switching instruction issued by the operating system on CPU1.
- a serial port switching instruction is sent to the first operating system through the service terminal; and the first operating system executes the serial port switching according to the serial port switching instruction.
- Since the second operating system has many functions and a large business volume, it may run abnormally or need to be restarted.
- the serial port switching instruction can be directly sent to the first operating system through the service terminal to ensure that the first operating system performs the serial port switching normally.
- the service terminal can be a terminal on the server where the chip is located.
- the first operating system and the second operating system running in the same processor are used to replace the CPLD or FPGA to implement the serial port switching software function.
- the second operating system receives the serial port switching instruction
- the second operating system forwards the serial port switching instruction to the first operating system.
- the first operating system implements the serial port switching according to the serial port switching instruction, thereby avoiding the use of hardware to implement serial port switching and reducing hardware costs.
- the serial port switching can be completed quickly in a very short time. Therefore, the above process can not only effectively reduce the serial port switching cost, but also effectively improve the efficiency of serial port switching.
- the matching degree between the current operating business and the first operating system can be, but is not limited to, indicating whether the operating business generated on the processor is suitable for processing by the first operating system, and the suitable operating business is allocated to the first operating system for processing, thereby realizing the reasonable allocation of the operating business and improving the processing efficiency of the operating business.
- the first operating system can be controlled to run based on the processor in the following manner but is not limited to: detecting business information of the current operating business generated on the processor; and when it is detected that the match between the business information and the first operating system is higher than a match threshold, controlling the first operating system to run the current operating business based on the processor.
- the matching degree between the operating business and the first operating system may be represented by, but not limited to, the matching degree between the business information of the operating business and the first operating system, and the business information may be, but not limited to, any dimension with processing requirements, such as: business response speed, business resource occupancy rate, business coupling degree, business importance, etc.
- the matching degree between the service information and the first operating system is higher than the matching degree threshold, which can indicate that the operating service is suitable for running on the first operating system.
- the matching degree threshold can be, but is not limited to, dynamically adjusted according to the current resource usage or operating requirements of the first operating system. This makes the first operating system more adaptable and flexible.
- the business information of the current operating business generated on the processor can be detected in, but is not limited to, the following manner: the target response speed and/or the target resource occupancy of the current operating business is determined, wherein the business information includes the target response speed and/or the target resource occupancy, the target response speed is the response speed that the processor needs to achieve for the current operating business, and the target resource occupancy is the amount of resources that the processor needs to provide for the current operating business; when the target response speed is less than or equal to a speed threshold, and/or the target resource occupancy is less than or equal to an occupancy threshold, it is determined that the degree of match between the business information and the first operating system is higher than the matching threshold.
- the service information may include, but is not limited to: target response speed, and/or resource occupancy, wherein the target response speed is the response speed that the processor needs to achieve for the current operation service, and the target resource occupancy is the amount of resources that the processor needs to provide for the current operation service.
- the demand of the current operation service for the processor response speed may be considered separately, or the demand of the current operation service for the available resources on the processor may be considered separately. The two may also be considered in combination to allocate the current operation service.
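- As a simple illustration of this matching check (field names, units and thresholds are assumptions; the two criteria may also be applied individually, as noted above):

```c
#include <stdbool.h>
#include <stdint.h>

/* Business information of the current operating business (illustrative fields). */
typedef struct {
    uint32_t target_response_speed;     /* response speed the processor needs to achieve */
    uint32_t target_resource_occupancy; /* resources the processor needs to provide      */
} business_info_t;

/* Returns true when the degree of match between the business information and
 * the first operating system is considered higher than the matching threshold. */
static bool match_above_threshold(const business_info_t *info,
                                  uint32_t speed_threshold,
                                  uint32_t occupancy_threshold)
{
    return info->target_response_speed <= speed_threshold &&
           info->target_resource_occupancy <= occupancy_threshold;
}
```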
- the following methods may be used to allocate operating services and processing resources to each operating system, but are not limited to:
- the embedded system Allocating a group of services to be allocated to corresponding operating systems in the embedded system according to a dynamic resource allocation rule, wherein the dynamic resource allocation rule includes dynamically allocating resources according to at least one of the following: service response speed, service resource occupancy rate, service coupling degree, and service importance, the embedded system includes a first operating system and a second operating system, the first operating system and the second operating system run on a processor, and the response speed of the first operating system is higher than that of the second operating system;
- the resource allocation result is used to indicate a processing resource corresponding to each service to be allocated in the group of services to be allocated in the processing resources of the processor, and the processing resources of the processor include a processor core;
- the processing resources of the processor are allocated to the first operating system and the second operating system according to the operating system corresponding to each service to be allocated and the resource allocation result.
- a group of services to be allocated can be obtained, that is, services to be allocated to the first operating system and the second operating system. Since different services to be allocated may differ in terms of response speed, service resource occupancy rate, service coupling degree with other services, service importance, etc., a resource dynamic allocation rule can be pre-configured, and the resource dynamic allocation rule can include a rule for performing service allocation, and the service is allocated to the corresponding operating system so that the processing resources of the corresponding operating system execute the service allocated to itself.
- the resource dynamic allocation rule may include dynamically allocating resources according to at least one of the following: service response speed, service resource occupancy rate, service coupling degree, service importance, and different allocation rules may have corresponding priorities, for example, the priorities are in descending order: service importance, service coupling degree, service response speed, and service resource occupancy rate.
- a group of services to be allocated (or tasks to be allocated, and different services to be allocated may correspond to different processes) can be allocated to the corresponding operating system in the embedded system to obtain a service allocation result.
- the first operating system can be an operating system with clear and fixed time constraints. All processing processes (task scheduling) need to be completed within the fixed time constraints, otherwise the system will fail. It can be a real-time operating system (RTOS), such as FreeRTOS, RTLinux, etc., and can also be a real-time operating system in other embedded systems.
- the second operating system does not have this feature.
- the second operating system generally adopts a fair task scheduling algorithm: when the number of threads/processes increases, CPU time needs to be shared, and task scheduling is therefore non-deterministic.
- it can be a non-real-time operating system such as Contiki, HeliOS, or Linux (full name GNU/Linux, a set of freely distributable Unix-like operating systems), or a non-real-time operating system in other embedded systems.
- the Linux system is a multi-user, multi-tasking, multi-threaded and multi-CPU operating system based on POSIX (Portable Operating System Interface).
- the business assigned to the first operating system is usually a real-time business.
- Real-time business refers to business that needs to be scheduled within a specified time. The business requires the processor to process it at a fast enough speed, and the processing result can control the production process or respond quickly to the processing system within the specified time.
- the control of a robot arm in industrial control is a real-time business: the system needs to take timely measures upon detecting a misoperation of the robot arm, otherwise serious consequences may result.
- the business assigned to the second operating system is usually a non-real-time business.
- Non-real-time business refers to business that is not sensitive to scheduling time and has a certain tolerance for scheduling delays, for example, reading sensor data from a temperature sensor in a server.
- a real-time operating system is an operating system that can receive and process external events or data at a sufficiently fast speed, whose processing results can control the production process or respond quickly to the processing system within a specified time, and which schedules all available resources to complete real-time businesses and controls all real-time businesses to run in a coordinated manner; it has the characteristics of timely response and high reliability.
- corresponding processing resources can be allocated to each to-be-allocated service according to the service allocation result, and a resource allocation result corresponding to a group of to-be-allocated services can be obtained.
- the services allocated to the first operating system can be allocated processing resources of the first operating system
- the services allocated to the second operating system can be allocated processing resources of the second operating system.
- the unallocated processing resources can be allocated to some services.
- the processing resources of the processor could be dynamically allocated in units of time slices. However, considering that this would cause frequent switching of the operating system to which a processing resource belongs, and that the service processing time is not necessarily an integer multiple of a time slice, which would lengthen the response time of some services, the processing resources are instead allocated to the first operating system and the second operating system in units of processor cores. That is, the processor cores of the processor are allocated to the corresponding operating systems as whole cores: the number of processor cores allocated to each operating system is an integer, and the processor cores allocated to different operating systems are different.
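- A minimal C sketch of core-granularity allocation, assuming a hypothetical 8-core processor and a simple bitmask representation (neither of which is required by the method):

    #include <stdint.h>
    #include <stdio.h>

    #define CORE_COUNT 8u

    /* Each operating system owns a disjoint set of whole cores, expressed as
     * a bitmask; bit i set means core i belongs to that system.             */
    struct core_partition {
        uint32_t rtos_cores;    /* cores owned by the first (real-time) OS */
        uint32_t linux_cores;   /* cores owned by the second OS            */
    };

    /* Give the first 'rtos_count' cores to the RTOS and the rest to Linux.
     * Cores are always moved as whole units, never split into time slices. */
    static struct core_partition partition_cores(unsigned int rtos_count)
    {
        struct core_partition p;
        p.rtos_cores  = (1u << rtos_count) - 1u;
        p.linux_cores = ((1u << CORE_COUNT) - 1u) & ~p.rtos_cores;
        return p;
    }

    int main(void)
    {
        struct core_partition p = partition_cores(2); /* e.g. 2 RTOS cores */
        printf("RTOS cores:  0x%02x\n", p.rtos_cores);
        printf("Linux cores: 0x%02x\n", p.linux_cores);
        return 0;
    }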
- the processing resources of the processor can be allocated to the first operating system and the second operating system.
- the unallocated processing resources of the processor can be allocated to the operating systems corresponding thereto, where the operating system corresponding to an unallocated processing resource can be determined based on the correspondence between the unallocated processing resource and a to-be-allocated service and the correspondence between that to-be-allocated service and an operating system.
- the allocation of the processing resources of the processor to the first operating system and the second operating system can be performed by a resource adaptive scheduling module (e.g., a core adaptive scheduling module), which can be a software module running on the first operating system or the second operating system.
- the resource adaptive scheduling module can be implemented by software in the Linux system, which can complete the actual scheduling action of the processing resources of the processor (e.g., the processor hard core resources) according to the output of the business management module and the output of the resource dynamic allocation module.
- M cores out of (M+N) cores are scheduled to the real-time operating system, and N cores are scheduled to the non-real-time operating system.
- heterogeneous operating systems can be run on different hard cores of the same processor, so that the entire processor system has the ability to process real-time and non-real-time services in parallel.
- the processor hard core resources are, for example, the processor cores.
- the processor resource utilization rate can be significantly improved.
- heterogeneous means that the types of operating systems running on the same multi-core processor of the embedded system are different, and multi-system means that there are multiple operating systems running on the same multi-core processor of the embedded system, and these operating systems are running simultaneously in the time dimension.
- the above process further includes: generating a rule structure by reading a rule configuration file, wherein the rule structure is used to record the resource dynamic allocation rule.
- the resource dynamic allocation rules can be configured based on the rule configuration file.
- a rule structure for recording the resource dynamic allocation rules can be generated.
- the rule configuration file can be a load balancing policy file (load_balance.config).
- the load balancing policy file can be used to configure the classification method of various running services (or processes), the evaluation principle of the real-time level, etc. Different parameters can be used to configure the resource dynamic allocation rules in the load balancing policy file.
- An example of a load balancing policy configuration file is as follows:
- classification_kinds 2 //A value of 1 means that the processes are classified according to attributes such as important and non-important; otherwise, the processes are classified according to the preset classification method (such as real-time and non-real-time);
- real_time_grade_evaluation 2 //A value of 1 means that the average CPU usage in the past statistic_minutes minutes is used as the process real-time grade evaluation principle; otherwise, the preset priority is used as the process real-time grade evaluation principle;
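- The following hedged C sketch shows one way such a policy file could be read into a rule structure; the field names classification_kinds, real_time_grade_evaluation and statistic_minutes follow the example above, while the parsing code itself and the function name load_rule_config are assumptions:

    #include <stdio.h>
    #include <string.h>

    /* Rule structure recording the dynamic resource allocation rule. */
    struct rule_config {
        int classification_kinds;       /* 1: important/non-important, else preset  */
        int real_time_grade_evaluation; /* 1: average CPU usage, else preset priority */
        int statistic_minutes;          /* statistics window for CPU usage           */
    };

    /* Read "key value" pairs from the policy file into the rule structure.
     * Unknown keys are ignored so the format can be extended later.        */
    static int load_rule_config(const char *path, struct rule_config *cfg)
    {
        char line[256], key[64];
        int value;
        FILE *fp = fopen(path, "r");
        if (fp == NULL)
            return -1;
        while (fgets(line, sizeof(line), fp) != NULL) {
            /* Strip trailing "//" comments before parsing "key value". */
            char *comment = strstr(line, "//");
            if (comment != NULL)
                *comment = '\0';
            if (sscanf(line, "%63s %d", key, &value) != 2)
                continue;
            if (strcmp(key, "classification_kinds") == 0)
                cfg->classification_kinds = value;
            else if (strcmp(key, "real_time_grade_evaluation") == 0)
                cfg->real_time_grade_evaluation = value;
            else if (strcmp(key, "statistic_minutes") == 0)
                cfg->statistic_minutes = value;
        }
        fclose(fp);
        return 0;
    }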
- the dynamic resource allocation rules can be stored in a load balancing policy module.
- the load balancing policy module can be a software module running under the first operating system or the second operating system (for example, a software module running under the Linux system), which can provide policy guidance for the service management module, including classification methods for various services (or processes) running in the system, evaluation principles for real-time levels, etc.
- the service management module can divide and manage services in the system according to their real-time levels, and optionally guide the resource adaptive scheduling module to reallocate processor resources. For example, it can perform the actual classification of services based on the output of the load balancing policy module to generate a real-time service list and a non-real-time service list.
- the rules based on which the business management module performs business management can be dynamically configured, and optional rules can be set on the basis of existing rules. Multiple rules with the same function can be set in the business management module without contradicting one another; that is, among rules with the same function, the rule currently in use can be determined based on rule selection conditions such as the configuration time of the rule and the priority of the rule, so as to avoid contradictions between the rules.
- the above configuration file load_balance.config describes a possible situation.
- the classification_kinds variable indicates the selected classification standard (for example, by the importance or real-time nature of the business) and the classification categories (for example, important business and general business, or real-time business and non-real-time business), and the real_time_grade_evaluation variable indicates the real-time evaluation standard (which can be based on the average CPU occupancy rate in the past statistic_minutes minutes or on the preset business priority).
- the real-time grade type is defined by the user and can be defined as high, normal, and low, or it can be subdivided into more types.
- the output of the load balancing strategy module is the configured classification method, real-time level evaluation principle, etc. When implemented in software, it can be an optional configuration file (such as the load_balance.config file) or a structure variable. These files or structure variables can ultimately be accessed by the business management module to obtain the optional load balancing strategy.
- a rule structure is generated to record the resource dynamic allocation rule, thereby improving the convenience of information configuration.
- the above process also includes: obtaining a rule update configuration file through an external interface of the second operating system, wherein the rule update configuration file is used to update the configured dynamic resource allocation rules; and updating the rule structure using the rule update configuration file to update the dynamic resource allocation rules recorded in the rule structure.
- the rule structure can be in a fixed format, that is, it is not allowed to be modified during the operation of the embedded system, or it can be in a flexibly configurable format, that is, it can be configured and changed through a configuration file in a specific format.
- a rule update configuration file can be obtained, and the rule update configuration file is used to update the configured resource dynamic allocation rules; using the rule update configuration file, the rule structure can be updated, thereby updating the resource dynamic allocation rules recorded in the rule structure.
- the configuration file in a specific format may be read through an external interface of the first operating system or the second operating system.
- dynamic resource scheduling of the embedded system may be mainly the responsibility of the second operating system.
- the rule update configuration file may be obtained through the external interface of the second operating system.
- the load balancing policy module may be in a fixed format, or may be configured through an external interface of the Linux system.
- a configuration file (load_balance.config) in a specific format as described above may be defined, and configuration changes may be made through file reading and writing.
- the external interface is the external interface of the multi-core processor, which can be a network interface, SPI (Serial Peripheral Interface) controller interface, UART (Universal Asynchronous Receiver/Transmitter) serial port, etc., as long as it can obtain data from the outside world.
- the configuration file can be loaded from the Web (World Wide Web) interface through the network interface; the configuration file can be read from the SPI Flash (flash memory) of the board through the SPI controller; the configuration file can be obtained from the serial port data receiving and sending software tool on another PC (Personal Computer) through the UART serial port.
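- As an illustration (the function update_rules and the staging logic are assumptions; the transport that delivered the file is deliberately left out), applying a rule update configuration file could be sketched as follows:

    #include <string.h>

    /* Same rule structure as in the parsing sketch above. */
    struct rule_config {
        int classification_kinds;
        int real_time_grade_evaluation;
        int statistic_minutes;
    };

    int load_rule_config(const char *path, struct rule_config *cfg); /* see earlier sketch */

    /* Apply a rule update configuration file that has already been written to
     * 'received_path'; how the bytes arrived (network, SPI flash or UART) is
     * outside this sketch.  The active rules are replaced only when the new
     * file parses successfully, so a failed transfer leaves the previously
     * configured rules in effect.                                            */
    static int update_rules(const char *received_path, struct rule_config *active)
    {
        struct rule_config staged;

        memcpy(&staged, active, sizeof(staged));   /* start from current rules  */
        if (load_rule_config(received_path, &staged) != 0)
            return -1;                             /* keep old rules on failure */
        memcpy(active, &staged, sizeof(staged));   /* commit the update         */
        return 0;
    }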
- a group of to-be-allocated services can be allocated to corresponding operating systems in an embedded system according to dynamic resource allocation rules in the following manner, but is not limited to: to-be-allocated services whose service response speed requirements are greater than or equal to a set response speed threshold in a group of to-be-allocated services are allocated to a first operating system, and to-be-allocated services whose service response speed requirements are less than a set response speed threshold in a group of to-be-allocated services are allocated to a second operating system.
- the pending services can be allocated to the corresponding operating system based on the service response speed requirements of the pending services.
- the service response speed can be used to evaluate the real-time level of a service: the higher the service response speed requirement, the more sensitive the service is to the scheduling time and response speed of the operating system, and the higher its real-time level.
- the service needs the operating system to process it at a sufficiently fast speed, and the processing results can control the production process or respond quickly to the processing system within the specified time.
- the service with low service response speed requirements has a certain tolerance for scheduling delays.
- for the services to be allocated whose service response speed requirement is greater than or equal to the set response speed threshold, they are sensitive to the scheduling time and response speed of the operating system, and such services to be allocated can be allocated to the first operating system (for example, real-time services are allocated to the real-time operating system).
- for the services to be allocated whose service response speed requirement is less than the set response speed threshold, they are not sensitive to the response speed and scheduling time, and therefore such services to be allocated can be allocated to the second operating system (for example, non-real-time services are allocated to the non-real-time operating system).
- the service response speed requirement can be indicated by an indication parameter of the service response speed
- the set response speed threshold can be a millisecond-level response speed threshold or a second-level response speed threshold, for example, 100ms, 200ms, 1s, etc., and there is no limitation on the set response speed threshold in this embodiment.
- a first service list corresponding to the first operating system and a second service list corresponding to the second operating system can be output, the first service list being used to record services allocated to the first operating system, and the second service list being used to record services allocated to the second operating system, that is, the service allocation result includes the first service list and the second service list, and the output first service list and the second service list can be used to perform a dynamic scheduling process of the processor's processing resources.
- the real-time level of system services is classified to obtain a list of real-time services and non-real-time services. Assume that there are 20 services in total, of which real-time services are services 1 and 2, and non-real-time services are services 3 to 20.
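- A minimal C sketch of this classification step, assuming an invented service descriptor and a 100 ms response speed threshold:

    #define SPEED_THRESHOLD_MS 100u   /* assumed set response speed threshold */

    struct service {
        int id;
        unsigned int required_response_ms;  /* required response time of the service */
    };

    /* Split the pending services into the first service list (first operating
     * system) and the second service list (second operating system) according
     * to the response speed rule: a smaller required response time means a
     * higher response speed requirement.                                      */
    static void classify_services(const struct service *svc, int count,
                                  int *rt_list, int *rt_count,
                                  int *nrt_list, int *nrt_count)
    {
        *rt_count = 0;
        *nrt_count = 0;
        for (int i = 0; i < count; i++) {
            if (svc[i].required_response_ms <= SPEED_THRESHOLD_MS)
                rt_list[(*rt_count)++] = svc[i].id;    /* needs a fast response */
            else
                nrt_list[(*nrt_count)++] = svc[i].id;  /* tolerant of delay     */
        }
    }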
- the business management module can classify the current business to be executed.
- the business management module classifies these businesses according to the output of the load balancing module.
- different businesses will be assigned to different operating systems (RTOS system and Linux system) for execution.
- the business management module will continue to divide and manage the existing businesses in real time according to the load balancing strategy.
- the business management module can be a resident process in the Linux system. It is always running and manages and divides the currently running processes.
- a group of to-be-allocated services can be allocated to corresponding operating systems in an embedded system according to dynamic resource allocation rules in the following manner, but is not limited to: to-be-allocated services whose service resource occupancy rate in a group of to-be-allocated services is less than a first occupancy rate threshold are allocated to a first operating system, and to-be-allocated services whose service resource occupancy rate in a group of to-be-allocated services is greater than or equal to the first occupancy rate threshold are allocated to a second operating system.
- the services to be allocated can be allocated to the corresponding operating system based on the service resource occupancy rate of the services to be allocated.
- the service resource occupancy rate can be the average proportion of the services to the processing resources per unit time (for example, the CPU occupancy rate per minute).
- the service resource occupancy rate affects the response speed of the service and the response speed of subsequent services. Therefore, the real-time level of the service can be evaluated based on the service resource occupancy rate. The higher the service resource occupancy rate, the greater the impact on the scheduling time and response speed of the operating system, and the lower the real-time level. For services with low service resource occupancy rate, the impact on the scheduling time and response speed of the operating system is not large, and the real-time level is higher.
- for the services to be allocated whose service resource occupancy rate is less than the first occupancy rate threshold, the impact on the scheduling time and response speed of the operating system is not significant, and such services to be allocated can be allocated to the first operating system.
- for the services to be allocated whose service resource occupancy rate is greater than or equal to the first occupancy rate threshold, the impact on the scheduling time and response speed of the operating system is greater, and therefore such services to be allocated can be allocated to the second operating system.
- the first occupancy rate threshold can be configured as needed, and it can be 10%, 15%, 20% or other thresholds, and at the same time, the first occupancy rate threshold can be dynamically adjusted.
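- For illustration, an occupancy-rate based allocation decision could be sketched as follows (the 15% default and the names are assumptions; the threshold can be adjusted at run time as described above):

    enum target_os { TARGET_FIRST_OS, TARGET_SECOND_OS };

    static unsigned int first_occupancy_threshold = 15u;  /* percent, adjustable */

    /* Low occupancy has little impact on scheduling latency, so the service
     * goes to the first operating system; high occupancy would delay other
     * real-time services, so the service goes to the second operating system. */
    static enum target_os allocate_by_occupancy(unsigned int avg_cpu_percent)
    {
        return (avg_cpu_percent < first_occupancy_threshold)
                   ? TARGET_FIRST_OS
                   : TARGET_SECOND_OS;
    }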
- a group of services to be allocated may be allocated to a corresponding operating system in the embedded system according to a resource dynamic allocation rule in at least one of the following ways, but not limited to:
- the services to be allocated which have a service coupling degree with the allocated services of the second operating system greater than or equal to a second coupling degree threshold, in a group of services to be allocated, are allocated to the second operating system.
- the services to be allocated can be allocated to the corresponding operating systems based on the service coupling degree of the services to be allocated.
- the service coupling degree can be used to indicate the degree of association between the services to be allocated and the allocated services in each operating system. If the service coupling degree of a service to be allocated is high with the allocated services of a certain operating system, it is not appropriate to allocate it to another operating system. Therefore, the services to be allocated can be allocated to the corresponding operating systems based on the service coupling degree between the services to be allocated and the allocated services in each operating system.
- the business coupling can be evaluated by the association between the input and output of the business.
- the business coupling can be represented by different coupling levels. If there is no relationship between the input and output of the business, the coupling level is low (or other coupling levels indicating no association between the businesses). If the execution of a business depends on the output of another application (the business cannot start without the output as input), the coupling level between the businesses is high. If the execution of a business uses the output of another application, but the output does not hinder the normal execution of the business (the output can be obtained when the business executes the corresponding operation, and the corresponding operation is not a core operation), the coupling level between the businesses is medium.
- the business coupling can also be represented by a numerical value. The business coupling can be evaluated by one or more coupling conditions (for example, the association between input and output), and the numerical value corresponding to the satisfied coupling condition is determined as the numerical value of the business coupling.
- if there are to-be-allocated businesses in a group of to-be-allocated businesses whose business coupling degree with the allocated businesses of the first operating system is greater than or equal to a first coupling degree threshold, then such to-be-allocated businesses can be allocated to the first operating system; and if there are to-be-allocated businesses in a group of to-be-allocated businesses whose business coupling degree with the allocated businesses of the second operating system is greater than or equal to the first coupling degree threshold, then such to-be-allocated businesses can be allocated to the second operating system.
- the service management module is also responsible for service decoupling evaluation and management, that is, finding out from all real-time services the services that can be separated out and handed over to the real-time operating system for running, so that the hardware resource dynamic allocation module can reallocate processor resources. For services that cannot be separated out and handed over to the real-time operating system for running, if they have a high degree of service coupling with non-real-time services, they can be allocated to the non-real-time operating system.
- the reallocation strategy is open.
- One possible strategy is: when the system runs for the first time, processor cores are allocated according to the ratio of the number of services allocated to the real-time operating system and the non-real-time operating system by the business management module. During subsequent operations, resource allocation is adjusted according to the core resource occupancy rate of each system in the dual system. From this perspective, the reallocation process and the core preemption and release process are mutually coordinated processes.
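- The first part of such a strategy (an initial whole-core split proportional to the number of services assigned to each system) could be sketched as follows; the function name and the minimum-one-core safeguards are assumptions:

    /* Initial core split proportional to the number of services assigned to the
     * real-time and non-real-time systems; each system keeps at least one core
     * and only whole cores are handed out.                                     */
    static void initial_core_split(unsigned int total_cores,
                                   unsigned int rt_services,
                                   unsigned int nrt_services,
                                   unsigned int *rt_cores,
                                   unsigned int *nrt_cores)
    {
        unsigned int total_services = rt_services + nrt_services;

        if (total_services == 0u || total_cores < 2u) {
            *rt_cores = total_cores ? 1u : 0u;
            *nrt_cores = total_cores > 1u ? total_cores - 1u : 0u;
            return;
        }
        *rt_cores = (total_cores * rt_services) / total_services;
        if (*rt_cores == 0u)
            *rt_cores = 1u;                       /* keep the RTOS runnable      */
        if (*rt_cores >= total_cores)
            *rt_cores = total_cores - 1u;         /* keep the second OS runnable */
        *nrt_cores = total_cores - *rt_cores;
    }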
- a group of services to be allocated may be allocated to corresponding operating systems in the embedded system according to a resource dynamic allocation rule in the following manner, but is not limited to:
- the target operating system can perform hard-core level security protection isolation on the to-be-assigned business containing sensitive information.
- the target operating system is, of the first operating system and the second operating system, the operating system with the lower frequency of interaction with the user object, or the operating system with the faster response speed, for example, the first operating system.
- the business processing module is responsible for the optional hard-core level security isolation of system services, that is, important sensitive services (which are not intended to be exposed to users) are classified as real-time services, and these services are eventually offloaded from the non-real-time operating system to the real-time operating system, thereby achieving security protection.
- the different services divided by the business processing module can be organized in the form of structures during software implementation.
- sensitive services refer to: security-related services, such as user passwords, identity information, and other services involving user personal privacy.
- the hard-core level means that the business is isolated at the processor core level, that is, sensitive business is allocated to the real-time operating system (the core occupied by the real-time operating system is different from that of the non-real-time operating system, so it belongs to the core level isolation).
- the frequency and degree of interaction between the real-time operating system and the user are relatively low, so it is difficult for users to "detect" the sensitive data generated by the services running on it.
- user identity authentication management, security encryption and other businesses belong to the above-mentioned important sensitive businesses.
- the above-mentioned businesses are forcibly classified as real-time businesses. When hardware resources are dynamically allocated later, the above-mentioned businesses can be run in the real-time operating system, which achieves a security isolation effect.
- the resource allocation result corresponding to a group of services to be allocated may be determined in the following manner, but is not limited to:
- a mapping table of a group of services to be allocated and the processing resources of the processor is generated according to the allocation results of the group of services to be allocated and combined with the resource utilization of the processing resources of the first operating system and the resource utilization of the processing resources of the second operating system.
- the allocation result of a group of services to be allocated is used to indicate the corresponding relationship between the services to be allocated and the operating system.
- the to-be-allocated services allocated to an operating system are usually executed using the processing resources of that operating system. If the amount of services allocated to a certain operating system is too large and there are currently unallocated processing resources, the unallocated processing resources can also be allocated to the services to be allocated to that operating system. Therefore, according to the allocation result of the group of services to be allocated, combined with the resource utilization of the processing resources of the first operating system and the resource utilization of the processing resources of the second operating system, a mapping table of the group of services to be allocated and the processing resources of the processor can be generated to indicate the processing resources allocated to each service to be allocated.
- each service to be allocated has a mapping relationship with only one processor core, while the same processor core can have a mapping relationship with multiple services to be allocated, and different services can have a mapping relationship with the same processor core by occupying different time slices of the same processor core.
- alternatively, the same processor core may be occupied by only one service, that is, it is only used to execute one service.
- Different services allocated to an operating system can determine the time slices that occupy the same processor resource according to the allocation time, service response speed requirements or other methods.
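- A hedged C sketch of such a mapping table, in which each service maps to exactly one core while a core may appear in several rows (all names and sizes are assumptions):

    #define MAX_MAPPED_SERVICES 64

    /* One row of the service-to-resource mapping table: each service maps to
     * exactly one processor core, while one core may appear in several rows
     * (the services then share that core in different time slices).          */
    struct service_core_mapping {
        int service_id;
        int core_id;
    };

    struct mapping_table {
        struct service_core_mapping rows[MAX_MAPPED_SERVICES];
        int row_count;
    };

    /* Append one mapping; returns 0 on success, -1 if the table is full. */
    static int map_service_to_core(struct mapping_table *table,
                                   int service_id, int core_id)
    {
        if (table->row_count >= MAX_MAPPED_SERVICES)
            return -1;
        table->rows[table->row_count].service_id = service_id;
        table->rows[table->row_count].core_id = core_id;
        table->row_count++;
        return 0;
    }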
- the resource dynamic allocation module dynamically adjusts the processor resources according to the output results of the business management module, forms a mapping table between different businesses and actual hardware resources, and optimizes the deployment structure of different hardware resources under heterogeneous operating systems to achieve the purpose of improving the utilization rate of hardware resources in the whole system.
- the above resource dynamic allocation process is managed and configured by the software in the second operating system.
- the processor cores that have been scheduled to the first operating system include: core 1
- the processor cores that have been scheduled to the second operating system include: core 2, core 3 and core 4; there are 6 services to be allocated, the real-time services are service 1 and service 2, and the non-real-time services are service 3 to service 6.
- the corresponding processor cores are allocated to the 6 services, core 1 is allocated to service 1, core 5 is allocated to service 2, core 2 is allocated to service 3, core 3 is allocated to service 4, core 4 is allocated to service 5, and core 6 is allocated to service 6.
- the processing resources of the processor may be allocated to the first operating system and the second operating system according to the operating system corresponding to each to-be-allocated service and the resource allocation result in the following manner, but not limited to: when it is determined according to the resource allocation result that there are unallocated processing resources among the processing resources of the processor that have a corresponding to-be-allocated service, the unallocated processing resources are allocated to the operating system to which the to-be-allocated service corresponding to the unallocated processing resources is assigned.
- the unallocated processing resources can be allocated to the operating system to which the unallocated business corresponding to the unallocated processing resources is allocated.
- the resource adaptive scheduling module can complete the actual scheduling action of the processing resources of the processor according to the result of the dynamic allocation of hardware resources.
- the resource adaptive scheduling module schedules a part of the processor cores to execute the services assigned to the first operating system, such as the M cores of core group 1, and schedules the remaining processor cores to run the services assigned to the second operating system, such as the N cores of core group 2.
- the unallocated core 4 can be allocated to the first operating system, and the unallocated cores 5 and 6 can be allocated to the Linux system.
- the entire scheduling process can be dominated by the second operating system.
- unallocated processor resources are scheduled to the corresponding operating system based on the resource allocation result, so that the utilization rate of the processor resources can be improved.
- after the first operating system finishes running, it can be controlled to enter a dormant state.
- the end of the operation of the first operating system may be the end of an operation cycle, the completion of a wake-up request process, or the completion of a current operation service process.
- the processor core allocated to the first operating system can be occupied by the second operating system, thereby improving resource utilization.
- the second operating system is notified that it is allowed to occupy the processor core used by the first operating system, wherein the second operating system is used to add the target processor core used by the first operating system to the scheduling resource pool of the second operating system during the hibernation of the first operating system, and the scheduling resource pool includes the other processor cores in the processor except the target processor core.
- the method of notifying the second operating system that it is allowed to occupy the processor core used by the first operating system may include, but is not limited to, sending an interrupt request to the second operating system. After the first operating system goes into sleep mode, an interrupt request is sent to the second operating system to notify it that it is allowed to occupy the processor core used by the first operating system. In response to the interrupt request, the second operating system adds the target processor core used by the first operating system to the scheduling resource pool for scheduling and use.
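- A heavily hedged sketch of this notification step; send_ipi_to_linux(), enter_sleep() and the interrupt identifier are hypothetical stand-ins for platform-specific primitives, and the exact point at which the interrupt is raised relative to entering sleep is platform-dependent:

    /* Hypothetical platform primitives -- stand-ins, not a real API. */
    void send_ipi_to_linux(unsigned int irq_number); /* raise an inter-core interrupt */
    void enter_sleep(void);                          /* put the RTOS core to sleep    */

    #define IPI_ALLOW_CORE_TAKEOVER 10u  /* interrupt number chosen arbitrarily for this sketch */

    /* After the first operating system finishes its work it notifies the second
     * operating system that the core it was using may be claimed, then sleeps.
     * The second OS, on receiving the interrupt, adds that core to its
     * scheduling resource pool.                                                */
    static void rtos_finish_and_sleep(void)
    {
        send_ipi_to_linux(IPI_ALLOW_CORE_TAKEOVER);
        enter_sleep();
    }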
- the operation services executed on the second operating system can be monitored, but this is not limited thereto. If an abnormal operation service is detected, the first operating system can take over the running of the abnormal operation service, thereby avoiding the impact of the abnormal operation service on the entire processing flow and improving the success rate and efficiency of service running. For example: the operation services executed on the second operating system are monitored; when an abnormal operation service is detected among the operation services executed on the second operating system, the abnormal operation service is taken over by the first operating system.
- the operation services executed on the second operating system can be monitored by, but not limited to, the processor or the first operating system.
- a monitored abnormal operation service (such as an operation service whose service thread has hung) can be taken over by the first operating system.
- the monitored abnormal operation business can be assigned to the operating system with a higher matching degree with the abnormal operation business among the multiple operating systems to take over.
- the manner of monitoring the operation service executed on the second operating system may include but is not limited to monitoring of heartbeat signals, monitoring of service logs, etc. For example, if monitoring detects that an abnormal log is generated, it is determined that the operation service is abnormal.
- the operating services executed on the second operating system may be monitored in the following manner but is not limited to: receiving a heartbeat signal of each operating service executed on the second operating system; and determining an operating service whose frequency of the heartbeat signal does not meet the corresponding target frequency as an abnormal operating service.
- each operating service executed on the second operating system will generate a heartbeat signal.
- the heartbeat signals of different operating services have different frequencies.
- the heartbeat signal of each operating service executed on the second operating system is sent to the monitoring party of the operating services, such as the processor or the first operating system.
- the frequency of the received heartbeat signal is compared with the target frequency corresponding to the operating service.
- an operating service whose heartbeat signal frequency does not meet the corresponding target frequency is determined to be an abnormal operating service and is taken over.
- whether the frequency of the heartbeat signal meets the corresponding target frequency can be determined by, but is not limited to, comparing whether the two are exactly the same: if they are exactly the same, they are determined to be consistent, and otherwise they are determined to be inconsistent. Alternatively, given a certain error range, it is checked whether the frequency of the heartbeat signal falls within the error range of the target frequency: if it falls within the error range, the two are determined to be consistent; otherwise, they are determined to be inconsistent.
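- A minimal C sketch of this frequency check, assuming the frequencies are expressed in hertz and the error range is given as an absolute tolerance (both assumptions):

    #include <stdbool.h>

    /* Decide whether an operating service is abnormal by comparing the measured
     * heartbeat frequency with its target frequency.  'tolerance_hz' gives the
     * permitted error range; a tolerance of 0 demands an exact match.           */
    static bool heartbeat_is_abnormal(double measured_hz,
                                      double target_hz,
                                      double tolerance_hz)
    {
        double deviation = measured_hz - target_hz;

        if (deviation < 0.0)
            deviation = -deviation;
        return deviation > tolerance_hz;
    }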
- the abnormal operation service on the second operating system may be restarted in the following manner but is not limited to: sending a restart instruction to the second operating system, wherein the restart instruction is used to instruct to restart the abnormal operation service.
- the restart instruction is used to instruct to restart the abnormal operation service.
- the second operating system may initialize the abnormal operation service so that it is restarted.
- the first operating system can return the taken-over abnormal operation service to the second operating system.
- the first operating system can save the current running scene of the abnormal operation service to the shared memory and send an interrupt request to the second operating system.
- the second operating system reads the current running scene of the abnormal operation service from the shared memory and loads it into the abnormal operation service it is running, so that the abnormal operation service can continue to run, thereby improving the service operation efficiency.
- FIG. 10 is a schematic diagram of a system abnormality monitoring process according to an embodiment of the present application.
- the first operating system receives the heartbeat signal of the operating service executed on the second operating system, detects the abnormal operating service whose frequency of the heartbeat signal does not meet the target frequency, and the first operating system takes over the abnormal operating service on the second operating system to continue to execute.
- the first operating system sends a restart instruction to the second operating system, so that the second operating system can restart the abnormal operating service.
- the dual system may be started in the following manner but is not limited to: booting the first operating system; booting the second operating system.
- the first operating system is booted first, and then the second operating system is booted.
- the first operating system can be, but is not limited to, an operating system with a faster and simpler startup process.
- the first operating system can execute some urgent or helpful operating services for the startup of the second operating system, thereby improving the startup efficiency of the operating system, or improving the processing efficiency of operating services.
- the first operating system and the second operating system may be, but are not limited to, started successively; the first operating system may be, but is not limited to, started faster than the second operating system, and the conditions required for starting the first operating system may be, but are not limited to, simpler than those of the second operating system. After the first operating system is started, services that can satisfy the conditions required for starting the second operating system, or that can speed up the startup of the second operating system, can be run, thereby enabling the multiple systems to start and to run services more efficiently and quickly.
- the first operating system can run services that can control the chip environment parameters to meet the startup requirements of the second operating system (such as: fan operation, parameter control, etc.), so that the chip environment parameters can quickly reach the environment for the second operating system to start and run, thereby improving the startup efficiency and operation efficiency of the operating system.
- the first operating system may be, but not limited to, booted by a boot program of the first operating system
- the second operating system may be, but not limited to, booted by a boot program of the second operating system.
- both may be booted successively by the same boot program.
- the first operating system can be booted in the following manner but is not limited to: the chip is powered on, and the first processor core allocated to the first operating system in the processor is awakened by the processor; and the first operating system boot program is executed by the first processor core to boot the first operating system.
- the first processor core of the first operating system can be determined based on, but not limited to, the processor core of the processor where the first operating system is located.
- the processor where the first operating system is located can include, but is not limited to, multiple processor cores (processor core 0 to processor core N), and one or more processor cores (such as processor core 0) among the multiple processor cores can be allocated to the first operating system as the first processor core of the first operating system.
- the boot program of the first operating system may be, but is not limited to, stored in a specific storage space on the chip specifically for starting the first operating system.
- the first processor core of the first operating system may be, but is not limited to, configured to execute a boot program of the first operating system, and may be, but is not limited to, start the first operating system by executing the boot program of the first operating system.
- the first operating system can be booted by executing a boot program of the first operating system through the first processor core in the following manner but is not limited to: executing a secondary program loader through the first processor core, wherein the boot program of the first operating system includes a secondary program loader; and loading the first operating system through the secondary program loader.
- the boot program of the first operating system may include but is not limited to a secondary program loader, and the first processor core may load the first operating system by executing but is not limited to a secondary program loader (SPL).
- the second operating system can be booted in the following manner but is not limited to: waking up the second processor core assigned to the second operating system through the secondary program loader; and booting the second operating system by executing the boot program of the second operating system through the second processor core.
- the second processor core of the second operating system can be determined based on the processor core of the processor where the second operating system is located, but is not limited to.
- the processor where the second operating system is located can include, but is not limited to, multiple processor cores (processor core 0 to processor core N), and one or more processor cores (processor core 1 to processor core N) among the multiple processor cores can be allocated to the second operating system as the second processor core of the second operating system.
- the second processor core of the second operating system may be, but is not limited to being, awakened by the secondary program loader; for example, after the first operating system is loaded using the secondary program loader, the secondary program loader wakes up the second processor core of the second operating system.
- the second operating system may be, but is not limited to being, started by executing a boot program of the second operating system through the second processor core.
- the second operating system can be booted by executing the boot program of the second operating system through the second processor core in the following manner but is not limited to: executing a universal boot loader through the second processor core, wherein the boot program of the second operating system includes a universal boot loader; and loading the second operating system through the universal boot loader.
- the second processor core may, but is not limited to, load the second operating system by executing a universal boot loader
- the universal boot loader may, but is not limited to, include U-Boot (Universal Boot Loader).
- the secondary program loader may be executed through the first processor core in the following manner, but is not limited to: a security boot check is performed on the code of the secondary program loader through the boot memory on the chip; when the check result is normal, the secondary program loader is executed through the first processor core.
- the boot program of the operating system may include, but is not limited to, a secondary program loader; the boot program of the operating system may be used, but is not limited to being used, as the above-mentioned boot memory, and the code of the secondary program loader included in the boot program of the operating system is verified through the boot memory.
- the secondary program loader of the first operating system (the secondary program loader may be but is not limited to SPL) may be obtained based on the boot program of the first operating system (the boot program may be but is not limited to BootROM), and the code of the secondary program loader may be verified based on the boot memory of the first operating system (the boot memory may be but is not limited to BootROM).
- the process of the boot memory performing a security boot check on the code of the secondary program loader may be but is not limited to: the boot memory reads the code and verification code of the secondary program loader, operates on the code of the secondary program loader through an agreed operation method (such as a hash operation) to obtain an operation value, and then compares the operation value with the read verification code. If the two are consistent, the check result is normal; if the two are inconsistent, the check result is abnormal.
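- For illustration, the check could be sketched as follows; FNV-1a stands in for the agreed operation method (any hash agreed between the two parties would do), and all names are assumptions:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    /* FNV-1a is used here only as a stand-in for the agreed operation method
     * (e.g. a hash) shared by the boot memory and the loader.                */
    static uint32_t fnv1a_hash(const uint8_t *data, size_t len)
    {
        uint32_t hash = 2166136261u;
        for (size_t i = 0; i < len; i++) {
            hash ^= data[i];
            hash *= 16777619u;
        }
        return hash;
    }

    /* Security boot check: recompute the digest of the loader code and compare
     * it with the stored verification code; boot continues only on a match.   */
    static bool secure_boot_check(const uint8_t *loader_code, size_t len,
                                  uint32_t stored_verification_code)
    {
        return fnv1a_hash(loader_code, len) == stored_verification_code;
    }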
- the secondary program loader can also perform a security boot check on the code of the universal boot loader.
- the secondary program loader reads the code and verification code of the universal boot loader, and operates the code of the universal boot loader through an agreed operation method (such as hash operation, which can be the same or different from the operation method of the boot memory to check the secondary program loader) to obtain an operation value, and then compares the operation value with the read verification code. If the two are consistent, the inspection result is normal, and if the two are inconsistent, the inspection result is abnormal. If the inspection result is normal, the second operating system is loaded through the universal boot loader.
- an example of starting a first operating system and a second operating system is provided.
- the first processor core as CPU-0 and the second processor cores as CPU-1 to CPU-N as an example
- the first operating system and the second operating system can be started in the following manner but are not limited to:
- the chip is powered on; the first processor core CPU-0 of the first operating system in the processor is awakened; the boot program of the first operating system, which may be but is not limited to a secondary program loader, is executed using the first processor core CPU-0; a security boot check is performed on the code of the secondary program loader through the boot memory on the chip (which may be but is not limited to BootROM); if the check result is normal, the first operating system is loaded by executing the secondary program loader (which may be but is not limited to SPL) through the first processor core; the second processor cores CPU-1 to CPU-N of the second operating system are awakened through the secondary program loader; and the second operating system is loaded by executing the universal boot loader (which may be but is not limited to U-Boot) through the second processor cores.
- the method according to the above embodiment can be implemented by means of software plus a necessary general hardware platform, and of course by hardware, but in many cases the former is a better implementation method.
- the technical solution of the present application, or the part that contributes to the prior art can be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), and includes a number of instructions for a terminal device (which can be a mobile phone, computer, server, or network device, etc.) to execute the methods of each embodiment of the present application.
- FIG. 11 is a schematic diagram of an embedded system according to an embodiment of the present application.
- the embedded system may include: a chip and at least two operating systems, wherein the chip includes a processor 1102, a hardware controller 1104, a first bus 1106 and a second bus 1108, wherein the bandwidth of the first bus 1106 is higher than the bandwidth of the second bus 1108, and the first bus 1106 is configured as a multi-master and multi-slave mode, and the second bus 1108 is configured as a one-master and multi-slave mode; at least two operating systems run based on the processor 1102; at least two operating systems communicate through the first bus 1106; at least two operating systems control the hardware controller through the second bus 1108.
- the above-mentioned chip can be a BMC chip;
- the above-mentioned processor can be a multi-core processor, and the above-mentioned hardware controller can be configured to control external devices connected to the corresponding external interface;
- the above-mentioned first bus is configured as a multi-master and multi-slave mode, which can be a bus used for communication between multiple processor cores of the processor, for example, AHB (Advanced High Performance Bus), and the above-mentioned second bus is configured as a one-master and multiple-slave mode, which can be a bus used by the processor to control the hardware controllers, for example, APB (Advanced Peripheral Bus), and the bandwidth of the first bus is higher than the bandwidth of the second bus.
- the embedded system may include at least two operating systems, at least two operating systems run based on a processor, and the processing resources of the processor are dynamically allocated to the at least two operating systems.
- the processing resources of the processor include a processor core.
- the at least two operating systems communicate through a first bus, and at least two operating systems control a hardware controller through a second bus.
- the hardware controller may include one or more controllers corresponding to chip peripherals, including but not limited to at least one of the following: I2C, USB (Universal Serial Bus), UART, ADC (Analog to Digital Converter), JTAG (Joint Test Action Group), RTC (Real-Time Clock), GPIO (General Purpose Input/Output), WDT (Watchdog Timer), etc.
- the chip may include one or more external interfaces, including but not limited to the external interfaces corresponding to any of the above controllers.
- an example of a BMC chip can be shown in Figure 12, and the hardware of the BMC chip can include but is not limited to a SOC sub-module and a BMC out-of-band sub-module, wherein the SOC sub-module mainly includes ARM cores (ARM Core 1, ARM Core 2, ..., ARM Core X), and can also include but is not limited to a DDR4 (Double Data Rate 4) controller (memory controller), a MAC (Media Access Control) controller (network controller), an SD (Secure Digital) card/eMMC (Embedded Multi Media Card) controller (storage controller), a PCIe RC (Root Complex) controller, an SRAM (Static Random-Access Memory) and an SPI controller.
- the cores and controllers are interconnected via the second bus to achieve interaction between the cores and controllers.
- the ARM cores are connected to the first bus (for example, they can be connected via an AXI (Advanced eXtensible Interface) bridge), and the communication between the cores is achieved via the first bus.
- the SOC submodule also implements the interconnection and intercommunication between the first bus and the second bus (for example, via a bus bridge), which provides a physical path for the SOC submodule to access the peripherals on the second bus.
- the DDR4 controller can be connected to other components or devices through the DDR4PHY (Physical Layer) interface, the MAC controller is connected to other components or devices through the RGMII (Reduced Gigabit Media Independent Interface), the SD card/eMMC controller is connected to other components or devices through the SD interface, and the PCIe RC controller is connected to other components or devices through the PCIe PHY interface.
- the BMC out-of-band submodule mainly includes controllers corresponding to chip peripherals such as PWM, GPIO, FanTech (fan speed control), mailbox, etc., through which PECI communication with BMC (such as using GPIO to simulate PECI), fan control and other out-of-band management functions can be realized.
- the BMC out-of-band submodule can, but is not limited to, interact with the SOC submodule through the second bus.
- the BMC chip realizes the interconnection between the ARM core, storage unit and controller hardware resources on the chip through the first bus and the second bus.
- the dynamic balanced scheduling of processor resources mainly involves the scheduling of the ARM core resources of the BMC chip, and inter-core communication refers to the communication between ARM cores.
- the Linux system first sends an inter-core interrupt (interrupt number 9) to core 1 through the first on-chip bus on a core from cores 2 to N. If the RTOS system is in an idle state and allows preemption at this time, core 1 replies to the inter-core interrupt (interrupt number 10) through the first bus and releases the peripheral controller resources (such as PWM/PECI) currently mapped by core 1.
- the Linux system receives the inter-core interrupt 10, initiates the preemption process, adds core 1 to the Linux SMP scheduling, and obtains the control of the PWM/PECI peripherals, which can be controlled through the second bus.
- At least two operating systems include a first operating system and a second operating system, wherein the chip loads a communication value into a first bus, and the first bus sends a communication signal carrying the communication value to a communication register corresponding to the second operating system to achieve communication between the first operating system and the second operating system, wherein the communication value is used to indicate the communication content between the first operating system and the second operating system.
- the chip loads the control value to the second bus, and the second bus sends the control signal carrying the control value to the register corresponding to the hardware controller to realize the operating system's control over the hardware controller, wherein the control value is used to indicate the operating system's control content over the hardware controller.
- the operating system controls the hardware controller by accessing (such as performing read and write operations) the registers of each hardware controller.
- the operating system accesses the registers of the hardware controller by, but not limited to, reading or writing the register addresses of each hardware controller, and the addresses of these registers may be, but not limited to, unique and determined during chip design.
- the operating system can implement a specific function (such as the communication function between the above-mentioned operating systems, or the control function of the operating system over the hardware controller) by writing a specific value (i.e., the above-mentioned communication value or control value) to a specific address (i.e., the address of the above-mentioned communication register or of the register corresponding to the hardware controller).
- for example, a control value of 00 may mean that the air conditioner accelerates by one gear, and a control value of 01 may mean that the air conditioner decelerates by one gear.
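- For illustration only (the register address and the control values below are invented placeholders; real register addresses are fixed at chip design time), such a register write could be sketched as:

    #include <stdint.h>

    /* Hypothetical register address and control values -- placeholders only. */
    #define CTRL_REG_ADDR   0x4000A000u
    #define CTRL_SPEED_UP_1 0x00u
    #define CTRL_SLOW_DOWN_1 0x01u

    /* Writing a control value to the register address; the bus hardware turns
     * this store into a control signal for the corresponding hardware controller. */
    static void write_control(uint32_t reg_addr, uint32_t value)
    {
        volatile uint32_t *reg = (volatile uint32_t *)(uintptr_t)reg_addr;
        *reg = value;
    }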
- interactions such as communication between the operating systems, and control of the hardware controllers by the operating systems, can be carried out through the buses, but are not limited thereto.
- the read and write operations of the operating system on the registers of the hardware controllers will eventually be converted into control signals of the first bus (or the second bus) on the hardware controllers.
- This part of the conversion work and the control process of the first bus (or the second bus) on the hardware controllers can be, but are not limited to, automatically implemented by the internal hardware of the chip.
- the implementation process follows the bus specification. During the operation of the first bus (or the second bus), on the one hand, physical signals related to control and the bus protocol are transmitted, and on the other hand, valid data is transmitted to each hardware controller through the bus's physical data channel.
- the first bus system may include, but is not limited to, three parts: a master module, a slave module, and an infrastructure.
- the transmission on the entire first bus is initiated by the master module and responded to by the slave module.
- the infrastructure may include, but is not limited to, an arbiter, a multiplexer from the master module to the slave module, a multiplexer from the slave module to the master module, a decoder, a dummy slave module, and a dummy master module.
- the master will first send a request to the arbiter.
- the arbiter decides when to let the master obtain the right to access the bus. After obtaining the right, the master will send data and control signals to the arbiter.
- the arbiter determines the corresponding slave path through address resolution and then sends the request to the corresponding destination. Similarly, the response data will be parsed by the decoder and then returned to the corresponding master. This multiplexing mechanism realizes many-to-many access.
- the second bus can be connected to the first bus system, and the transaction is transmitted on the main bus through the bridge structure.
- the data request can only be sent from the master to the slave. After receiving the request, the slave returns the corresponding response data to the master. This process can realize one-to-many access, and the access does not involve the arbitration and decoder parsing operations in the first bus.
- the first bus is configured as a multi-master and multi-slave mode
- the second bus is configured as a one-master and multi-slave mode.
- the first bus in the multi-master and multi-slave mode can use relatively more complex logic circuits and bus protocols to more efficiently complete the communication between systems.
- the second bus in the one-master and multi-slave mode can use relatively simple logic circuits and bus protocols to reduce the complexity of the structure while completing the system's control over the hardware controller, thereby reducing the power consumption of the entire embedded system.
- the configuration and coordination of multiple modes on the bus can further improve the operating performance of the embedded system.
- the first operating system and the second operating system are run based on the processor, and the communication between the operating systems and the control of the hardware controller are realized through buses with different functions. Since the first operating system and the second operating system are both run based on the same processor, the increase and deployment of hardware devices are avoided, the system cost is reduced, and the processor resources are reasonably used to support the operation between the systems. Therefore, the technical problem of low operating efficiency of the operating system can be solved, and the technical effect of improving the operating efficiency of the operating system can be achieved.
- At least two operating systems include a first operating system and a second operating system, wherein the first operating system controls a target hardware controller to run a target operation service based on a processor; the first operating system releases the target hardware controller through a second bus when the target operation service runs to a target service state; and the second operating system controls the target hardware controller to run the target operation service through the second bus.
- the target operation service is run by the target hardware controller, and the first operating system controls the target hardware controller based on the processor.
- the second operating system can take over the target operation service by taking over the target hardware controller.
- the takeover process of the target operation service is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- when the target operation service runs to the target service state, the first operating system writes a specific value (i.e., the control value) corresponding to disabling the target hardware controller to the register of the target hardware controller, so as to disable the target hardware controller.
- the specific value to be written is automatically loaded into the data channel of the second bus by the chip hardware, and finally the control of the hardware controller is realized in hardware mode (i.e., the release operation is realized).
- the second operating system writes a specific value corresponding to the target operation service (i.e., the control value) to the register of the target hardware controller to achieve the purpose of controlling the target hardware controller to run the target operation service.
- the specific value to be written is automatically loaded into the data channel of the second bus by the chip hardware, and finally the control of the hardware controller is realized in hardware mode (i.e., the operation of the target operation service is realized).
- the second operating system sends a first interrupt request to the first operating system via the first bus, wherein the first interrupt request is used to request to take over the target hardware controller; the first operating system releases the target hardware controller via the second bus in response to the first interrupt request; or, the first operating system releases the target hardware controller via the second bus when the business attributes of the target operating business reach the target business attributes.
- the second operating system may actively request to take over the target hardware controller and thus take over the target operation service, and the first operating system may also actively release the target hardware controller and thus release the target operation service.
- the process of releasing and taking over the target hardware controller is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- the second operating system writes a specific value corresponding to the first interrupt request (i.e., the above communication value) into the interrupt register to send the first interrupt request to the first operating system.
- the above specific value to be written is automatically loaded into the data channel of the first bus by the chip hardware, and finally the interrupt request function is realized in hardware.
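- As an illustration, a minimal sketch of this step is shown below, assuming a hypothetical communication register address and an arbitrary encoding of the first interrupt request; the real address and encoding are fixed by the chip design and are not specified here.

```c
#include <stdint.h>

/* Hypothetical communication register and communication value. */
#define IPC_INT_REG_ADDR       0x40002000u
#define IPC_REQ_TAKEOVER_CTRL  0x1u   /* encodes the "first interrupt request" */

/* Second operating system: request to take over the target hardware
 * controller. The chip hardware loads the communication value into the data
 * channel of the first bus and raises the interrupt at the first operating
 * system. */
static inline void send_first_interrupt_request(void)
{
    *(volatile uint32_t *)(uintptr_t)IPC_INT_REG_ADDR = IPC_REQ_TAKEOVER_CTRL;
}
```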
- the first operating system determines whether the target hardware controller is taken over by the second operating system in response to the first interrupt request; the first operating system releases the target hardware controller through the second bus if the target hardware controller is taken over by the second operating system.
- the first operating system may determine whether the second operating system takes over the target hardware controller.
- the determination process is similar to that in the above embodiment and will not be described in detail here.
- the first operating system sends a second interrupt request to the second operating system through the first bus when the second operating system does not take over the target hardware controller, wherein the second interrupt request is used to indicate that the second operating system is refused to take over the target hardware controller.
- the process in which the first operating system refuses to let the second operating system take over the target hardware controller is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- the first operating system sends a third interrupt request to the second operating system, wherein the third interrupt request is used to indicate that the first operating system has released the target hardware controller; the second operating system responds to the third interrupt request to control the target hardware controller to run the target operation service through the second bus.
- the process of the first operating system notifying the second operating system that the target hardware controller has been released is similar to that in the above embodiment, and is not described in detail here.
- the second operating system writes a specific value corresponding to the target operation service (i.e., the control value) to the register of the target hardware controller to achieve the purpose of controlling the target hardware controller to run the target operation service.
- the specific value to be written is automatically loaded into the data channel of the second bus by the chip hardware, and finally the control of the hardware controller is realized in hardware.
- At least two operating systems include a first operating system and a second operating system, wherein the first operating system runs based on a target processor core in a processor; the first operating system releases the target processor core when running to a target system state; and the second operating system adds the target processor core to a scheduling resource pool of the second operating system, wherein the scheduling resource pool includes the processor core in the processor allocated to the second operating system.
- the process of at least two operating systems occupying the target processor core is similar to that in the above-mentioned embodiment, which is not described in detail here.
- the second operating system sends a fourth interrupt request to the first operating system through the first bus, wherein the fourth interrupt request is used to request occupation of a target processor core; the first operating system releases the target processor core in response to the fourth interrupt request; or, the first operating system releases the target processor core when the system properties reach the target system properties.
- the second operating system may actively seize the target processor core, and the first operating system may also actively release the target processor core.
- the process of preempting and releasing the target processor core is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- the first operating system determines whether the target processor core is occupied by the second operating system in response to the fourth interrupt request; the first operating system releases the target processor core if the target processor core is occupied by the second operating system.
- the first operating system may determine whether the target processor core is occupied by the second operating system. This process is similar to that in the aforementioned embodiment and will not be described in detail here.
- the first operating system sends a fifth interrupt request to the second operating system through the first bus when the target processor core is not occupied by the second operating system, wherein the fifth interrupt request is used to indicate that the second operating system is denied from occupying the target processor core.
- the process in which the first operating system refuses the second operating system to occupy the target processor core is similar to that in the above-mentioned embodiment, and is not described in detail here.
- the first operating system sends a sixth interrupt request to the second operating system, wherein the sixth interrupt request is used to indicate that the first operating system has released the target processor core; the second operating system adds the target processor core to the scheduling resource pool in response to the sixth interrupt request.
- the process of the first operating system notifying the second operating system that the target processor core has been released is similar to that in the above-mentioned embodiment, and is not described in detail here.
- At least two operating systems include a first operating system and a second operating system, wherein a target processor core in a processor has been added to a scheduling resource pool of the second operating system, wherein the scheduling resource pool includes processor cores in the processor allocated to the second operating system; the second operating system releases the target processor core when the first operating system is awakened; and the first operating system runs based on the target processor core.
- the process of the first operating system waking up and using the target processor core is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- the second operating system releases the target processor core when it detects that the first operating system is awakened; or, when the first operating system is awakened, the first operating system sends a seventh interrupt request to the second operating system, wherein the seventh interrupt request is used to request the second operating system to release the target processor core, and the second operating system releases the target processor core in response to the seventh interrupt request.
- the second operating system actively releases the target processor core when the first operating system wakes up, or the first operating system actively requests it to release the target processor core. This process is similar to that in the above embodiment and will not be described in detail here.
- At least two operating systems include a first operating system and a second operating system
- the chip also includes a storage space.
- the at least two operating systems control the storage space through a first bus, wherein the first operating system generates business data based on the operation of the processor; the first operating system stores the business data in the storage space through the first bus, and sends an eighth interrupt request to the second operating system through the first bus, wherein the eighth interrupt request is used to request the second operating system to read the business data from the storage space; the second operating system reads the business data from the storage space in response to the eighth interrupt request.
- the first operating system and the second operating system can, but are not limited to, implement the interaction of business data between systems through the transmission of storage space and interrupt requests.
- the process of interaction of business data between systems is similar to that in the aforementioned embodiment and will not be elaborated here.
- the first operating system writes a specific value to a specific address of the storage controller to store the business data in the storage space.
- the specific value to be written is automatically loaded into the data channel of the first bus by the chip hardware, and finally the control of the storage controller and the storage of business data are realized in hardware (that is, the transmission of valid data through its physical data channel is realized).
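- A minimal sketch of this data exchange is shown below; the addresses, the encoding of the eighth interrupt request and the function names are assumptions, and only the pattern of writing the business data first and then raising the interrupt request comes from the description above.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical locations: a region of the shared storage space and the
 * communication register used to raise the eighth interrupt request. */
#define SHARED_DATA_ADDR   0x80000000u
#define IPC_INT_REG_ADDR   0x40002000u
#define IPC_REQ_READ_DATA  0x8u

/* First operating system: store the business data through the first bus and
 * ask the second operating system to read it. */
void publish_business_data(const void *data, size_t len)
{
    memcpy((void *)(uintptr_t)SHARED_DATA_ADDR, data, len);
    *(volatile uint32_t *)(uintptr_t)IPC_INT_REG_ADDR = IPC_REQ_READ_DATA;
}

/* Second operating system, in its handler for the eighth interrupt request:
 * read the business data from the storage space. */
void read_business_data(void *dst, size_t len)
{
    memcpy(dst, (const void *)(uintptr_t)SHARED_DATA_ADDR, len);
}
```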
- the first operating system runs periodically based on the processor; or, the first operating system runs based on the processor in response to a received wake-up request; or, the first operating system runs based on the processor according to the matching degree between the current operating business generated on the processor and the first operating system.
- the operation mechanism of the first operating system is similar to that in the aforementioned embodiment and will not be described in detail here.
- the first operating system hibernates after its operation ends; the second operating system adds the target processor core used by the first operating system to the scheduling resource pool of the second operating system during the hibernation of the first operating system, wherein the scheduling resource pool includes the other processor cores in the processor except the target processor core.
- the process of the second operating system occupying the target processor core during the hibernation of the first operating system is similar to that in the above embodiment, and will not be described in detail here.
- At least two operating systems communicate via a communication protocol deployed by the first bus; or, at least two operating systems communicate via the first bus, the second bus, and a communication hardware controller in the hardware controller.
- At least two operating systems may communicate via, but are not limited to, a communication protocol deployed by the first bus, that is, inter-core communication may be implemented via, but is not limited to, software.
- At least two operating systems may also communicate through, but not limited to, the first bus, the second bus and a communication hardware controller in the hardware controller, that is, inter-core communication may be implemented through, but not limited to, hardware.
- At least two operating systems communicate by sending an inter-processor interrupt request via a first bus; alternatively, one of the at least two operating systems sends a system interrupt request to the first bus; the first bus forwards the system interrupt request to the second bus; the second bus sends the system interrupt request to a mailbox hardware module controlled by a communication hardware controller; the mailbox hardware module sends the system interrupt request to another operating system of the at least two operating systems via the second bus and the first bus.
- the preemption and release of processor resources and the interaction of business data between different operating systems can be completed, but not only, through inter-core interrupts, for example SGIs (Software Generated Interrupts, i.e., software-triggered inter-core interrupts in the Linux system); one operating system can send a resource preemption request (for example, a core preemption request) or a resource release request (for example, a core release request) to another operating system through an IPI (Inter-Processor Interrupt) to request the preemption or release of processing resources.
- inter-core communication may also be achieved through, but not limited to, a mailbox channel (mailbox) connected to a mailbox controller in the out-of-band submodule.
- the at least two operating systems include a first operating system and a second operating system, wherein the first operating system monitors the operating services executed on the second operating system through the first bus; when an abnormal operating service is found among them, the abnormal operating service is taken over through the first bus.
- the process of the first operating system monitoring the abnormal operation service on the second operating system is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- Each operation service of the second operating system writes a value to a specific address of the storage controller at a certain frequency, and the first operating system reads the specific address of the storage controller to achieve the purpose of monitoring the operation service executed on the second operating system.
- the specific address of the storage controller that needs to be read is automatically loaded into the address channel of the first bus by the chip hardware, and the specific address of the storage controller is read in hardware.
- the read value is returned to the first operating system in the form of hardware from the data channel of the first bus, and finally the operation service executed on the second operating system is monitored.
- the first operating system taking over the abnormal operation service may be the control of the hardware controller corresponding to the abnormal operation service.
- the first operating system writes a specific value to the register of the hardware controller of the abnormal operation service to control the hardware controller.
- the specific value to be written is automatically loaded into the data channel of the first bus by the chip hardware, and finally the control of the hardware controller and the takeover of the abnormal operation service are realized in hardware.
- the first operating system receives a heartbeat signal of an operating service executed on the second operating system via the first bus; the first operating system takes over the operating service whose frequency of the heartbeat signal does not meet the corresponding target frequency as an abnormal operating service via the first bus.
- the process in which the first operating system monitors abnormal operation services on the second operating system by monitoring the frequency of the heartbeat signal is similar to that in the above-mentioned embodiment and is not described in detail here.
- the first operating system reads the value of the specific address of the storage controller to achieve the purpose of receiving the heartbeat signal of the operation service executed on the second operating system.
- the specific address of the storage controller that needs to be read is automatically loaded into the address channel of the first bus by the chip hardware, and the specific address of the storage controller is read in hardware.
- the read value is returned to the first operating system in the form of hardware from the data channel of the first bus, and finally the heartbeat signal of the operation service executed on the second operating system is received.
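- A hedged sketch of this monitoring loop is given below; the counter addresses, the number of services, the target frequencies and the takeover hook are all assumptions, and only the pattern of reading per-service heartbeat values over the first bus and comparing them against a target frequency comes from the description above.

```c
#include <stdint.h>

#define HEARTBEAT_BASE_ADDR  0x40003000u   /* hypothetical counter addresses */
#define NUM_SERVICES         4u

/* Minimum number of counter increments expected per monitoring period. */
static const uint32_t target_beats[NUM_SERVICES] = { 10u, 10u, 5u, 1u };
static uint32_t last_count[NUM_SERVICES];

void take_over_abnormal_service(uint32_t service_id);   /* hypothetical hook */

/* First operating system: called once per monitoring period; each read goes
 * over the first bus to the storage controller. */
void monitor_heartbeats(void)
{
    for (uint32_t i = 0u; i < NUM_SERVICES; i++) {
        volatile uint32_t *counter =
            (volatile uint32_t *)(uintptr_t)(HEARTBEAT_BASE_ADDR + 4u * i);
        uint32_t now   = *counter;
        uint32_t beats = now - last_count[i];
        last_count[i]  = now;
        if (beats < target_beats[i])            /* frequency below target */
            take_over_abnormal_service(i);
    }
}
```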
- after taking over the abnormal operation service, the first operating system sends a restart instruction to the second operating system through the first bus, wherein the restart instruction is used to instruct the second operating system to restart the abnormal operation service.
- the process of restarting the abnormal operation service of the first operating system after taking over the abnormal operation service of the second operating system is similar to that in the above-mentioned embodiment, and is not described in detail here.
- after taking over the abnormal operation service, the first operating system writes a specific value to the specific address of the storage controller to restart the abnormal operation service of the second operating system.
- the above-mentioned specific value to be written is automatically loaded into the data channel of the first bus by the chip hardware, and the value update of the specific address of the storage controller is realized in hardware.
- the second operating system reads the above-mentioned specific value and parses it, and then restarts the corresponding abnormal operation service.
- the chip further includes: a memory storing a boot module, and after the chip is powered on, the boot module is run to boot one of at least two operating systems, and the other operating systems of the at least two operating systems are started by the boot module.
- the boot process of multiple operating systems is similar to that in the aforementioned embodiment and will not be described in detail here.
- At least two operating systems include a first operating system and a second operating system, wherein the first operating system controls a target hardware controller based on a processor to run a target operation business; the first operating system releases the target hardware controller through a second bus when the target operation business runs to a target business state; the second operating system controls the target hardware controller through the second bus to run the target operation business; the first operating system runs based on a target processor core in the processor; the first operating system releases the target processor core when running to a target system state; the second operating system adds the target processor core to a scheduling resource pool of the second operating system, wherein the scheduling resource pool includes the processor cores in the processor allocated to the second operating system; the chip also includes a storage space, and the at least two operating systems control the storage space through a first bus, wherein the first operating system generates business data based on the operation of the processor; the first operating system stores the business data in the storage space through the first bus, and sends an eighth interrupt request to the second operating system through the first bus, wherein the eighth interrupt request is used to request the second operating system to read the business data from the storage space; and the second operating system reads the business data from the storage space in response to the eighth interrupt request.
- the operating system can take over the hardware controller and seize the processor core. This process is similar to that in the above-mentioned embodiment and will not be described in detail here.
- the above-mentioned embedded system can run on the above-mentioned BMC chip.
- the embedded system includes: a first operating system, a second operating system, a controller and a processor, wherein the first operating system and the second operating system are run based on the processor, and the controller is configured to detect the running state of the first operating system during the running process, and control the processor resources used by the first operating system according to the running state.
- the first operating system and the second operating system are run based on the processor, the controller detects the running state of the first operating system during the running process, and controls the processor resources used by the first operating system according to the running state. Since the first operating system and the second operating system are both based on the same processor, the increase and deployment of hardware devices are avoided, the system cost is reduced, and the processor resources used by the operating system can be controlled during the operation of the operating system, so as to reasonably use the processor resources to support the operation between systems. Therefore, the technical problem of low operating efficiency of the operating system can be solved, and the technical effect of improving the operating efficiency of the operating system can be achieved.
- the first operating system and the second operating system may be similar to those in the aforementioned embodiment; the first operating system and the second operating system run based on a processor, and the controller may be a software module running under the first operating system or the second operating system.
- the processing logic of the controller can be, but is not limited to being, deployed on the processor; it can also be deployed on the first operating system, or it can be divided by function into a first control unit and a second control unit that are deployed on the first operating system and the second operating system respectively, so as to realize processor resource control, operation service management, service interaction and other functions between systems.
- the controller is configured to at least one of the following: detect a business state of a first operating system based on a target operating business run by a processor, wherein the running state includes a business state; detect a system state of the first operating system, wherein the running state includes a system state, and the first operating system runs based on a target processor core in the processor.
- the detection of the service status and the system status by the controller is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- a controller is configured to release a target operating business upon detecting that the business state is a target business state, wherein the processor resources include the target operating business; a second operating system is used to run the target operating business; and/or, the controller is configured to release a target processor core upon detecting that the system state is a target system state, wherein the processor resources include the target processor core; and the second operating system is used to add the target processor core to a scheduling resource pool of the second operating system, wherein the scheduling resource pool includes processor cores in the processor allocated to the second operating system.
- the process of the controller controlling the target operation service and releasing the target processor core is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- the embedded system also includes: a first business interaction thread running on a first operating system, and a second business interaction thread running on a second operating system, wherein the controller is configured to determine that the detected business state is a target business state when a first interrupt request sent by the second business interaction thread to the first business interaction thread is obtained, wherein the first interrupt request is used to request to take over the target operation business; or, the controller is configured to determine that the detected business state is the target business state when the business attributes of the target operation business reach the target business attributes.
- the interaction process between operating systems may be controlled, but is not limited to, by service interaction threads deployed respectively on each operating system.
- the process of detecting the service status is similar to that in the above-mentioned embodiment and will not be described in detail here.
- the controller is configured to: respond to the first interrupt request, determine whether the target operation service is taken over by the second operating system; and release the target operation service if the target operation service is taken over by the second operating system.
- the controller's determination process of whether the second operating system takes over the target operating service is similar to that in the above-mentioned embodiment, and is not described in detail here.
- the embedded system further includes: a first service interaction thread running on the first operating system, and a second service interaction thread running on the second operating system, wherein the first service interaction thread is used to send a second interrupt request to the second service interaction thread without the second operating system taking over the target operation service, wherein the second interrupt request is used to indicate that the second operating system is refused to take over the target operation service.
- the process of rejecting the second operating system from taking over the target operating service is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- the embedded system also includes: a first business interaction thread running on a first operating system, and a second business interaction thread running on a second operating system, wherein the first business interaction thread is used to send a third interrupt request to the second business interaction thread, wherein the third interrupt request is used to indicate that the target hardware controller has been released; and the second operating system is used to control the target hardware controller to run the target operation business in response to the third interrupt request.
- the notification process of the released target hardware controller is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- the embedded system also includes: a first business interaction thread running on a first operating system, and a second business interaction thread running on a second operating system, wherein the controller is configured to determine that the system state is detected to be a target system state when a fourth interrupt request sent by the second business interaction thread to the first business interaction thread is obtained, wherein the fourth interrupt request is used to request occupation of a target processor core; or, the controller is configured to determine that the system state is detected to be a target system state when the system properties of the first operating system reach the target system properties.
- the detection process of the system status is similar to that in the above-mentioned embodiment and will not be described in detail here.
- the controller is configured to: in response to the fourth interrupt request, determine whether the target processor core is occupied by the second operating system; and release the target processor core if the target processor core is occupied by the second operating system.
- the controller's determination process of whether the target processor core is occupied by the second operating system is similar to that in the aforementioned embodiment, and is not described in detail here.
- the embedded system also includes: a first business interaction thread running on a first operating system, and a second business interaction thread running on a second operating system, wherein the first business interaction thread is used to send a fifth interrupt request to the second business interaction thread when the target processor core is not occupied by the second operating system, wherein the fifth interrupt request is used to indicate that the second operating system is denied from occupying the target processor core.
- the process of denying the second operating system from occupying the target processor core is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- the embedded system also includes: a first business interaction thread running on a first operating system, and a second business interaction thread running on a second operating system, wherein the first business interaction thread is used to send a sixth interrupt request to the second business interaction thread, wherein the sixth interrupt request is used to indicate that the first operating system has released the target processor core; and the second operating system is used to respond to the sixth interrupt request to add the target processor core to the scheduling resource pool.
- the notification process of the released target processor core is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- the controller is further configured to: when a target processor core in the processor has been added to a scheduling resource pool of a second operating system and the first operating system is awakened and running, detect whether the target processor core is released, wherein the scheduling resource pool includes processor cores in the processor allocated to the second operating system; and when it is detected that the second operating system has released the target processor core when the first operating system is awakened, run the first operating system based on the target processor core.
- the operation process when the first operating system is awakened is similar to that in the above-mentioned embodiment and will not be described in detail here.
- the embedded system also includes: a first business interaction thread running on a first operating system, and a second business interaction thread running on a second operating system, wherein the first business interaction thread is used to send a seventh interrupt request to the second business interaction thread when it is detected that the target processor core has not been released, wherein the seventh interrupt request is used to request the second operating system to release the target processor core; and the second operating system is used to release the target processor core in response to the seventh interrupt request.
- the negotiation process of the second operating system releasing the target processor core is similar to that in the above-mentioned embodiment, and is not described in detail here.
- the embedded system further includes: a first service interaction thread running on the first operating system, and a second service interaction thread running on the second operating system, wherein the first service interaction thread is used to obtain business data generated by the first operating system during operation on the processor, store the business data in the storage space on the processor, and send an eighth interrupt request to the second service interaction thread, wherein the eighth interrupt request is used to request the second operating system to read the business data from the storage space; and the second operating system is used to read the business data from the storage space in response to the eighth interrupt request.
- FIG. 13 is a schematic diagram of a business data communication process between operating systems according to an optional implementation of the present application.
- Linux and the RTOS have business interaction capabilities, which can be, but are not limited to being, realized through inter-core communication, for example using a communication architecture based on shared memory with a mailbox as the hardware module; the function of the mailbox is to transmit the memory pointer from the core where Linux is located to the core where the RTOS is located, and the sending and receiving of the pointer use an independent mailbox channel.
- the shared memory (Shared Memory) can be accessed by all cores, and the shared memory space can come from a fixed storage area of the system memory DDR.
- the Linux core first writes the data into the shared memory, and then the mailbox passes the interrupt request to the RTOS core. After the RTOS core receives the interrupt request, it can directly read the data from the Shared Memory. Since the whole process does not involve data copying operations, the communication efficiency is high, which is particularly suitable for large data volume transmission.
- the inter-system business interaction thread running on Linux (i.e., the second business interaction thread) is referred to as the Linux thread for short.
- the inter-system business interaction thread running on the RTOS is referred to as the RTOS thread for short.
- the heterogeneous multi-system inter-core communication process may include, but is not limited to, the following steps:
- Step 1, the Linux thread copies the data to specified location 1 in the shared memory (Shared Memory).
- Step 2, the Linux thread writes address 1 of specified location 1 in the shared memory, the interrupt request and other information into channel A of the hardware module mailbox.
- Step 3, the RTOS thread receives the interrupt request and address 1 in channel A of the hardware module mailbox.
- Step 4, the RTOS thread reads the data stored at address 1 from the shared memory.
- Step 5, the RTOS thread copies the data to specified location 2 in the shared memory (Shared Memory).
- Step 6, the RTOS thread writes address 2 of specified location 2 in the shared memory, the interrupt request and other information into channel B of the hardware module mailbox.
- Step 7, the Linux thread receives the interrupt request and address 2 in channel B of the hardware module mailbox.
- Step 8, the Linux thread reads the data from address 2 in the shared memory.
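- A condensed sketch of steps 1 to 4 is given below, assuming a hypothetical mailbox driver interface (mailbox_send/mailbox_receive) and a hypothetical shared-memory base address; steps 5 to 8 follow the same pattern in the opposite direction over channel B.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define SHM_BASE   ((uint8_t *)(uintptr_t)0x90000000u)  /* hypothetical DDR region */
#define SHM_SLOT1  0x0000u                              /* "specified location 1" */
#define CHANNEL_A  0u

/* Hypothetical mailbox driver hooks: pass a pointer plus an interrupt. */
void      mailbox_send(uint32_t channel, uintptr_t addr);
uintptr_t mailbox_receive(uint32_t channel);

/* Steps 1-2: the Linux thread copies the data into shared memory and writes
 * its address (together with the interrupt request) into channel A. */
void linux_thread_send(const void *data, size_t len)
{
    memcpy(SHM_BASE + SHM_SLOT1, data, len);
    mailbox_send(CHANNEL_A, (uintptr_t)(SHM_BASE + SHM_SLOT1));
}

/* Steps 3-4: the RTOS thread receives address 1 from channel A and reads the
 * data directly from shared memory, so the payload itself is not copied
 * between cores by the mailbox. */
void rtos_thread_receive(void *dst, size_t len)
{
    uintptr_t addr = mailbox_receive(CHANNEL_A);
    memcpy(dst, (const void *)addr, len);
}
```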
- the controller is also configured to: control the first operating system to run periodically based on the processor; or, in response to a received wake-up request, control the first operating system to run based on the processor; or, based on the degree of match between the operating service generated on the processor and the first operating system, control the first operating system to run based on the processor.
- the controller's wake-up control process for the first operating system is similar to that in the aforementioned embodiment, and is not described in detail here.
- the controller is configured to: detect business information of a current operating business generated on a processor; and control the first operating system to run the current operating business based on the processor when it is detected that the matching degree between the business information and the first operating system is higher than a matching degree threshold.
- the process of determining the matching degree between the operating service and the first operating system by the controller is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- the controller is configured to: detect the target response speed and/or the target resource occupancy of the current operation business, wherein the business information includes the target response speed and/or the target resource occupancy, the target response speed is the response speed that the processor needs to achieve for the current operation business, and the target resource occupancy is the amount of resources that the processor needs to provide for the current operation business; and when the target response speed is less than or equal to the speed threshold, and/or the target resource occupancy is less than or equal to the occupancy threshold, determine that the matching degree between the business information and the first operating system is higher than the matching degree threshold.
- the processing process of the controller for the service information is similar to that in the above-mentioned embodiment, which is not described in detail here.
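- As an illustration of this threshold check (the concrete threshold values, their units and the interpretation of "and/or" as "either condition is sufficient" are assumptions of the sketch, not taken from the description above):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical thresholds; the concrete values and units are configuration
 * choices of a specific system. */
#define SPEED_THRESHOLD      100u
#define OCCUPANCY_THRESHOLD  20u

/* The matching degree is considered higher than the matching-degree threshold
 * when the target response speed and/or the target resource occupancy do not
 * exceed their thresholds; this sketch treats "and/or" as "either one". */
bool matches_first_operating_system(uint32_t target_response_speed,
                                    uint32_t target_resource_occupancy)
{
    return target_response_speed    <= SPEED_THRESHOLD
        || target_resource_occupancy <= OCCUPANCY_THRESHOLD;
}
```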
- the controller is further configured to: control the first operating system to hibernate after the operation is completed.
- the sleep control process of the controller for the first operating system is similar to that in the above-mentioned embodiment, which is not described in detail here.
- the embedded system also includes: a first business interaction thread running on a first operating system, and a second business interaction thread running on a second operating system, wherein the first business interaction thread is used to notify the second business interaction thread that it is allowed to occupy the processor core used by the first operating system; and the second operating system is used to add the target processor core used by the first operating system to the scheduling resource pool of the second operating system during the hibernation of the first operating system, and the scheduling resource pool includes the other processor cores in the processor except the target processor core.
- the process of the second operating system occupying the processor core during the hibernation of the first operating system is similar to that in the above embodiment, and will not be described in detail here.
- the embedded system also includes: a service takeover thread running on the first operating system, wherein the service takeover thread is used to monitor the operating services executed on the second operating system; when abnormal operating services are monitored in the operating services executed on the second operating system, the service takeover thread takes over the abnormal operating services.
- a service takeover thread is deployed on the first operating system to monitor the operating services executed on the second operating system.
- the monitoring process of the operating service executed on the second operating system is similar to that in the above-mentioned embodiment, and will not be described in detail here.
- the service takeover thread is used to: receive a heartbeat signal of each operating service executed on the second operating system; and determine an operating service whose frequency of the heartbeat signal does not meet the corresponding target frequency as an abnormal operating service.
- the process in which the service takeover thread monitors the abnormal operation service through the frequency of the heartbeat signal is similar to that in the above-mentioned embodiment and is not described in detail here.
- the service takeover thread is further used to: after taking over the abnormal operation service through the first operating system, send a restart instruction to the second operating system, wherein the restart instruction is used to instruct to restart the abnormal operation service.
- the process in which the service takeover thread controls the second operating system to restart the abnormal operation service is similar to that in the above-mentioned embodiment and is not described in detail here.
- the embedded system further includes: a boot module configured to boot the first operating system; and boot the second operating system.
- the boot process of multiple operating systems is similar to that in the aforementioned embodiment and will not be described in detail here.
- FIG. 14 is a schematic diagram of a service management process in an embedded system according to an optional implementation of the present application.
- n+1 CPU cores are deployed on the processor of the embedded system, namely core 0, core 1, ..., core n.
- Core 0 is allocated to RTOS, and cores 1 to n are allocated to Linux, wherein core 0 is a dynamically configurable CPU core, that is, RTOS can release core 0 for Linux scheduling under certain circumstances mentioned above, and Linux can also seize core 0 under certain mechanisms mentioned above and schedule the resources of core 0 to run its own tasks.
- RTOS can include a task scheduler and various threads (such as real-time control threads, task management threads, and system-to-system business interaction threads, etc.).
- the task scheduler is set to schedule and manage each thread, and each thread scheduling can be performed by polling or thread priority.
- FIG. 15 is a schematic diagram of a task scheduling process according to an optional implementation of the present application.
- when the polling method is adopted, the task scheduler allocates time slices to each real-time thread. For example, real-time thread A, real-time thread B and real-time thread C are each allocated a time slice, and the time slice after real-time thread C is in the idle (empty scheduling) state.
- the task scheduler can start the wake-up timer and allocate the time slice after real-time thread C to Linux, and Linux schedules business thread 1 and business thread 2 to occupy core 0.
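- A sketch of one such scheduling cycle is given below under several assumptions: a fixed table of three real-time threads polled with one time slice each, and hypothetical hooks for the idle time-slice list, the wake-up timer and the notification to Linux.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_RT_THREADS 3u    /* real-time threads A, B and C */

/* Hypothetical scheduler hooks. */
bool rt_thread_ready(uint32_t idx);
void run_for_one_slice(uint32_t idx);
void record_idle_slice(void);            /* append to the idle time-slice list */
void configure_wakeup_timer(void);       /* timer that later wakes the RTOS    */
void notify_linux_core0_available(void); /* inter-core interrupt to Linux      */

/* One polling cycle of the RTOS task scheduler on core 0: run the ready
 * real-time threads; if slices remain idle, record them, arm the wake-up
 * timer and let Linux schedule its business threads onto core 0. */
void rtos_schedule_cycle(void)
{
    bool any_idle = false;

    for (uint32_t i = 0u; i < NUM_RT_THREADS; i++) {
        if (rt_thread_ready(i))
            run_for_one_slice(i);
        else
            any_idle = true;
    }

    if (any_idle) {
        record_idle_slice();
        configure_wakeup_timer();
        notify_linux_core0_available();
    }
}
```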
- the real-time control thread is used to process the threads with high real-time requirements in the RTOS.
- the task takeover thread is designed mainly to ensure the robustness of the system and the continuity of the business. Once Linux fails to run the traditional business thread due to some reason, the RTOS will take over the business through the task takeover thread, reset Linux, and return the business to Linux after Linux runs normally.
- the inter-system business interaction thread is used for the inter-core communication function between the RTOS and Linux.
- the Linux system includes traditional service threads, an inter-core scheduler and an inter-system service interaction thread.
- Traditional service threads are used to process a large number of complex non-real-time services in the system (such as traditional service thread A, traditional service thread B and traditional service thread C, etc.).
- the inter-core scheduler is set to complete the preemption and scheduling of core 0.
- the inter-system service interaction thread is used to realize the communication between Linux and RTOS.
- the above embedded system may, but is not limited to, adopt the following operation process:
- Step a the system is powered on, first the RTOS is booted, then the Linux system is booted, the RTOS occupies CPU core 0, and the Linux system occupies the remaining cores 1 to n.
- Step b after the RTOS system is started, its task scheduler allocates time slices to the threads that need to be scheduled according to the polling time slice strategy. If there are idle time slices, they are recorded in the idle time slice linked list and the wake-up register (i.e., timer) is configured; otherwise, the idle time slice recording and wake-up register operation are not performed.
- Step c the RTOS system starts the inter-system business interaction thread, waits for the interaction process, and uses the above inter-core communication mechanism during the actual interaction.
- Step d the Linux system starts normally, traditional services are scheduled, and the inter-core scheduler and task takeover thread are in a silent state.
- Step e the Linux system starts the inter-core scheduler.
- the startup process involves two situations.
- in the first situation, when the RTOS task scheduler finds that there are no threads to be scheduled within a scheduling cycle, it triggers the inter-core interrupt for releasing core 0 to the Linux system.
- the RTOS pushes the running data into the stack and then enters the sleep state.
- the above interrupt will trigger Linux to start the inter-core scheduler.
- after the scheduler receives the interrupt, it notifies the Linux system to take over core 0.
- the module responsible for scheduling balance in Linux will assign threads to core 0 for scheduling.
- in the second situation, when the Linux system detects that its CPU occupancy rate is too high, it starts the Linux inter-core scheduler and triggers the inter-core interrupt for preempting core 0 to the RTOS. After receiving the interrupt, the RTOS pushes its running data onto the stack and then enters the sleep state. At the same time, the Linux system takes over core 0 for scheduling.
- Step f Once the Linux system fails for some reason and the traditional service thread cannot run, the RTOS will take over the service through the task takeover thread, and then reset the Linux system. When the Linux system runs normally, the taken-over service will be returned to the Linux system.
- the RTOS real-time system is used to replace traditional hardware devices such as CPLD, EC chip, real-time control chip, etc., to achieve real-time management and control of the embedded system.
- the heterogeneous embedded system architecture of a general-purpose embedded operating system plus a real-time operating system effectively improves the current situation in which the real-time business processing capability of traditional embedded systems is insufficient.
- the workload of traditional embedded systems is significantly reduced and the system operation efficiency is improved.
- the computing power of the embedded CPU is fully utilized, effectively improving the utilization rate of CPU resources in the embedded system.
- using the RTOS real-time system to replace traditional hardware logic devices such as the CPLD (complex logic device), the EC chip and the real-time control chip saves hardware costs.
- in addition, because this replacement is implemented in software, it has higher flexibility and expansion capability than the traditional implementation based on hardware devices.
- FIG. 16 is a second schematic diagram of an optional embedded system of the embodiment of the present application. As shown in FIG. 16, the embedded system may include:
- a first operating system and a second operating system are run on the processor, and the response speed of the first operating system is higher than that of the second operating system;
- the service management module is configured to allocate a group of services to be allocated to corresponding operating systems according to a resource dynamic allocation rule, wherein the resource dynamic allocation rule includes dynamically allocating resources according to at least one of the following: service response speed, service resource occupancy rate;
- a resource dynamic allocation module configured to determine a resource allocation result corresponding to a group of services to be allocated, wherein the resource allocation result is used to indicate a processing resource in a processor corresponding to each service to be allocated in the group of services to be allocated, and the processing resource of the processor includes a processor core;
- the resource adaptive scheduling module is configured to allocate the processing resources of the processor to the first operating system and the second operating system according to the operating system corresponding to each service to be allocated and the resource allocation result.
- the first operating system and the second operating system may be similar to those in the aforementioned embodiment and will not be elaborated here.
- the business management module, the resource dynamic allocation module and the resource adaptive scheduling module may be software modules running under the first operating system or the second operating system.
- the processing resources of the processor are allocated to the first operating system and the second operating system, which solves the problem of low overall utilization of core resources due to the majority of processing resources of the multi-core processor being idle in the related art, and improves the utilization of processing resources.
- a running control device of an operating system is also provided, and the device is configured to implement the above-mentioned embodiment and optional implementation mode, and the descriptions that have been made are not repeated.
- the term "module” can implement a combination of software and/or hardware of a predetermined function.
- although the device described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and conceived.
- FIG. 17 is a structural block diagram of an operation control device of an operating system according to an embodiment of the present application. As shown in FIG. 17 , the device includes:
- a first detection module 1702 is configured to detect an operating state of the first operating system during operation, wherein the first operating system and the second operating system are operated based on a processor;
- the control module 1704 is configured to control the processor resources used by the first operating system according to the running state.
- the first operating system and the second operating system are run based on the processor, the first detection module detects the running state of the first operating system during the running process, and the control module controls the processor resources used by the first operating system according to the running state. Since the first operating system and the second operating system are both based on the same processor, the increase and deployment of hardware devices are avoided, the system cost is reduced, and the processor resources used by the operating system can be controlled during the operation of the operating system, so as to reasonably use the processor resources to support the operation between systems. Therefore, the technical problem of low operating efficiency of the operating system can be solved, and the technical effect of improving the operating efficiency of the operating system is achieved.
- the first detection module is configured to perform at least one of the following: detect a business state of the first operating system based on a target operation business run by the processor, wherein the operating state includes the business state;
- detect a system state of the first operating system, wherein the operating state includes the system state, and the first operating system runs based on a target processor core in the processor.
- the control module is configured to perform at least one of the following: when it is detected that the business state is the target business state, release the target operation business, wherein the processor resources include the target operation business, and the second operating system is used to run the target operation business;
- when it is detected that the system state is the target system state, release the target processor core, wherein the processor resources include the target processor core, the second operating system is used to add the target processor core to the scheduling resource pool of the second operating system, and the scheduling resource pool includes the processor cores in the processor allocated to the second operating system.
- the above device further comprises:
- a first determining module is configured to determine that the detected service state is a target service state when a first interrupt request sent by the second operating system to the first operating system is obtained, wherein the first interrupt request is used to request to take over the target operating service;
- the second determining module is configured to determine that the detected service state is the target service state when the service attribute of the target operation service reaches the target service attribute.
- the control module is configured to: respond to the first interrupt request when the first interrupt request is obtained, so as to determine whether the target operation service is to be taken over by the second operating system; and release the target operation service when the target operation service is to be taken over by the second operating system.
- the above device further comprises:
- the first sending module is configured to, after determining whether the second operating system takes over the target operation business, send a second interrupt request to the second operating system if the second operating system does not take over the target operation business, wherein the second interrupt request is used to indicate a refusal to allow the second operating system to take over the target operation business.
- the above device further comprises:
- the second sending module is configured to send a third interrupt request to the second operating system after releasing the target operation service when the service state is detected to be the target service state, wherein the third interrupt request is used to indicate that the target operation service has been released, and the second operating system is used to run the target operation service in response to the third interrupt request.
- the above device further comprises:
- a third determining module is configured to determine that the detected system state is a target system state when a fourth interrupt request sent by the second operating system to the first operating system is obtained, wherein the fourth interrupt request is used to request to occupy a target processor core;
- the fourth determining module is configured to determine that the detected system state is the target system state when the system property of the first operating system reaches the target system property.
- the control module is configured to: respond to the fourth interrupt request when the fourth interrupt request is obtained, so as to determine whether the target processor core is to be occupied by the second operating system; and release the target processor core when the target processor core is to be occupied by the second operating system.
- the above device further comprises:
- the third sending module is configured to, after determining whether the target processor core is occupied by the second operating system, send a fifth interrupt request to the second operating system when the target processor core is not occupied by the second operating system, wherein the fifth interrupt request is used to indicate that the second operating system is denied from occupying the target processor core.
- the above device further comprises:
- the fourth sending module is configured to send a sixth interrupt request to the second operating system after releasing the target processor core when detecting that the system state is the target system state, wherein the sixth interrupt request is used to indicate that the first operating system has released the target processor core, and the second operating system is used to respond to the sixth interrupt request to add the target processor core to the scheduling resource pool.
- the above device further comprises:
- a second detection module configured to detect whether the target processor core in the processor has been released when the target processor core in the processor has been added to the scheduling resource pool of the second operating system and the first operating system is awakened and running, wherein the scheduling resource pool includes the processor core in the processor allocated to the second operating system;
- the running module is configured to run the first operating system based on the target processor core when it is detected that the second operating system has released the target processor core when the first operating system is awakened.
- the above device further comprises:
- the fifth sending module is configured to send a seventh interrupt request to the second operating system after detecting whether the target processor core is released, if it is detected that the target processor core is not released, wherein the seventh interrupt request is used to request the second operating system to release the target processor core, and the second operating system is used to respond to the seventh interrupt request to release the target processor core.
- the above device further comprises:
- An acquisition module is configured to acquire business data generated during the operation of the first operating system based on the processor
- a storage module configured to store business data into a storage space on the processor
- the sixth sending module is configured to send an eighth interrupt request to the second operating system, wherein the eighth interrupt request is used to request the second operating system to read business data from the storage space, and the second operating system is used to read business data from the storage space in response to the eighth interrupt request.
- the above device further comprises:
- a first control module is configured to control the first operating system to run periodically based on the processor; or,
- a response module configured to respond to the received wake-up request and control the first operating system to run based on the processor
- the second control module is configured to control the first operating system to run based on the processor according to the matching degree between the operating business generated on the processor and the first operating system.
- the second control module is configured to: when it is detected that the matching degree between the service information of the current operation service generated on the processor and the first operating system is higher than a matching degree threshold, control the first operating system to run the current operation service based on the processor.
- the second control module is configured to:
- the service information includes: a target response speed and/or a target resource occupancy, wherein the target response speed is the response speed that the processor needs to achieve for the current operation service, and the target resource occupancy is the amount of resources that the processor needs to provide for the current operation service;
- when the target response speed is less than or equal to a speed threshold, and/or the target resource occupancy is less than or equal to an occupancy threshold, it is determined that the matching degree between the service information and the first operating system is higher than the matching degree threshold.
- the above device further comprises:
- the third control module is configured to control the first operating system to hibernate after the operation is completed.
- the above device further comprises:
- the notification module is configured to, after the first operating system is controlled to hibernate when its operation ends, notify the second operating system that the processor core used by the first operating system is allowed to be occupied, wherein the second operating system is used to add the target processor core used by the first operating system to the scheduling resource pool of the second operating system during the hibernation of the first operating system, and the scheduling resource pool includes the other processor cores in the processor except the target processor core.
- the above device further comprises:
- a monitoring module configured to monitor the operation business executed on the second operating system
- the takeover module is configured to take over the abnormal operation business through the first operating system when abnormal operation business is detected in the operation business executed on the second operating system.
- the monitoring module is configured to: detect heartbeat signals of the operation services executed on the second operating system, and determine an operation service whose heartbeat signal frequency does not conform to the corresponding target frequency as an abnormal operation service (a schematic sketch of such a heartbeat check is given after this module list).
- the above device further comprises:
- the seventh sending module is configured to send a restart instruction to the second operating system after taking over the abnormal operation service through the first operating system, wherein the restart instruction is used to instruct to restart the abnormal operation service.
- the above device further comprises:
- a first boot module is configured to boot a first operating system
- the second boot module is configured to boot the second operating system.
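- As an illustrative aid to the monitoring module described above, the following is a minimal C sketch of a heartbeat-based anomaly check. All names in it (heartbeat_record_t, the tolerance parameter, the bookkeeping fields) are hypothetical and are not part of the claimed device; the sketch only assumes that each operation service reports heartbeats whose observed count in a monitoring window can be compared against the expected count for its target frequency.
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-service heartbeat bookkeeping (illustrative only). */
typedef struct {
    uint32_t service_id;      /* identifier of the operation service           */
    uint32_t beats_observed;  /* heartbeats counted in the monitoring window   */
    uint32_t beats_expected;  /* heartbeats expected for the target frequency  */
} heartbeat_record_t;

/* Returns true when the observed heartbeat frequency deviates from the
 * corresponding target frequency by more than the tolerance, i.e. the
 * operation service is treated as abnormal and may be taken over by the
 * first operating system. */
static bool is_abnormal_service(const heartbeat_record_t *rec, uint32_t tolerance)
{
    uint32_t low  = (rec->beats_expected > tolerance) ? rec->beats_expected - tolerance : 0;
    uint32_t high = rec->beats_expected + tolerance;
    return (rec->beats_observed < low) || (rec->beats_observed > high);
}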
- the above modules can be implemented by software or hardware. For the latter, it can be implemented in the following ways, but not limited to: the above modules are all located in the same processor; or the above modules are located in different processors in any combination.
- An embodiment of the present application further provides a chip, wherein the chip includes at least one of a programmable logic circuit and an executable instruction, and the chip runs in an electronic device and is configured to implement the steps in any of the above method embodiments.
- the embodiment of the present application further provides a BMC chip, wherein the BMC chip may include: a storage unit and a processing unit connected to the storage unit.
- the storage unit is configured to store a program
- the processing unit is configured to run the program to execute the steps in any of the above method embodiments.
- An embodiment of the present application also provides a mainboard, wherein the mainboard includes: at least one processor; at least one memory configured to store at least one program; when the at least one program is executed by the at least one processor, the at least one processor implements the steps in any one of the above method embodiments.
- An embodiment of the present application also provides a server, which includes a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus, and the memory is configured to store computer programs; the processor is configured to implement the steps in any of the above method embodiments when executing the program stored in the memory to achieve the same technical effect.
- the communication bus of the above-mentioned server may be a PCI (Peripheral Component Interconnect) bus or an EISA (Extended Industry Standard Architecture) bus, etc.
- the communication bus may be divided into an address bus, a data bus, a control bus, etc.
- the communication interface is set for communication between the above-mentioned server and other devices.
- the memory may include RAM (Random Access Memory) or NVM (Non-Volatile Memory), such as at least one disk storage.
- the memory may also be at least one storage device located away from the aforementioned processor.
- the above-mentioned processor may be a general-purpose processor, including a CPU (Central Processing Unit), NP (Network Processor), etc.; it may also be a DSP (Digital Signal Processing), ASIC (Application Specific Integrated Circuit), FPGA (Field Programmable Gate Array) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components.
- for a server, scalability has become the most basic feature; only with high scalability can better utilization in the future be guaranteed.
- scalability also includes software scalability. Compared with a personal computer, the functions of a server are far more complex, so not only the hardware configuration but also the software configuration is very important; achieving more functions is unimaginable without comprehensive software support.
- since the server needs to process a large amount of data to support the continuous operation of the business, the server also has another very important feature, namely high stability; if the data transmission of the server cannot run stably, it will undoubtedly have a great impact on the business.
- the solution of the present application controls the processor resources used by the first operating system according to the detected running state of the first operating system during operation, so that the server can reasonably allocate processor resources, and then rely on the allocated resources to perform more reasonable performance expansion.
- the operation of the first operating system is controlled according to the operating business and/or processor core allocated to the first operating system, so that the server can reasonably schedule and control whether it is expanding software resources or hardware resources, thereby improving the scalability of the server.
- the operation of the server can be made more stable, thereby improving the stability of the server.
- An embodiment of the present application further provides a non-volatile readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the steps of any of the above method embodiments when running.
- the above-mentioned non-volatile readable storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk or an optical disk, and other media that can store computer programs.
- An embodiment of the present application further provides an electronic device, including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
- the electronic device may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
- the modules or steps of the present application can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices; they can be implemented by program code executable by the computing device, so that they can be stored in a storage device and executed by the computing device; and, in some cases, the steps shown or described may be performed in an order different from that described here, or they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module.
- the present application is not limited to any particular combination of hardware and software.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Security & Cryptography (AREA)
- Computer Hardware Design (AREA)
- Hardware Redundancy (AREA)
- Programmable Controllers (AREA)
Abstract
本申请实施例提供了一种操作系统的运行控制方法和装置,以及嵌入式系统和芯片,其中,该嵌入式系统包括:芯片和至少两个操作系统,其中,芯片包括处理器、硬件控制器、第一总线和第二总线,其中,第一总线的带宽高于第二总线带宽,且第一总线被配置为多主多从模式,第二总线被配置为一主多从模式;至少两个操作系统基于处理器运行;至少两个操作系统通过第一总线进行通信;至少两个操作系统通过第二总线实现对硬件控制器的控制。通过本申请,解决了操作系统的运行效率较低的问题,进而达到了提高操作系统的运行效率的效果。
Description
本申请实施例涉及计算机领域,特别是涉及一种操作系统的运行控制方法和装置,以及嵌入式系统和芯片。
当前的服务器、个人电脑、工控机等设备多采用操作系统加硬件器材,比如CPLD(Complex Programmable Logic Device,复杂可编程逻辑器件)、EC(Embeded Controller,嵌入式控制器)芯片,或控制芯片等硬件逻辑器件的系统架构来实现设备的控制。然而,采用CPLD、EC芯片和控制芯片等硬件逻辑器件必然会导致系统成本增加,而且上述硬件逻辑器件的增加使得系统之间需要跨器件进行交互,严重影响了操作系统运行的效率。
针对相关技术中,操作系统的运行效率较低等问题,尚未提出有效的解决方案。
发明内容
本申请实施例提供了一种操作系统的运行控制方法和装置,以及嵌入式系统和芯片,以至少解决相关技术中操作系统的运行效率较低的问题。
根据第一方面,提供了一种嵌入式系统,包括:芯片和至少两个操作系统,其中,
芯片包括处理器、硬件控制器、第一总线和第二总线,其中,第一总线的带宽高于第二总线带宽,且第一总线被配置为多主多从模式,第二总线被配置为一主多从模式;至少两个操作系统基于处理器运行;至少两个操作系统通过第一总线进行通信;至少两个操作系统通过第二总线实现对硬件控制器的控制。
根据第二方面,提供了另一种嵌入式系统,包括:第一操作系统,第二操作系统,控制器和处理器,其中,第一操作系统和第二操作系统基于处理器运行,控制器被设置为检测第一操作系统在运行过程中的运行状态,并根据运行状态控制第一操作系统所使用的处理器资源。
根据第三方面,提供了一种操作系统的运行控制方法,包括:
检测第一操作系统在运行过程中的运行状态,其中,第一操作系统和第二操作系统基于处理器运行;
根据运行状态控制第一操作系统所使用的处理器资源。
根据第四方面,提供了一种操作系统的运行控制装置,包括:
第一检测模块,被设置为检测第一操作系统在运行过程中的运行状态,其中,第一操作系统和第二操作系统基于处理器运行;
控制模块,被设置为根据运行状态控制第一操作系统所使用的处理器资源。
根据第五方面,还提供了一种芯片,其中,芯片包括可编程逻辑电路以及可执行指令中的至少之一,芯片在电子设备中运行,被设置为实现上述任一项方法实施例中的步骤。
根据第六方面,还提供了一种BMC芯片,其中,包括:存储单元以及与存储单元连接的处理单元,存储单元被设置为存储程序,处理单元被设置为运行程序,以执行上述任一项方法实施例中的步骤。
根据第七方面,还提供了一种主板,其中,包括:至少一个处理器;至少一个存储器,被设置为存储至少一个程序;当至少一个程序被至少一个处理器执行,使得至少一个处理器实现上述任一项方法实施例中的步骤。
根据第八方面,还提供了一种服务器,其中,包括处理器、通信接口、存储器和通信总线,其中,处理器,通信接口,存储器通过通信总线完成相互间的通信;存储器,被设置为存放计算机程序;处理器,被设置为执行存储器上所存放的程序时,实现上述任一项方法实施例中的步骤。
根据第九方面,还提供了一种非易失性可读存储介质,非易失性可读存储介质中存储有计算机程序,其中,计算机程序被设置为运行时执行上述任一项方法实施例中的步骤。
根据第十方面,还提供了一种电子设备,包括存储器和处理器,存储器中存储有计算机程序,处理器被设置为运行计算机程序以执行上述任一项方法实施例中的步骤。
通过本申请,第一操作系统和第二操作系统基于处理器运行,检测第一操作系统在运行过程中的运行状态,根据该运行状态对第一操作系统所使用的处理器资源进行控制。由于第一操作系统和第二操作系统均是基于同一个处理器运行,避免了硬件器件的增加和部署,降低了系统成本,并且可以在操作系统运行的过程中对其使用的处理器资源进行控制,从而合理利用处理器资源支持系统之间的运行,因此,可以解决操作系统的运行效率较低的技术问题,达到了提高操作系统的运行效率的技术效果。
图1是根据本申请实施例的一种操作系统的运行控制方法的硬件环境示意图;
图2是根据本申请实施例的操作系统的运行控制方法的流程图;
图3是根据本申请实施例的一种操作业务接管过程的示意图;
图4是根据本申请实施例的一种处理器核心占用过程的示意图;
图5是根据本申请实施例的一种处理器资源控制过程的示意图一;
图6是根据本申请实施例的一种处理器资源控制过程的示意图二;
图7是根据本申请实施例的一种业务数据交互过程的示意图;
图8是根据本申请实施例的一种第一操作系统运行过程的示意图一;
图9是根据本申请实施例的一种第一操作系统运行过程的示意图二;
图10是根据本申请实施例的一种系统异常监控过程的示意图;
图11是根据本申请实施例的一种嵌入式系统的示意图一;
图12是根据本申请实施例的一种可选的BMC芯片的结构框图;
图13是根据本申请可选的实施方式的一种操作系统间的业务数据通信过程的示意图;
图14是根据本申请可选的实施方式的一种嵌入式系统中业务管理过程的示意图;
图15是根据本申请可选的实施方式的一种任务调度过程的示意图;
图16是本申请实施例的可选的嵌入式系统的示意图二;
图17是根据本申请实施例的操作系统的运行控制装置的结构框图。
下文中将参考附图并结合实施例来详细说明本申请的实施例。
需要说明的是,本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。
本申请实施例所提供的方法实施例可以在服务器、计算机终端、设备终端或者类似的运算装置中执行。以运行在服务器上为例,图1是根据本申请实施例的一种操作系统的运行控制方法的硬件环境示意图。如图1所示,服务器可以包括一个或多个(图1中仅示出一个)处理器102(处理器102可以包括但不限于微处理器MCU或可编程逻辑器件FPGA等的处理装置)和被设置为存储数据的存储器104,在一个示例性实施例中,上述服务器还可以包括被设置为通信功能的传输设备106以及输入输出设备108。本领域普通技术人员可以理解,图1所示的结构仅为示意,其并不对上述服务器的结构造成限定。例如,服务器还可包括比图1中所示更多或者更少的组件,或者具有与图1所示等同功能或比图1所示功能更多的不同的配置。
存储器104可被设置为存储计算机程序,例如,应用软件的软件程序以及模块,如本发明实施例中的操作系统的运行控制方法对应的计算机程序,处理器102通过运行存储在存储器104内的计算机程序,从而执行各种功能应用以及数据处理,即实现上述的方法。存储器104可包括高速随机存储器,还可包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器104可包括相对于处理器102远程设置的存储器,这些远程存储器可以通过网络连接至服务器。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
传输设备106被设置为经由一个网络接收或者发送数据。上述的网络可选实例可包括服务器的通信供应商提供的无线网络。在一个实例中,传输设备106包括一个网络适配器(Network Interface Controller,简称为NIC),其可通过基站与其他网络设备相连从而可与互联网进行通讯。在一个实例中,传输设备106可以为射频(Radio Frequency,简称为RF)模块,其被设置为通过无线方式与互联网进行通讯。
在本实施例中提供了一种操作系统的运行控制方法,应用于上述硬件环境中,图2是根据本申请实施例的操作系统的运行控制方法的流程图,如图2所示,该流程包括如下步骤:
步骤S202,检测第一操作系统在运行过程中的运行状态,其中,第一操作系统和第二操作系统基于处理器运行;
步骤S204,根据运行状态控制第一操作系统所使用的处理器资源。
通过上述步骤,第一操作系统和第二操作系统基于处理器运行,检测第一操作系统在运行过程中的运行状态,根据该运行状态对第一操作系统所使用的处理器资源进行控制。由于第一操作系统和第二操作系统均是基于同一个处理器运行,避免了硬件器件的增加和部署,降低了系统成本,并且可以在操作系统运行的过程中对其使用的处理器资源进行控制,从而合理利用处理器资源支持系统之间的运行,因此,可以解决操作系统的运行效率较低的技术问题,达到了提高操作系统的运行效率的技术效果。
其中,上述步骤的执行主体可以为服务器,设备,主板,芯片,处理器,嵌入式系统等,但不限于此。
可选地,在本实施例中,上述第一操作系统和第二操作系统可以但不限于是两个异构的或者同构的操作系统,即第一操作系统和第二操作系统的类型可以相同也可以不同。
以第一操作系统和第二操作系统为异构操作系统为例,第一操作系统和第二操作系统可以是对响应时间的敏感程度不同的操作系统,比如:第一操作系统对响应时间的敏感程度高于第二操作系统。或者,第一操作系统和第二操作系统可以是对资源的占用量不同的操作系统,比如:第一操作系统对业务对资源的占用量小于第二操作系统。
上述第一操作系统和第二操作系统可以但不限于是部署在嵌入式系统的处理器上的两个异构操作系统,即嵌入式操作系统,嵌入式操作系统根据对响应时间的敏感程度可分为实时性操作系统(RTOS)和非实时性操作系统,实时性操作系统可以但不限于包括Free RTOS(Free Real-Time Operating System,免费实时操作系统)和RT Linux(Real Time Linux,实时Linux),非实时性操作系统可以但不限于包括contiki(Contiki Operating System,康提基操作系统)、HeliOS(Helix Operating System,螺旋操作系统)和Linux(Linux Operating System,Linux操作系统)等。
嵌入式系统被设置为控制、监视或者辅助操作机器和设备的装置,是一种专用的计算机系统。嵌入式系统是以应用为中心,以计算机技术为基础,软硬件可裁剪,适应应用系统对功能、可靠性、成本、体积、功耗等严格要求的专用计算机系统。从应用对象上加以定义来说,嵌入式系统是软件和硬件的综合体,还可以涵盖机械等附属装置。
嵌入式系统从硬件角度可以但不限于包括处理器,存储器和外围电路等硬件设备,上述第一操作系统和第二操作系统基于嵌入式系统的处理器运行。从软件角度可以但不限于包括底层驱动,操作系统和应用程序等,上述第一操作系统和第二操作系统即为嵌入式系统中的操作系统。
可选地,在本实施例中,上述操作系统的运行控制方法可以但不限于由嵌入式系统中实现的控制逻辑
来执行,该控制逻辑实现了对于嵌入式系统中的异构双操作系统,处理器,存储器等软硬件资源的控制,分配和调度。
上述操作系统的运行控制方法可以但不限于由第一操作系统来执行,或者由第一操作系统上设置的用于进行资源控制的功能模块来执行。
在上述步骤S202提供的技术方案中,在第一操作系统的运行过程中,其运行状态可以但不限于表示其运行情况。该运行情况可以但不限于是单维度的,或者也可以但不限于是多维度综合考虑的。比如:运行状态可以但不限于包括软硬件资源的使用情况,指令的执行情况,操作业务的运行情况等等。
可选地,在本实施例中,第一操作系统的运行过程可以但不限于指从上电到断电的整个过程,在这个过程中第一操作系统可以但不限于一直是唤醒着的,或者也可以即有唤醒阶段又有休眠阶段。
在上述步骤S204提供的技术方案中,第一操作系统所使用的处理器资源可以但不限于包括操作业务,处理器核心,处理器上的存储空间(比如内存,缓存),定时器,寄存器,输入输出接口等等。
可选地,在本实施例中,对于处理器资源的控制可以是对于其中一种的单独控制,也可以但不限于是对于多种处理器资源的协同控制。
可选地,在本实施例中,对于处理器资源的控制可以但不限于包括释放,占用,分配,回收等等操作。依据第一操作系统在运行过程中的运行状态对其使用的处理器资源进行合理的控制操作,可以提高资源的利用率,提高操作系统的运行效率。
在一个示例性实施例中,所检测的运行状态可以决定所控制的处理器资源,比如:对业务状态进行检测则可以控制操作业务的调整,对系统状态的检测可以控制处理器核心的使用。也可以根据需要控制的处理器资源设定不同的检测对象,比如:需要对操作业务进行调整则可以对业务状态进行检测,需要对处理器核心的使用进行控制则可以对系统状态进行检测。
一方面,第一操作系统运行操作业务的业务状态可以反映出第一操作系统的运行情况。或者,如果需要对操作系统上的操作业务进行控制可以对于操作业务的业务状态进行检测。例如:在上述步骤S202中,可以但不限于检测第一操作系统基于处理器所运行的目标操作业务的业务状态,其中,运行状态包括业务状态。
可选地,在本实施例中,目标操作业务可以但不限于为对系统的运行性能或者运行环境有一定要求的操作业务,比如:对运行时间有一定要求的风扇控制业务,对数据存储空间有一定要求的日志回溯业务,对响应速度有一定要求的接口切换业务以及硬件接口波形信号模拟业务等等。
可选地,在本实施例中,操作业务的业务状态可以但不限于表示操作业务在各个维度的运行情况,比如:是否被中断,是否运行到某种程度(例如:运行时长是否达到阈值,运行结果是否达到某预设结果)等等。
如果业务状态达到了目标业务状态,即操作业务运行到某种程度,则可以对业务执行与该目标业务状态匹配的控制操作,从而实现将操作业务从一个操作系统上转移至另一个操作系统上运行,操作业务的启动停止,操作业务的挂起恢复等适应当前业务状态的控制。例如:在上述步骤S204中,在检测到业务状态为目标业务状态的情况下,释放目标操作业务,其中,处理器资源包括目标操作业务,第二操作系统用于运行目标操作业务。
可选地,在本实施例中,如果目标操作业务的业务状态达到了目标业务状态,比如:被中断,或运行到某种程度(例如:运行时长达到阈值,运行结果达到某预设结果)等等,则释放第一操作系统上的目标操作业务,由第二操作系统继续运行目标操作业务。从而实现操作业务在操作系统间的交替运行,使得操作业务运行在更加适合其运行的操作系统上。
对于第一操作系统运行目标操作业务的业务状态达到目标业务状态,一方面可以是第一操作系统运行目标操作业务被第二操作系统中断,比如:在获取到第二操作系统向第一操作系统发送的第一中断请求的情况下,确定检测到业务状态为目标业务状态,其中,第一中断请求用于请求接管目标操作业务。或者,
也可以是目标操作业务的业务属性达到目标业务属性,比如:在目标操作业务的业务属性达到目标业务属性的情况下,确定检测到业务状态为目标业务状态。
可选地,在本实施例中,目标操作业务何时进行操作系统的转换可以但不限于由第二操作系统来决定,当第二操作系统决定接管目标操作业务时,可以向第一操作系统发送第一中断请求来指示由其接管目标操作业务,当获取到该第一中断请求时,可以认为第一操作系统对于目标操作业务运行的业务状态已经达到了目标业务状态,可以响应于该第一中断请求释放目标操作业务,由第二操作系统接管目标操作业务的运行。
可选地,在本实施例中,目标操作业务的业务属性可以但不限于包括运行时长,运行结果,运行负载等等。运行时长达到目标业务属性可以但不限于为运行时长达到预设时长,运行结果达到目标业务属性可以但不限于为目标操作业务运行出某个预设的运行结果,运行负载达到目标业务属性可以但不限于为目标操作业务所占用的运行资源超过或即将超过第一操作系统所能承载的范围。
可选地,在本实施例中,目标操作业务何时进行操作系统的转换也可以但不限于由目标操作业务本身的业务属性来决定,如果目标操作业务运行到业务属性达到目标业务属性的程度,则可以认为第一操作系统对于目标操作业务运行的业务状态已经达到了目标业务状态,可以由第二操作系统来接管目标操作业务的运行。
在一个示例性实施例中,对于第二操作系统接管目标操作业务的资格可以但不限于设立判定机制。比如:在获取到第一中断请求的情况下,响应第一中断请求,确定是否由第二操作系统接管目标操作业务;在由第二操作系统接管目标硬件控制器的情况下,释放目标操作业务。
可选地,在本实施例中,获取到第一中断请求时可以不立即释放第一操作系统所运行的目标操作业务,而是确定是否由第二操作系统接管目标操作业务,从而对第二操作系统接管目标操作业务的资格进行判定,如果确定了由第二操作系统接管目标操作业务,则释放第一操作系统所运行的目标操作业务。
在对第二操作系统接管目标操作业务的资格进行判定的机制中,如果第二操作系统没有接管目标操作业务的资格,则可以拒绝第二操作系统对于目标操作业务的接管。比如:在确定是否由第二操作系统接管目标操作业务之后,在不由第二操作系统接管目标操作业务的情况下,向第二操作系统发送第二中断请求,其中,第二中断请求用于指示拒绝第二操作系统接管目标操作业务。
可选地,在本实施例中,对于拒绝第二操作系统对于目标操作业务的接管可以但不限于通过系统间发送中断请求的方式来指示或通知给第二操作系统。
可选地,在本实施例中,对于不由第二操作系统接管目标操作业务的情况,也可以不发送第二中断请求,第一操作系统不释放目标操作业务,继续运行目标操作业务,第二操作系统则无法接管目标操作业务。
向第二操作系统发送第二中断请求拒绝第二操作系统接管目标操作业务之后,第一操作系统还可以继续运行目标操作业务直至满足第二操作系统接管目标操作业务的条件(比如业务属性达到目标属性)后,第一操作系统将目标操作业务释放给第二操作系统,并通知第二操作系统来接管运行。
在释放了第一操作系统上运行的目标操作业务之后,第二操作系统可以主动感知到目标操作业务已被释放,并对目标操作业务进行接管。或者,如果是由第二操作系统主动发送第一中断请求来请求接管目标操作业务的,第二操作系统可以默认只要在一定的时间内没有收到用于拒绝其接管目标操作业务的第二中断请求,则直接接管目标操作业务,从而提高目标操作业务的接管效率。
如果释放第一操作系统上运行的目标操作业务,也可以主动向第二操作系统发送中断请求来通知第二操作系统目标操作业务已被释放。比如:向第二操作系统发送第三中断请求,其中,第三中断请求用于指示已释放目标操作业务,第二操作系统用于响应第三中断请求运行目标操作业务。
在目标操作业务的业务属性达到目标业务属性的情况下,如果第一操作系统主动释放目标操作业务,可以向第二操作系统发送第三中断请求来通知第二操作系统目标操作业务已被释放,收到该第三中断请求
后由第二操作系统接管目标操作业务后续的运行。
图3是根据本申请实施例的一种操作业务接管过程的示意图,如图3所示,第二操作系统向第一操作系统发送第一中断请求来请求接管运行在第一操作系统上的目标操作业务,如果第一操作系统允许第二操作系统接管目标操作业务,则释放目标操作业务,由第二操作系统接管目标操作业务,目标操作业务在第二操作系统上运行。如果第一操作系统不允许第二操作系统接管目标操作业务,则向第二操作系统发送第二中断请求拒绝其接管目标操作业务,目标操作业务在第一操作系统上继续运行。
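作为上述图3流程的一个示意性草图(仅为示意,其中的中断号IRQ_TAKEOVER_REQ、IRQ_TAKEOVER_REFUSE、IRQ_SERVICE_RELEASED以及send_ipi()、can_release_service()、stop_service()等名称均为本文假设,并非本申请限定的实现),第一操作系统侧对第一/第二/第三中断请求的处理大致可以写成:
/* 假设的核间中断号(示意) */
#define IRQ_TAKEOVER_REQ       8   /* 第一中断请求:第二操作系统请求接管目标操作业务 */
#define IRQ_TAKEOVER_REFUSE    9   /* 第二中断请求:拒绝第二操作系统接管 */
#define IRQ_SERVICE_RELEASED  10   /* 第三中断请求:目标操作业务已释放 */

extern void send_ipi(int cpu, int irq);          /* 假设:向指定CPU核发送核间中断 */
extern int  can_release_service(int service_id); /* 假设:判定是否允许第二操作系统接管 */
extern void stop_service(int service_id);        /* 假设:停止并释放目标操作业务 */

/* 第一操作系统收到第一中断请求时的处理(示意) */
void on_takeover_request(int service_id, int requester_cpu)
{
    if (can_release_service(service_id)) {
        stop_service(service_id);                      /* 释放目标操作业务 */
        send_ipi(requester_cpu, IRQ_SERVICE_RELEASED); /* 通知第二操作系统接管运行 */
    } else {
        send_ipi(requester_cpu, IRQ_TAKEOVER_REFUSE);  /* 拒绝接管,业务继续在第一操作系统运行 */
    }
}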
另一方面,第一操作系统的系统状态可以反映出第一操作系统的运行状态,依据第一操作系统的系统状态可以但不限于对第一操作系统所使用的处理器核心进行合理的控制。或者,如果需要对操作系统所使用的处理器核心进行控制可以对于操作系统的系统状态进行检测。例如:在上述步骤S202中,可以但不限于检测第一操作系统的系统状态,其中,运行状态包括系统状态,第一操作系统基于处理器中的目标处理器核心运行。
可选地,在本实施例中,目标处理器核心可以但不限于是处理器中为第一操作系统分配的被设置为运行第一操作系统的处理器核心,目标处理器核心的数量可以但不限于为一个或者多个。
可选地,在本实施例中,操作系统的系统状态可以但不限于表示操作系统在各个维度的运行情况,比如:是否被中断,是否运行到某种程度(例如:运行时长是否达到阈值,运行结果是否达到某预设结果)等等。
如果系统状态达到了目标系统状态,即操作系统运行到某种程度,则可以对操作系统所使用的处理器核心执行与该目标系统状态匹配的控制操作,从而实现处理器核心的合理分配利用。例如:在上述步骤S204中,在检测到系统状态为目标系统状态的情况下,释放目标处理器核心,其中,处理器资源包括目标处理器核心,第二操作系统用于将目标处理器核心添加至第二操作系统的调度资源池中,调度资源池中包括处理器中为第二操作系统分配的处理器核心。
可选地,在本实施例中,如果第一操作系统的系统状态达到了目标系统状态,比如:被中断,或运行到某种程度(例如:运行时长达到阈值,运行结果达到某预设结果,运行负载低于预设值)等等,则释放第一操作系统所使用的目标处理器核心,由第二操作系统使用目标处理器核心。从而实现处理器核心在操作系统间的交替使用,使得处理器核心得到更加合理的利用。
对于第一操作系统的系统状态达到目标系统状态,一方面可以是第一操作系统被第二操作系统中断,比如:在获取到第二操作系统向第一操作系统发送的第四中断请求的情况下,确定检测到系统状态为目标系统状态,其中,第四中断请求用于请求占用目标处理器核心。或者,也可以是第一操作系统的系统属性达到目标系统属性,比如:在第一操作系统的系统属性达到目标系统属性的情况下,确定检测到系统状态为目标系统状态。
可选地,在本实施例中,目标处理器核心何时进行操作系统的转换可以但不限于由第二操作系统来决定,当第二操作系统决定接管目标处理器核心时,可以向第一操作系统发送第四中断请求来指示由其接管目标处理器核心,当获取到该第四中断请求时,可以认为第一操作系统的系统状态已经达到了目标系统状态,可以响应于该第四中断请求释放目标处理器核心,由第二操作系统接管目标处理器核心,将其添加到调度资源池中使用。
可选地,在本实施例中,获取到第二操作系统向第一操作系统发送的第四中断请求后,可以将第一操作系统当前正在运行的数据压入堆栈,第一操作系统进入休眠状态,由第二操作系统占用目标处理器内核进行调度和使用。
可选地,在本实施例中,第二操作系统可以但不限于根据自身对于处理器核心资源的需求发起第四中断请求,比如:第二操作系统检测为其分配的核心的资源占用率是否高于一定阈值,或者检测为其分配的核心的资源剩余量是否足以运行下一个进程,如果资源占用率高于一定阈值或者剩余量不足以运行下一个进程,则可以认为第二操作系统需要额外的处理器核心,第二操作系统可以主动向第一操作系统发送第四
中断请求来请求占用目标处理器核心,从而降低其运行压力,或者支持下一个进程的运行。
在一个可选的实施方式中,当第二操作系统(比如Linux)检测到为其分配的核心的资源占用率较高(比如占用率高于总资源量的95%)时,可以向第一操作系统(比如RTOS)发送第四中断请求,第一操作系统(RTOS)收到第四操作系统后将其正在运行的业务现场进行保存(比如:将运行数据压入堆栈),并释放其使用的目标处理器核心,由第二操作系统(Linux)对目标处理器核心进行占用,并为目标处理器核心分配需要运行的线程,或者将其他占用率较高的处理器核心上的线程调度到目标处理器核心上运行。
可选地,在本实施例中,操作系统的系统属性可以但不限于包括系统的运行时长,运行结果,运行负载等等。系统的运行时长达到目标系统属性可以但不限于为系统的运行时长达到预设时长,系统的运行结果达到目标系统属性可以但不限于为操作系统运行出某个预设的运行结果,系统的运行负载达到目标系统属性可以但不限于为操作系统的资源占用率低于或即将低于其设定的占用率下限。
可选地,在本实施例中,目标处理器核心何时进行操作系统的转换也可以但不限于由操作系统本身的系统属性来决定,如果操作系统运行到系统属性达到目标系统属性的程度,则可以认为操作系统的系统状态已经达到了目标系统状态,可以由第二操作系统来占用目标处理器核心。
在一个示例性实施例中,对于第二操作系统来占用目标处理器核心的资格可以但不限于设立判定机制。比如:在获取到第四中断请求的情况下,响应第四中断请求,确定是否由第二操作系统占用目标处理器核心;在由第二操作系统占用目标处理器核心的情况下,释放目标处理器核心。
可选地,在本实施例中,获取到第四中断请求时可以不立即释放目标处理器核心,而是确定是否由第二操作系统占用目标处理器核心,从而对第二操作系统占用目标处理器核心的资格进行判定,如果确定了由第二操作系统占用目标处理器核心,则释放目标处理器核心由第二操作系统占用。
在对第二操作系统占用目标处理器核心的资格进行判定的机制中,如果第二操作系统没有占用目标处理器核心的资格,则可以拒绝第二操作系统对于目标处理器核心的占用。比如:在不由第二操作系统占用目标处理器核心的情况下,向第二操作系统发送第五中断请求,其中,第五中断请求用于指示拒绝第二操作系统占用目标处理器核心。
可选地,在本实施例中,对于拒绝第二操作系统对于目标处理器核心的占用可以但不限于通过系统间发送中断请求的方式来指示或通知给第二操作系统。
可选地,在本实施例中,对于不由第二操作系统占用目标处理器核心的情况,也可以不发送第二中断请求,第一操作系统不释放目标处理器核心,继续占用目标处理器核心,第二操作系统则无法占用目标处理器核心。
向第二操作系统发送第五中断请求拒绝第二操作系统占用目标处理器核心之后,第一操作系统还可以继续使用目标处理器核心处理操作业务直至满足第二操作系统占用目标处理器核心的条件(比如:系统属性达到目标系统属性)后,第一操作系统将目标操作业务释放给第二操作系统,并通知第二操作系统来接管运行。
图4是根据本申请实施例的一种处理器核心占用过程的示意图,如图4所示,第一操作系统基于目标处理器核心运行,在运行过程中第二操作系统向第一操作系统发送第二中断请求请求占用其使用的目标处理器核心,如果允许第二操作系统占用,则释放目标处理器核心,由第二操作系统占用目标处理器核心,将其添加至资源调度池中。如果不允许第二操作系统占用,则向第二操作系统发送第五中断请求拒绝。
在释放了第一操作系统使用的目标处理器核心之后,第二操作系统可以主动感知到目标处理器核心已被释放,并对目标处理器核心进行占用。或者,如果是由第二操作系统主动发送第四中断请求来请求占用目标处理器核心的,第二操作系统可以默认只要在一定的时间内没有收到用于拒绝其占用目标处理器核心的第五中断请求,则直接占用目标处理器核心,从而提高目标处理器核心的占用效率。
如果第一操作系统主动释放其所使用的目标处理器核心,可以主动向第二操作系统发送中断请求来通知第二操作系统目标处理器核心已被释放。比如:向第二操作系统发送第六中断请求,其中,第六中断请
求用于指示第一操作系统已释放目标处理器核心,第二操作系统用于响应第六中断请求将目标处理器核心添加至调度资源池中。
在系统属性达到目标系统属性的情况下,第一操作系统主动释放目标处理器核心,可以向第二操作系统发送第六中断请求来通知第二操作系统目标处理器核心已被释放,收到该第六中断请求后由第二操作系统占用目标处理器核心进行资源的调度和使用。
在一个可选的实施方式中,当第一操作系统(比如RTOS)确定在运行过程中无任何线程需要调度(比如操作系统的资源占用率低于或即将低于其设定的占用率下限)时,可以触发第一操作系统(RTOS)的主动休眠,第一操作系统(RTOS)向第二操作系统(比如Linux)发送第六中断请求,并将其运行现场进行保存(比如:将运行的数据压入堆栈)后休眠,第二操作系统(Linux)收到第六中断请求后将目标处理器内核添加到其资源调度池中进行调度和使用。
在一个可选的应用场景中,芯片中搭载了双操作系统基于多核处理器CPU运行,第一操作系统可以但不限于为RTOS,第二操作系统可以但不限于为Linux,CPU核0被分配给RTOS使用,其余核心被分配给Linux使用,图5是根据本申请实施例的一种处理器资源控制过程的示意图一,如图5所示,RTOS周期性被唤醒运行,RTOS和Linux交替占用调度CPU核0,在RTOS调度CPU核0的时间片(T4,T5)内,Linux在T4-1时刻产生一个接管CPU核0的中断(相当于上述第四中断请求),导致RTOS不得不休眠。这时RTOS将现场保存在堆栈中,进行休眠,然后将CPU核0释放给Linux接管,等待Linux调度完成后,T5-1时刻将产生RTOS抢占CPU核0的中断唤醒RTOS,RTOS从T5-1时刻又开始进入轮循模式占用调度CPU核0。
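上述CPU核0交替占用的过程可以用如下C语言草图表示(其中save_context()、restore_context()、enter_wfi()、notify_linux_core_released()等均为假设的函数名,仅用于说明第四/第六中断配合堆栈保存现场的思路,并非具体实现):
extern void save_context(void);               /* 假设:将当前运行现场压入堆栈 */
extern void restore_context(void);            /* 假设:从堆栈恢复运行现场 */
extern void enter_wfi(void);                  /* 假设:处理器核进入低功耗等待(休眠) */
extern void notify_linux_core_released(void); /* 假设:向第二操作系统上报已释放CPU核0的中断 */

/* 第一操作系统收到第四中断请求(第二操作系统请求占用CPU核0)时的处理(示意) */
void rtos_on_core_preempt_request(void)
{
    save_context();               /* 保存现场:将正在运行的数据压入堆栈 */
    notify_linux_core_released(); /* 释放CPU核0,由第二操作系统接管调度 */
    enter_wfi();                  /* 第一操作系统休眠,等待被唤醒 */
    /* 被唤醒(例如收到抢占CPU核0的中断)后从这里继续 */
    restore_context();            /* 恢复现场,重新基于CPU核0运行 */
}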
在本实施例中,系统间操作业务的接管和处理器核心的占用可以但不限于是单独的,比如:只接管操作业务,或者只占用处理器核心。也可以一起占用,即,既接管操作业务又占用处理器核心。
在一个可选的实施方式中,以操作业务为设备控制业务为例,描述第二操作系统接管第一操作系统的处理器资源。在本实施方式中,提供了一种操作系统的启动控制过程,该过程包括如下步骤:
步骤A,通过处理器的第一处理器核心上运行的第一操作系统经由第一总线对目标设备的硬件控制器进行控制,以对目标设备的运行状态进行控制。
对于如服务器、个人电脑、工控机等设备,可以配备一些特定设备执行与设备运行相关的操作。相关技术中,通常在系统上电后,这些特定设备就开始工作。而由于系统上电后,运行在处理器上的操作系统会经过一段时间才能正常接管特定设备,进行特定设备的运行状态控制,而在操作系统启动的过程中,特定设备是不可控的。
例如,在系统上电后风扇就开始工作,由于系统上电后跑在CPU上的操作系统会经过一段时间才能正常接管风扇,进行风扇转速的设置,所以在操作系统启动过程中风扇是不可控的。
例如,为了实现在操作系统启动的过程中对风扇可以控制,服务器通过采用BMC结合CPLD的控制方式,个人电脑采用EC芯片的控制方式(EC芯片根据温度调整风扇转速的功能),工控机采用定制芯片的控制方式,在服务器、个人电脑、工控机操作系统启动过程中,CPLD、EC芯片、定制芯片就会介入控制风扇的转速,等待操作系统完全启动后,风扇的控制权就会交给操作系统中的应用程序进行控制。
为了至少部分解决上述技术问题,可以采用多核多系统(例如,多核双系统)的启动控制方式,在处理器的不同处理器核心上运行嵌入式系统的不同操作系统,不同的操作系统的响应速度不同,对于第二操作系统未启动、重启或者其他无法对特定设备的运行状态进行控制的情况,可以由响应速度高于第二操作系统的第一操作系统对特定设备的运行状态进行控制,可以降低特定设备的运行状态不可控的情况,同时,由于不需要增加额外的成本,此外还具有很好的可扩展性。
在本实施方式中,在第二操作系统未启动、重启或者其他无法对目标设备的运行状态进行控制的情况下,可以通过第一操作系统经由第一总线对目标设备的硬件控制器进行控制,以对目标设备的运行状态进行控制。这里的目标设备可以是风扇,或者其他需要在系统启动是运行的设备,对于风扇,其对应的硬件
控制器为风扇控制器,例如,PWM(Pulse Width Modulation,脉冲宽度调制)控制器、FanTach(风扇转速)控制器。这里,使用第一操作系统(例如,RTOS系统)代替传统的CPLD、EC芯片、定制芯片,一方面节省硬件成本,另一方面由于设备控制是由软件实现的,可扩展性较高。
例如,基于BMC双核实现双系统,RTOS系统和Linux系统,基于多核双系统实现风扇,利用RTOS系统实时性高的特性,在Linux系统启动的过程中,可以由RTOS系统代替CPLD、EC芯片、定制芯片控制风扇,即,接管风扇控制权,以足够快的速度对风扇的运行状态进行控制。
步骤B,引导在处理器的第二处理器核心上启动第二操作系统。
在系统上电时或者第二操作系统重启时,可以引导在处理器的第二处理器核心上启动第二操作系统,以使得第二操作系统在第二处理器核心上运行。这里,在第二处理器核心上启动第二操作系统是指将第二处理器核心调度给第二操作系统,操作系统的系统文件或者镜像文件可以存储在处理器所在芯片上或者芯片以外的存储器内,例如,外部RAM(Random Access Memory,随机存取存储器)内。
步骤C,在第二操作系统启动之后,通过第二操作系统经由第一总线接管硬件控制器,以接管目标设备的控制权。
在第二操作系统启动完成之后,可以一直由第一操作系统对目标设备的运行状态进行控制,考虑到在多核处理器上运行多个操作系统需要在多个操作系统之间进行数据交互,以及方便由一个操作系统进行设备的总体控制,也可以由第二操作系统接管目标设备的控制权。例如,可以通过第二操作系统经由第一总线接管硬件控制器。第二操作系统接管目标设备的控制权的方式可以是:在第二操作系统启动之后,由第二操作系统向第一操作系统发送设备接管请求,例如,通过第二总线发送中断请求,以请求接管目标设备的硬件控制器。第一操作系统可以接收第二操作系统发送的设备接管请求,将目标设备的控制权转交给第二操作系统,还可以执行与目标设备的控制权交接相关的操作,例如,停止运行用于对目标设备的运行状态进行控制的业务(进程)。
例如,等到Linux系统完全启动后,RTOS系统将风扇的控制权转交给Linux系统,由Linux系统对风扇进行控制。上述过程可以是在系统上电之后执行的,即,采用多核双系统的启动方式,先启动RTOS系统,利于更早介入风扇控制,而等到Linux系统完全启动之后,RTOS系统将风扇的控制权转交给Linux系统进行控制。
在一个示例性实施例中,通过处理器的第一处理器核心上运行的第一操作系统经由第一总线对目标设备的硬件控制器进行控制之前,还包括:在处理器所在的芯片上电之后,通过处理器唤醒第一处理器核心;通过第一处理器核心运行第一操作系统的引导加载程序,以引导第一操作系统在第一处理器核心上启动。
整个系统按照工作时段可以划分为两个阶段,初始启动阶段和实时运行阶段,本实施例中的启动控制方法可以是在初始启动阶段或者实时运行阶段执行的。对于初始启动阶段,初始启动阶段起于系统上电,即,处理器所在的芯片上电,系统上电后会唤醒一个核心去执行操作系统的引导动作,其余核心暂时处于休眠状态,被唤醒的核心可以是第一处理器核心。
可选地,上电后系统将首先执行一个预置的核心调度策略(启动引导策略),即,由处理器的一个处理器核心执行核心调度策略,核心调度策略可以存储在SOC片(System on Chip,片上系统)上的RAM或Norflash(非易失闪存)中,该调度策略可以根据不同的设计需求进行灵活配置,其主要功能包括:指定不同操作系统需要运行的初始处理资源(处理器核心),确定异构操作系统的引导过程,芯片上电可以是指SOC芯片层面的上电。
在第一处理器核心唤醒之后,可以通过引导加载程序在第一处理器核心上引导运行第一操作系统:可以,由第一处理器核心通过引导加载程序引导第一操作系统在第一处理器核心上启动。引导加载(Boot Loader)程序可以位于电脑或其他计算机应用上,其是指用于引导操作系统加载的程序,例如,Boot Rom里的固有程序,固有程序指的是引导操作系统启动的代码,属于Boot Loader程序,Boot Rom是CPU片上
的嵌入处理器芯片内的一小块掩模ROM(Read-Only Memory,只读存储器)或者写保护闪存。
在初始启动阶段,通过引导加载程序引导操作系统对应的处理器核心上启动,可以提高操作系统启动的成功率,同时为实时运行阶段做准备。
在一个示例性实施例中,通过处理器的第一处理器核心上运行的第一操作系统经由第一总线对目标设备的硬件控制器进行控制,包括:在第一处理器核心上执行第一操作系统的第一控制任务,其中,第一控制任务用于对硬件控制器进行控制;通过第一处理器核心读取与目标设备对应的指定传感器的传感器数据;通过第一控制任务根据指定传感器的传感器数据经由第一总线向硬件控制器发送设备控制指令,以由硬件控制器按照设备控制指令对目标设备的运行状态进行控制。
操作系统对目标设备的硬件控制器进行控制可以是由在该操作系统所运行的处理器核心上的控制任务(业务)对硬件控制器进行控制执行的,这里的控制任务可以指对应的控制任务。对于目标设备的硬件控制器,可以在第一处理器核心上执行第一操作系统的第一控制任务(第一控制进程),由第一控制任务对硬件控制器进行控制。
对硬件控制器进行控制可以是基于传感器的传感器数据进行的,对于不同的目标设备,影响其运行的参数可以是不同的,对应地,所需获取的传感器数据也可以存在区别。对于目标设备,其可以是在芯片上电以后即运行的设备,与其对应的传感器为指定传感器。指定传感器的类型可以有多种,可以包括但不限于以下至少之一:温度传感器,湿度传感器,噪音传感器等。由于第一控制任务运行在第一处理器核心上,因此,可以通过第一处理器核心读取指定传感器的传感器数据。指定传感器的传感器数据可以存储在指定传感器内的存储空间中,与可以由指定传感器传输到指定的存储空间内,本实施例中对于指定传感器的传感器数据的读取位置不做限定。
读取的指定传感器的传感器数据可以是在一个时间周期内的传感器数据,也可以是自目标设备启动之后的全部传感器数据,还可以是满足其他时间限制条件的传感器数据。在获取到指定传感器的传感器数据之后,第一控制任务可以根据指定传感器的传感器数据对目标设备的运行状态进行控制。对目标设备的运行状态进行控制可以是通过以下方式实现的:通过第一控制任务向目标设备的硬件控制器发送设备控制指令,以由硬件控制器按照设备控制指令对目标设备的运行状态进行控制。
可选地,第一控制任务可以基于指定传感器的传感器数据确定出目标设备预期的运行状态;在目标设备当前的运行状态与预期的运行状态不同的情况下,可以生成上述设备控制指令,设备控制指令可以控制将目标设备的运行状态调整为预期的运行状态。上述设备控制指令可以是经由第一总线发送给目标设备的硬件控制器的。第一总线与前述实施例中类似,在此不做赘述。
通过读取指定传感器的传感器数据,并根据传感器数据对目标设备进行控制,控制其运行状态,提高了资源的利用率。
在一个示例性实施例中,通过第一控制任务根据指定传感器的传感器数据经由第一总线向硬件控制器发送设备控制指令,包括:通过第一控制任务根据指定传感器的传感器数据确定目标设备的设备运行参数的目标参数值,其中,设备运行参数为控制目标设备的运行状态的参数;通过第一控制任务将携带有目标参数值的设备控制指令经由第一总线发送给硬件控制器。
第一控制任务可以根据指定传感器的传感器数据,确定出目标设备预期的运行状态。预期的运行状态可以是通过设备运行参数的参数值进行表示的,设备运行参数可以是控制目标设备的运行状态的参数,对于不同类型的设备,其对应的设备运行参数可以是不同的。例如,对于风扇,其对应的设备运行参数可以是转速,对于其他类型的设备,设备运行参数可以是其他的运行参数。预期的运行状态可以对应于目标设备的设备运行参数的目标参数值。
在确定出目标设备的设备运行参数的目标参数值之后,可以将目标参数值携带在上述的设备控制指令中,即,通过第一控制任务将携带有目标参数值的设备控制指令发送给硬件控制器,向硬件控制器发送设备控制指令的方式可以与前述实施例中类似,在此不做赘述。
根据传感器数据确定目标设备的设备运行参数的参数值,并将确定的参数值携带在设备控制指令中,可以提高设备控制的精准度。
在一个示例性实施例中,通过第一控制任务根据指定传感器的传感器数据确定目标设备的设备运行参数的目标参数值,包括:在目标设备为风扇的情况下,通过第一控制任务根据指定传感器的传感器数据确定风扇的风扇运行参数的目标参数值。
目标设备可以是风扇,其可以是被设置为对所在的服务器或者其他设备进行散热的风扇,即,散热风扇。在此情况下,设备运行参数可以是风扇运行参数,风扇运行参数可以包括一种或多种,可以包括但不限于以下至少之一:转速,转动周期,周期切换时间,还可以是其他的运行参数。本实施例中对此不做限定。
对应地,通过第一控制任务根据指定传感器的传感器数据确定目标设备的设备运行参数的目标参数值可以是:通过第一控制任务根据指定传感器的传感器数据确定风扇的风扇运行参数的目标参数值。在得到目标参数值之后,第一控制任务将携带有目标参数值的设备控制指令经由第一总线发送给风扇的硬件控制器,从而对风扇的运行状态进行控制。
通过对风扇的运行状态进行控制,可以在如系统上电、系统重启或者其他场景下,快速对风扇的运行状态进行控制,提高风扇控制的及时性。
在一个示例性实施例中,在目标设备为风扇的情况下,通过第一控制任务根据指定传感器的传感器数据确定风扇的风扇运行参数的目标参数值,包括:在目标设备为风扇、且指定传感器为温度传感器的情况下,通过第一控制任务根据温度传感器的传感器数据确定风扇的转速的目标转速值,其中,风扇的转速与温度传感器所检测到的温度正相关。
对于目标设备为风扇的场景,指定传感器可以是温度传感器,该温度传感器的数量可以为一个或多个,温度传感器的设置位置可以根据需要进行配置,不同的温度传感器可以设置在不同的位置上。可选地,温度传感器的传感器数据用于表示温度传感器所检测到的温度,对此,第一控制任务可以根据温度传感器的传感器数据确定风扇的转速的目标转速值,这里,风扇的转速与温度传感器所检测到的温度正相关。
在温度传感器的数量为多个的情况下,可以根据每个温度传感器的传感器数据,确定多个温度传感器所检测到的最高温度,风扇的转速可以是根据多个温度传感器所检测到的最高温度确定的,相对于根据多个温度传感器所检测到的平均温度确定风扇的转速,可以保证设备运行的安全性。对于风扇的数量为多个的场景,也可以基于与每个风扇匹配的温度传感器所检测到的最高温度或者平均温度,确定每个风扇的转速。
例如,可以利用第一操作系统(例如,RTOS系统)代替CPLD、EC芯片、定制芯片等处理单元来控制风扇转速(可以是实时进行BMC风扇控制)。在系统刚上电时,可以唤醒第一处理器核心(例如,CPU0,第一处理器核心可以是被硬件唤醒的),第一处理器核心运行引导加载程序(例如,Boot Rom中的指定程序),加载第一操作系统启动,第一处理器核心将读取各种和温度相关的传感器(sensor)数据,进行风扇控制(例如,风扇转速控制),完全模拟上述处理单元完成风扇调控的功能。在进行风扇转速控制时,第一操作系统可以根据温度传感器计算PWM值,继而风扇的转速进行调整。通过上述方式,可以在第二操作系统启动的过程中,由第一操作系统对风扇的转速进行控制。
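作为一个示意性的例子(其中的温度-转速映射关系、传感器数量SENSOR_NUM以及read_temp_sensor()、set_fan_pwm()等均为假设,实际的调控策略以具体产品为准),第一控制任务根据多个温度传感器的最高温度计算PWM目标值的过程大致如下:
#include <stdint.h>

extern int  read_temp_sensor(int idx);         /* 假设:读取第idx个温度传感器的温度(摄氏度) */
extern void set_fan_pwm(uint8_t duty_percent); /* 假设:经第一总线向风扇控制器下发PWM占空比 */

#define SENSOR_NUM 4    /* 假设的温度传感器数量 */

/* 第一控制任务的一次风扇调控(示意):取多个传感器的最高温度,转速与温度正相关 */
void fan_control_once(void)
{
    int max_temp = read_temp_sensor(0);
    for (int i = 1; i < SENSOR_NUM; i++) {
        int t = read_temp_sensor(i);
        if (t > max_temp) {
            max_temp = t;
        }
    }

    uint8_t duty;
    if (max_temp <= 40) {
        duty = 20;                                  /* 低温:低转速 */
    } else if (max_temp >= 80) {
        duty = 100;                                 /* 高温:满转速 */
    } else {
        duty = (uint8_t)(20 + (max_temp - 40) * 2); /* 40~80摄氏度线性映射到20%~100% */
    }
    set_fan_pwm(duty);                              /* 相当于携带目标参数值的设备控制指令 */
}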
在一个示例性实施例中,引导在处理器的第二处理器核心上启动第二操作系统,包括:通过第一处理器核心执行二级程序加载器,以由二级程序加载器唤醒第二处理器核心;通过第二处理器核心运行第二操作系统的通用引导加载器,以引导第二操作系统在第二处理器核心上启动。
在本实施方式中,在进行操作系统启动时,可以将二级程序加载器(Second Program Loader,简称为SPL)加载到内部内存中,例如,SOC内部的静态随机存取存储器(Static Random-Access Memory,SRAM),而SPL可以负责将通用引导加载程序(Universal Boot Loader,简称为U-Boot)加载到随机存取
存储器(Random-Access Memory,简称为RAM)中,二级程序加载器可以引导加载第二操作系统,还可以引导加载第一操作系统。
对于第二操作系统,可以通过第一处理器核心执行二级程序加载器,以由二级程序加载器唤醒第二处理器核心;通过第二处理器核心,可以运行第二操作系统的通用引导加载器(通用引导加载程序),从而引导第二操作系统在第二处理器核心上启动。这里,通过二级程序加载器引导加载第二操作系统的引导程序,第二操作系统的引导程序可以包括通用引导加载器。
需要说明的是,二级程序加载器为通用引导加载程序第一阶段执行的代码,可负责搬运通用引导加载程序第二阶段的代码到系统内存(System RAM,也叫片外内存)中运行。通用引导加载程序是一个遵循GPL(General Public License,通用公共许可协议)协议的开源软件,可以看作是一个裸机综合例程。
例如,系统上电后,处理器首先会唤醒CPU0核,以便可以让RTOS系统尽可能快地运行起来;然后利用Boot Rom中的程序引导RTOS系统启动;RTOS系统启动的过程中,会继续通过SPL加载U-Boot,由U-Boot引导在CPU1上启动第二操作系统直到Linux系统正常启动。
需要说明的是,Boot Rom是芯片(例如,SOC芯片)内部ROM固化程序,其是U-Boot的引导代码。Boot Rom读硬件的启动信息(例如,拨码开关设置),从指定启动介质(例如,SD、MMC等)中读取uboot-spl代码(即,SPL),SPL主要负责初始化外部RAM和环境,加载真正的U-Boot镜像到外部RAM中来执行,外部RAM可以是DDR(Double Data Rate Synchronous Dynamic Random-Access Memory,双倍速率的同步动态随机存取内存),也可以是其他的RAM。
通过二级程序加载器唤醒第二处理器核心,再由第二处理器核心运行通用引导加载程序,从而引导第二操作系统在对应的处理器核心上启动,可以提高操作系统启动的便捷性和成功率。
作为一种可选示例,下面以RTOS系统和Linux系统为例对多核双系统的启动过程进行解释说明。
为了尽快接管风扇管理,可以尽可能使RTOS系统启动,在Linux系统启动完成之后,由在Linux系统接管风扇的控制权。多核双系统的启动过程可以包括以下步骤:
步骤1,在系统刚上电时,唤醒CPU0;
步骤2,CPU0运行Boot Rom中的指定程序,加载RTOS系统启动;
步骤3,在RTOS系统启动的过程中,唤醒CPU1去引导U-Boot,并启动第一操作系统中的风扇控制程序(FanCtrl_RTOS_APP);
步骤4,CPU1引导U-Boot可以包括SPL阶段和U-Boot阶段,通过调用SPL进入到SPL阶段;
步骤5,在SPL阶段,SPL引导U-Boot启动;
步骤6,在U-Boot阶段,加载Linux核心(CPU1~CPUN),并启动BMC业务程序以及第二操作系统中的风扇控制程序(FanCtrl_Linux_APP)。
通过本可选示例,在双系统启动及运行的过程中,通过首先启动RTOS系统对风扇进行控制,并在Linux系统启动之后,由第二操作系统接管风扇的控制权,可以保证在系统上电时快速对风扇进行控制,提高风扇控制的效率。
在一个示例性实施例中,在通过第二操作系统经由第一总线接管硬件控制器之后,还包括:在第二操作系统待重启的情况下,通过第二操作系统经由第二总线唤醒第一操作系统,并由第一操作系统经由第一总线接管硬件控制器,以接管目标设备的控制权;控制第二操作系统进行系统重启。
在由于系统崩溃、接收到reboot(重加载)命令等原因需要重启时,第二操作系统可以首先唤醒第一操作系统,由第一操作系统接管硬件控制器,以接管目标设备的控制权。唤醒第一操作系统可以是经由第二总线执行的,第一操作系统接管硬件控制器可以是经由第一总线执行的。
在第二操作系统发生重启时,通过唤醒第一操作系统接管目标设备的控制权,可以提高设备控制的可靠性。
在一个示例性实施例中,在第二操作系统待重启的情况下,通过第二操作系统经由第二总线唤醒第一
操作系统,包括:在第二操作系统待重启的情况下,通过第二操作系统经由第二总线向第一操作系统发起系统唤醒中断,其中,系统唤醒中断用于唤醒第一操作系统。
唤醒第一操作系统可以是通过核间中断实现的。如果第二操作系统待重启(例如,系统崩溃、接收到reboot命令),第二操作系统可以向第一操作系统发起系统唤醒中断,以唤醒第一操作系统。该系统唤醒中断可以是主动唤醒中断。在第一操作系统接管硬件控制器之后,可以控制第二操作系统进行系统重启,而在第二操作系统重启之后,可以重新接管硬件控制器,接管硬件控制器的过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,第一操作系统对于为其分配的处理器核心的占用可以但不限于享有更高的优先级,或者为第一操作系统分配的处理器核心当前由哪个操作系统使用可以但不限于在操作系统之间进行协商决定。如果为第一操作系统分配使用的目标处理器核心已被第二操作系统占用,在第一操作系统被唤醒运行时,可以检测目标处理器核心是否被释放,如果已被释放,则第一操作系统基于目标处理器核心运行。如果没有被释放,则可以向第二操作系统发送第七中断请求来请求第二操作系统释放目标处理器核心,第二操作系统响应该第七中断请求释放了目标处理器核心后第一操作系统基于目标处理器核心运行。比如:在处理器中的目标处理器核心已被添加至第二操作系统的调度资源池中,且,第一操作系统被唤醒运行的情况下,检测目标处理器核心是否被释放,其中,调度资源池中包括处理器中为第二操作系统分配的处理器核心;在检测到第二操作系统在第一操作系统被唤醒时已释放目标处理器核心的情况下,基于目标处理器核心运行第一操作系统。
可选地,在本实施例中,目标处理器核心已被添加至第二操作系统的调度资源池中可以但不限于表示目标处理器核心已被第二操作系统占用,如果在这种情况下第一操作系统被唤醒运行,那么第二操作系统可以主动释放目标处理器核心。也可以继续占用目标处理器核心,直到第一操作系统主动请求其释放目标处理器核心。
可选地,在本实施例中,第一操作系统检测目标处理器核心是否被释放,如果检测到目标处理器核心未被释放,则可以通过中断请求来请求第二操作系统释放目标处理器核心。比如:在检测到目标处理器核心未被释放的情况下,向第二操作系统发送第七中断请求,其中,第七中断请求用于请求第二操作系统释放目标处理器核心,第二操作系统用于响应第七中断请求释放目标处理器核心。
可选地,在本实施例中,第二操作系统可以但不限于在收到第七中断请求时直接释放目标处理器核心,也可以但不限于对于是否释放目标处理器核心进行判断,再决定是否立即将目标处理器核心释放给第一操作系统,还是继续运行得到运行结果再释放目标处理器核心给第一操作系统。
在上述可选的应用场景中,图6是根据本申请实施例的一种处理器资源控制过程的示意图二,如图6所示,在Linux调度CPU核0的时间片(T3,T4)内,此时RTOS处于休眠状态,在T3-1时刻RTOS可能由于硬件上报的中断事件导致被唤醒,Linux会将运行在CPU核0上的进程现场保留,RTOS占用CPU核0,处理完硬件上报的中断事件后,在T4-1时刻又进入休眠状态,此时RTOS上报释放CPU核0的中断给Linux,Linux继续按照设置的周期调度CPU核0,恢复现场运行进程。
操作系统运行的过程中,可以进行业务数据的交互,交互的过程可以但不限于采用存储空间和中断请求配合传输的方式来实现,操作系统之间通过存储空间来传递数据,通过中断请求来进行相互之间指令的通知。比如:获取第一操作系统基于处理器运行的过程中产生的业务数据;将业务数据存储至处理器上的存储空间;向第二操作系统发送第八中断请求,其中,第八中断请求用于请求第二操作系统从存储空间读取业务数据,第二操作系统用于响应第八中断请求从存储空间读取业务数据。
可选地,在本实施例中,第一操作系统基于处理器运行的过程中产生的业务数据被存储在处理器上的存储空间中,通过第八中断请求通知第二操作系统,由第二操作系统从存储空间中读取出业务数据,从而实现业务数据的交互。
可选地,在本实施例中,操作系统之间交互的业务数据可以但不限于是操作系统运行操作业务过程中
任何需要在系统间进行传输的数据。比如:业务的过程数据,业务的结果数据等等。
可选地,在本实施例中,处理器上的存储空间中可以但不限于为操作系统之间的交互过程配置专用的存储位置,可以称为共享内存,该共享内存可以但不限于按照操作系统再继续分配,即每个操作系统对应一段专用的共享内存。
第一操作系统所对应的共享内存的信息(比如:存储地址)可以携带在用于请求第二操作系统从存储空间读取业务数据的第八中断请求中,第二操作系统响应第八中断请求到其指示的共享内存上读取业务数据。
在本实施例中,各个中断请求可以但不限于通过软件协议的方式在系统间进行传输,或者也可以通过硬件模块进行传递。以硬件模块mailbox的形式传输中断请求为例,第一操作系统和第二操作系统之间可以建立mailbox通道,业务数据通过存储空间读写,中断请求通过mailbox通道传输。
在一个可选的实施方式中,提供了一种核间通信的方式。该方式包括如下步骤:
步骤a,第一操作系统将目标数据(可以为上述业务数据)发送至处理器内存中的目标虚拟通道(可以为上述存储空间)。
可选的,第一操作系统和第二操作系统可以是实时操作系统,也可以是非实时操作系统,第一操作系统和第二操作系统可以是单核操作系统,也可以是多核操作系统,目标数据为待发送的数据,目标虚拟通道是内存中的一段空闲存储空间,第一操作系统将目标数据发送至处理器内存中的目标虚拟通道是指第一操作系统的CPU核将待发送数据写入目标虚拟通道。
步骤b,向第二操作系统发送中断通知消息(可以为上述第八中断请求)。
可选的,第一操作系统的CPU核向第二操作系统的CPU核发送中断通知消息,中断通知消息中可以携带目标虚拟通道的地址,用于通知第二操作系统从目标虚拟通道中获取目标数据,中断通知消息可以是软件触发的,也可以是硬件触发的。
步骤c,第二操作系统响应中断通知消息,从内存中的目标虚拟通道获取目标数据。
可选的,第二操作系统的CPU核响应中断通知消息,从中断通知消息中解析目标虚拟通道的地址,再根据解析的地址定位至内存中的目标虚拟通道,从目标虚拟通道获取目标数据,实现第一操作系统和第二操作的系统之间的数据交互。
通过上述步骤,在处理器上运行的多个操作系统需要互相传输数据时,发送数据的第一操作系统将目标数据发送至处理器内存中的目标虚拟通道,并向第二操作系统发送中断通知消息,接收数据的第二操作系统响应中断通知消息从目标虚拟通道获取目标数据,解决了核间通信过程浪费资源,对操作系统的依赖性强的问题,达到减少核间通信过程对资源的浪费,对操作系统的依赖的效果。
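下面给出发送方(第一操作系统)一侧的一个最小化C语言草图(仅为示意,find_free_channel()、mark_channel_busy()、trigger_sgi()等函数以及中断号IPI_DATA_READY均为假设),用于说明"写共享内存+发中断通知"这一核间通信方式:
#include <stdint.h>
#include <string.h>

extern void *find_free_channel(uint32_t len);         /* 假设:在共享内存中找到空闲且足够大的目标虚拟通道 */
extern void  mark_channel_busy(void *ch);             /* 假设:将通道状态置为被占用 */
extern void  trigger_sgi(int target_cpu, int irq_id); /* 假设:向目的CPU核触发软件中断 */

#define IPI_DATA_READY 11   /* 假设的中断通知号 */

/* 第一操作系统发送目标数据(示意) */
int ipi_send(int target_cpu, const void *data, uint32_t len)
{
    void *ch = find_free_channel(len);       /* 步骤a:确定目标虚拟通道 */
    if (ch == NULL) {
        return -1;                           /* 无可用通道,可分多次发送或稍后重试 */
    }
    mark_channel_busy(ch);                   /* 先置为被占用,避免写入过程中被其他任务申请 */
    memcpy(ch, data, len);                   /* 将目标数据写入目标虚拟通道 */
    trigger_sgi(target_cpu, IPI_DATA_READY); /* 步骤b:发送中断通知消息,通知第二操作系统取数据 */
    return 0;
}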
在一个示例性实施例中,内存中包含数据存储区和元数据存储区,数据存储区被划分为多个存储单元,每个存储单元被设置为存储业务数据,元数据存储区被设置为存储数据存储区的各个存储单元的大小以及被占用状态。
可选的,目标虚拟通道由数据存储区的一个或多个存储单元构成,元数据存储区可以划分为与存储单元的数量相同的存储片,每个存储片被设置为记录一个存储单元的大小以及被占用状态,存储单元的大小可以由存储单元的首地址和末尾地址表征,也可以由首地址和存储单元的长度来表征,占用状态包含被占用状态和未被占用状态,可以由空闲标志的数值来表征。
在一个示例性实施例中,第一操作系统将目标数据发送至处理器内存中的目标虚拟通道包括:第一操作系统读取元数据存储区中的记录,根据读取的记录确定数据存储区中处于空闲状态、总空间大于等于目标数据的长度的至少一个存储单元,得到目标虚拟通道;将元数据存储区中目标虚拟通道对应的至少一个存储单元的状态设置为被占用状态,并将目标数据存储在目标虚拟通道。
需要说明的是,为了保证目标数据可以连续写入内存,写入的目标虚拟通道需要是空闲的、且大于等于目标数据的长度的存储空间,由于内存划分为元数据存储区和数据存储区,可以读取元数据存储区记录
的各个存储单元的占用状态,从中找出处于空闲状态的、可以满足数据存储需求的存储单元。
例如,每个存储单元的大小相等,若目标数据的长度大于一个存储空间的长度,则根据目标数据的长度确定所需的存储单元的数量,从中找出处于空闲状态的、连续的、数量满足数据存储需求的多个存储单元,构成目标虚拟通道。
再例如,每个存储单元的大小相等,数据存储区已预先对存储单元进行组合,得到多个大小不同的虚拟通道,每个虚拟通道由一个或多个存储单元组合而成,可以读取元数据存储区记录的各个虚拟通道的占用状态,从中找出处于空闲状态的、长度大于目标数据的长度的虚拟通道,也即目标虚拟通道。需要说明的是,当系统软件需要申请共享内存空间时候会判断需要申请的数据长度是否大于虚拟通道存放数据的最大长度,如大于虚拟通道存放数据的最大长度,系统软件可以把需要发送的数据分多次多送,保证每次发送数据的长度小于等于虚拟通道存放数据的最大长度,从而保证通信的顺利进行。
在一个示例性实施例中,第二操作系统响应中断通知消息,从内存中的目标虚拟通道获取目标数据包括:第二操作系统读取元数据存储区中的记录,根据读取的记录确定目标虚拟通道;从目标虚拟通道对应的至少一个存储单元获取目标数据,并将至少一个存储单元的状态设置为空闲状态。
也即,第二操作系统从目标虚拟通道对应的存储单元提取目标数据之后,为了不影响其他系统或任务对目标虚拟通道的使用,将目标虚拟通道对应的存储单元的状态设置为空闲状态。
在一个示例性实施例中,第一操作系统将目标数据发送至处理器内存中的目标虚拟通道包括:第一操作系统的驱动层接收目标数据,在内存中确定处于空闲状态的虚拟通道,得到目标虚拟通道;将目标虚拟通道的状态设置为被占用状态,并将目标数据存储至目标虚拟通道。
可选的,实时操作系统和非实时操作系统均具有驱动层,驱动层接收待发送的目标数据之后,调用接口在内存中寻找目标虚拟通道,为避免在写入数据的过程中其他系统申请使用目标虚拟通道,在找到目标虚拟通道之后,将目标虚拟通道的状态设置为被占用状态,再将目标数据写入目标虚拟通道。
在一个示例性实施例中,在第一操作系统中包含应用层的情况下,应用层设置有人机交互接口,在第一操作系统的驱动层在内存中确定处于空闲状态的虚拟通道之前,还包括:第一操作系统的应用层通过人机交换接口接收用户输入的待发送数据,采用预设格式封装待发送数据,得到目标数据,并调用数据写入函数通过预设通信接口将目标数据传递至驱动层,其中,预设通信接口设置在驱动层上。
可选的,应用层把需要发送的数据按照预设格式填充好,得到目标数据,然后在系统的/dev路径生成一个设备文件ipidev,应用层需要从驱动层读写数据的时候,可以先使用系统自带的open函数打开设备文件/dev/ipidev,然后就可以使用系统自带的写函数把目标数据从应用层发送到驱动层,驱动层再把数据放在共享内存中的目标虚拟通道,然后触发中断通知第二操作系统取数据。
在一个示例性实施例中,第二操作系统响应中断通知消息,从内存中的目标虚拟通道获取目标数据包括:第二操作系统基于中断通知消息触发中断处理函数,通过中断处理函数从内存中确定目标虚拟通道,并从目标虚拟通道获取目标数据。
在一个示例性实施例中,通过中断处理函数从内存中确定目标虚拟通道,并从目标虚拟通道获取目标数据包括:通过中断处理函数调用目标任务,由目标任务从内存中确定目标虚拟通道,并从目标虚拟通道获取目标数据。
可选的,中断处理函数发送任务通知唤醒负责数据提取的目标任务,目标任务先通过调用接口在共享内存中寻找目标虚拟通道,然后从目标虚拟通道中读取目标数据并进行数据解析。
在一个示例性实施例中,在第二操作系统包含应用层的情况下,内存中存储有功能标识,功能标识指示目标功能,通过中断处理函数从内存中确定目标虚拟通道,并从目标虚拟通道获取目标数据包括:通过中断处理函数从内存中确定功能标识和目标虚拟通道,并将目标虚拟通道的地址信息发送给功能标志匹配的目标应用程序,其中,目标应用程序为应用层中的目标应用程序;目标应用程序调用数据读取函数通过预设通信接口将地址信息传递至驱动层,驱动层从目标虚拟通道获取目标数据,并将目标数据传递至目标
应用层程序,其中,预设通信接口设置在驱动层,目标应用程序根据功能标识匹配的处理函数处理目标数据,以执行目标功能。
可选的,第二应用系统接收到中断通知消息之后,应用层调用对应的中断处理函数从内存中寻找目标虚拟通道,得到目标虚拟通道的地址信息,然后在系统的/dev路径生成一个设备文件ipidev,应用层需要从驱动层读写数据的时候,可以先使用系统自带的open函数打开设备文件/dev/ipidev,然后就可以使用系统自带的读函数读取目标虚拟通道中的目标数据,也即,驱动层根据目标虚拟通道的地址信息在共享内存中找到对应的目标数据,返回给应用层目标数据和目标数据的长度,在一个示例性实施例中,设置目标虚拟通道的状态为空闲。
需要说明的是,应用层的不同应用程序可以利用目标数据实现不同的功能,内存中存储有功能标识,指示应用程序通过目标数据实现的目标功能,可选的,功能标识可以为Net、Cmd,系统初始化的时候会把Net、Cmd和应用程序PID注册到驱动,驱动层根据收到的NetFn和Cmd就可以找到应用程序的PID,根据PID将数据要发送到对应的应用程序。
例如,NetFn=1,Cmd=1表示第一操作系统和第二操作系统之间互发”hello word”,在系统开始的时候会初始化一个数组,数组的一共三列,第一列NetFn,第二列Cmd,第三列对应NetFn和Cmd的处理函数记为xxCmdHandler。例如当第二操作系统收到第一操作系统发过来的消息时,从消息中获得NetFn和Cmd,判断NetFn=1,Cmd=1,就会去执行”hello word”对应的处理函数HelloCmdHandler去完成对应的功能。
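该NetFn/Cmd分发机制可以用如下示意性的C代码表示(数组内容与HelloCmdHandler仅为与上文示例对应的假设,实际注册的功能项由系统初始化时决定):
#include <stdint.h>
#include <stddef.h>

typedef void (*CmdHandler)(const uint8_t *data, uint32_t len);

/* 与上文示例对应:NetFn=1、Cmd=1 表示两个操作系统之间互发"hello word" */
extern void HelloCmdHandler(const uint8_t *data, uint32_t len);

typedef struct {
    uint8_t    NetFn;    /* 第一列:NetFn */
    uint8_t    Cmd;      /* 第二列:Cmd */
    CmdHandler Handler;  /* 第三列:对应NetFn和Cmd的处理函数 */
} CmdEntry_T;

static const CmdEntry_T g_cmd_table[] = {
    { 1, 1, HelloCmdHandler },
    /* 其余功能在系统初始化时注册到此表 */
};

/* 收到消息后根据NetFn和Cmd找到处理函数并执行对应功能(示意) */
void dispatch_cmd(uint8_t netfn, uint8_t cmd, const uint8_t *data, uint32_t len)
{
    for (size_t i = 0; i < sizeof(g_cmd_table) / sizeof(g_cmd_table[0]); i++) {
        if (g_cmd_table[i].NetFn == netfn && g_cmd_table[i].Cmd == cmd) {
            g_cmd_table[i].Handler(data, len);
            return;
        }
    }
}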
在一个示例性实施例中,数据存储区中包含多个内存通道,每个内存通道由一个或多个存储单元构成,元数据存储区存储有多条记录,每条记录用于记录一个内存通道的元数据,每个内存通道的元数据至少包含内存通道的通道ID、内存通道的大小、内存通道的被占用状态,第一操作系统读取元数据存储区中的记录,根据读取的记录确定数据存储区中处于空闲状态、总空间大于等于目标数据的长度的至少一个存储单元,得到目标虚拟通道包括:遍历元数据存储区存储的记录,判断是否存在指示内存通道处于空闲状态、且内存通道的大小大于等于目标数据的长度的第一目标记录;在存在第一目标记录的情况下,将第一目标记录中记录的通道ID指示的内存通道确定为目标虚拟通道。
需要说明的是,可以将数据存储区划分为n个虚拟的内存通道,每个内存通道大小可以不等,也即,n个虚拟通道的大小依次为2^0*m、2^1*m、2^2*m、2^3*m……2^(n-1)*m,其中,m为一个存储单元的大小,并设置以下结构体作为元数据管理内存通道:
typedef struct{
uint32_t Flag;        /* 通道状态:0xA5A5A5A5表示此通道非空,否则为空 */
uint16_t ChannelId;   /* 通道ID */
uint8_t SrcId;        /* 源CPU ID,即向内存通道写入数据的CPU */
uint8_t NetFn;        /* 功能参数 */
uint8_t Cmd;          /* 功能参数 */
uint32_t Len;         /* 内存通道存储的数据的长度 */
uint32_t ChannelSize; /* 内存通道的大小 */
uint8_t *pData;       /* 内存通道的首地址 */
uint8_t CheckSum;     /* 校验和 */
}IpiHeader_T;
其中,uint32_t Flag表征内存通道的状态,例如,0xA5A5A5A5表示此通道非空,否则为空;uint16_t ChannelId表示通道ID;uint8_t SrcId表示源CPU ID,源CPU是指向内存通道写入数据的CPU;uint8_t NetFn和uint8_t Cmd是功能参数;uint32_t Len为内存通道存储的数据的长度;uint32_t ChannelSize表示内存通道的大小;uint8_t*pData是指内存通道的首地址;uint8_t CheckSum是指校验和,第一操作系统
需要发送数据时,会将发送的数据通过校验和算法计算出校验值,并将校验值发送至第二操作系统,第二操作系统在接收到数据和校验值的情况下,根据接收到的数据通过相同的校验和算法算出校验值,将计算出的校验值和接收到的校验值进行比较,如果一致,说明接收到的数据有效,如果不一致,说明接收到的数据无效。每个虚拟的内存通道对应一条结构体记录,此结构体记录会按照通道ID递增的方式依次存放在共享内存的开始位置,系统上电后会初始化这些结构体记录,初始化Flag为0表示此通道为空,初始化ChannelId依次为0、1、2...n-1,初始化ChannelSize为对应虚拟内存通道的大小,初始化pData指向对应虚拟内存通道的首地址。
在一个示例性实施例中,第一操作系统在确定目标虚拟通道时,根据待发送的目标数据的大小使用接口GetEmptyChannel在所有的内存通道中的寻找满足以下两个条件的虚拟通道:通道结构体IpiHeader中的空闲标志Flag不等于0xA5A5A5A5(也即通道处于空闲状态),且通道结构体IpiHeader中的通道的大小ChannelSize大于等于目标数据的大小(也即内存大小可以满足目标数据的存储需求)。在寻找到满足上述条件的目标虚拟通道后,把该通道的状态设置为非空,也即,设置通道结构体IpiHeader中的空闲标志Flag为0xA5A5A5A5,然后把目标数据拷贝到目标虚拟通道中。
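上述查找过程可以用如下C语言草图表示(沿用上文IpiHeader_T的定义,通道记录数组g_channels、通道个数CHANNEL_NUM等均为示意性假设):
#include <stdint.h>

#define CHANNEL_FLAG_BUSY 0xA5A5A5A5u

extern IpiHeader_T g_channels[];       /* 假设:位于共享内存起始处、按ChannelId递增存放的通道记录 */
extern const unsigned int CHANNEL_NUM; /* 假设:虚拟内存通道的个数n */

/* 在所有内存通道中寻找空闲且大小满足要求的目标虚拟通道(示意) */
IpiHeader_T *GetEmptyChannel(uint32_t data_len)
{
    for (unsigned int i = 0; i < CHANNEL_NUM; i++) {
        IpiHeader_T *ch = &g_channels[i];
        if (ch->Flag != CHANNEL_FLAG_BUSY && ch->ChannelSize >= data_len) {
            ch->Flag = CHANNEL_FLAG_BUSY;  /* 找到后先把通道状态置为非空 */
            ch->Len  = data_len;
            return ch;                     /* 随后可将目标数据拷贝到ch->pData */
        }
    }
    return 0;                              /* 没有满足条件的通道 */
}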
在一个示例性实施例中,在内存通道被占用的情况下,内存通道的元数据还包括目标数据的源CPU核的ID以及目标数据的目的CPU核的ID,第二操作系统读取元数据存储区中的记录,根据读取的记录确定目标虚拟通道包括:遍历元数据存储区存储的记录,判断是否存在第二目标记录,其中,第二目标记录指示内存通道处于被占用状态、且目的CPU核的ID为第二操作系统的CPU核的ID,源CPU核的ID非第二操作系统的CPU核的ID;在存在第二目标记录的情况下,将第二目标记录中记录的通道ID指示的内存通道确定为目标虚拟通道。
也即,目标虚拟通道是所有的通道中满足以下三个条件的虚拟通道:一是通道结构体IpiHeader中的空闲标志Flag等于0xA5A5A5A5(也即,指示通道处于被占用状态);二是通道结构体中的TargetId等于当前CPU的ID(也即,指示目标数据的目的CPU是第二操作系统的CPU);三是通道结构体中的TargetId不等于SrcId(也即,指示目标数据不是第二操作系统的CPU发送的)。
需要说明的是,若使用1位表示空闲标Flag,0表示通道空,1表示通道非空,如果Flag原本是0,突变为1,则系统在读取Flag之后会认为通道非空,从而导致通信异常。而本实施例中,将空闲标Flag设置为多位特殊字符,例如,0xA5A5A5A5,由于多位同时突变为特殊字符该概率大大小于一位突变概率,可以防止存储介质位突变对Flag的值造成影响,从而提高了通信的安全性。
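下面用一个小函数示意第二操作系统判断目标虚拟通道时的三个条件(flag、target_id、src_id分别对应通道记录中的空闲标志、目的CPU核ID与源CPU核ID;get_current_cpu_id()为假设接口,仅为示意):
#include <stdint.h>

#define CHANNEL_FLAG_BUSY 0xA5A5A5A5u   /* 多位特殊字符作为非空标志 */

extern uint32_t get_current_cpu_id(void);   /* 假设:获取当前CPU核的ID */

/* 判断某个通道记录是否为发给本核的目标虚拟通道(示意):
 * 条件一:通道处于被占用状态;条件二:目的CPU是本核;条件三:数据不是本核自己发送的 */
int is_my_target_channel(uint32_t flag, uint8_t target_id, uint8_t src_id)
{
    uint8_t self = (uint8_t)get_current_cpu_id();
    return (flag == CHANNEL_FLAG_BUSY) && (target_id == self) && (src_id != self);
}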
在一个示例性实施例中,元数据存储区存储有映射表,映射表中有多条记录,每条记录用于记录一个存储单元的被占用状态,第一操作系统读取元数据存储区中的记录,根据读取的记录确定数据存储区中处于空闲状态、总空间大于等于目标数据的长度的至少一个存储单元,得到目标虚拟通道包括:确定目标数据待占用的存储单元的预设数量;从映射表的初始位置依次扫描每条记录;在扫描到连续的预设数量的目标记录的情况下,确定预设数量的目标记录指示的连续存储单元,其中,目标记录表征存储单元处于空闲状态;将连续存储单元确定为目标虚拟通道。
需要说明的是,为了便于数据的存储和提取,由于操作系统传递业务数据时需要占用内存中连续的存储单元,因此,首先需要确定内存申请指令中的存储单元的数量,由于每一存储单元的内存空间相同,因此可以通过所需内存的空间大小计算出需要的连续存储单元的预设数量,记作numb。
可选的,第一操作系统从映射表中的索引位置遍历记录,索引位置可以为映射表的起始位置,从映射表的起始位置开始,依次查询映射表的每条记录,判断是否存在连续的记录空闲内存页的大于等于numb的记录,在存在符合上述条件的记录的情况下,通过记录与内存页的对应关系,确定处理器中的连续存储单元,并将该连续存储单元确定为目标虚拟通道,以向目标虚拟通道写入数据。
在一个示例性实施例中,中断通知消息中包含连续存储单元的首地址和预设数量,第二操作系统读取元数据存储区中的记录,根据读取的记录确定目标虚拟通道包括:从映射表的初始位置依次扫描每条记
录;在扫描到记录有连续存储单元的首地址的情况下,将扫描到的地址指示的存储单元以及预设数量减一的连续存储单元确定为目标虚拟通道。
可选的,连续存储单元是指数量等于numb的连续存储单元,映射表中的每条记录还记录有对应存储单元的首地址,第二操作系统在映射表中扫描到数量等于numb的连续存储单的首地址的记录情况下,说明扫描到了目标虚拟通道的首地址,首地址指示的存储单元以及该存储单元之后的numb-1个连续存储单元构成目标虚拟通道,第二操作系统目标虚拟通道中获取数据,以完成和第一操作系统的数据交互。
在一个示例性实施例中,通过计数器对扫描到的连续的目标记录进行记录,在按照存储单元的数量从映射表的初始位置依次扫描每条记录的过程中,在当前扫描到目标记录的情况下,控制计数器加一,在当前扫描到非目标记录的情况下,控制计数器清零。
可选的,利用计数器的数值与所需存储单元数量的大小关系判断是否存在连续的预设数量的目标记录,也即是否存在预设数量的连续存储单元,可选的,将计数器的计数记作cntr,若扫描到的一个存储单元为空,则将cntr进行加1操作,若扫描到的存储单元不为空,则将累加的连续、处于空闲状态的存储单元的数量cntr清零,继续从该存储单元后一个地址处开始寻找连续、处于空闲状态的存储单元;直到cntr等于numb,表示已找到了满足内存需求的连续、处于空闲状态的存储单元;如果在扫描完整个映射表之后,不存在cntr大于等于numb,则表明本次动态申请内存失败,不存在预设数量的连续存储单元。
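上述"计数器累加、遇到非空清零"的扫描过程可以用如下C语言草图表示(映射表以字节数组map表示,0表示对应存储单元空闲,非0表示被占用,均为示意性假设):
/* 在映射表中寻找连续numb个空闲存储单元,返回首个单元的下标;找不到返回-1(示意) */
int find_contiguous_free(const unsigned char *map, int total, int numb)
{
    int cntr = 0;                       /* 已累加的连续空闲单元数 */
    for (int i = 0; i < total; i++) {
        if (map[i] == 0) {              /* 扫描到目标记录:存储单元空闲 */
            cntr++;
            if (cntr == numb) {
                return i - numb + 1;    /* 找到满足需求的连续、空闲的存储单元 */
            }
        } else {
            cntr = 0;                   /* 扫描到非目标记录:计数器清零,继续向后寻找 */
        }
    }
    return -1;                          /* 扫描完整个映射表仍不足numb个,本次申请失败 */
}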
在一个示例性实施例中,在第一操作系统读取元数据存储区中的记录,根据读取的记录确定数据存储区中处于空闲状态、总空间大于等于目标数据的长度的至少一个存储单元,得到目标虚拟通道之前,该方法还包括:第一操作系统发送内存申请指令,并对处理器的内存执行加锁操作,其中,内存申请指令用于申请使用处理器的内存;在对内存加锁成功的情况下,读取映射表中的记录。
可选的,内存申请指令是运行在处理器上的操作系统发出的申请使用处理器的内存的指令,需要说明的是,为防止多个操作系统同时申请使用处理器的内存时导致申请冲突,因此,在操作系统发送内存申请指令时,先对处理器的内存执行加锁操作,当加锁成功后才可以申请使用内存,加锁操作是指内存申请的排他操作,在当前操作系统加锁成功之后,若没有释放加锁,其他服务器没有申请使用处理器内存的权限。
在一个示例性实施例中,对处理器的内存执行加锁操作包括:判断内存当前是否处于被加锁状态,其中,被加锁状态表征内存处于被申请使用的状态;在内存当前未处于被加锁状态的情况下,对内存执行加锁操作;在内存当前处于被加锁状态的情况下,确定对内存的加锁失败,在预设时长后再次申请对处理器的内存进行加锁,直至对内存加锁成功,或者,直至申请加锁的次数大于预设次数。
在处理器运行之前,需要将处理器中的元数据存储区以及数据存储区进行初始化操作,可选的,将元数据存储区中的映射表存储的记录进行初始化,并将内存管理信息进行初始化操作。
在进行申请内存操作前,对内存管理信息进行如下配置:
typedef struct{
uint32_t MemReady; /* 初始化标志:0xA5A5A5A5表示共享内存初始化已完成 */
uint32_t MemLock;  /* 加锁标志:0xA5A5A5A5表示有系统或任务正在申请内存 */
}MallocMemInfo_T;
其中,结构体MallocMemInfo_T的成员变量MemReady表示共享内存是否已初始化完成,变量MemReady为0xA5A5A5A5,表示初始化操作已完成,可以正常动态申请和释放内存;结构体MallocMemInfo_T的成员变量MemLock表征是否被加锁。
可选的,若读取到变量MemLock为0,表示此时没有系统或任务在申请内存,也即内存当前未处于被加锁状态。若读取到变量MemLock为0xA5A5A5A5,表示有系统或任务正在申请内存,需要等此次申请完成后再申请,当前申请加锁失败。
在一个示例性实施例中,若对内存进行加锁操作时存在加锁失败的情况下,通过预设时长的等待后再
次申请内存的加锁,直至加锁成功,例如,预设时长可以为100微秒。
在一个示例性实施例中,若申请加锁失败,并且重复申请的次数超出了预设次数,表明当前时长中处理器中的内存处于不可分配状态,则停止申请操作。例如,预设次数可以为3次,在申请加锁的次数大于3次的情况下,可以向发送申请的操作系统返回当前内存不可用的消息。
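加锁及重试的过程可以示意如下(其中MEM_LOCKED取0xA5A5A5A5,delay_us()为假设的延时接口;为保持简短,此处未展开跨核原子性的细节,实际实现需结合具体硬件的原子指令或自旋锁):
#include <stdint.h>

#define MEM_LOCKED   0xA5A5A5A5u
#define MEM_UNLOCKED 0u

extern volatile uint32_t *g_mem_lock;  /* 假设:指向共享内存中MallocMemInfo_T的MemLock字段 */
extern void delay_us(unsigned int us); /* 假设:微秒级延时 */

/* 申请对共享内存加锁:失败则等待100微秒后重试,最多3次(示意) */
int shared_mem_lock(void)
{
    for (int attempt = 0; attempt < 3; attempt++) {
        if (*g_mem_lock != MEM_LOCKED) {    /* 当前未处于被加锁状态 */
            *g_mem_lock = MEM_LOCKED;       /* 加锁成功 */
            return 0;
        }
        delay_us(100);                      /* 加锁失败,等待预设时长后再次申请 */
    }
    return -1;                              /* 超过预设次数,返回当前内存不可用 */
}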
可选的,在处理器的内存空间中存在可供第一操作系统使用的目标虚拟通道后,第一操作系统将需要传输的目标数据存储到相应的目标虚拟通道,在一个示例性实施例中,根据第一操作系统的数据写入情况更新处理器的内存空间的占用状态,也即将目标连续内存空间由未占用状态变为被占用状态,同时,为了使得其他系统或任务可以申请内存,释放对内存的加锁。
在一个示例性实施例中,该方法还包括:在未扫描到连续的预设数量的目标记录的情况下,释放对内存的加锁。
可选的,在对映射表中的记录进行扫描后,检测不到预设数量的、连续、处于空闲状态的存储单元,表明处理器的内存中没有足够的空间内存页供第一操作系统使用,本次动态申请内存失败,释放对内存的加锁。
在一个示例性实施例中,通过软件中断的方式向第二操作系统发送中断通知消息。
在一个示例性实施例中,通过软件中断的方式向第二操作系统发送中断通知消息包括:向处理器的预设寄存器中写入中断号和第二操作系统的CPU核的ID,并基于中断号和第二操作系统的CPU核的ID生成中断通知消息。
可选的,软中断是软件产生的中断,软件可以给执行自己的CPU核发送中断,也可以给其他的CPU核发送中断。预设寄存器可以为GICD_SGIR寄存器,可以通过软件向GICD_SGIR寄存器写入SGI(Software Generated Interrupts,软件中断)中断号、目的CPU ID,来产生一个软件中断,SGI中断号是为核间通信保留的软中断号。
在多核异构操作系统中,为了最大程度的兼容当下的资源分配方式,使用8-15号(共8个中断)表征核间中断向量表,在第一操作系统为RTOS操作系统,第二操作系统为Linux操作系统的情况下,向量表的一种可行分配方案如表1所示:
表1
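通过写GICD_SGIR寄存器产生SGI软中断的方式可以示意如下(此处假设GIC为GICv2,GIC Distributor基地址GICD_BASE由具体芯片决定,寄存器位域以该假设为准,仅为示意):
#include <stdint.h>

#define GICD_BASE  0xF9010000u                       /* 假设的GIC Distributor基地址,依具体SOC而定 */
#define GICD_SGIR  (*(volatile uint32_t *)(GICD_BASE + 0xF00u))

/* 向目的CPU核发送一个SGI软中断(示意,SGI中断号取0~15,核间通信可保留8~15号) */
void send_sgi(uint32_t sgi_id, uint32_t target_cpu)
{
    uint32_t val = 0;
    val |= (1u << (16u + target_cpu));   /* CPUTargetList:目的CPU ID对应的位 */
    val |= (sgi_id & 0xFu);              /* SGIINTID:软中断号 */
    GICD_SGIR = val;                     /* 写寄存器即产生一个软件中断 */
}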
在一个示例性实施例中,通过硬件中断的方式向第二操作系统发送中断通知消息。
可选的,硬件中断是指通过硬件设备产生的中断,可以为私有外设中断,也可以为共享外设中断,需要说明的是,硬中断是CPU外部的硬件引入的中断,具有随机性,软中断是CPU中运行的软件执行中断指令引入的中断,是预先设定的,本实施例不限定产生中断通知消息的方式。
在一个可选的实施方式中,提供了一种共享内存的方式。该方式包括如下步骤:
步骤101,接收内存申请指令,并对处理器的内存执行加锁操作,其中,内存申请指令用于申请使用处理器的内存。
可选的,内存申请指令是运行在处理器上的操作系统发出的申请使用处理器的内存的指令,需要说明的是,为防止多个操作系统同时申请使用处理器的内存时导致申请冲突,因此,在操作系统发送内存申请
指令时,先对处理器的内存执行加锁操作,当加锁成功后才可以申请使用内存,加锁操作是指内存申请的排他操作,在当前操作系统加锁成功之后,若没有释放加锁,其他服务器没有申请使用处理器内存的权限。
在本申请实施例提供的共享内存的方法中,在对处理器的内存执行加锁操作之前,该方法还包括:判断内存当前是否处于被加锁状态,其中,被加锁状态表征内存处于被申请使用的状态;在内存当前未处于被加锁状态的情况下,对内存执行加锁操作。
可选的,由于多个系统或多个任务在同时申请使用内存时会导致申请冲突,处理器的内存在同一时间段内只能被一个系统或任务进行加锁,因此,在检测到当前内存未处于被加锁状态的情况下,当前操作系统才能对内存执行加锁操作。
可选的,通过判断内存中存储的预设变量是否为预设值来判断内存是否处于被加锁状态,若预设变量不为预设参值数,则表明内存未处于被加锁状态,没有其他系统或任务在申请内存空间,则加锁成功;反之,预设变量为预设参数,则表明在当前时刻内存处于被加锁状态,存在除该操作系统之外的其他系统或任务在申请内存空间,则加锁失败。
在该共享内存的方法中,在判断内存当前是否处于被加锁状态之后,还包括:在内存当前处于被加锁状态的情况下,确定对内存的加锁失败;在对内存的加锁失败的情况下,在预设时长后再次申请对处理器的内存进行加锁,直至对内存加锁成功,或者,直至申请加锁的次数大于预设次数。
可选的,若对内存进行加锁操作时存在加锁失败的情况下,通过预设时长的等待后再次申请内存的加锁,直至加锁成功,例如,预设时长可以为100微秒。
在一个示例性实施例中,若申请加锁失败,并且重复申请的次数超出了预设次数,表明当前时长中处理器中的内存处于不可分配状态,则停止申请操作。例如,预设次数可以为3次,在申请加锁的次数大于3次的情况下,可以向发送申请的操作系统返回当前内存不可用的消息。
步骤102,在对内存加锁成功的情况下,读取内存的被占用状态,并根据内存的被占用状态判断内存中是否存在空闲的目标内存空间,其中,目标内存空间的大小大于等于内存申请指令申请的内存的大小。
在申请加锁成功后,操作系统对处理器中的内存进行申请,可选的,扫描用于记录内存被占用状态的信息,判断是否存在目标内存空间,也即,判断处理器中是否存在处于未被占用状态的、连续的可以满足内存使用需求的内存空间,满足内存使用需求是指内存空间的大小大于等于操作系统申请的内存大小。
需要说明的是,申请内存的时候还可以使用不连续的内存空间,可以在非一个最小内存块后的后面增加一个指针,指向下一个申请获得的最小内存块,同时,在数据读写的时候,根据存储地址和指针实现数据跨数据块的数据读写。本实施例不限定目标内存空间的形式。
步骤103,在内存中存在目标内存空间的情况下,将目标内存空间的地址信息反馈至内存申请指令的发送端,更新内存的被占用状态,并释放对内存的加锁。
发送端是指发送内存申请指令的操作系统,需要说明的是,由于操作系统在核间通信时,通过使用共享内存发送和接收数据,并在收发数据的过程中使用申请的内存返回的地址进行数据的存取,因此需要确定已申请的内存空间的地址信息。
可选的,在处理器的内存空间中存在可供操作系统使用的目标内存空间后,将该目标连续空间的地址信息发送给该操作系统,操作系统根据地址信息将需要传输的数据存储到相应的内存空间中。
在一个示例性实施例中,根据操作系统的数据写入情况更新处理器的内存空间的占用状态,也即,将目标内存空间由未占用状态变为被占用状态,并释放在动态申请内存之前的加锁操作,以使其他操作系统可以申请使用处理器的内存空间。
通过上述步骤:接收内存申请指令,并对处理器的内存执行加锁操作,其中,内存申请指令用于申请使用处理器的内存;在对内存加锁成功的情况下,读取内存的被占用状态,并根据内存的被占用状态判断内存中是否存在空闲的目标内存空间,其中,目标内存空间的大小大于等于内存申请指令申请的内存的大
小;在内存中存在目标内存空间的情况下,将目标内存空间的地址信息反馈至内存申请指令的发送端,更新内存的被占用状态,并释放对内存的加锁,解决了多个内核间共享内存的使用效率低、灵活性差以及过于依赖操作系统的问题,达到提高共享内存灵活性以及使用效率,并降低对操作系统的依赖的效果。
在该共享内存的方法中,内存中包括元数据存储区和数据存储区,数据存储区被设置为存储业务数据,元数据存储区存储有映射表,映射表被设置为记录数据存储区的被占用状态,读取内存的被占用状态,并根据内存的被占用状态判断内存中是否存在空闲的目标内存空间包括:从元数据存储区中读取映射表中的记录,并根据映射表中的记录判断数据存储区中是否存在目标内存空间。
通过查询映射表中的记录的方式查询内存的被占用状态,可选的,获取处理器中存储的元数据存储区,并识别元数据存储区中的映射表,通过遍历映射表中的记录,读取数据存储区的被占用状态,判断数据存储区中是否存在连续的、处于空闲状态的、满足内存使用需求的内存空间。
在本申请实施例提供的共享内存的方法中,数据存储区由多个内存页构成,映射表中包含多条记录,每条记录用于记录一个内存页的被占用状态,从元数据存储区中读取映射表中的记录,并根据映射表中的记录判断数据存储区中是否存在目标内存空间包括:确定内存申请指令申请的内存页的预设数量;从映射表的初始位置依次扫描每条记录;在扫描到连续的预设数量的目标记录的情况下,确定内存中存在目标内存空间,其中,目标记录指示内存页处于空闲状态。
需要说明的是,数据存储区按照相同的内存大小划分为多个分配单元,将每一个分配单元记作一个内存页,例如,数据存储区的内存空间为A字节,划分的分配单元为B字节,则该数据存储区共包含A/B个内存页,映射表中的记录也即内存页记录,每条内存页记录用于记录一个内存页的被占用状态,映射表中的内存页记录和数据存储区中的内存页的数量相同。
数据存储区即为动态分配内存块区,元数据存储区包括动态分配内存映射表区,其中,映射表区按照数据存储区划分内存页的数量划分相同数量的记录,并将该记录记作内存页记录,并将所有内存页记录组合为映射表,映射表中所有内存页记录与数据存储区的所有内存页存在一一对应关系,每一内存页记录中表示对应的内存页的分配状态,也即内存页是否被占用。
可选的,由于操作系统进行协同的业务数据需要占用处理器中连续的内存页,因此,首先需要确定内存申请指令中的内存页的预设数量,由于每一内存页的内存空间相同,因此可以通过所需内存的空间大小计算出需要的连续内存页的预设数量,记作numb。
在一个示例性实施例中,在获取到处理器的元数据存储区中的映射表后,从映射表中的索引位置遍历内存页记录,索引位置可以为映射表的起始位置,从映射表的起始位置开始,依次查询映射表的每条内存页记录,确定是否存在连续的记录空闲内存页的大于等于numb的内存页记录,在存在符合上述条件的内存页记录的情况下,通过内存页记录与内存页的对应关系,确定处理器中存在目标内存空间。
在本申请实施例提供的共享内存的方法中,在从映射表的初始位置依次扫描每条记录之后,该方法还包括:在扫描完毕映射表中的所有记录,且不存在连续的预设数量的目标记录的情况下,确定内存中不存在目标内存空间。
可选的,从映射表的起始位置开始,查询映射表的内存页记录确定是否存在连续、并且内存页数量大于等于numb的空间,若扫描完毕整个映射表后仍未发现存在连续的、预设数量的空闲内存页,则表明不存在目标内存空间。
在本申请实施例提供的共享内存的方法中,通过计数器对扫描到的目标记录的数量进行记录,在从映射表的初始位置依次扫描每条记录的过程中,在当前扫描到目标记录的情况下,控制计数器加一,在当前扫描到非目标记录的情况下,控制计数器清零,其中,非目标记录指示内存页处于被占用状态。
可选的,通过利用计数器的数值与所需内存页数量的大小关系判断是否存在连续的预设数量的目标记录,也即是否存在目标内存空间,可选的,将计数器的计数记作cntr,若扫描到的一页内存页为空,则将cntr进行加1操作,若扫描到的内存页不为空,则将累加的连续、空闲状态的内存页的数量cntr清零,继
续从该内存页后一个地址处开始寻找连续的空的内存页;直到cntr等于numb,表示已找到了满足内存需求的连续、处于空闲状态的内存页;如果在扫描完整个映射表的过程,cntr小于numb,则表明本次动态申请内存失败,不存在目标内存空间。
在本申请实施例提供的共享内存的方法中,在初始位置为映射表中的最后一个位置的情况下,将目标内存空间的地址信息反馈至内存申请指令的发送端包括:确定连续的预设数量的目标记录中最后扫描到的目标记录,将最后扫描到的目标记录指示的内存页的首地址反馈至发送端。
可选的,当对映射表进行扫描时,扫描方式可以选择从映射表的第一个位置扫描或者从映射表的最后一个位置开始扫描,当扫描方式为从映射表的最后一个位置扫描时,在计数器显示的数值cntr大于等于预设数量numb时,将扫描到的最后一个内存页记录对应的内存页的首地址,并再内存页记录中将这些内存页的状态置为非空,将首地址作为本次内存申请指令的整个连续内存页的首地址。
在一个示例性实施例中,将该地址反馈至发出内存申请指令的操作系统,由操作系统根据地址信息对内存进行数据写入操作。
在本申请实施例提供的共享内存的方法中,初始位置为映射表中的第一个位置,将目标内存空间的地址信息反馈给内存申请指令的发送端包括:确定连续的预设数量的目标记录中首个扫描到的目标记录,将首个扫描到的目标记录指示的内存页的首地址反馈至发送端。
可选的,当扫描方式为从映射表的第一个位置扫描时,在计数器显示的数值cntr大于等于预设数量numb的情况下,将扫描到的第一个内存页记录的地址作为首地址,将其发送到发出内存申请指令的操作系统中,由操作系统根据地址信息对内存进行数据写入操作。
在本申请实施例提供的共享内存的方法中,在从映射表的初始位置依次扫描每条记录的过程中,通过预设变量存储扫描到的连续的目标记录中的首个目标记录。
可选的,预设变量是指映射表中用于存储初始位置的地址信息的变量,并将其记作offset,每扫描到一个空闲、连续的内存页时,计数器显示的数值cntr进行加1操作,在计数器显示的数值cntr大于等于预设数量numb的情况下,将offset当前存储的地址信息作为首个目标记录的地址。
在本申请实施例提供的共享内存的方法中,在读取内存的被占用状态,并根据内存的被占用状态判断内存中是否存在空闲的目标内存空间之后,该方法还包括:在内存中不存在空闲的目标内存空间的情况下,释放对内存的加锁。
可选的,在对映射表中的内存页记录进行扫描后。检测到不包含预设数量的、连续、空闲的内存页,也即不包含目标内存空间时,表明处理器的内存中没有足够的空间内存页供该操作系统使用,本次动态申请内存失败,释放对内存的加锁。
在本申请实施例提供的共享内存的方法中,内存中包括元数据存储区和数据存储区,数据存储区被设置为存储业务数据,元数据存储区存储有内存管理信息,判断内存当前是否处于被加锁状态包括:读取元数据存储区存储的内存管理信息,判断内存管理信息是否包含预设信息,其中,预设信息表征内存处于被加锁状态;在内存管理信息包含预设信息的情况下,确定内存当前未处于被加锁状态;在内存管理信息不包含预设信息的情况下,确定内存当前处于被加锁状态。
对于判断处理器的内存是否处于被加锁状态,需要利用元数据存储区中的内存管理信息进行判断,可选的,在获取到元数据存储区的内存管理信息时,利用根据判断内存管理信息中是否包含预设信息,其中,预设信息是用于表征内存是否处于被加锁状态;若内存管理信息中未包含预设信息,则表明当前内存处于未被加锁状态,反之则处于被加锁状态。
在本申请实施例提供的共享内存的方法中,内存管理信息包括第一字段信息和第二字段信息,第一字段信息用于描述内存是否处于被加锁状态,第二字段信息用于描述内存是否初始化完成,在接收内存申请指令之前,该方法还包括:初始化数据存储区存储的第一字段信息和第二字段信息。
在嵌入式系统运行之前,需要将处理器中的元数据存储区以及数据存储区进行初始化操作,可选的,
将元数据存储区中的映射表存储的内存页记录进行初始化,并将内存管理信息进行初始化操作。
可选的,内存管理信息由第一字段信息以及第二字段信息组成,也即:,第一字段信息表征是否被加锁,第二字段信息用于表征是否初始化完成,在进行申请内存操作前,对内存管理信息进行如下配置:
typedef struct{
uint32_t MemReady;
uint32_t MemLock;
}MallocMemInfo_T;
其中,结构体MallocMemInfo_T的成员变量MemReady(第二字段信息)表示共享内存是否已初始化完成,结构体MallocMemInfo_T的成员变量MemLock(第一字段信息)表征是否被加锁,其中,变量MemLock为0,表示此时没有系统或任务在申请内存,也即未被加锁,MemLock为0xA5A5A5A5表示有系统或任务正在申请内存,其他系统或者任务等此次申请完成后再申请;变量MemReady为0xA5A5A5A5,表示初始化操作已完成,可以正常动态申请和释放内存。
在本申请实施例提供的共享内存的方法中,更新内存的被占用状态包括:将映射表中记录的目标内存空间对应的内存页的状态变更为被占用状态。
可选的,在操作系统需要占用目标内存空间的情况下,通过识别目标内存空间的多个内存页的地址信息,按照内存页与内存页记录的对应关系,更新元数据存储区的映射表区的内存页记录,使其由未占用状态变为被占用状态。
在一个可选的实施方式中,提供了一种操作系统之间的通信方式。该方式包括如下步骤:
步骤201,接收第一操作系统的内存申请指令,并对处理器的内存执行加锁操作,其中,内存申请指令用于申请使用处理器的内存;
需要说明的是,为防止多个操作系统同时申请处理器的内存空间时导致申请失败,因此,第一操作系统发送内存申请指令时,向处理器内存申请加锁操作,当申请加锁成功后才可以申请内存。
可选的,通过判断内存中存储的预设变量是否为预设值来确定是否加锁成功,若预设变量不为预设参值数,则表明没有其他系统或任务在申请内存空间,则加锁成功;反之,预设变量为预设参数,则表明在当前时刻,存在除该操作系统之外的其他系统或任务在申请内存空间,则加锁失败。
步骤202,在对内存加锁成功的情况下,读取内存的被占用状态,并根据内存的被占用状态判断内存中是否存在空闲的目标内存空间,其中,目标内存空间的大小大于等于内存申请指令申请的内存的大小;
可选的,在申请加锁成功时,根据操作系统发出的内存申请指令,扫描用于记录内存被占用状态的信息判断是否存在目标内存空间,也即,判断处理器中是否存在处于未被占用状态的、连续的内存空间,在一个示例性实施例中,判断处于未被占用状态的、连续的内存空间的大小是否大于等于操作系统申请的内存大小,得到判断结果。
步骤203,在内存中存在目标内存空间的情况下,将目标内存空间的地址信息反馈至第一操作系统,更新内存的被占用状态,并释放对内存的加锁;
可选的,在判断结果指示处理器的内存空间中存在可供操作系统使用的目标内存空间后,将该目标连续空间的地址信息发送给该操作系统,操作系统根据地址信息将需要传输的数据存储到相应的内存空间中。
可选的,根据操作系统的数据写入情况更新处理器的内存空间的占用状态,也即将目标内存空间由未占用状态变为被占用状态,并释放在动态申请内存之前的加锁操作。
步骤204,响应第一操作系统的存储操作,将目标数据存储至目标内存空间,并将连续内存空间的地址信息发送至第二操作系统;
可选的,在申请内存成功后,第一操作系统将需要传递的目标数据存储值申请的目标内存空间,并向与第一操作系统协同的第二操作系统发送该目标内存空间的地址信息,通知第二操作系统进行数据获取。
步骤205,接收第二操作系统基于地址信息发送的获取指令,将目标内存空间存储的目标数据发送至第二操作系统。
可选的,在第二操作系统接收到目标内存空间的地址信息后,发出数据的获取指令,嵌入式系统接收该指令并将存储在目标内存空间的目标数据发送到第二操作系统中。
通过上述步骤:接收第一操作系统的内存申请指令,并对处理器的内存执行加锁操作,其中,内存申请指令用于申请使用处理器的内存;在对内存加锁成功的情况下,读取内存的被占用状态,并根据内存的被占用状态判断内存中是否存在空闲的目标内存空间,其中,目标内存空间的大小大于等于内存申请指令申请的内存的大小;在内存中存在目标内存空间的情况下,将目标内存空间的地址信息反馈至内存申请指令的发送端,更新内存的被占用状态,并释放对内存的加锁;响应第一操作系统的存储操作,将目标数据存储至目标内存空间,并将连续内存空间的地址信息发送至第二操作系统;接收第二操作系统基于地址信息发送的获取指令,将目标内存空间存储的目标数据发送至第二操作系统,解决了多个内核间共享内存的使用效率低、灵活性差以及过于依赖操作系统的问题,达到提高共享内存灵活性以及使用效率,并降低对操作系统的依赖的效果。
在一个示例性实施例中,在第一操作系统使用物理地址作进行数据读写操作,第二操作系统使用虚拟地址作进行数据读写操作的情况下,第二操作系统将目标内存空间的地址信息转换为虚拟地址,并采用虚拟地址访问内存,从目标内存空间读取目标数据。
由于在核间通信使用共享内存发送和接收数据时,会使用动态申请内存返回的地址,但是不同的系统使用地址系统可能不同,例如,实时操作系统为第一操作系统,非实时操作系统为第二操作系统,实时操作系统中访问共享内存可以直接使用物理地址,非实时操作系统中不能直接使用物理地址访问共享内存,则需要使用映射后的虚拟地址,在第二操作系统接收到目标内存空间的地址信息后,通过地址信息offset进行转换,将其映射为虚拟地址,并根据虚拟地址进行操作。可选的,非实时操作系统下的共享内存虚拟基地址vBase(共享内存真实物理地址假设为0x96000000);实时操作系统下共享内存物理基地址pBase(即0x96000000)。
非实时操作系统中动态申请的内存返回的地址也是虚拟地址vData,非实时操作系统中,Offset=vData-vBase;数据发送从非实时操作系统中发送到实时操作系统中,实时操作系统使用地址pData访问动态申请的共享内存pData=pBase+Offset。
实时操作系统中动态申请的内存返回的地址是物理地址pData,实时操作系统中Offset=pData-pBase;数据发送从实时操作系统中发送到非实时操作系统中,非实时操作系统使用地址vData访问动态申请的共享内存vData=vBase+Offset。
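上述物理地址与虚拟地址之间的换算可以用如下C代码直观表示(vBase、pBase即上文示例中的共享内存虚拟/物理基地址,函数名为示意性假设):
#include <stdint.h>

/* 非实时操作系统侧:由虚拟地址得到偏移,再换算出实时操作系统使用的物理地址(示意) */
uintptr_t virt_to_rtos_phys(uintptr_t vData, uintptr_t vBase, uintptr_t pBase)
{
    uintptr_t Offset = vData - vBase;   /* Offset = vData - vBase */
    return pBase + Offset;              /* pData = pBase + Offset */
}

/* 实时操作系统侧:由物理地址得到偏移,再换算出非实时操作系统使用的虚拟地址(示意) */
uintptr_t phys_to_linux_virt(uintptr_t pData, uintptr_t pBase, uintptr_t vBase)
{
    uintptr_t Offset = pData - pBase;   /* Offset = pData - pBase */
    return vBase + Offset;              /* vData = vBase + Offset */
}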
在一个示例性实施例中,内存中包括元数据存储区和数据存储区,数据存储区由多个内存页构成,每条内存页用于存储业务数据,元数据存储区存储有映射表,映射表中包含多条记录,每条记录用于记录一个内存页的被占用状态,读取内存的被占用状态,并根据内存的被占用状态判断内存中是否存在空闲的目标内存空间包括:确定内存申请指令申请的内存页的预设数量;从映射表的初始位置依次扫描每条记录;在扫描到连续的预设数量的目标记录的情况下,确定内存中存在目标内存空间,其中,目标记录指示内存页处于空闲状态。
可选的,获取处理器中存储的元数据存储区,并识别元数据存储区中的映射表,从映射表中的索引位置开始遍历每一条内存页记录,依次查询映射表的每条内存页记录,确定是否存在连续的记录空闲内存页的大于等于预设数量的内存页记录,在存在符合上述条件的内存页记录的情况下,通过内存页记录与内存页的对应关系,确定处理器中存在目标内存空间。
在一个示例性实施例中,在初始位置为映射表中的最后一个位置的情况下,将目标内存空间的地址信息反馈至内存申请指令的发送端包括:确定连续的预设数量的目标记录中最后扫描到的目标记录,将最后扫描到的目标记录指示的内存页的首地址反馈至发送端。
可选的,当对映射表进行扫描时,扫描方式可以选择从映射表的第一个位置扫描或者从映射表的最后一个位置开始扫描,当扫描方式为从映射表的最后一个位置扫描时,将扫描到的最后一个内存页记录对应的内存页的首地址,并将这些内存页置为非空,将首地址作为本次内存申请指令的整个连续内存页的首地址。在一个示例性实施例中,将该地址反馈至发出内存申请指令的操作系统,由操作系统根据地址信息对内存进行数据写入操作。
在本实施例中还提供了一种共享内存的方法,该方法包括:在操作系统发出内存申请指令前,为防止多个操作系统同时申请处理器的内存空间时导致申请冲突,需要申请加锁操作,并判断是否加锁成功;在判断结果表示动态申请内存加锁成功的情况下,根据发出的内存申请指令中内存大小计算需要分配的连续内存页的页数,并记作nmemb;若判断结果表示申请加锁失败的情况下,在等待一段时间(可以为100微秒)后重新发出申请,直至申请成功,若申请加锁失败的次数大于预设次数(预设次数可以为三次)的情况下,则退出内存申请。
在一个示例性实施例中,在申请加锁成功后,对处理器的元数据存储区进行初始化操作,并将映射表的最后位置记作offset,并根据申请内存指令中的所需内存的空间大小计算出需要的连续内存页的数量,并将内存页数量记作nmemb,并设置用记录内存页数量的计数器,记作cmemb,然后获取处理器中的元数据存储区的映射表,并从映射表的offset位置开始扫描整个映射表,通过映射表中存储的内存页记录与数据存储区中内存页的对应关系,寻找连续的空的内存页,如果扫描的当前的内存页处于被占用状态,则将offset=offset-cmemb,然后把计数器中累加的连续的空的内存页的数据cmemb清零,继续从新的offset位置重新开始寻找连续的空的内存页;若扫描的内存页为空,也即处于空闲状态时,将计数器的数值cmemb加1,并offset=offset-1,继续判断下一个内存页,直到cmemb等于nmemb,也即计数器数据与所需内存的空间大小相等时,表示扫描到满足要求的连续内存页。
在一个示例性实施例中,将符合要求的内存页在对应的映射表中标记为被占用状态,将最后一个找到的内存页的首地址作为动态申请的整个连续的内存页的首地址,释放动态申请内存的锁,本次动态申请内存成功。
若在扫描整个映射表的过程中,offset的值小于0,则表明没有符合要求的内存页供操作系统使用,释放动态申请内存的锁,本次动态申请内存失败。
此外,动态申请了空间之后发现空间不够用时还可以动态调整大小,可选,可以再次发出更新后的内存申请指令,并对内存执行加锁操作,在加锁成功的情况下,若更新后的内存申请指令需要申请的内存空间增大,判断已申请的目标连续内存之后是否存在所需的内存空间,在存在的情况下,申请成功,若更新后的内存申请指令需要申请的内存空间减小,则释放部分内存空间。
本实施例通过划分多个存储区,利用索引位置根据实际需要的空间大小动态申请,使用完成后释放掉,并且动态申请了空间之后发现空间不够用时还可以动态调整大小,可以达到提高共享内存灵活性以及使用效率的效果。
在本实施例中,图7是根据本申请实施例的一种业务数据交互过程的示意图,如图7所示,第一操作系统在运行过程中产生业务数据并判定该业务数据是第二操作系统需要的或者是需要发送给第二操作系统的。此时,第一操作系统将业务数据存储至存储空间中,并向第二操作系统发送第八中断请求,第二操作系统响应该第八中断请求从存储空间中读取业务数据,并进行后续的处理。
第一操作系统可以但不限于具有不同的运行机制,比如:控制第一操作系统基于处理器周期性运行;或者,响应接收到的唤醒请求,控制第一操作系统基于处理器运行;或者,根据处理器上所产生的操作业务与第一操作系统之间的匹配度,控制第一操作系统基于处理器运行。
可选地,在本实施例中,第一操作系统的运行机制可以但不限于包括:周期性运行和触发式运行。周期性运行又可以称为轮询模式,触发式运行也可以称为触发模式,其可以但不限于包括两种方式:一种可以是请求触发,由唤醒请求触发第一操作系统的唤醒运行。另一种可以是条件触发,由操作业务与第一操
作系统之间的匹配度来触发第一操作系统的唤醒运行。
可选地,在本实施例中,第一操作系统在周期性运行的情况下,其单个运行周期的时长与两个运行周期之间的间隔时长可以相同也可以不同。两个运行周期之间的间隔时长中第一操作系统可以但不限于处于休眠状态,由第二操作系统来使用为第一操作系统分配的处理器核心。如果单个运行周期的时长与两个运行周期之间的间隔时长相同,则第一操作系统和第二操作系统交替占用为第一操作系统分配的处理器核心相同时长。如果单个运行周期的时长与两个运行周期之间的间隔时长不同,则第一操作系统和第二操作系统交替占用为第一操作系统分配的处理器核心不同时长。可以第一操作系统占用的时长大于第二操作系统占用的时长,也可以第二操作系统占用的时长大于第一操作系统占用的时长。
依据不同的运行场景、不同的系统功能,可以但不限于采用不同的运行机制运行第一操作系统,从而更加灵活地找到与当前的运行场景、系统功能更加匹配的运行机制来运行第一操作系统,提高操作业务的处理效率。
在一个可选的实施方式中,提供了一种轮询模式下第一操作系统(比如RTOS)的唤醒策略,图8是根据本申请实施例的一种第一操作系统运行过程的示意图一,如图8所示,轮询模式可以为一种基于时间片的轮询调度模式,可以根据设置的时间进行RTOS周期性唤醒运行,该模式下多系统(以Linux和RTOS双系统为例)运行过程中(T0,T1)=(Tn,T(n+1)),其中n取正整数,也就是说双系统交替占用CPU核0相同时间。(T0,T1)时间片内RTOS调度CPU核0运行其进程,(T1,T2)时间片内Linux调度CPU核0运行其进程,该时间片内RTOS是休眠状态,后面时间片以此类推按照周期分割进行。
对于触发模式中的请求触发方式来说,唤醒请求可以但不限于是第一操作系统所连接的设备发起的,或者也可以但不限于是第二操作系统发起的。
在一个可选的实施方式中,以设备触发第一操作系统唤醒运行为例,提供了一种触发模式下第一操作系统(比如RTOS)唤醒策略,图9是根据本申请实施例的一种第一操作系统运行过程的示意图二,如图9所示,触发模式可以由RTOS总线域的设备发起的中断启动,RTOS总线域连接了设备0至设备N,当RTOS处于休眠状态时,假设某一时刻设备0触发中断给RTOS,RTOS随即被唤醒,唤醒后的RTOS先触发抢占CPU核0的中断给Linux,Linux收到中断后首先释放CPU核0,并保存现场(将运行的数据压入堆栈),然后RTOS系统调度CPU核0处理设备0触发的中断所指示的操作业务,如果当前处于轮询模式下,后续处理过程与上述轮询模式相同,这里不再赘述。
如果是第二操作系统触发的第一操作系统唤醒运行,第二操作系统如果当前正在占用分配给第一操作系统的处理器核心,可以直接释放该处理器核心,由第一操作系统唤醒后使用该处理器核心处理第二操作系统分配的操作业务。
在一个可选的实施方式中,第一操作系统上运行的业务可以但不限于包括硬件接口信号的生成业务,在本实施方式中提供了一种硬件接口信号的生成过程,该过程包括如下步骤:
步骤11,通过第一操作系统获取请求命令。
在步骤11中,请求命令可以是一种硬件接口信号的生成命令,例如,硬件接口信号可以是PECI信号,则请求命令是基于PECI协议的一种PECI请求命令。
可选地,硬件接口信号还可以是其他协议类型的硬件接口信号,例如,HDMI(high definition multimedia interface,高清多媒体接口)信号、RGMII(reduced gigabit media independent interface,并行总线)信号、SGMII(serial gigabit media independent interface,单路传输的串行总线)信号、GPIO(general-purpose input/output,通用型输入输出端口)信号、SPI(serial peripheral interface,串行外设接口)信号等等。在此基础上,请求命令也可以是其他协议类型的请求命令,例如,在硬件接口信号为GPIO信号时,请求命令为GPIO请求命令。本申请对于请求命令和硬件接口信号的可选类型不作特殊限定。
步骤12,确定请求命令对应的多个逻辑位信息。
在步骤12中,第一操作系统在得到请求命令之后,可以分析得到请求命令对应的多个逻辑位信息,其
中,多个逻辑位信息之间存在先后顺序,第一操作系统通过请求命令对应的多个逻辑位信息可以生成请求命令对应的波形信号(即硬件接口信号),从而通过硬件接口信号将请求命令包含的信息传输给其他设备。
可选地,请求命令中包括有至少一个字段,每个字段可以通过逻辑位0或1进行表示,在此基础上,每个字段与逻辑位1或0之间对应的转换关系即为该字段对应的逻辑位信息,在请求命令对应多个字段的情况下,请求命令对应有多个逻辑位信息。此外,每个逻辑位可通过高电平信号和低电平信号的结合使用来表示,例如,对于逻辑位0,可使用第一预设时长的高电平信号和第二预设时长的低电平信号来组合表示,对于逻辑位1,可使用第二预设时长的高电平信号和第一预设时长的低电平信号来组合表示,其中,第一预设时长和第二预设时长不同。在此基础上,由于每个逻辑位既包含有高电平信号,也包含有低电平信号,因此每个逻辑位实际上是通过一段波形信号(高低电平信号之间的变换呈现为一个波形)来表示的,由于请求命令对应有多个逻辑位信息,也就是对应有多个逻辑位,因此请求命令对应的硬件接口信号是由每个逻辑位信息对应的波形信号组合得到的一个波形信号。
步骤13,根据多个逻辑位信息和定时器生成请求命令对应的硬件接口信号。
可选地,步骤13中的定时器可以是第一操作系统中的一个计时程序,定时器还可以是第一操作系统所在芯片上的一个寄存器,其中,定时器至少可以提供计时功能以及计数功能。本申请采用定时器的计时功能和计数功能,结合多个逻辑位信息生成请求命令对应的硬件接口信号。
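以下给出一个利用定时器延时配合GPIO输出来模拟逻辑位波形的简化C语言示意;其中gpio_set_level()、timer_delay_us()以及T1_US、T2_US的取值均为假设的接口和参数,实际实现取决于具体芯片与第一操作系统提供的驱动接口:

```c
#include <stdint.h>
#include <stddef.h>

/* 假设的底层接口:由具体芯片/实时操作系统的驱动提供 */
extern void gpio_set_level(int pin, int level);   /* 拉高/拉低指定引脚 */
extern void timer_delay_us(uint32_t us);          /* 基于定时器的微秒级延时 */

#define T1_US 60u   /* 第一预设时长(示例值) */
#define T2_US 20u   /* 第二预设时长(示例值) */

/* 发送一个逻辑位:逻辑0用T1高电平+T2低电平表示,逻辑1用T2高电平+T1低电平表示 */
static void send_logic_bit(int pin, int bit)
{
    uint32_t high_us = bit ? T2_US : T1_US;
    uint32_t low_us  = bit ? T1_US : T2_US;

    gpio_set_level(pin, 1);
    timer_delay_us(high_us);
    gpio_set_level(pin, 0);
    timer_delay_us(low_us);
}

/* 按先后顺序把请求命令对应的多个逻辑位信息组合成一段波形信号输出 */
void send_request_waveform(int pin, const uint8_t *bits, size_t nbits)
{
    for (size_t i = 0; i < nbits; i++)
        send_logic_bit(pin, bits[i] & 1);
}
```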
需要注意到的是,以芯片为BMC芯片、硬件接口信号为PECI信号为例,在相关技术中,为了实现BMC芯片与CPU等元器件之间的PECI通信,相关技术需要BMC芯片本身具备PECI控制器的硬件逻辑设计,从而导致了BMC芯片的设计成本较高的问题。换言之,相关技术中,为了在BMC芯片上生成PECI信号,则必须要预先在BMC芯片上实现PECI控制器的硬件逻辑设计,而在本申请中,仅需要第一操作系统即可在BMC芯片上生成PECI信号,无需必须在BMC芯片上实现PECI控制器的硬件逻辑设计,从而降低了BMC芯片的设计难度和设计成本。
基于步骤11至步骤13的内容可知,在本实施方式中,采用由第一操作系统生成请求命令对应的硬件接口信号的方式,首先通过第一操作系统获取请求命令,然后确定请求命令对应的多个逻辑位信息,最后根据多个逻辑位信息和定时器生成请求命令对应的硬件接口信号。
由上述内容可知,本实施方式中通过第一操作系统生成请求命令对应的硬件接口信号,从而实现了使用软件方式模拟生成硬件接口信号的技术效果,进而达到了无需芯片本身具备相关硬件接口信号的硬件逻辑设计的目的,不仅能够降低芯片的设计难度,还能降低芯片的设计成本。
由此可见,本实施方式达到了在无需对芯片进行硬件接口信号的硬件逻辑设计的基础上利用软件系统生成硬件接口信号的目的,从而降低了芯片的设计难度,进而解决了相关技术中需要芯片本身具备控制器的硬件逻辑设计,从而导致的芯片的设计成本较高的技术问题。
可选地,在通过第一操作系统检测到第二操作系统触发的第一请求的情况下,获取请求数据,其中,第一操作系统和第二操作系统在同一处理器上运行,请求数据由第二操作系统生成,第二操作系统的业务响应速度小于第一操作系统的业务响应速度。最后,由第一操作系统对请求数据进行解析,得到请求命令。
可选地,在获取请求数据之前,可以通过第二操作系统将请求数据存储至目标内存(即处理器上的存储空间)中,并在请求数据存储完毕之后,通过第二操作系统触发第一请求,其中,第一请求用于通知第一操作系统从目标内存中读取请求数据,目标内存为第一操作系统和第二操作系统均能够访问的内存。
在一种可选的实施例中,第一操作系统还可以接收硬件接口信号对应的响应数据,其中,响应数据的传输形式与硬件接口信号的传输形式相同。其次,第一操作系统还将响应数据的数据结构调整为第二数据结构。
另外,在将响应数据的数据结构调整为第二数据结构之后,通过第一操作系统触发第二请求,其中,第二请求用于通知第二操作系统读取响应数据。
以第一操作系统为RTOS系统,第二操作系统为Linux系统,硬件接口信号为PECI信号为例。针对命令请求过程,首先由Linux系统中涉及PECI业务的上层应用(如故障诊断、CPU温度获取等)根据需要主动发起PECI请求命令,这些请求命令包括但不限于基本的Ping()命令,获取CPU温度命令及读取MSR寄存器(Machine Specific Register,机器特定寄存器)信息命令等,不同的PECI请求命令的代码实现由对应的接口函数完成。
可选地,Linux系统按照PECI协议规范将各请求命令的目标地址、读写长度、命令码、para参数等请求数据写入目标内存中,待请求数据全部写入目标内存之后,Linux系统产生第一请求通知RTOS系统。其中,第一请求可以是一种SGI中断请求(software generated interrupt,一种处理器核之间的通信中断请求)。
需要注意到的是,在通过第二操作系统将请求数据存储至目标内存中的过程中,第二操作系统将请求数据按照第一数据结构的形式存储至目标内存中,其中,第一数据结构至少包括设备地址、写长度、读长度、命令码以及请求参数,设备地址用于表征目标器件的地址,目标器件为依据硬件接口信号生成响应数据的器件,命令码用于区别不同的请求命令,写长度用于表征从命令码开始到请求数据结束的字节数量,读长度用于表征请求数据中包含完成码以及读数据在内的字节数量,请求参数用于表征请求命令的参数。
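按照上述第一数据结构的字段描述,可以用如下C语言结构体作一个示意;字段名、类型宽度以及参数缓冲区长度均为假设,实际布局以具体的PECI协议实现为准:

```c
#include <stdint.h>

/* 第一数据结构的示意:请求数据在目标内存(共享内存)中的组织方式 */
struct peci_request {
    uint8_t  dev_addr;    /* 设备地址:依据硬件接口信号生成响应数据的目标器件的地址 */
    uint8_t  write_len;   /* 写长度:从命令码开始到请求数据结束的字节数量 */
    uint8_t  read_len;    /* 读长度:包含完成码以及读数据在内的字节数量 */
    uint8_t  cmd_code;    /* 命令码:用于区别不同的请求命令 */
    uint8_t  params[16];  /* 请求参数(para):请求命令的参数,长度仅为示例 */
};
```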
针对命令响应过程,由RTOS系统接收PECI总线传来的响应数据,然后完成数据解析,以将响应数据的信号形式从硬件接口信号的形式转换为软件信号的形式,例如,识别硬件接口信号中的高电平信号与低电平信号之间的波形变化,从而得到对应的逻辑位信息,基于逻辑位信息得到软件信号数据。解析后的响应数据经命令参数结构化模块进行调整并写入目标内存中。待解析后的响应数据全部写入完毕后,RTOS系统触发第二请求通知Linux系统。Linux系统检测到第二请求,主动读取目标内存中存储的解析后的响应数据,数据经处理后返回给上层应用。其中,第二请求可以是一种SGI中断请求。
目标内存除了可以是共享内存之外,还可以是其他内存,例如,随机存取存储器(Random Access Memory,简称为RAM)、闪存存储器(Flash)等等。
在一种可选的实施例中,在根据逻辑位信息和定时器生成请求命令对应的硬件接口信号之后,第一操作系统可以对硬件接口信号的电压进行转换,得到目标硬件接口信号。
可选地,第一操作系统可以将硬件接口信号输入至电压转换器件中,得到电压转换器件输出的目标硬件接口信号。
可选地,上述的电压转换器件可以是CPLD,并且CPLD可以与目标器件相连接,其中,目标器件可以是服务器中的CPU。
需要注意到的是,上述业务除了可以应用于替代PECI接口生成PECI信号之外,还可以应用在其他硬件接口上。
由上述内容可知,联合嵌入式系统的第一操作系统与第二操作系统,通过核间中断与共享内存的方式实现了嵌入式系统中数据的交互,在RTOS系统中构建请求命令的波形发生功能模块,通过软件模拟的方式实现嵌入式系统与外部器件之间进行硬件接口信号的通信。另外,充分利用RTOS系统的高实时性特点,保证了在模拟请求命令波形时时序的准确性,具有灵活高效的特点。能够显著降低芯片设计难度,由于采用软件模拟生成硬件接口信号,为嵌入式系统中通信功能与其他业务功能间的优化设计提供了更多可能性,同时由于省去芯片中专门被设置为实现硬件接口信号通信的控制器,因此能够降低芯片的设计成本和制造成本。
在一个可选的实施方式中,第一操作系统上运行的业务可以但不限于包括串口切换业务,在本实施方式中提供了一种串口的切换过程,该过程包括如下步骤:
步骤21,在检测到第二操作系统接收到串口切换指令的情况下,通过第二操作系统将串口切换指令发送至第一操作系统。
可选的,在用户发起串口切换时,可以通过第二操作系统检测是否接收到用户发起的串口切换指
令。需要说明的是,串口切换指令中需要包括待切换至的目标串口的信息,例如,串口切换指令包括待切换至的目标串口的串口号。
在一可选的实例中,串口切换指令的格式可以是<switch_command_app -n number -t sleep_time>,switch_command_app表征切换指令程序,-n代表切换的目标串口号,number的取值可以为1、2、3,-t代表从指令发起后休眠多久后执行切换动作,sleep_time单位为秒。
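例如,按照上述格式,一条可能的切换指令为:switch_command_app -n 2 -t 5,表示在指令发起5秒后切换至串口号为2的串口;其中串口号2与休眠时长5秒仅为举例取值。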
需要说明的是,在实现串口切换时可以为当前可以进行串口切换的串口进行编号,以便后续在进行串口切换时,通过串口号实现对目标串口的切换。
在一可选的实施例中,当前可以进行串口切换的串口包括:BMC Linux系统串口、服务器BIOS(Basic Input Output System,基本输入输出系统)串口以及SMART NIC(network interface controller,智能网络接口控制器)串口,对应的可以用1代表BMC Linux系统串口、2代表服务器BIOS串口,以及3代表SMART NIC串口。
步骤22,通过第一操作系统依据串口切换指令执行串口切换。
可选的,在检测到第二操作系统接收到串口切换指令的情况下,第二操作系统会立刻将串口切换指令发送至第一操作系统中。需要说明的是,可以将第一操作系统与第二操作系统分别运行于两个处理器核心中,然后第一操作系统与第二操作系统之间采用核间通信,这样可以有助于提高信号传递的可靠性。
需要说明的是,第一操作系统对指令的响应速度远远快于第二操作系统对指令的响应速度,这样通过第一操作系统可以快速响应串口切换指令,并且在极短的时间内完成切换工作。
综上,通过运行于同一个处理器中的第一操作系统和第二操作系统来代替CPLD或FPGA实现串口切换软件功能,在第二操作系统接收到串口切换指令的情况下,通过第二操作系统将串口切换指令转发至第一操作系统中,第一操作系统根据串口切换指令实现串口的切换,避免了相关技术中需要通过CPLD或者FPGA将各个串口连接起来,然后利用CPLD或者FPGA中的开关结构的方式来实现串口切换,减少了硬件成本,并且在第一操作系统接收到串口切换指令之后,可以迅速在很短的时间内完成串口切换,因此,通过本方案提出的技术方法既可以有效降低串口切换成本,还可以有效提高串口切换的效率。
为了第二操作系统能够实现串口切换,在本实施方式提供的串口切换过程中,串口切换指令中至少包括:目标串口的串口号,在通过第一操作系统依据串口切换指令执行串口切换之前,包括以下步骤:通过第一操作系统从目标存储器中获取串口切换指令的解析规则;依据解析规则对串口切换指令中的目标串口的串口号进行解析,确定串口号对应的设备,其中,目标串口为设备的串口,目标串口连接于芯片中。
通过第一操作系统依据串口切换指令执行串口切换包括:通过第一操作系统确定设备的串口地址;依据串口地址将目标串口映射至芯片的目标输出接口。
为了使得第一操作系统能够实现串口切换,第一操作系统可以对串口切换指令进行解析,进而能够得到目标串口对应的设备。
在一可选的实施例中,可根据芯片或者服务器主板的不同来定制对串口切换指令的解析规则,并将解析规则保存在目标存储器中,目标存储器可以是带电可擦可编程只读存储器(eeprom)、非易失性内存(flash)等存储介质。需要说明的是,目标存储器可以部署在芯片中,还可以不部署在芯片中。通过目标存储器存储解析规则,提高了数据的安全性,以及可根据芯片或服务器主板的不同来定制解析规则,使得可编程性和可扩展性比较好。
在第一操作系统接收到串口切换指令之后,从目标存储器中读取串口切换指令的解析规则,然后利用解析规则对串口切换指令中的目标串口的串口号进行解析,得到这个串口号对应的设备。
在得到串口号对应的设备之后,第一操作系统就可以通过设备的串口地址将目标串口映射至芯片的目标输出接口。将设备的串口地址映射到目标输出接口之后,就可以通过目标输出接口实现对设备的访问。
需要说明的是,串口切换指令和解析规则可以根据使用的芯片的型号以及第一操作系统、第二操作系统的类型进行设置。
在本申请实施例一提供的串口切换方法中,芯片包括:串行数据总线,在通过第一操作系统确定设备的串口地址之前,该方法还包括:确定与串行数据总线的串口连接的多个设备;通过串行数据总线将每个设备的串口映射至芯片的内存中,以得到每个设备的串口地址。
可选的,在上述的芯片中还包括串行数据总线,当前多个设备的串口的TX和RX与串行数据总线相连,比如当前的串口包括BMC Linux系统串口(UART1)、服务器BIOS串口(UART2)以及SMART NIC串口(UART3)。UART,Universal Asynchronous Receiver/Transmitter,通用异步收发传输器。串口数据总线会将UART1、UART2和UART3不同串口的TX和RX数据映射到BMC内存的不同地址空间中,也就是上述的通过串行数据总线将每个设备的串口映射至芯片的内存中。例如,UART1 TX和RX buffer为串口UART1的串口地址,UART2TX和RX buffer为串口UART2的串口地址,UART3 TX和RX buffer为串口UART3的串口地址。
当用户下发串口切换指令时,第一操作系统(RTOS)选择UART映射的不同三段内存(三选一),将其中一段内存数据交互给客户,达到模拟CPLD硬件串口切换电路的目的。
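下面用一个简化的C语言片段示意上述"三选一"的串口缓冲区选择逻辑;其中结构体、数组与函数名均为假设,仅用于说明以目标串口号为索引选取对应TX/RX映射内存段的思路:

```c
#include <stdint.h>
#include <stddef.h>

/* 每个串口在BMC内存中映射到的一段TX/RX缓冲区(示意) */
struct uart_shadow {
    volatile uint8_t *tx_buf;   /* 该串口TX数据映射到的内存段 */
    volatile uint8_t *rx_buf;   /* 该串口RX数据映射到的内存段 */
    size_t            size;     /* 缓冲区大小 */
};

/* 假设串口号1~3分别对应UART1(BMC Linux系统串口)、UART2(服务器BIOS串口)、UART3(SMART NIC串口) */
static struct uart_shadow uart_map[3];

/* 当前对外输出接口所映射的串口 */
static struct uart_shadow *active_uart;

/* 依据串口切换指令中的目标串口号完成"三选一",返回0表示切换成功 */
int switch_serial_port(int number)
{
    if (number < 1 || number > 3)
        return -1;                         /* 非法串口号,切换失败 */
    active_uart = &uart_map[number - 1];   /* 将对应内存段的数据交互给目标输出接口 */
    return 0;
}
```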
需要说明的是,如果不能区分不同设备的串口,那么开发人员在维修时不能准确定位哪一个设备的串口是存在问题的,因此,需要通过串口切换实现对异常问题的定位。
在本实施方式中,在依据串口地址将目标串口映射至芯片的目标输出接口之后,若目标输出接口与目标智能网卡连接,包括:通过智能网卡检测是否接收到对目标串口的访问请求;若接收到对目标串口的访问请求,则通过智能网卡将访问请求转发至目标串口。
可选的,在芯片的目标输出接口中还可以与目标智能网卡连接,然后通过智能网卡检测是否接收到用户对目标串口的访问请求,如果接收到对目标串口的访问请求,可以直接通过目标智能网卡实现对设备的串口访问,实现SOL(Serial over LAN,一种数据封包格式和协议的规范)功能。通过上述步骤,提高了对设备的串口访问效率。
在一可选的实施例中,在依据串口地址将目标串口映射至芯片的目标输出接口之后,还包括以下步骤:通过第一操作系统获取串口切换指令的执行结果,其中,执行结果为以下之一:切换成功和切换失败;通过第一操作系统将执行结果发送至第二操作系统。
通过第二操作系统接收串口切换指令的执行结果,其中,执行结果由第一操作系统发送至第二操作系统,执行结果为以下之一:串口切换成功和串口切换失败。
第一操作系统在切换完串口之后,会获取串口切换指令的执行结果,然后把串口切换指令的执行结果反馈到第二操作系统中,告知第二操作系统串口切换成功或者失败。
为了提高串口切换的成功率,在本实施方式中,在通过第二操作系统接收串口切换指令的执行结果之后,还包括:若执行结果为执行失败,则重复执行通过第二操作系统下发串口切换指令至第一操作系统的步骤,直至执行结果为成功,或者,执行串口切换的次数超过预设次数。若执行串口切换的次数超过预设次数,通过第二操作系统触发提示信号,其中,提示信号用于提示串口切换失败。
如果串口切换指令的执行结果为执行失败,那么需要重复执行通过第二操作系统下发串口切换指令至第一操作系统的步骤,直至执行结果为成功,或者,执行串口切换的次数超过预设次数,预设次数可以设置为3次。如果执行串口切换的次数超过了预设次数,对应的第二操作系统触发提示信号,来提示串口切换失败,以便对这种情况及时处理。
在检测到第一操作系统接收到串口切换指令之前,还包括:在第二操作系统启动完成后,第二处理器核心触发第一中断,并发送第一信号至第一操作系统;通过第一操作系统依据第一信号检测芯片中的多个串口的运行状态,得到检测结果;通过第一处理器核心触发第二中断,并将检测结果通过第二信号发送至第二操作系统;通过第二操作系统接收检测结果,以确定芯片中的运行正常的串口数量。
在第二处理器核心触发第一中断,并发送第一信号至第一操作系统之后,检测第一操作系统是否接收到第一信号;若第一操作系统接收到第一信号,则通过第一操作系统检测芯片中多个串口的运行状态,得
到检测结果。
当第二操作系统启动完成之后,第二处理器核心触发第一中断(IPI中断,IPI,inter-processor interrupts,处理器间中断),来将第一信号发送到第一操作系统,第一操作系统通过第一信号可以得知第二操作系统已正常启动,可以和第二操作系统进行正常交互,并且第一操作系统会根据第一信号检测芯片中的多个串口的运行状态,确定所有的串口是否运行正常。
在第一操作系统检测得到检测结果之后,第一处理器核心会触发第二中断来将检测结果通过第二信号发送至第二操作系统,第二操作系统通过检测结果确定可切换的串口数量(即上述的运行正常的串口数量),以便后续对这些串口进行串口切换。同时为了第一操作系统能够更加快速的实现串口切换,在第一操作系统检测完成后,第一操作系统开始阻塞等待接收第二操作系统发出的串口切换指令。
在一可选的实施例中,第一操作系统为RTOS,当第二操作系统为Linux时,第一操作系统运行在CPU0,第二操作系统运行在CPU1,串口切换前的准备步骤包括:当CPU1上的Linux系统启动到特定的阶段时候,CPU1会触发一个IPI中断,通知CPU0上的RTOS系统,Linux已正常启动,可以和CPU1上的Linux进行正常交互,RTOS系统收到来自CPU1上的IPI中断后,会启动串口切换控制器程序,检查UART1、UART2、UART3是否正常,然后CPU0再触发一个IPI中断,通知CPU1上的Linux操作系统,RTOS系统已启动完成,同时上报的信息包含CPU0上的RTOS操作系统拥有可切换的串口数量,然后CPU0上的RTOS操作系统开始阻塞等待接收CPU1上的操作系统发出的切换指令。
在第二操作系统运行存在异常的情况下,通过服务终端将串口切换指令下发至第一操作系统;通过第一操作系统依据串口切换指令执行串口切换。
由于第二操作系统运行的功能较多、承担的业务量也比较大,因此,可以存在运行异常或者需要重启的情况,那么当第二操作系统运行存在异常时,可以直接通过服务终端将串口切换指令下发至第一操作系统,保证第一操作系统正常执行串口切换。需要说明的是,服务终端可以是芯片所处的服务器上的终端。
通过上述步骤,保证了第一操作系统不依赖于第二操作系统来实现对串口的切换,提高了第一操作系统执行串口切换的独立性。
综上,在本实施方式提供的串口切换过程中,通过运行于同一个处理器中的第一操作系统和第二操作系统来代替CPLD或FPGA实现串口切换软件功能,在第二操作系统接收到串口切换指令的情况下,通过第二操作系统将串口切换指令转发至第一操作系统中,第一操作系统根据串口切换指令实现串口的切换,避免了采用硬件的方式来实现串口切换,减少了硬件成本,并且第一操作系统接收到串口切换指令之后,可以迅速在很短的时间内完成串口切换,因此,上述过程既可以有效降低串口切换成本,还可以有效提高串口切换的效率。
对于触发模式中的条件触发方式来说,当前操作业务与第一操作系统之间的匹配度可以但不限于表示处理器上产生的操作业务是否适合由第一操作系统来处理,将适合的操作业务分到第一操作系统处理,从而实现了操作业务的合理分配,也提高了操作业务的处理效率。
在本实施例中,可以但不限于通过以下方式控制第一操作系统基于处理器运行:检测处理器上所产生的当前操作业务的业务信息;在检测到业务信息与第一操作系统之间的匹配度高于匹配度阈值的情况下,控制第一操作系统基于处理器运行当前操作业务。
可选地,在本实施例中,操作业务与第一操作系统之间的匹配度可以但不限于由操作业务的业务信息与第一操作系统之间的匹配度来表示,业务信息可以但不限于是任何具有处理要求的维度,比如:业务响应速度,业务资源占用率,业务耦合度,业务重要性等等。
可选地,在本实施例中,业务信息与第一操作系统之间的匹配度高于匹配度阈值可以表示操作业务是适合在第一操作系统上运行的。匹配度阈值可以但不限于是根据第一操作系统当前的资源使用情况或者运行要求动态调整的。从而使得第一操作系统的适应性和灵活性更高。
在本实施例中,可以但不限于通过以下方式检测处理器上所产生的当前操作业务的业务信息:检测当
前操作业务的目标响应速度,和/或,目标资源占用量,其中,业务信息包括:目标响应速度,和/或,资源占用量,目标响应速度是当前操作业务需要处理器达到的响应速度,目标资源占用量是当前操作业务需要处理器提供的资源量;在目标响应速度小于或者等于速度阈值,和/或,目标资源占用量小于或者等于占用量阈值的情况下,确定业务信息与第一操作系统之间的匹配度高于匹配度阈值。
可选地,在本实施例中,业务信息可以但不限于包括:目标响应速度,和/或,资源占用量,目标响应速度是当前操作业务需要处理器达到的响应速度,目标资源占用量是当前操作业务需要处理器提供的资源量。可以单独考虑当前操作业务对处理器响应速度的需求,或者,单独考虑当前操作业务对处理器上可用资源的需求。也可以综合考虑二者进行当前操作业务的分配。
在一个可选的实施方式中,可以但不限于采用以下方式为各个操作系统分配操作业务和处理资源:
根据资源动态分配规则将一组待分配业务分配给嵌入式系统中对应的操作系统,其中,资源动态分配规则包括根据以下至少之一进行资源动态分配:业务响应速度,业务资源占用率,业务耦合度,业务重要性,嵌入式系统包括第一操作系统和第二操作系统,第一操作系统和第二操作系统运行于处理器上,第一操作系统的响应速度高于第二操作系统;
确定与一组待分配业务对应的资源分配结果,其中,资源分配结果用于指示处理器的处理资源中与一组待分配业务中的每个待分配业务对应的处理资源,处理器的处理资源包括处理器核心;
根据与每个待分配业务对应的操作系统以及资源分配结果,将处理器的处理资源分配给第一操作系统和第二操作系统。
在处理器运行的过程中,可以获取一组待分配业务,即,待分配给第一操作系统和第二操作系统的业务。由于不同的待分配业务在响应速度、业务资源占用率、与其他业务的业务耦合度、业务重要性等维度上可能会存在区别,因此,可以预先配置资源动态分配规则,资源动态分配规则可以包括用于进行业务分配的规则,将业务分配给对应的操作系统,以便由对应的操作系统的处理资源执行分配给自己的业务。可选地,资源动态分配规则可以包括根据以下至少之一进行资源动态分配:业务响应速度,业务资源占用率,业务耦合度,业务重要性,不同的分配规则可以具有对应的优先级,例如,优先级按照由高到低的顺序依次为:业务重要性,业务耦合度,业务响应速度,业务资源占用率。根据资源动态分配规则,可以将一组待分配业务(或者待分配任务,不同的待分配业务可以对应于不同的进程)分配给嵌入式系统中对应的操作系统,得到业务分配结果。
可选地,基于对响应时间的约束,第一操作系统可以是具有明确固定的时间约束的操作系统,所有处理过程(任务调度)需要在固定的时间约束内完成,否则系统会出错,其可以是实时操作系统(Real Time Operating System,简称RTOS),例如,FreeRTOS、RTLinux等,还可以是其他嵌入式系统中的实时操作系统。第二操作系统不具备该特征,第二操作系统一般采用公平任务调度算法,线程/进程数量增加时,就需要分享CPU时间,任务调度具有不确定性,可称为非实时操作系统,例如,contiki、HeliOS、Linux(全称GNU/Linux,是一套可自由传播的类Unix操作系统)等,还可以是其他嵌入式系统中的非实时操作系统,其中,Linux系统是一个基于POSIX(Portable Operating System Interface,可移植操作系统接口)的多用户、多任务、支持多线程和多CPU的操作系统。
对应地,分配给第一操作系统的业务通常为实时性业务,实时性业务是指需要在规定的时间内得到调度的业务,该业务需要处理器以足够快的速度予以处理,其处理的结果又能在规定的时间内来控制生产过程或者对处理系统做出快速响应。作为一个典型的场景,工业控制中对机器手的控制属于实时性业务,系统需要在检测到机器手误操作之前及时采取措施,否则可能会造成严重后果。分配给第二操作系统的业务通常为非实时性业务,非实时性业务是指对调度时间不敏感的业务,对调度的延迟具有一定的容忍度,例如,服务器中读取温度传感器(sensor)的传感器数据。
需要说明的是,实时操作系统是指当外界事件或数据产生时,能够接受并以足够快的速度予以处理,其处理的结果又能在规定的时间之内来控制生产过程或对处理系统做出快速响应,调度一切可利用的资源
完成实时业务,并控制所有实时业务协调一致运行的操作系统,具有及时响应和高可靠性的特点。
在将每个待分配业务分配至对应的操作系统之后,根据业务分配结果,可以为每个待分配业务分配对应的处理资源,得到与一组待分配业务对应的资源分配结果。在为待分配业务分配处理资源时,可以为分配给第一操作系统的业务分配第一操作系统的处理资源,分配给第二操作系统的业务分配第二操作系统的处理资源,同时,考虑到负载均衡,在存在未分配处理资源时,可以为部分业务分配未分配处理资源。
处理器的处理资源可以以时间片为单位进行处理资源的动态分配,考虑到频繁的切换处理资源所属的操作系统以及业务处理时间并不一定是时间片的整数倍,从而导致部分业务的响应时间被延长,可以以处理器核心为单位被分配给第一操作系统和第二操作系统,即,处理器的处理器核心是以整个处理器核心为单位被分配给对应的操作系统,每个操作系统所分配的处理器核心的数量为整数个,且不同的操作系统分配的处理器核心互不相同。
根据与每个待分配业务对应的操作系统以及资源分配结果,可以将处理器的处理资源分配给第一操作系统和第二操作系统。可选地,可以将处理器的未分配处理资源分配给与其对应的操作系统,未分配处理资源可以是基于未分配处理资源与待分配业务的对应关系以及待分配业务与操作系统的对应关系确定的。
可选地,将处理器的处理资源分配给第一操作系统和第二操作系统可以是由资源自适应调度模块(例如,核心自适应调度模块)执行的,该资源自适应调度模块可以是运行在第一操作系统或者第二操作系统上的软件模组,以运行在第二操作系统上为例,资源自适应调度模块可以由Linux系统中的软件实现,其可以根据业务管理模块的输出和资源动态分配模块的输出,完成对处理器的处理资源(例如,处理器硬核资源)的实际调度动作。比如,经过核心资源自适应模块的资源调度,(M+N)个核心中的M个核心调度给了实时操作系统,N个核心调度给了非实时操作系统。
例如,可以在同一处理器的不同硬核上运行异构的操作系统(异构操作系统),使整个处理器系统具备实时及非实时业务的并行处理能力,同时,通过自适应调整不同操作系统占用的处理器硬核资源(例如,处理器核心),实现处理器资源利用率的显著提升。这里,异构是指嵌入式系统的同一个多核处理器中运行的操作系统类型不同,多系统是指嵌入式系统的同一个多核处理器上运行的操作系统数量为多个,且这些操作系统在时间维度上是同时运行的。
可选地,上述过程还包括:通过读取规则配置文件,生成规则结构体,其中,规则结构体用于记录资源动态分配规则。
资源动态分配规则可以是基于规则配置文件进行配置的,通过读取的规则配置文件,可以生成用于记录资源动态分配规则的规则结构体,这里,规则配置文件可以是负载均衡策略文件(load_balance.config),负载均衡策略文件可以用于配置运行的各种业务(或进程)的分类方法、实时性等级的评估原则等。负载均衡策略文件中可以通过不同的参数配置资源动态分配规则,负载均衡策略配置文件的一个示例如下:
classification_kinds=2    //取值为1表示按重要和非重要等属性对进程进行分类,否则按预置的分类方法(如实时与非实时)对进程进行分类;
real-time_grade_evaluation=2    //取值为1表示将过去统计分钟(statistic_minutes)内CPU的平均占用率作为进程实时性等级评估原则;否则表示按预置的优先级作为进程实时性等级评估原则;
statistic_minutes=5    //表示各进程的平均占用率的统计时间(单位为minute,分钟),当real-time_grade_evaluation为1时有效。
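以下给出一个把上述负载均衡策略配置文件读入规则结构体的简化C语言示意;结构体字段与解析方式均为假设,仅用于说明"读取规则配置文件、生成规则结构体"这一过程:

```c
#include <stdio.h>

/* 规则结构体:用于记录资源动态分配规则(字段为示例) */
struct balance_rule {
    int classification_kinds;       /* 进程分类方法 */
    int real_time_grade_evaluation; /* 实时性等级评估原则 */
    int statistic_minutes;          /* CPU平均占用率的统计时间(分钟) */
};

/* 读取形如 key=value 的配置文件并填充规则结构体,返回0表示成功 */
int load_balance_rule(const char *path, struct balance_rule *rule)
{
    char line[128];
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return -1;

    while (fgets(line, sizeof(line), fp) != NULL) {
        int value;
        if (sscanf(line, "classification_kinds=%d", &value) == 1)
            rule->classification_kinds = value;
        else if (sscanf(line, "real-time_grade_evaluation=%d", &value) == 1)
            rule->real_time_grade_evaluation = value;
        else if (sscanf(line, "statistic_minutes=%d", &value) == 1)
            rule->statistic_minutes = value;
    }
    fclose(fp);
    return 0;
}
```

使用时可以调用load_balance_rule("load_balance.config", &rule)一次性生成规则结构体,后续由业务管理模块读取该结构体即可,上述调用方式同样仅为示意。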
可选地,资源动态分配规则可以存储在负载均衡策略模块,这里,负载均衡策略模块可以是运行在第一操作系统或者第二操作系统下的软件模组(例如,运行在Linux系统下的软件模组),其可以为业务管理模块提供策略指导,包括系统中运行的各种业务(或进程)的分类方法、实时性等级的评估原则等。业务管理模块可以对系统中的业务按实时性等级进行业务划分与管理,并可指导资源自适应调度模块进行处理器资源的重分配。示例性地,其可以根据负载均衡策略模块的输出,执行业务的实际分类,产生包含实时
业务与非实时业务的列表。
需要说明的是,上述分类方法与实时性等级评估原则是开放的,用户可以自己定义某种方法或原则,业务管理模块进行业务管理所基于规则可以是动态配置的,可以在已有规则的基础上进行可选规则的设置。业务管理模块中可以设置有相同功能的多个规则,但规则之间不存在矛盾,即,可以基于规则的配置时间、规则的优先级等规则选取条件,确定作用相同的规则中,当前使用的规则,从而避免规则之间出现矛盾。上述配置文件load_balance.config描述了一种可能情况,在配置文件中,classification_kinds变量指示可选的分类标准(例如,按业务的重要性或实时性)及分类类别(例如,重要业务和一般业务、实时业务与非实时业务等),而real-time_grade_evaluation变量指示实时性评估标准(可以是按过去statistic_minutes分钟内CPU的平均占用率或预置的业务优先级),实时性等级类型由用户自定义,可定义为高、普通、低三种,也可以细分更多种。
负载均衡策略模块的输出即为配置好的分类方法、实时性等级评估原则等,在软件实现时,可以是可选的配置文件(如load_balance.config文件),也可以是结构体变量,这些文件或结构体变量最终均能被业务管理模块访问到,进而获取负载均衡的可选策略。
通过本实施例,通过读取规则配置文件,生成规则结构体以记录资源动态分配规则,可以提高信息配置的便捷性。
可选地,上述过程还包括:通过第二操作系统的对外接口获取规则更新配置文件,其中,规则更新配置文件用于更新已配置的资源动态分配规则;使用规则更新配置文件更新规则结构体,以更新规则结构体所记录的资源动态分配规则。
规则结构体可以是固定格式,即,在嵌入式系统运行的过程中不允许被修改,也可以是可灵活配置的格式,即,可以通过特定格式的配置文件进行配置更改。在本实施例中,可以获取规则更新配置文件,该规则更新配置文件用于更新已配置的资源动态分配规则;使用规则更新配置文件,可以更新规则结构体,从而更新规则结构体所记录的资源动态分配规则。
在使用规则更新配置文件更新规则结构体时,可以是直接根据规则更新配置文件生成新的规则结构体,并使用新生成的规则结构体替换已有的规则结构体,也可以是使用规则更新配置文件所指示的规则参数的参数值更新规则结构体中对应的规则参数的参数值。
可选地,特定格式的配置文件可以是通过第一操作系统或者第二操作系统的对外接口进行读取的,考虑到所需处理的业务量级,嵌入式系统的资源动态调度等可以主要是由第二操作系统负责的。在获取规则更新配置文件时,可以通过第二操作系统的对外接口获取规则更新配置文件。
例如,负载均衡策略模块可以是固定格式,也可以通过Linux系统的对外接口进行配置,例如,可以定义如前述的特定格式的配置文件(load_balance.config),通过文件读写方式进行配置更改。
需要说明的是,对外接口是多核处理器的对外接口,可以是网络接口,SPI(Serial Peripheral Interface,串行外设接口)控制器接口、UART(Universal Asynchronous Receiver/Transmitter,通用异步收发传输器)串口等,只要能从外界获取数据的通路即可。读取文件用到的硬件和文件位置有不同的实现方案,例如,通过网络接口可从Web(World Wide Web,全球广域网)界面加载配置文件;通过SPI控制器可从板卡的SPI Flash(闪存)中读取配置文件;通过UART串口可从另一台PC(Personal Computer,个人计算机)上的串口数据收发软件工具获取配置文件。
通过本实施例,通过获取规则更新配置文件并使用获取的规则更新配置文件更新规则结构体,可以提高资源动态分配规则配置的灵活性。
可选地,可以但不限于采用以下方式根据资源动态分配规则将一组待分配业务分配给嵌入式系统中对应的操作系统:将一组待分配业务中业务响应速度要求大于或者等于设定响应速度阈值的待分配业务分配给第一操作系统,以及,将一组待分配业务中业务响应速度要求小于设定响应速度阈值的待分配业务分配给第二操作系统。
在进行待分配业务分配时,可以基于待分配业务的业务响应速度要求将待分配业务分配给对应的操作系统。业务响应速度可以用于评估业务的实时性等级,业务响应速度要求越高,其对操作系统的调度时间和响应速度越敏感,实时性等级越高,业务响应速度要求高的业务需要操作系统以足够快的速度予以处理,其处理的结果又能在规定的时间内来控制生产过程或者对处理系统做出快速响应,而业务响应速度要求不高的业务,对调度的延迟具有一定的容忍度。
对于业务响应速度要求大于或者等于设定响应速度阈值的待分配业务,其对操作系统的调度时间和响应速度敏感,可以将此类待分配业务分配给第一操作系统(例如,将实时业务分配给实时操作系统)。对于业务响应速度要求小于设定响应速度阈值的待分配业务,其对响应速度和调度时间不敏感的业务,因此,可以将此类待分配业务分配给第二操作系统(例如,将非实时业务分配给非实时操作系统)。这里,业务响应速度要求可以是通过业务响应速度的指示参数进行指示的,设定响应速度阈值可以是毫秒级的响应速度阈值或者秒级的响应速度阈值,例如,100ms、200ms、1s等等,本实施例中对于设定响应速度阈值不做限定。
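下面用一个简化的C语言片段示意"按业务响应速度要求把待分配业务划分到两个操作系统"的判断逻辑;此处把业务响应速度要求简化为业务要求的响应时间上限(时间越短代表响应速度要求越高),业务结构体与阈值取值均为假设:

```c
#include <stdint.h>

enum target_os { OS_FIRST, OS_SECOND };   /* 第一操作系统(实时)/第二操作系统(非实时) */

struct pending_service {
    const char *name;
    uint32_t    required_resp_ms;   /* 业务要求的响应时间上限,数值越小响应速度要求越高 */
};

#define RESP_THRESHOLD_MS 100u      /* 设定响应速度阈值对应的响应时间(示例:100ms) */

/* 响应速度要求达到阈值(即要求的响应时间不超过阈值)的业务分配给第一操作系统 */
enum target_os dispatch_by_resp_speed(const struct pending_service *svc)
{
    return (svc->required_resp_ms <= RESP_THRESHOLD_MS) ? OS_FIRST : OS_SECOND;
}
```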
可选地,在将一组待分配业务分配给嵌入式系统中对应的操作系统时,可以输出与第一操作系统对应的第一业务列表和第二操作系统对应的第二业务列表,第一业务列表用于记录分配给第一操作系统的业务,而第二业务列表用于记录分配给第二操作系统的业务,即,业务分配结果包括第一业务列表和第二业务列表,输出的第一业务列表和第二业务列表可以用于进行处理器的处理资源的动态调度过程。
例如,对系统业务实时性等级划分,获得实时性业务与非实时性业务列表,假设总共有20个业务,其中实时性业务为业务1和业务2,非实时性业务为业务3~业务20。
这里,业务管理模块可以对当前待执行的业务进行分类,BMC系统首次运行时,由于系统当前要运行的所有业务对系统是已知的,所以业务管理模块根据负载均衡模块的输出对这些业务进行一次分类,分类后不同业务将被分配到不同的操作系统(RTOS系统与Linux系统)中执行,后续运行过程中,如果业务进程出现数量变动(例如,某些进程挂死、或有新的进程启动)时,业务管理模块还会继续进行业务划分,实时地对现存业务按照负载均衡策略进行划分与管理。业务管理模块可以是Linux系统中的一个常驻的进程,它本身是一直运行着的,且对当前运行的进程进行管理与划分。
通过本实施例,通过按照业务响应速度要求将待分配业务分配给对应的操作系统,可以保证对调度时间敏感的业务的业务响应的及时性。
可选地,可以但不限于采用以下方式根据资源动态分配规则将一组待分配业务分配给嵌入式系统中对应的操作系统:将一组待分配业务中业务资源占用率小于第一占用率阈值的待分配业务分配给第一操作系统,以及,将一组待分配业务中业务资源占用率大于或者等于第一占用率阈值的待分配业务分配给第二操作系统。
在进行待分配业务分配时,可以基于待分配业务的业务资源占用率将待分配业务分配给对应的操作系统。业务资源占用率可以是单位时间内业务对于处理资源的平均占比(例如,每分钟的CPU占用率),业务资源占用率的高低影响了本业务的响应速度以及后续业务的响应速度,因此,可以基于业务资源占用率评估业务的实时性等级,业务资源占用率越高,其对操作系统的调度时间和响应速度的影响越大,实时性等级越低,而业务资源占用率不高的业务,其对操作系统的调度时间和响应速度的影响不大,实时性等级越高。
对于业务资源占用率小于第一占用率阈值的待分配业务,其对操作系统的调度时间和响应速度的影响不大,可以将此类待分配业务分配给第一操作系统。对于业务资源占用率大于或者等于第一占用率阈值的待分配业务,其对操作系统的调度时间和响应速度的影响较大,因此,可以将此类待分配业务分配给第二操作系统。这里,第一占用率阈值可以根据需要进行配置,其可以是10%、15%、20%或者其他阈值,同时,该第一占用率阈值可以进行动态调整。
通过本实施例,通过按照业务资源占用率将待分配业务分配给对应的操作系统,可以保证对业务资源
占用率低的业务响应的及时性。
可选地,可以但不限于采用以下方式至少之一根据资源动态分配规则将一组待分配业务分配给嵌入式系统中对应的操作系统:
将一组待分配业务中与第一操作系统的已分配业务的业务耦合度大于或者等于第一耦合度阈值的待分配业务,分配给第一操作系统;
将一组待分配业务中与第二操作系统的已分配业务的业务耦合度大于或者等于第二耦合度阈值的待分配业务,分配给第二操作系统。
在进行待分配业务分配时,可以基于待分配业务的业务耦合度将待分配业务分配给对应的操作系统。业务耦合度可以用于表示待分配业务与各个操作系统中的已分配业务之间的关联程度。如果一个待分配业务与某一个操作系统的已分配业务的业务耦合度较高,则不适宜将其分配给另一个操作系统。因此,可以基于待分配业务与各个操作系统中的已分配业务之间的业务耦合度,将待分配业务分配给对应的操作系统。
可选地,可以通过业务的输入和输出之间的关联评价业务耦合度,业务耦合度可以通过不同的耦合度等级进行表示,如果业务的输入和输出之间没有关系,耦合度等级为低级(或者其他表示业务之间没有关联的耦合度等级),如果一个业务的执行依赖于另一个应用的输出(没有该输出作为输入业务无法开始进行),则业务之间的耦合度等级为高级,如果一个业务的执行用到了另一个应用的输出,但该输出不会妨碍业务的正常执行(在业务执行到对应的操作时能够获取到该输出即可,且对应的操作不是核心操作),则业务之间的耦合度等级为中级。此外,也可以通过数值表示业务耦合度,可以通过一种或多种耦合度条件(例如,输入和输出之间的关联关系)评价业务耦合度,将满足的耦合度条件所对应的数值,确定为业务耦合度的数值。
如果一组待分配业务中存在与第一操作系统的已分配业务的业务耦合度大于或者等于第一耦合度阈值的待分配业务,则可以将此类待分配业务分配给第一操作系统,而如果一组待分配业务中存在与第二操作系统的已分配业务的业务耦合度大于或者等于第二耦合度阈值的待分配业务,则可以将此类待分配业务分配给第二操作系统。
例如,除了产生实时业务列表和非实时业务列表,业务管理模块还负责业务解耦评估与管理,即,从所有实时业务中找出可以独立出来交给实时操作系统运行的业务,以便硬件资源动态分配模块进行处理器资源的重分配,对于不能独立出来交给实时操作系统运行的业务,如果其与非实时业务的业务耦合度高,则可以将其分配给非实时操作系统。
这里,由于有些业务虽然具有实时性要求,但它与系统中的其他非实时业务的交互非常频繁(即,业务耦合度高),这时,为了提升整体的数据交互效率,把这类业务分配给非实时操作系统。而还有一类实时业务,它本身相对独立,这时只需要将其划分到实时操作系统即可,这个过程即为“解耦”操作。判断业务独立出来的标准并不唯一,可以是上述业务间关联的密切程度,也可以是其他用户关切的指标。
重分配的策略是开放的,一种可能的策略为:系统首次运行时,根据业务管理模块分配给实时操作系统与非实时操作系统的业务数量的比例分配处理器核心,后续运行过程中根据双系统中各自的核心资源占用率调整资源分配,从这个角度讲,重分配过程与核心抢占与释放过程是相互配合的过程。
通过本实施例,通过按照业务耦合度将待分配业务分配给对应的操作系统,可以保证对业务耦合度较高的多个业务进行业务处理的准确性。
可选地,可以但不限于采用以下方式根据资源动态分配规则将一组待分配业务分配给嵌入式系统中对应的操作系统:
将一组待分配业务中包含敏感信息的待分配业务,分配给目标操作系统,其中,目标操作系统是第一操作系统和第二操作系统中,与使用对象交互频率低的操作系统。
在本实施例中,对于包含敏感数据(例如,密码等敏感信息)的待分配业务(其可以为重要敏感型的
业务,比如,不希望暴露给用户的业务),其可以被分配到目标操作系统,通过目标操作系统对包含敏感信息的待分配业务进行硬核级别的安全防护隔离,这里,目标操作系统是第一操作系统和第二操作系统中,与使用对象交互频率低的操作系统,或者,响应速度快的操作系统,例如,第一操作系统。
例如,该业务处理模块负责对系统业务进行硬核级别的安全防护隔离,即,将重要敏感型(不希望暴露给用户)的业务划分为实时业务,最终可以实现这些业务由非实时操作系统至实时操作系统的卸载,起到安全防护的效果。这里,该业务处理模块划分出的不同业务在软件实现时可以采用结构体的形式进行组织。通过对异构操作系统间安全空间进行设计,将敏感型业务由非实时操作系统卸载至实时操作系统,达到硬核级别安全防护的目的。这里,敏感性业务是指:与安全相关的业务,比如,用户密码、身份信息等涉及用户个人隐私的业务。
这里,硬核级别是指业务在处理器核心层面进行了隔离,即,敏感型业务分配给实时操作系统(实时操作系统所占核心区别于非实时操作系统,所以属于核心层面的隔离),实时操作系统与非实时操作系统相比,与用户交互的频率和程度相对较弱,所以作为使用者的用户很难“探测”到运行在其上的业务产生的敏感数据。对于上层应用而言,用户的身份认证管理、安全加密等业务属于上述重要敏感性业务,通过业务管理模块将上述业务强行划分为实时业务,后续进行硬件资源动态分配时就能实现上述业务在实时操作系统运行,起到安全隔离效果。
通过本实施例,通过将包含敏感信息的待分配业务分配给与用户交互频率低的操作系统,可以对系统业务进行硬核级别的安全防护隔离,提高业务执行的安全性。
可选地,可以但不限于采用以下方式确定与一组待分配业务对应的资源分配结果:
根据一组待分配业务的分配结果,结合第一操作系统的处理资源的资源利用情况和第二操作系统的处理资源的资源利用情况,生成一组待分配业务与处理器的处理资源的映射表。
在本实施例中,一组待分配业务的分配结果用于指示待分配业务与操作系统之间的对应关系,分配给一个操作系统的待执行业务通常使用该操作系统的处理资源执行,而如果某一操作系统分配的业务量过大且当前存在未分配处理资源,则也可以为分配给某一操作系统的待分配业务分配未分配处理资源。因此,根据一组待分配业务的分配结果,结合第一操作系统的处理资源的资源利用情况和第二操作系统的处理资源的资源利用情况,可以生成一组待分配业务与处理器的处理资源的映射表,以指示为每个待分配业务所分配的处理资源。
这里,每个待分配业务仅与某一处理器核心具有映射关系,而同一处理器核心可与多个待分配业务具有映射关系,不同的业务可以通过占用同一处理器核心的不同时间片与同一处理器核心具有映射关系。在同一时间,同一处理器核心仅被一个业务占用,即,仅用于执行一个业务。分配给一个操作系统的不同业务可以按照分配时间、业务响应速度要求或者其他方式确定占用同一处理器资源的时间片。
例如,资源动态分配模块根据业务管理模块的输出结果,对处理器资源进行动态调整,形成不同业务与实际硬件资源的映射表,优化不同硬件资源在异构操作系统下的部署结构,以达到提升全系统硬件资源利用率的目的。上述资源动态分配过程由第二操作系统中的软件进行管理配置。
以八核处理器(核心1~核心8)为例,已调度给第一操作系统的处理器核心包括:核心1,已调度给第二操作系统的处理器核心包括:核心2、核心3和核心4,待分配的业务有6个,实时性业务为业务1和业务2,非实时性业务为业务3~业务6,为6个业务分配对应的处理器核心,为业务1分配核心1,为业务2分配核心5,为业务3分配核心2,为业务4分配核心3,为业务5分配核心4,为业务6分配核心6。
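以上述八核处理器的分配结果为例,业务与处理器核心的映射表可以用一个简单的数组作示意,仅为说明映射关系的一种可能数据组织方式,核心编号与业务编号沿用上文示例:

```c
/* 映射表:下标为业务编号1~6,值为该业务分配到的处理器核心编号 */
static const int service_to_core[7] = {
    0,  /* 下标0不使用 */
    1,  /* 业务1(实时)   -> 核心1(已调度给第一操作系统) */
    5,  /* 业务2(实时)   -> 核心5(未分配核心) */
    2,  /* 业务3(非实时) -> 核心2 */
    3,  /* 业务4(非实时) -> 核心3 */
    4,  /* 业务5(非实时) -> 核心4 */
    6,  /* 业务6(非实时) -> 核心6(未分配核心) */
};
```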
通过本实施例,基于业务与操作系统的对应关系,结合不同操作系统的处理资源的使用情况进行处理资源的动态分配,可以保证处理资源分配的合理性。
可选地,可以但不限于采用以下方式根据与每个待分配业务对应的操作系统以及资源分配结果,将处理器的处理资源分配给第一操作系统和第二操作系统:在根据资源分配结果确定处理器的处理资源中的未分配处理资源存在对应的待分配业务的情况下,将未分配处理资源分配给与未分配处理资源对应的待分配
业务所分配给的操作系统。
在进行处理资源分配时,如果处理器的处理资源中的未分配处理资源存在对应的待分配业务,即,为待分配业务分配了未分配处理资源,可以将未分配处理资源分配给与未分配处理资源对应的待分配业务所分配给的操作系统。
可选地,资源自适应调度模块可以根据硬件资源动态分配的结果,完成对处理器的处理资源的实际调度动作。资源自适应调度模块调度一部分处理器核心执行分配给第一操作系统的业务,比如核心组1的M个核心,调度其余的处理器核心运行分配给第二操作系统的业务,比如核心组2的N个核心。
以前述八核处理器为例,根据业务分配结果和资源分配结果,可以将未分配的核心4分配给第一操作系统,将未分配的核心5和核心6分配给Linux系统。整个调度过程可以由第二操作系统主导。
通过本实施例,基于资源分配结果将未分配的处理器资源调度给对应的操作系统,可以提高处理器资源的利用率。
在第一操作系统运行结束后,可以控制其进入休眠状态。比如:控制第一操作系统在运行结束后进行休眠。
第一操作系统的运行结束可以是运行周期结束,也可以是唤醒请求处理完成,或者还可以是当前操作业务处理完成。
在第一操作系统休眠后,可以由第二操作系统占用分配给第一操作系统的处理器核心,从而提高资源的利用率。比如:在控制第一操作系统在运行结束后进行休眠之后,通知第二操作系统允许占用第一操作系统所使用的处理器核心,其中,第二操作系统用于在第一操作系统休眠期间将第一操作系统所使用的目标处理器核心添加至第二操作系统的调度资源池中,调度资源池中包括处理器中除目标处理器核心以外的其他处理器。
可选地,在本实施例中,通知第二操作系统允许占用第一操作系统所使用的处理器核心的方式可以但不限于包括向第二操作系统发送中断请求的方式等等。第一操作系统休眠后,向第二操作系统发送中断请求来通知允许其占用第一操作系统所使用的处理器核心,第二操作系统响应该中断请求将第一操作系统所使用的目标处理器核心添加至调度资源池中来进行调度和使用。
在一个示例性实施例中,可以但不限于对第二操作系统上执行的操作业务进行监控,如果监控到异常操作业务,可以由第一操作系统来接管异常操作业务的运行,从而避免操作业务运行异常对整个处理过程的影响,提高业务运行的成功率和效率。比如:监控第二操作系统上执行的操作业务;在监控到第二操作系统上执行的操作业务中存在异常操作业务的情况下,通过第一操作系统接管异常操作业务。
可选地,在本实施例中,可以但不限于由处理器或者第一操作系统对第二操作系统进行监控,监控到的异常操作业务(比如业务线程挂死的操作业务)由第一操作系统进行接管。或者,也可以将监控到的异常操作业务分配到多操作系统中与异常操作业务匹配度较高的操作系统上进行接管。
可选地,在本实施例中,对第二操作系统上执行的操作业务进行监控的方式可以但不限于包括心跳信号的监控,业务日志的监控等等。比如:如果监控到生成了异常日志则确定操作业务异常。
在一个示例性实施例中,可以但不限于采用以下方式监控第二操作系统上执行的操作业务:接收第二操作系统上执行的每个操作业务的心跳信号;将心跳信号的频率不符合所对应的目标频率的操作业务确定为异常操作业务。
可选地,在本实施例中,第二操作系统上执行的每个操作业务会产生心跳信号,不同操作业务的心跳信号具有不同的频率,将第二操作系统上执行的每个操作业务的心跳信号接入操作业务的监控方,比如处理器或者第一操作系统,将接入的心跳信号的频率与该操作业务对应的目标频率进行比对,对于心跳信号的频率不符合所对应的目标频率的操作业务作为异常操作业务进行接管。
可选地,在本实施例中,心跳信号的频率与所对应的目标频率是否符合可以但不限于通过比对二者是否完全一致来判定,如果完全一致则判定为符合,如果不完全一致则判定为不符合。或者,也可以但不限
于给出一定的误差范围,通过比对心跳信号的频率是否落入目标频率的误差范围来判定二者是否符合,如果落入误差范围则判定为符合,如果未落入误差范围则判定为不符合。
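下面给出一个按误差范围比对心跳频率的简化C语言示意;其中结构体字段与容差的取值均为假设,仅用于说明"心跳信号的频率未落入目标频率误差范围即判定为异常操作业务"的思路:

```c
#include <stdbool.h>

struct service_heartbeat {
    double measured_hz;   /* 实际接入的心跳信号频率 */
    double target_hz;     /* 该操作业务对应的目标频率 */
    double tolerance_hz;  /* 允许的误差范围(示例) */
};

/* 心跳频率落入 [target - tolerance, target + tolerance] 视为符合,否则判定为异常操作业务 */
bool is_service_abnormal(const struct service_heartbeat *hb)
{
    double low  = hb->target_hz - hb->tolerance_hz;
    double high = hb->target_hz + hb->tolerance_hz;
    return (hb->measured_hz < low) || (hb->measured_hz > high);
}
```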
在一个示例性实施例中,在通过第一操作系统接管异常操作业务之后,可以但不限于通过以下方式重启第二操作系统上的异常操作业务:向第二操作系统发送重启指令,其中,重启指令用于指示重启异常操作业务。
可选地,在本实施例中,重启指令用于指示重启异常操作业务,第二操作系统接收到重启指令后可以对异常操作业务进行初始化直至重新运行。
可选地,在本实施例中,第二操作系统上的异常操作业务重新运行后第一操作系统可以将接管的该异常操作业务返还给第二操作系统,第一操作系统可以将异常操作业务当前的运行现场保存至共享内存中,并发送中断请求给第二操作系统,第二操作系统从共享内存中读取异常操作业务当前的运行现场并加载至其运行的异常操作业务中,使得异常操作业务能够继续运行,提高了业务运行效率。
图10是根据本申请实施例的一种系统异常监控过程的示意图,如图10所示,第一操作系统接收第二操作系统上执行的操作业务的心跳信号,检测出心跳信号的频率不符合目标频率的异常操作业务,由第一操作系统接管第二操作系统上的异常操作业务继续执行。并且第一操作系统向第二操作系统发送重启指令,使得第二操作系统能够重启异常操作业务。
在一个示例性实施例中,可以但不限于通过以下方式启动双系统:引导第一操作系统启动;引导第二操作系统启动。
可选地,在本实施例中,先引导第一操作系统启动,再引导第二操作系统启动,第一操作系统可以但不限于是启动过程较快较简单的操作系统,在第二操作系统启动的过程中第一操作系统可以执行一些紧急的或者有助于第二操作系统启动的操作业务,从而可以提高操作系统的启动效率,或者提高操作业务的处理效率。
可选地,在本实施例中,第一操作系统和第二操作系统可以但不限于先后启动,第一操作系统可以但不限于比第二操作系统启动更快,第一操作系统也可以但不限于比第二操作系统启动所需的条件更简单,在第一操作系统先启动后可以运行能够满足第二操作系统启动所需的条件,或者能够加快第二操作系统启动的业务,从而使得多系统能够更加高效快速地启动并运行业务。
比如:引导第一操作系统启动后可以由第一操作系统运行能够控制芯片环境参数达到第二操作系统启动要求的业务(比如:风扇运行,参数控制等业务),使得芯片环境参数迅速达成第二操作系统启动运行的环境,提高操作系统的启动效率和运行效率。
可选地,在本实施例中,第一操作系统可以但不限于由第一操作系统的引导程序引导启动,第二操作系统可以但不限于由第二操作系统的引导程序引导启动。或者,二者可以由同一个引导程序先后引导启动。
在一个示例性实施例中,可以但不限于采用以下方式引导第一操作系统启动:芯片启动上电,通过处理器唤醒处理器中为第一操作系统分配的第一处理器核心;通过第一处理器核心执行第一操作系统的引导程序引导第一操作系统启动。
可选地,在本实施例中,可以但不限于根据第一操作系统所在的处理器所具有的处理器核心确定第一操作系统的第一处理器核心,比如:第一操作系统所在的处理器可以但不限于包括多个处理器核心(处理器核心0至处理器核心N),可以但不限于将多个处理器核心中的一个或者多个处理器核心(比如处理器核心0)分配给第一操作系统作为第一操作系统的第一处理器核心。
可选地,在本实施例中,上述第一操作系统的引导程序可以但不限于存储于芯片上的特定存储空间中专门用于启动第一操作系统。
可选地,在本实施例中,上述第一操作系统的第一处理器核心可以但不限于被设置为执行第一操作系统的引导程序,可以但不限于通过执行第一操作系统的引导程序启动第一操作系统。
在一个示例性实施例中,可以但不限于采用以下方式通过第一处理器核心执行第一操作系统的引导程序引导第一操作系统启动:通过第一处理器核心执行二级程序加载器,其中,第一操作系统的引导程序包括二级程序加载器;通过二级程序加载器加载第一操作系统。
可选地,在本实施例中,第一操作系统的引导程序可以但不限于包括二级程序加载器,第一处理器核心可以但不限于通过执行二级程序加载器(Second Program Loader,SPL)加载第一操作系统。
在一个示例性实施例中,可以但不限于采用以下方式引导第二操作系统启动:通过二级程序加载器唤醒为第二操作系统分配的第二处理器核心;通过第二处理器核心执行第二操作系统的引导程序引导第二操作系统启动。
可选地,在本实施例中,可以但不限于根据第二操作系统所在的处理器的处理器核心确定第二操作系统的第二处理器核心,比如:第二操作系统所在的处理器可以但不限于包括多个处理器核心(处理器核心0至处理器核心N),可以但不限于将多个处理器核心中的一个或者多个处理器核心(处理器核心1至处理器核心N)分配给第二操作系统作为第二操作系统的第二处理器核心。
可选地,在本实施例中,可以但不限于根据二级程序加载器唤醒第二操作系统的第二处理器核心,比如:在使用二级程序加载器加载第一操作系统完成后,可以但不限于通过二级程序加载器唤醒第二操作系统的第二处理器核心。或者,在使用二级程序加载器加载第一操作系统的过程中,可以但不限于通过二级程序加载器唤醒第二操作系统的第二处理器核心。
可选地,在本实施例中,可以但不限于使用第二处理器核心执行第二操作系统的引导程序引导第二操作系统启动。
在一个示例性实施例中,可以但不限于采用以下方式通过第二处理器核心执行第二操作系统的引导程序引导第二操作系统启动:通过第二处理器核心执行通用引导加载器,其中,第二操作系统的引导程序包括通用引导加载器;通过通用引导加载器加载第二操作系统。
可选地,在本实施例中,第二处理器核心可以但不限于通过执行通用引导加载器加载第二操作系统,通用引导加载器可以但不限于包括U-Boot(Universal Boot Loader)。
在一个示例性实施例中,可以但不限于采用以下方式通过第一处理器核心执行二级程序加载器:通过芯片上的引导存储器对二级程序加载器的代码进行安全启动检查;在检查结果为正常的情况下,通过第一处理器核心执行二级程序加载器。
可选地,在本实施例中,操作系统的引导程序可以但不限于包括二级程序加载器,可以但不限于将操作系统的引导程序作为上述引导存储器,通过引导存储器验证操作系统的引导程序包括的二级程序加载器的代码,比如:可以但不限于根据第一操作系统的引导程序(引导程序可以但不限于为BootROM)得到第一操作系统的二级程序加载器(二级程序加载器可以但不限于SPL),可以但不限于根据第一操作系统的引导存储器(引导存储器可以但不限于为BootROM)验证二级程序加载器的代码。
可选地,在本实施例中,引导存储器对二级程序加载器的代码进行安全启动检查的过程可以但不限于为:引导存储器读取二级程序加载器的代码以及验证码,通过约定的运算方式(比如哈希运算)对二级程序加载器的代码进行运算,得到运算值,再将该运算值与读取的验证码进行比对,二者一致则检查结果为正常,二者不一致则检查结果为异常。
可选地,在本实施例中,二级程序加载器也可以对通用引导加载器的代码进行安全启动检查,二级程序加载器读取通用引导加载器的代码以及验证码,通过约定的运算方式(比如哈希运算,与上述引导存储器检查二级程序加载器的运算方式可以相同也可以不同)对通用引导加载器的代码进行运算,得到运算值,再将该运算值与读取的验证码进行比对,二者一致则检查结果为正常,二者不一致则检查结果为异常。检查结果为正常的情况下,再通过通用引导加载器加载第二操作系统。
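以下是上述安全启动检查思路的一个简化C语言示意:对二级程序加载器的代码按约定的运算方式(此处以一个假设的boot_hash接口代替具体哈希算法)得到运算值,再与镜像中携带的验证码进行比对;函数名与验证码长度均为假设:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdbool.h>

#define DIGEST_LEN 32   /* 验证码长度(示例,视约定的哈希算法而定) */

/* 假设的哈希接口:对code起始、长度为len的代码做约定运算,结果写入digest */
extern void boot_hash(const uint8_t *code, size_t len, uint8_t digest[DIGEST_LEN]);

/* 引导存储器对二级程序加载器代码的安全启动检查:返回true表示检查结果为正常 */
bool secure_boot_check(const uint8_t *spl_code, size_t spl_len,
                       const uint8_t expected[DIGEST_LEN])
{
    uint8_t digest[DIGEST_LEN];

    boot_hash(spl_code, spl_len, digest);               /* 按约定运算方式得到运算值 */
    return memcmp(digest, expected, DIGEST_LEN) == 0;   /* 将运算值与读取的验证码比对 */
}
```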
在一个示例性实施例中,提供了一种第一操作系统和第二操作系统启动的示例。以第一处理器核心为CPU-0,第二处理器核心为CPU-1至CPU-N为例,可以但不限于通过以下方式启动第一操作系统和第二操
作系统:芯片启动上电;唤醒处理器中第一操作系统的第一处理器核心CPU-0;使用第一处理器核心CPU-0执行第一操作系统的引导程序,可以但不限于为二级程序加载器;通过芯片上的引导存储器(可以但不限于为BootROM)对二级程序加载器的代码进行安全启动检查;检查结果为正常,通过第一处理器核心执行二级程序加载器(可以但不限于为SPL)加载第一操作系统;通过二级程序加载器唤醒第二操作系统的第二处理器核心CPU-1至CPU-N;通过第二处理器核心执行通用引导加载器(可以但不限于为U-Boot)加载第二操作系统。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例的方法。
在本实施例中还提供了一种被设置为实施上述操作系统的运行控制方法的嵌入式系统,图11是根据本申请实施例的一种嵌入式系统的示意图一,如图11所示,该嵌入式系统可以包括:芯片和至少两个操作系统,其中,芯片包括处理器1102、硬件控制器1104、第一总线1106和第二总线1108,其中,第一总线1106的带宽高于第二总线1108带宽,且第一总线1106被配置为多主多从模式,第二总线1108被配置为一主多从模式;至少两个操作系统基于处理器1102运行;至少两个操作系统通过第一总线1106进行通信;至少两个操作系统通过第二总线1108实现对硬件控制器的控制。
其中,上述芯片可以是BMC芯片;上述处理器可以是多核处理器,上述硬件控制器可以被设置为控制连接到对应的对外接口的外部设备;上述第一总线被配置为多主多从模式,其可以是处理器的多个处理器内核之间进行通信所使用的总线,例如,AHB(Advanced High Performance Bus,高级高性能总线),上述第二总线被配置为一主多从模式,其可以是处理器对硬件控制器之间控制所使用的总线,例如,APB(Advanced Peripheral Bus,外围总线),第一总线的带宽高于第二总线带宽。
嵌入式系统可以包括至少两个操作系统,至少两个操作系统基于处理器运行,而处理器的处理资源被动态分配给至少两个操作系统,处理器的处理资源包括处理器核心,至少两个操作系统通过第一总线进行通信,至少两个操作系统通过第二总线实现对硬件控制器的控制。
可选地,硬件控制器可以包括一种或多种,可以包括但不限于以下至少之一的芯片外设对应的控制器:I2C,USB(Universal Serial Bus,通用串行总线),UART,ADC(Analog to Digital Converter,模拟数字转换器),JTAG(Joint Test Action Group,联合测试工作组),RTC(Real_Time Clock,实时时钟),GPIO(General Purpose Input/Output,通用输入输出),WDT(Watch Dog Timer,看门狗),虚拟UART(Virtual UART),超级I/O(Super I/O),SGPIO(Serial General Purpose Input/Output,串行通用输入输出),PWM(Pulse Width Modulation,脉冲宽度调制),FanTach(风扇调速),Timer(时钟),PECI(Platform Environment Control Interface,平台环境式控制接口),邮箱(MailBox),还可以包括其他类型的控制器。对外接口可以包括一种或多种,可以包括但不限于与上述任一控制器对应的对外接口。
例如,BMC芯片的一个示例可以如图12所示,BMC芯片的硬件可以但不限于包括SOC子模块和BMC带外子模块,其中,SOC子模块主要包含ARM核心(ARM Core 1,ARM Core 2,...,ARM Core X),其还可以但不限于包括DDR(Double Data Rate,双倍速率)4控制器(内存控制器)、MAC(Media Access Control Address,媒体访问控制地址)控制器(网络控制器)、SD(Secure Digital,安全数字)Card/eMMC(Embedded Multi Media Card,嵌入式多媒体卡)控制器(存储控制器)、PCIe RC(Root Complex,根复合体)控制器、SRAM(Static Random-Access Memory,静态随机存取存储器)及SPI控制器。
上述核心与各控制器通过第二总线互连,实现核心与各控制器间的交互。同时,ARM核心间连接至第一总线(比如:可以通过AXI(Advanced eXtensible Interface,高级可扩展接口)桥(Bridge)连接),核心间的通信通过第一总线实现。此外,SOC子模块中还实现了第一总线与第二总线的互连互通(比如:通过
桥(Bridge)的转换实现),这样为SOC子模块访问第二总线上的外设提供一条物理通路。
DDR4控制器可以通过DDR4PHY(Physical Layer,物理层)接口与其他部件或者设备相连,MAC控制器通过RGMII(Reduced Gigabit Media Independent Interface,吉比特介质独立接口)与其他部件或者设备相连,SD卡/eMMC控制器通过SD接口与其他部件或者设备相连,PCIe RC控制器通过PCIe PHY接口与其他部件或者设备相连。
BMC带外子模块主要包含PWM、GPIO、FanTech(风扇调速)、mailbox(邮箱)等芯片外设对应的控制器,通过这些控制器能够实现对BMC的PECI通信(比如使用GPIO模拟PECI)、风扇调控等带外管理功能。由图12可知,该BMC带外子模块可以但不限于通过第二总线实现与SOC子模块的交互。
BMC芯片通过第一总线与第二总线实现片内ARM核、存储单元及控制器硬件资源间的互连。处理器资源的动态均衡调度主要涉及BMC芯片的ARM核心资源调度,核间通信指ARM核之间进行的通信。以Linux系统抢占RTOS系统核心为例,Linux系统首先在核2~N的某个核上通过片上第一总线向核1发送核间中断(中断号9)。如果此时RTOS系统处于空闲状态允许抢占,核1通过第一总线回复核间中断(中断号10),并释放当前核1映射的外设控制器资源(如,PWM/PECI),Linux系统收到核间中断10,发起抢占流程,把核1加入Linux SMP调度中,同时获得了PWM/PECI外设的控制权,可以通过第二总线对其进行控制。
一方面,至少两个操作系统包括第一操作系统和第二操作系统,其中,芯片将通信值装载至第一总线,第一总线将携带有通信值的通信信号发送至第二操作系统对应的通信寄存器,以实现第一操作系统和第二操作系统之间的通信,其中,通信值用于指示第一操作系统和第二操作系统之间的通信内容。
另一方面,芯片将控制值装载至第二总线,第二总线将携带有控制值的控制信号发送至硬件控制器对应的寄存器,以实现操作系统对硬件控制器的控制,其中,控制值用于指示操作系统对硬件控制器的控制内容。
操作系统通过访问(比如执行读操作与写操作)各硬件控制器的寄存器来控制硬件控制器,操作系统访问硬件控制器的寄存器的方式可以但不限于是通过对各硬件控制器的寄存器地址进行读或写,而这些寄存器的地址可以但不限于是在芯片设计时唯一且确定的。例如,操作系统向特定的地址(即上述通信寄存器或者硬件控制器对应的寄存器)写特定的值(即上述通信值或者控制值)就能实现特定的功能(比如上述操作系统之间的通信功能或者操作系统对硬件控制器的控制功能)。也就是说,不同功能对应了不同的控制值,芯片中维护了硬件控制器的功能与控制值之间的对应关系,比如:控制值00表示空调加速一档,控制值01表示空调减速一档等等。
各个操作系统之间,操作系统与硬件控制器之间可以但不限于通过总线进行通信,控制等等的交互。上述操作系统对各硬件控制器的寄存器的读写操作最终会转换为第一总线(或第二总线)对该硬件控制器的控制信号,这部分转换工作及第一总线(或第二总线)对硬件控制器的控制过程可以但不限于是由芯片内部硬件自动实现的。其实现过程遵循总线规范。其中,第一总线(或第二总线)的操作过程中一方面可以传输控制与总线协议相关的物理信号,另一方面还可以通过其物理数据通道传输有效数据至各硬件控制器。
第一总线系统可以但不限于包括主模块、从模块和基础结构(Infrastructure)3部分,整个第一总线上的传输都由主模块发出,由从模块负责回应。基础结构则可以但不限于包括仲裁器(arbiter)、主模块到从模块的多路器、从模块到主模块的多路器、译码器(decoder)、虚拟从模块(dummy Slave)、虚拟主模块(dummy Master)。对于第一总线的多主(master)多从(slave)模式,master会首先向仲裁器发送访问请求,仲裁器决定何时让master获取总线访问的权限,master获取权限之后会将数据与控制信号发送到仲裁器,仲裁器通过地址解析判断对应的slave通路,然后将请求发送到对应的目的端。同样响应的数据会通过Decoder解析,然后返回给对应的master。通过这种多路复用的机制实现多对多的访问。
对于第二总线的一主多从模式,第二总线可以挂在第一总线系统下,通过Bridge(桥结构)将事务在总
线系统之间进行转化,此时Bridge即为第二总线的master,其他的外围设备(即硬件控制器)均为slave。数据请求只能由master发向slave,slave收到请求后返回相应的响应数据给master,此过程可以实现一对多的访问,且访问可以不涉及第一总线中的仲裁和Decoder解析操作。
通过上述第一总线被配置为多主多从模式,第二总线被配置为一主多从模式的嵌入式系统,多主多从模式的第一总线能够利用相对更加复杂的逻辑电路和总线协议更高效地完成系统间的通信,一主多从模式的第二总线能够利用相对较为简单的逻辑电路和总线协议在完成系统对硬件控制器的控制的同时降低结构的复杂度,降低整个嵌入式系统的功耗,总线上多种模式的配置和配合能够更加提高嵌入式系统的运行性能。
通过上述嵌入式系统,第一操作系统和第二操作系统基于处理器运行,并通过不同功能的总线实现操作系统间的通信和硬件控制器的控制。由于第一操作系统和第二操作系统均是基于同一个处理器运行,避免了硬件器件的增加和部署,降低了系统成本,并且合理利用处理器资源支持系统之间的运行,因此,可以解决操作系统的运行效率较低的技术问题,达到了提高操作系统的运行效率的技术效果。
在一个示例性实施例中,至少两个操作系统包括第一操作系统和第二操作系统,其中,第一操作系统基于处理器控制目标硬件控制器运行目标操作业务;第一操作系统在目标操作业务运行到目标业务状态时通过第二总线释放目标硬件控制器;第二操作系统通过第二总线控制目标硬件控制器运行目标操作业务。
可选地,在本实施例中,目标操作业务由目标硬件控制器运行,第一操作系统基于处理器控制目标硬件控制器。第二操作系统可以通过接管目标硬件控制器来接管目标操作业务。
可选地,在本实施例中,目标操作业务的接管过程与前述实施例中类似,在此不做赘述。
第一操作系统在目标操作业务运行到目标业务状态时,写禁用目标硬件控制器对应的特定的值(即上述控制值)到目标硬件控制器的寄存器以实现禁用目标硬件控制器的目的。上述需要写入的特定的值由芯片硬件自动装载到第二总线的数据通道中,最终以硬件方式实现对硬件控制器的控制(即实现释放操作)。
第二操作系统写目标操作业务对应的特定的值(即上述控制值)到目标硬件控制器的寄存器以实现控制目标硬件控制器运行目标操作业务的目的。上述需要写入的特定的值由芯片硬件自动装载到第二总线的数据通道中,最终以硬件方式实现对硬件控制器的控制(即实现目标操作业务的运行)。
在一个示例性实施例中,第二操作系统通过第一总线向第一操作系统发送第一中断请求,其中,第一中断请求用于请求接管目标硬件控制器;第一操作系统响应第一中断请求通过第二总线释放目标硬件控制器;或者,第一操作系统在目标操作业务的业务属性达到目标业务属性时通过第二总线释放目标硬件控制器。
可选地,在本实施例中,第二操作系统可以主动请求接管目标硬件控制器从而接管目标操作业务,第一操作系统也可以主动释放目标硬件控制器从而释放目标操作业务。
可选地,在本实施例中,目标硬件控制器释放和接管的过程与前述实施例中类似,在此不做赘述。
第二操作系统写第一中断请求对应的特定的值(即上述通信值)到中断寄存器中,以实现向第一操作系统发送第一中断请求。上述需要写入的特定的值由芯片硬件自动装载到第一总线的数据通道中,最终以硬件方式实现中断请求功能。
在一个示例性实施例中,第一操作系统响应第一中断请求确定是否由第二操作系统接管目标硬件控制器;第一操作系统在由第二操作系统接管目标硬件控制器的情况下通过第二总线释放目标硬件控制器。
可选地,在本实施例中,第一操作系统可以对于是否由第二操作系统接管目标硬件控制器进行判定,判定过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,第一操作系统在不由第二操作系统接管目标硬件控制器的情况下通过第一总线向第二操作系统发送第二中断请求,其中,第二中断请求用于指示拒绝第二操作系统接管目标硬件控制器。
可选地,在本实施例中,第一操作系统拒绝第二操作系统接管目标硬件控制器的过程与前述实施例中
类似,在此不做赘述。
在一个示例性实施例中,第一操作系统向第二操作系统发送第三中断请求,其中,第三中断请求用于指示第一操作系统已释放目标硬件控制器;第二操作系统响应第三中断请求通过第二总线控制目标硬件控制器运行目标操作业务。
可选地,在本实施例中,第一操作系统向第二操作系统通知已释放目标硬件控制器的过程与前述实施例中类似,在此不做赘述。
第二操作系统写目标操作业务对应的特定的值(即上述控制值)到目标硬件控制器的寄存器以实现控制目标硬件控制器运行目标操作业务的目的。上述需要写入的特定的值由芯片硬件自动装载到第二总线的数据通道中,最终以硬件方式实现对硬件控制器的控制。
在一个示例性实施例中,至少两个操作系统包括第一操作系统和第二操作系统,其中,第一操作系统基于处理器中的目标处理器核心运行;第一操作系统在运行到目标系统状态时释放目标处理器核心;第二操作系统将目标处理器核心添加至第二操作系统的调度资源池中,其中,调度资源池中包括处理器中为第二操作系统分配的处理器核心。
可选地,在本实施例中,至少两个操作系统占用目标处理器核心的过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,第二操作系统通过第一总线向第一操作系统发送第四中断请求,其中,第四中断请求用于请求占用目标处理器核心;第一操作系统响应第四中断请求释放目标处理器核心;或者,第一操作系统在系统属性达到目标系统属性时释放目标处理器核心。
可选地,在本实施例中,第二操作系统可以主动抢占目标处理器核心,第一操作系统也可以主动释放目标处理器核心。
可选地,在本实施例中,目标处理器核心的抢占和释放的过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,第一操作系统响应第四中断请求确定是否由第二操作系统占用目标处理器核心;第一操作系统在由第二操作系统占用目标处理器核心的情况下释放目标处理器核心。
可选地,在本实施例中,第一操作系统可以判定是否由第二操作系统占用目标处理器核心,该过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,第一操作系统在不由第二操作系统占用目标处理器核心的情况下通过第一总线向第二操作系统发送第五中断请求,其中,第五中断请求用于指示拒绝第二操作系统占用目标处理器核心。
可选地,在本实施例中,第一操作系统拒绝第二操作系统占用目标处理器核心的过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,第一操作系统向第二操作系统发送第六中断请求,其中,第六中断请求用于指示第一操作系统已释放目标处理器核心;第二操作系统响应第六中断请求将目标处理器核心添加至调度资源池中。
可选地,在本实施例中,第一操作系统向第二操作系统通知已释放目标处理器核心的过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,至少两个操作系统包括第一操作系统和第二操作系统,其中,处理器中的目标处理器核心已被添加至第二操作系统的调度资源池中,其中,调度资源池中包括处理器中为第二操作系统分配的处理器核心;第二操作系统在第一操作系统被唤醒时释放目标处理器核心;第一操作系统基于目标处理器核心运行。
可选地,在本实施例中,第一操作系统唤醒使用目标处理器核心的过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,第二操作系统在检测到第一操作系统被唤醒时释放目标处理器核心;或者,
第一操作系统在被唤醒时向第二操作系统发送第七中断请求,其中,第七中断请求用于请求第二操作系统释放目标处理器核心;第二操作系统响应第七中断请求释放目标处理器核心。
可选地,在本实施例中,第二操作系统在第一操作系统唤醒时主动释放目标处理器核心或者由第一操作系统主动请求其释放目标处理器核心,该过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,至少两个操作系统包括第一操作系统和第二操作系统,芯片还包括存储空间,至少两个操作系统通过第一总线控制存储空间,其中,第一操作系统基于处理器运行的过程中产生业务数据;第一操作系统通过第一总线将业务数据存储至存储空间,并通过第一总线向第二操作系统发送第八中断请求,其中,第八中断请求用于请求第二操作系统从存储空间读取业务数据;第二操作系统响应第八中断请求从存储空间读取业务数据。
可选地,在本实施例中,第一操作系统和第二操作系统可以但不限于通过存储空间和中断请求的传输实现系统间业务数据的交互,系统间业务数据的交互过程与前述实施例中类似,在此不做赘述。
第一操作系统写特定的值到存储控制器的特定地址以实现将业务数据存储至存储空间的目的。上述需要写入的特定的值由芯片硬件自动装载到第一总线的数据通道中,最终以硬件方式实现对存储控制器的控制以及业务数据的存储(即实现了通过其物理数据通道传输有效数据)。
在一个示例性实施例中,第一操作系统基于处理器周期性运行;或者,第一操作系统响应接收到的唤醒请求基于处理器运行;或者,第一操作系统根据处理器上所产生的当前操作业务与第一操作系统之间的匹配度基于处理器运行。
可选地,在本实施例中,第一操作系统的运行机制与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,第一操作系统在运行结束后进行休眠;第二操作系统在第一操作系统休眠期间将第一操作系统所使用的目标处理器核心添加至第二操作系统的调度资源池中,其中,调度资源池中包括处理器中除目标处理器核心以外的其他处理器。
可选地,在本实施例中,在第一操作系统休眠期间由第二操作系统占用目标处理器核心的过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,至少两个操作系统通过第一总线部署的通信协议进行通信;或者,至少两个操作系统通过第一总线,第二总线和硬件控制器中的通信硬件控制器进行通信。
可选地,在本实施例中,至少两个操作系统可以但不限于通过第一总线部署的通信协议进行通信,即可以但不限于通过软件形式实现核间通信。
可选地,在本实施例中,至少两个操作系统还可以但不限于通过第一总线,第二总线和硬件控制器中的通信硬件控制器进行通信,即可以但不限于通过硬件形式实现核间通信。
在一个示例性实施例中,至少两个操作系统通过第一总线发送处理器间中断请求进行通信;或者,至少两个操作系统中的一个操作系统向第一总线发送系统中断请求;第一总线将系统中断请求转发至第二总线;第二总线向通信硬件控制器所控制的邮箱硬件模块发送系统中断请求;邮箱硬件模块通过第二总线和第一总线将系统中断请求发送至少两个操作系统中的另一个操作系统。
可选地,在本实施例中,在不同操作系统之间进行处理器资源的抢占和释放,以及业务数据的交互可以但不限于通过核间中断完成,例如,SGI(Software Generated Interrupt,软件触发中断,Linux系统中的核间中断),一个操作系统可以通过IPI(Inter-Processor Interrupt,处理器间中断)向另一个操作系统发出资源抢占请求(例如,核心抢占请求)或者资源释放请求(例如,核心释放请求),以请求进行处理资源的抢占或者释放。
可选地,在本实施例中,还可以但不限于通过带外子模块中邮箱控制器连接的邮箱通道mailbox实现核间通信。
在一个示例性实施例中,至少两个操作系统包括第一操作系统和第二操作系统,其中,第一操作系统通过第一总线监控第二操作系统上执行的操作业务;第一操作系统在第二操作系统上执行的操作业务中存
在异常操作业务时,通过第一总线接管异常操作业务。
可选地,在本实施例中,第一操作系统对第二操作系统上异常操作业务的监控过程与前述实施例中类似,在此不做赘述。
第二操作系统的各操作业务以一定频率向存储控制器特定地址写值,第一操作系统读存储控制器的特定地址以实现对第二操作系统上执行的操作业务进行监控的目的。上述需要读取的存储控制器的特定地址由芯片硬件自动装载到第一总线的地址通道中,以硬件方式实现对存储控制器的特定地址的读取,读取的值从第一总线的数据通道以硬件的形式返回给第一操作系统,最终实现对第二操作系统上执行的操作业务的监控。
第一操作系统接管异常操作业务可以是对异常操作业务对应的硬件控制器的控制。第一操作系统写特定的值到上述异常操作业务的硬件控制器的寄存器以实现控制该硬件控制器。上述需要写入的特定的值由芯片硬件自动装载到第一总线的数据通道中,最终以硬件方式实现对硬件控制器的控制及异常操作业务的接管。
在一个示例性实施例中,第一操作系统通过第一总线接收第二操作系统上执行的操作业务的心跳信号;第一操作系统通过第一总线将心跳信号的频率不符合所对应的目标频率的操作业务作为异常操作业务接管。
可选地,在本实施例中,第一操作系统通过监控心跳信号的频率对第二操作系统上异常操作业务进行监控的过程与前述实施例中类似,在此不做赘述。
第一操作系统读存储控制器的特定地址的值以实现对第二操作系统上执行的操作业务的心跳信号接收的目的。上述需要读取的存储控制器的特定地址由芯片硬件自动装载到第一总线的地址通道中,以硬件方式实现对存储控制器的特定地址的读取,读取的值从第一总线的数据通道以硬件的形式返回给第一操作系统,最终实现对第二操作系统上执行的操作业务的心跳信号的接收。
在一个示例性实施例中,第一操作系统在接管异常操作业务之后,通过第一总线向第二操作系统发送重启指令,其中,重启指令用于指示重启异常操作业务。
可选地,在本实施例中,第一操作系统在接管第二操作系统上异常操作业务之后对异常操作业务的重启过程与前述实施例中类似,在此不做赘述。
第一操作系统在接管异常操作业务之后,写特定的值到存储控制器的特定地址以实现重启第二操作系统异常操作业务的目的。上述需要写入的特定的值由芯片硬件自动装载到第一总线的数据通道中,以硬件方式实现对存储控制器的特定地址的值更新。第二操作系统读取上述特定值并解析,进而重启相应的异常操作业务。
在一个示例性实施例中,芯片还包括:存储器,存储器中存储了启动引导模块,芯片上电后运行启动引导模块引导至少两个操作系统中的一个操作系统启动,启动引导模块引导至少两个操作系统中的其他操作系统启动。
可选地,在本实施例中,多操作系统的启动引导过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,至少两个操作系统包括第一操作系统和第二操作系统,其中,第一操作系统基于处理器控制目标硬件控制器运行目标操作业务;第一操作系统在目标操作业务运行到目标业务状态时通过第二总线释放目标硬件控制器;第二操作系统通过第二总线控制目标硬件控制器运行目标操作业务;第一操作系统基于处理器中的目标处理器核心运行;第一操作系统在运行到目标系统状态时释放目标处理器核心;第二操作系统将目标处理器核心添加至第二操作系统的调度资源池中,其中,调度资源池中包括处理器中为第二操作系统分配的处理器核心;芯片还包括存储空间,至少两个操作系统通过第一总线控制存储空间,其中,第一操作系统基于处理器运行的过程中产生业务数据;第一操作系统通过第一总线将业务数据存储至存储空间,并通过第一总线向第二操作系统发送第八中断请求,其中,第八中断请求用于请求第二操作系统从存储空间读取业务数据;第二操作系统响应第八中断请求从存储空间读取业务数据。
可选地,在本实施例中,操作系统之间可以既接管硬件控制器,又抢占处理器核心,该过程与前述实施例中类似,在此不做赘述。
在本实施例中还提供了另一种被设置为实施上述操作系统的运行控制方法的嵌入式系统,上述嵌入式系统可以运行于上述BMC芯片上,该嵌入式系统包括:第一操作系统,第二操作系统,控制器和处理器,其中,第一操作系统和第二操作系统基于处理器运行,控制器被设置为检测第一操作系统在运行过程中的运行状态,并根据运行状态控制第一操作系统所使用的处理器资源。
通过上述嵌入式系统,第一操作系统和第二操作系统基于处理器运行,控制器检测第一操作系统在运行过程中的运行状态,并根据该运行状态对第一操作系统所使用的处理器资源进行控制。由于第一操作系统和第二操作系统均是基于同一个处理器运行,避免了硬件器件的增加和部署,降低了系统成本,并且可以在操作系统运行的过程中对其使用的处理器资源进行控制,从而合理利用处理器资源支持系统之间的运行,因此,可以解决操作系统的运行效率较低的技术问题,达到了提高操作系统的运行效率的技术效果。
在本实施例中,第一操作系统和第二操作系统可以与前述实施例中类似,第一操作系统和第二操作系统基于处理器运行,控制器可以是运行在第一操作系统或者第二操作系统下的软件模组。
可选地,在本实施例中,控制器的处理逻辑可以但不限于部署在处理器上,还可以部署在第一操作系统上,或者也可以但不限于按照功能划分为第一控制单元和第二控制单元分别部署在第一操作系统和第二操作系统上,从而实现系统间的处理器资源控制,操作业务管理和业务交互等功能。
在一个示例性实施例中,控制器,被设置为以下至少之一:检测第一操作系统基于处理器所运行的目标操作业务的业务状态,其中,运行状态包括业务状态;检测第一操作系统的系统状态,其中,运行状态包括系统状态,第一操作系统基于处理器中的目标处理器核心运行。
可选地,在本实施例中,控制器对业务状态和系统状态的检测与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,控制器,被设置为在检测到业务状态为目标业务状态的情况下,释放目标操作业务,其中,处理器资源包括目标操作业务;第二操作系统,用于运行目标操作业务;和/或,控制器,被设置为在检测到系统状态为目标系统状态的情况下,释放目标处理器核心,其中,处理器资源包括目标处理器核心;第二操作系统,用于将目标处理器核心添加至第二操作系统的调度资源池中,调度资源池中包括处理器中为第二操作系统分配的处理器核心。
可选地,在本实施例中,控制器控制目标操作业务以及目标处理器核心的释放过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,嵌入式系统还包括:运行于第一操作系统上的第一业务交互线程,运行于第二操作系统上的第二业务交互线程,其中,控制器,被设置为在获取到第二业务交互线程向第一业务交互线程发送的第一中断请求的情况下,确定检测到业务状态为目标业务状态,其中,第一中断请求用于请求接管目标操作业务;或者,控制器,被设置为在目标操作业务的业务属性达到目标业务属性的情况下,确定检测到业务状态为目标业务状态。
可选地,在本实施例中,操作系统间的交互过程可以但不限于通过各个操作系统上分别部署的业务交互线程来控制。
可选地,在本实施例中,业务状态的检测过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,控制器,被设置为:响应第一中断请求,确定是否由第二操作系统接管目标操作业务;在由第二操作系统接管目标操作业务的情况下,释放目标操作业务。
可选地,在本实施例中,控制器对于是否由第二操作系统接管目标操作业务的判定过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,嵌入式系统还包括:运行于第一操作系统上的第一业务交互线程,运行于第二操作系统上的第二业务交互线程,其中,第一业务交互线程,用于在不由第二操作系统接管目标操作业务的情况下,向第二业务交互线程发送第二中断请求,其中,第二中断请求用于指示拒绝第二操作系统接管
目标操作业务。
可选地,在本实施例中,拒绝第二操作系统接管目标操作业务的过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,嵌入式系统还包括:运行于第一操作系统上的第一业务交互线程,运行于第二操作系统上的第二业务交互线程,其中,第一业务交互线程,用于向第二业务交互线程发送第三中断请求,其中,第三中断请求用于指示已释放目标硬件控制器;第二操作系统,用于响应第三中断请求控制目标硬件控制器运行目标操作业务。
可选地,在本实施例中,对于已释放目标硬件控制器的通知过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,嵌入式系统还包括:运行于第一操作系统上的第一业务交互线程,运行于第二操作系统上的第二业务交互线程,其中,控制器,被设置为在获取到第二业务交互线程向第一业务交互线程发送的第四中断请求的情况下,确定检测到系统状态为目标系统状态,其中,第四中断请求用于请求占用目标处理器核心;或者,控制器,被设置为在第一操作系统的系统属性达到目标系统属性的情况下,确定检测到系统状态为目标系统状态。
可选地,在本实施例中,系统状态的检测过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,控制器,被设置为:响应第四中断请求,确定是否由第二操作系统占用目标处理器核心;在由第二操作系统占用目标处理器核心的情况下,释放目标处理器核心。
可选地,在本实施例中,控制器对于是否由第二操作系统占用目标处理器核心的判定过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,嵌入式系统还包括:运行于第一操作系统上的第一业务交互线程,运行于第二操作系统上的第二业务交互线程,其中,第一业务交互线程,用于在不由第二操作系统占用目标处理器核心的情况下,向第二业务交互线程发送第五中断请求,其中,第五中断请求用于指示拒绝第二操作系统占用目标处理器核心。
可选地,在本实施例中,拒绝第二操作系统占用目标处理器核心的过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,嵌入式系统还包括:运行于第一操作系统上的第一业务交互线程,运行于第二操作系统上的第二业务交互线程,其中,第一业务交互线程,用于向第二业务交互线程发送第六中断请求,其中,第六中断请求用于指示第一操作系统已释放目标处理器核心;第二操作系统,用于响应第六中断请求将目标处理器核心添加至调度资源池中。
可选地,在本实施例中,已释放目标处理器核心的通知过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,控制器,还被设置为:在处理器中的目标处理器核心已被添加至第二操作系统的调度资源池中,且,第一操作系统被唤醒运行的情况下,检测目标处理器核心是否被释放,其中,调度资源池中包括处理器中为第二操作系统分配的处理器核心;在检测到第二操作系统在第一操作系统被唤醒时已释放目标处理器核心的情况下,基于目标处理器核心运行第一操作系统。
可选地,在本实施例中,第一操作系统唤醒时的操作过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,嵌入式系统还包括:运行于第一操作系统上的第一业务交互线程,运行于第二操作系统上的第二业务交互线程,其中,第一业务交互线程,用于在检测到目标处理器核心未被释放的情况下,向第二业务交互线程发送第七中断请求,其中,第七中断请求用于请求第二操作系统释放目标处理器核心;第二操作系统,用于响应第七中断请求释放目标处理器核心。
可选地,在本实施例中,第二操作系统释放目标处理器核心的协商过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,嵌入式系统还包括:运行于第一操作系统上的第一业务交互线程,运行于第
二操作系统上的第二业务交互线程,其中,第一业务交互线程,用于获取第一操作系统基于处理器运行的过程中产生的业务数据;将业务数据存储至处理器上的存储空间;向第二业务交互线程发送第八中断请求,其中,第八中断请求用于请求第二操作系统从存储空间读取业务数据;第二操作系统,用于响应第八中断请求从存储空间读取业务数据。
可选地,在本实施例中,操作系统之间业务数据的交互过程与前述实施例中类似,在此不做赘述。
在一个可选的实施方式中,提供了一种基于硬件模块实现操作系统间业务数据通信的过程,以第一操作系统为RTOS,第二操作系统为Linux为例,图13是根据本申请可选的实施方式的一种操作系统间的业务数据通信过程的示意图,如图13所示,Linux与RTOS间具备业务交互能力,这种能力可以但不限于是通过核间通信来实现的,比如采用基于共享内存的通信架构来实现,采用mailbox作为硬件模块,其作用是将内存的指针从一个Linux所在的核传送给RTOS所在的核,并且指针的发送和接收采用独立的mailbox通道。共享内存Shared Memory可以被所有核访问,该共享内存空间可以来自系统内存DDR的固定存储区域。Linux核首先将数据写入共享内存,然后mailbox将中断请求传递到RTOS核上,而RTOS核拿到中断请求后,可以直接从Share Memory读取数据,整个过程由于不涉及数据拷贝操作,通信效率高,尤其适合大数据量传输。
运行在Linux上的系统间业务交互线程(即上述第二业务交互线程)简称为Linux线程,运行在RTOS上的系统间业务交互线程(即上述第一业务交互线程)简称为RTOS线程,上述异构的多系统核间通信过程可以但不限于包括如下步骤:
步骤1,Linux线程拷贝数据到共享内存Share Memory中的指定位置1。
步骤2,Linux线程将共享内存Share Memory中的指定位置1的地址1和中断请求等信息写入硬件模块mailbox的通道A。
步骤3,RTOS线程接收硬件模块mailbox的通道A中的中断请求和地址1。
步骤4,RTOS线程从共享内存Share Memory中读取地址1中存储的数据。
步骤5,RTOS线程拷贝数据到共享内存Share Memory的指定位置2。
步骤6,RTOS线程将共享内存Share Memory中的指定位置2的地址2和中断请求等信息写入硬件模块Mailbox的通道B。
步骤7,Linux线程接收硬件模块mailbox的通道B中的中断请求和地址2。
步骤8,Linux线程从共享内存Share Memory中的地址2读取数据。
通过上述核间通信机制,实现了Linux的系统间业务交互线程和RTOS的系统间业务交互线程之间的消息传递、处理和响应。
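以下用简化的C语言片段把上述步骤1至步骤4中Linux侧发送、RTOS侧接收的一个方向串起来作示意;mailbox_write()、mailbox_read()等接口、通道编号以及shared_mem的获取方式均为假设,实际以具体的mailbox硬件驱动和共享内存映射方式为准:

```c
#include <stdint.h>
#include <string.h>

/* 假设的底层接口:共享内存基地址与mailbox通道读写,由具体平台驱动提供 */
extern uint8_t *shared_mem;                                            /* Shared Memory,可被所有核访问 */
extern void mailbox_write(int channel, uint32_t addr, uint32_t irq);   /* 向通道写入地址与中断请求信息 */
extern void mailbox_read(int channel, uint32_t *addr, uint32_t *irq);  /* 从通道读取地址与中断请求信息 */

#define CHANNEL_A 0   /* Linux -> RTOS 方向使用的独立mailbox通道 */

/* Linux线程侧:对应步骤1、步骤2 */
void linux_send(const void *data, uint32_t len, uint32_t pos1)
{
    memcpy(shared_mem + pos1, data, len);        /* 步骤1:拷贝数据到共享内存中的指定位置1 */
    mailbox_write(CHANNEL_A, pos1, 1u);          /* 步骤2:将地址1和中断请求信息写入通道A */
}

/* RTOS线程侧:对应步骤3、步骤4 */
void rtos_receive(void *out, uint32_t len)
{
    uint32_t pos1, irq;
    mailbox_read(CHANNEL_A, &pos1, &irq);        /* 步骤3:接收通道A中的中断请求和地址1 */
    memcpy(out, shared_mem + pos1, len);         /* 步骤4:从共享内存的地址1读取数据 */
}
```

RTOS侧回传数据(步骤5至步骤8)与上述流程对称,只是改用通道B与指定位置2,此处不再重复示意。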
在一个示例性实施例中,控制器还被设置为:控制第一操作系统基于处理器周期性运行;或者,响应接收到的唤醒请求,控制第一操作系统基于处理器运行;或者,根据处理器上所产生的操作业务与第一操作系统之间的匹配度,控制第一操作系统基于处理器运行。
可选地,在本实施例中,控制器对于第一操作系统唤醒控制过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,控制器,被设置为:检测处理器上所产生的当前操作业务的业务信息;在检测到业务信息与第一操作系统之间的匹配度高于匹配度阈值的情况下,控制第一操作系统基于处理器运行当前操作业务。
可选地,在本实施例中,控制器对于操作业务与第一操作系统之间匹配度的判定过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,控制器,被设置为:检测当前操作业务的目标响应速度,和/或,目标资源占用量,其中,业务信息包括:目标响应速度,和/或,资源占用量,目标响应速度是当前操作业务需要处理器达到的响应速度,目标资源占用量是当前操作业务需要处理器提供的资源量;在目标响应速度小于或者
等于速度阈值,和/或,目标资源占用量小于或者等于占用量阈值的情况下,确定业务信息与第一操作系统之间的匹配度高于匹配度阈值。
可选地,在本实施例中,控制器对于业务信息的处理过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,控制器,还被设置为:控制第一操作系统在运行结束后进行休眠。
可选地,在本实施例中,控制器对于第一操作系统的休眠控制过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,嵌入式系统还包括:运行于第一操作系统上的第一业务交互线程,运行于第二操作系统上的第二业务交互线程,其中,第一业务交互线程,用于通知第二业务交互线程允许占用第一操作系统所使用的处理器核心;第二操作系统,用于在第一操作系统休眠期间将第一操作系统所使用的目标处理器核心添加至第二操作系统的调度资源池中,调度资源池中包括处理器中除目标处理器核心以外的其他处理器。
可选地,在本实施例中,第二操作系统在第一操作系统休眠期间对于处理器核心的占用过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,嵌入式系统还包括:运行于第一操作系统上的业务接管线程,其中,业务接管线程,用于监控第二操作系统上执行的操作业务;在监控到第二操作系统上执行的操作业务中存在异常操作业务的情况下,接管异常操作业务。
可选地,在本实施例中,第一操作系统上部署了业务接管线程对第二操作系统上执行的操作业务进行监控。
可选地,在本实施例中,第二操作系统上执行的操作业务的监控过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,业务接管线程,用于:接收第二操作系统上执行的每个操作业务的心跳信号;将心跳信号的频率不符合所对应的目标频率的操作业务确定为异常操作业务。
可选地,在本实施例中,业务接管线程通过心跳信号的频率监控异常操作业务的过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,业务接管线程,还用于:在通过第一操作系统接管异常操作业务之后,向第二操作系统发送重启指令,其中,重启指令用于指示重启异常操作业务。
可选地,在本实施例中,业务接管线程控制第二操作系统重启异常操作业务的过程与前述实施例中类似,在此不做赘述。
在一个示例性实施例中,嵌入式系统还包括:启动引导模块,被设置为引导第一操作系统启动;引导第二操作系统启动。
可选地,在本实施例中,多操作系统的启动引导过程与前述实施例中类似,在此不做赘述。
在一个可选的实施方式中,提供了一种嵌入式系统中业务管理的过程,以第一操作系统为RTOS,第二操作系统为Linux为例,图14是根据本申请可选的实施方式的一种嵌入式系统中业务管理过程的示意图,如图14所示,嵌入式系统的处理器上部署了n+1个CPU核,分别为核0,核1,......,核n。将核0分配给RTOS,将核1至核n分配给Linux,其中的核0为可动态配置型CPU核,即RTOS可以在上述的某种情况下释放核0以供Linux调度使用,而Linux也可以在上述的某种机制下抢占核0,调度核0的资源运行自身的任务。
对于RTOS,其可以包含任务调度器和各种线程(比如:实时控制线程、任务管理线程和系统间业务交互线程等)。其中,任务调度器被设置为各线程的调度管理,各线程调度可采用轮询或者线程优先级的方式进行。
图15是根据本申请可选的实施方式的一种任务调度过程的示意图,如图15所示,任务调度器在采用轮询方式时为各个实时线程分配时间片,比如为实时线程A,实时线程B和实时线程C分别分配了时间
片。实时线程C之后的时间片属于空调度状态,任务调度器可以唤醒定时器启动,将实时线程C之后的时间片分配给Linux,由Linux调度业务线程1和业务线程2占用核0。
实时控制线程用于处理RTOS中的高实时性线程。任务接管线程主要为了保障系统的健壮性以及业务的连续性而设计,一旦Linux由于某种原因发生错误而无法运行传统业务线程时,RTOS将通过任务接管线程接管该业务,然后复位Linux,待Linux运行正常后,再次将该业务交还给Linux。而系统间业务交互线程则用于RTOS和Linux的核间通信功能。
对于Linux系统,其包含传统业务线程、核间调度器及系统间业务交互线程等。传统业务线程用于处理系统中数量多且复杂的非实时性业务(比如传统业务线程A,传统业务线程B和传统业务线程C等等)。核间调度器被设置为完成核0的抢占和调度。系统间业务交互线程用于实现Linux与RTOS之间的通信。
上述嵌入式系统可以但不限于采用如下的运行过程:
步骤a,系统上电,首先引导RTOS启动,再引导Linux系统启动,RTOS占用CPU核0,Linux系统占用其余的核1至核n。
步骤b,RTOS系统启动后其任务调度器根据轮询时间片策略分配时间片给需要调度的线程,若存在空闲的时间片,将其记录在空闲时间片链表中,并配置唤醒寄存器(即定时器);否则不进行空闲时间片记录和唤醒寄存器操作。
步骤c,RTOS系统启动系统间业务交互线程,等待交互过程,实际交互时使用上述核间通信机制。
步骤d,Linux系统正常启动,传统业务被调度,而核间调度器、任务接管线程处于静默状态。
步骤e,Linux系统启动核间调度器,该启动过程涉及两种情形,第一种情形为当RTOS任务调度器发现一个调度周期内无任何线程需要调度时,触发释放核0的核间中断给Linux系统,RTOS将正在运行的数据压入堆栈,然后进入休眠状态,上述中断将触发Linux启动核间调度器,当该调度器接收到中断后,通知Linux系统接管核0,Linux中负责调度均衡的模块将会给核0分配线程进行调度。第二种情形为当Linux系统检测到其CPU占用率过高时,将启动Linux核间调度器并触发抢占核0的核间中断给RTOS,RTOS收到中断后会将正在运行的数据压入堆栈,然后进入休眠状态,同时Linux系统接管核0进行调度。
步骤f,一旦Linux系统由于某种原因出现错误并导致传统业务线程无法运行时,RTOS将通过任务接管线程接管该业务,然后复位Linux系统,待Linux系统运行正常时,再次将接管的业务交还给Linux系统。
通过上述嵌入式系统的运行过程,对嵌入式系统中的多个操作系统进行并行的管理和控制,利用RTOS实时系统代替传统的CPLD、EC芯片、实时控制芯片等硬件器件,实现了嵌入式系统的实时管理控制。采用通用嵌入式系统加实时操作系统的嵌入式异构系统架构,有效改善了传统嵌入式系统实时业务处理能力不足的现状,与此同时,通过将传统嵌入式系统实时敏感性任务分配给实时操作系统,显著减轻了传统嵌入式系统的工作负载,提升了系统运行效率。并且通过CPU核0的休眠和唤醒策略,使得嵌入式CPU算力得到充分发挥,有效提升了嵌入式系统CPU资源的利用率。此外,采用RTOS实时系统代替传统的CPLD、EC芯片、实时控制芯片等硬件逻辑器件,最直接的收益为硬件成本的节省,此外,由于是软件实现,相比传统基于硬件器件的实现方式具有更高的灵活性与扩展能力。
根据本申请实施例的另一个方面,还提供了另一种嵌入式系统,上述嵌入式系统可以运行于上述BMC芯片上,图16是本申请实施例的可选的嵌入式系统的示意图二,如图16所示,上述嵌入式系统可以包括:
第一操作系统和第二操作系统,第一操作系统和第二操作系统运行于处理器上,第一操作系统的响应速度高于第二操作系统;
业务管理模块,被设置为根据资源动态分配规则将一组待分配业务分配给对应的操作系统,其中,资源动态分配规则包括根据以下至少之一进行资源动态分配:业务响应速度,业务资源占用率;
资源动态分配模块,被设置为确定与一组待分配业务对应的资源分配结果,其中,资源分配结果用于指示处理器的处理资源中与一组待分配业务中的每个待分配业务对应的处理资源,处理器的处理资源包括处理器核心;
资源自适应调度模块,被设置为根据与每个待分配业务对应的操作系统以及资源分配结果,将处理器的处理资源分配给第一操作系统和第二操作系统。
在本实施例中,第一操作系统和第二操作系统可以与前述实施例中类似,在此不做赘述,业务管理模块、资源动态分配模块和资源自适应调度模块可以是运行在第一操作系统或者第二操作系统下的软件模组,通过进行上述模块划分,可以方便进行不同功能模块的开发与维护,同时,对于资源动态分配规则,通过对资源动态分配规则进行灵活设置,提高资源分配的灵活性。
通过上述嵌入式系统,将处理器的处理资源分配给第一操作系统和第二操作系统,解决了相关技术中存在由于多核处理器多数的处理资源处于空闲状态导致的核心资源的整体利用率较低的问题,提高了处理资源的利用率。
在本实施例中还提供了一种操作系统的运行控制装置,该装置被设置为实现上述实施例及可选实施方式,已经进行过说明的不再赘述。如以下所使用的,术语“模块”可以实现预定功能的软件和/或硬件的组合。尽管以下实施例所描述的装置较佳地以软件来实现,但是硬件,或者软件和硬件的组合的实现也是可能并被构想的。
图17是根据本申请实施例的操作系统的运行控制装置的结构框图,如图17所示,该装置包括:
第一检测模块1702,被设置为检测第一操作系统在运行过程中的运行状态,其中,第一操作系统和第二操作系统基于处理器运行;
控制模块1704,被设置为根据运行状态控制第一操作系统所使用的处理器资源。
通过上述装置,第一操作系统和第二操作系统基于处理器运行,第一检测模块检测第一操作系统在运行过程中的运行状态,控制模块根据该运行状态对第一操作系统所使用的处理器资源进行控制。由于第一操作系统和第二操作系统均是基于同一个处理器运行,避免了硬件器件的增加和部署,降低了系统成本,并且可以在操作系统运行的过程中对其使用的处理器资源进行控制,从而合理利用处理器资源支持系统之间的运行,因此,可以解决操作系统的运行效率较低的技术问题,达到了提高操作系统的运行效率的技术效果。
在一个示例性实施例中,第一检测模块,被设置为以下至少之一:
检测第一操作系统基于处理器所运行的目标操作业务的业务状态,其中,运行状态包括业务状态;
检测第一操作系统的系统状态,其中,运行状态包括系统状态,第一操作系统基于处理器中的目标处理器核心运行。
在一个示例性实施例中,第一检测模块,被设置为以下至少之一:
在检测到业务状态为目标业务状态的情况下,释放目标操作业务,其中,处理器资源包括目标操作业务,第二操作系统用于运行目标操作业务;
在检测到系统状态为目标系统状态的情况下,释放目标处理器核心,其中,处理器资源包括目标处理器核心,第二操作系统用于将目标处理器核心添加至第二操作系统的调度资源池中,调度资源池中包括处理器中为第二操作系统分配的处理器核心。
在一个示例性实施例中,上述装置还包括:
第一确定模块,被设置为在获取到第二操作系统向第一操作系统发送的第一中断请求的情况下,确定检测到业务状态为目标业务状态,其中,第一中断请求用于请求接管目标操作业务;或者,
第二确定模块,被设置为在目标操作业务的业务属性达到目标业务属性的情况下,确定检测到业务状态为目标业务状态。
在一个示例性实施例中,第一检测模块,被设置为:
响应第一中断请求,确定是否由第二操作系统接管目标操作业务;
在由第二操作系统接管目标操作业务的情况下,释放目标操作业务。
在一个示例性实施例中,上述装置还包括:
第一发送模块,被设置为在确定是否由第二操作系统接管目标操作业务之后,在不由第二操作系统接管目标操作业务的情况下,向第二操作系统发送第二中断请求,其中,第二中断请求用于指示拒绝第二操作系统接管目标操作业务。
在一个示例性实施例中,上述装置还包括:
第二发送模块,被设置为在检测到业务状态为目标业务状态的情况下,释放目标操作业务之后,向第二操作系统发送第三中断请求,其中,第三中断请求用于指示已释放目标操作业务,第二操作系统用于响应第三中断请求运行目标操作业务。
在一个示例性实施例中,上述装置还包括:
第三确定模块,被设置为在获取到第二操作系统向第一操作系统发送的第四中断请求的情况下,确定检测到系统状态为目标系统状态,其中,第四中断请求用于请求占用目标处理器核心;或者,
第四确定模块,被设置为在第一操作系统的系统属性达到目标系统属性的情况下,确定检测到系统状态为目标系统状态。
在一个示例性实施例中,第三确定模块,被设置为:
响应第四中断请求,确定是否由第二操作系统占用目标处理器核心;
在由第二操作系统占用目标处理器核心的情况下,释放目标处理器核心。
在一个示例性实施例中,上述装置还包括:
第三发送模块,被设置为在确定是否由第二操作系统占用目标处理器核心之后,在不由第二操作系统占用目标处理器核心的情况下,向第二操作系统发送第五中断请求,其中,第五中断请求用于指示拒绝第二操作系统占用目标处理器核心。
在一个示例性实施例中,上述装置还包括:
第四发送模块,被设置为在检测到系统状态为目标系统状态的情况下,释放目标处理器核心之后,向第二操作系统发送第六中断请求,其中,第六中断请求用于指示第一操作系统已释放目标处理器核心,第二操作系统用于响应第六中断请求将目标处理器核心添加至调度资源池中。
在一个示例性实施例中,上述装置还包括:
第二检测模块,被设置为在处理器中的目标处理器核心已被添加至第二操作系统的调度资源池中,且,第一操作系统被唤醒运行的情况下,检测目标处理器核心是否被释放,其中,调度资源池中包括处理器中为第二操作系统分配的处理器核心;
运行模块,被设置为在检测到第二操作系统在第一操作系统被唤醒时已释放目标处理器核心的情况下,基于目标处理器核心运行第一操作系统。
在一个示例性实施例中,上述装置还包括:
第五发送模块,被设置为在检测目标处理器核心是否被释放之后,在检测到目标处理器核心未被释放的情况下,向第二操作系统发送第七中断请求,其中,第七中断请求用于请求第二操作系统释放目标处理器核心,第二操作系统用于响应第七中断请求释放目标处理器核心。
在一个示例性实施例中,上述装置还包括:
获取模块,被设置为获取第一操作系统基于处理器运行的过程中产生的业务数据;
存储模块,被设置为将业务数据存储至处理器上的存储空间;
第六发送模块,被设置为向第二操作系统发送第八中断请求,其中,第八中断请求用于请求第二操作系统从存储空间读取业务数据,第二操作系统用于响应第八中断请求从存储空间读取业务数据。
在一个示例性实施例中,上述装置还包括:
第一控制模块,被设置为控制第一操作系统基于处理器周期性运行;或者,
响应模块,被设置为响应接收到的唤醒请求,控制第一操作系统基于处理器运行;或者,
第二控制模块,被设置为根据处理器上所产生的操作业务与第一操作系统之间的匹配度,控制第一操作系统基于处理器运行。
在一个示例性实施例中,第二控制模块,被设置为:
检测处理器上所产生的当前操作业务的业务信息;
在检测到业务信息与第一操作系统之间的匹配度高于匹配度阈值的情况下,控制第一操作系统基于处理器运行当前操作业务。
在一个示例性实施例中,第二控制模块,被设置为:
检测当前操作业务的目标响应速度,和/或,目标资源占用量,其中,业务信息包括:目标响应速度,和/或,资源占用量,目标响应速度是当前操作业务需要处理器达到的响应速度,目标资源占用量是当前操作业务需要处理器提供的资源量;
在目标响应速度小于或者等于速度阈值,和/或,目标资源占用量小于或者等于占用量阈值的情况下,确定业务信息与第一操作系统之间的匹配度高于匹配度阈值。
在一个示例性实施例中,上述装置还包括:
第三控制模块,被设置为控制第一操作系统在运行结束后进行休眠。
在一个示例性实施例中,上述装置还包括:
通知模块,被设置为在控制第一操作系统在运行结束后进行休眠之后,通知第二操作系统允许占用第一操作系统所使用的处理器核心,其中,第二操作系统用于在第一操作系统休眠期间将第一操作系统所使用的目标处理器核心添加至第二操作系统的调度资源池中,调度资源池中包括处理器中除目标处理器核心以外的其他处理器。
在一个示例性实施例中,上述装置还包括:
监控模块,被设置为监控第二操作系统上执行的操作业务;
接管模块,被设置为在监控到第二操作系统上执行的操作业务中存在异常操作业务的情况下,通过第一操作系统接管异常操作业务。
在一个示例性实施例中,监控模块,被设置为:
接收第二操作系统上执行的每个操作业务的心跳信号;
将心跳信号的频率不符合所对应的目标频率的操作业务确定为异常操作业务。
在一个示例性实施例中,上述装置还包括:
第七发送模块,被设置为在通过第一操作系统接管异常操作业务之后,向第二操作系统发送重启指令,其中,重启指令用于指示重启异常操作业务。
在一个示例性实施例中,上述装置还包括:
第一引导模块,被设置为引导第一操作系统启动;
第二引导模块,被设置为引导第二操作系统启动。
需要说明的是,上述各个模块是可以通过软件或硬件来实现的,对于后者,可以通过以下方式实现,但不限于此:上述模块均位于同一处理器中;或者,上述各个模块以任意组合的形式分别位于不同的处理器中。
本申请的实施例还提供了一种芯片,其中,该芯片包括可编程逻辑电路以及可执行指令中的至少之一,该芯片在电子设备中运行,被设置为实现上述任一项方法实施例中的步骤。
本申请的实施例还提供了一种BMC芯片,其中,该BMC芯片可以包括:存储单元以及与存储单元连接的处理单元。存储单元被设置为存储程序,而处理单元被设置为运行该程序,以执行上述任一项方法实施例中的步骤。
本申请的实施例还提供了一种主板,其中,该主板包括:至少一个处理器;至少一个存储器,被设置为存储至少一个程序;当至少一个程序被至少一个处理器执行,使得至少一个处理器实现上述任一项方法实施例中的步骤。
本申请的实施例还提供了一种服务器,其中,包括处理器、通信接口、存储器和通信总线,其中,处理器,通信接口,存储器通过通信总线完成相互间的通信,存储器,被设置为存放计算机程序;处理器,被设置为执行存储器上所存放的程序时,实现上述任一项方法实施例中的步骤,以达到相同的技术效果。
上述服务器的通信总线可以是PCI(Peripheral Component Interconnect,外设部件互连标准)总线或EISA(Extended Industry Standard Architecture,扩展工业标准结构)总线等。该通信总线可以分为地址总线、数据总线、控制总线等。通信接口被设置为上述服务器与其他设备之间的通信。
存储器可以包括RAM(Random Access Memory,随机存取存储器),也可以包括NVM(Non-Volatile Memory,非易失性存储器),例如至少一个磁盘存储器。可选的,存储器还可以是至少一个位于远离前述处理器的存储装置。上述的处理器可以是通用处理器,包括CPU(Central Processing Unit,中央处理器)、NP(Network Processor,网络处理器)等;还可以是DSP(Digital Signal Processing,数字信号处理器)、ASIC(Application Specific Integrated Circuit,专用集成电路)、FPGA(Field Programmable Gate Array,现场可编程门阵列)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
针对服务器而言,服务器至少具有可扩展性高和稳定性高的特性,其中,由于企业网络是不可能长久不变的,在网络信息化的今天,如果服务器没有一定的可扩展性,则会影响到企业之后的发展,影响到企业的使用,因此可扩展性成为最为基本的特性,只有拥有较高的可扩展性,才能保证后期更好的利用。可扩展性除了硬件上的可扩展性之外,还包含软件上的可扩展性,由于服务器的功能与计算机相比较而言还是十分复杂,因此不仅仅是硬件配置,软件配置也是很重要的,想要实现更多的功能,没有全面的软件支持是没有办法想象的。
此外,由于服务器需要处理大量的数据用以支撑业务的持续性运行,因此服务器还有一个很重要的特征,即为稳定性高,如果服务器的数据传输不能稳定运行,则无疑会对业务开展造成极大的影响。
本申请的方案依据检测到的第一操作系统在运行过程中的运行状态来控制第一操作系统所使用的处理器资源,使得服务器能够合理分配处理器资源,进而依托于分配的资源进行更加合理的性能扩展,此外,根据为第一操作系统分配的操作业务和/或处理器核心控制第一操作系统的运行,使得服务器无论是对软件资源进行扩展还是对硬件资源进行扩展都能够进行合理的调度和控制,提高了服务器的可扩展性。另外,通过对处理器资源和操作系统的合理调度,能够使得服务器的运行更加稳定,提高了服务器的稳定性。
本申请的实施例还提供了一种非易失性可读存储介质,该非易失性可读存储介质中存储有计算机程序,其中,该计算机程序被设置为运行时执行上述任一项方法实施例中的步骤。
在一个示例性实施例中,上述非易失性可读存储介质可以包括但不限于:U盘、只读存储器(Read-Only Memory,简称为ROM)、随机存取存储器(Random Access Memory,简称为RAM)、移动硬盘、磁碟或者光盘等各种可以存储计算机程序的介质。
本申请的实施例还提供了一种电子设备,包括存储器和处理器,该存储器中存储有计算机程序,该处理器被设置为运行计算机程序以执行上述任一项方法实施例中的步骤。
在一个示例性实施例中,上述电子设备还可以包括传输设备以及输入输出设备,其中,该传输设备和上述处理器连接,该输入输出设备和上述处理器连接。
本实施例中的可选示例可以参考上述实施例及示例性实施方式中所描述的示例,本实施例在此不再赘述。
显然,本领域的技术人员应该明白,上述的本申请的各模块或各步骤可以用通用的计算装置来实现,它们可以集中在单个的计算装置上,或者分布在多个计算装置所组成的网络上,它们可以用计算装置可执行的程序代码来实现,从而,可以将它们存储在存储装置中由计算装置来执行,并且在某些情况下,可以
以不同于此处的顺序执行所示出或描述的步骤,或者将它们分别制作成各个集成电路模块,或者将它们中的多个模块或步骤制作成单个集成电路模块来实现。这样,本申请不限制于任何特定的硬件和软件结合。
以上仅为本申请的可选实施例而已,并不用于限制本申请,对于本领域的技术人员来说,本申请可以有各种更改和变化。凡在本申请的原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。
Claims (66)
- 一种嵌入式系统,其特征在于,包括:芯片和至少两个操作系统,其中,所述芯片包括处理器、硬件控制器、第一总线和第二总线,其中,所述第一总线的带宽高于所述第二总线带宽,且所述第一总线被配置为多主多从模式,所述第二总线被配置为一主多从模式;所述至少两个操作系统基于所述处理器运行;所述至少两个操作系统通过所述第一总线进行通信;所述至少两个操作系统通过所述第二总线实现对硬件控制器的控制。
- 根据权利要求1所述的嵌入式系统,其特征在于,所述至少两个操作系统包括第一操作系统和第二操作系统,其中,所述第一操作系统基于所述处理器控制目标硬件控制器运行目标操作业务;所述第一操作系统在所述目标操作业务运行到目标业务状态时通过所述第二总线释放所述目标硬件控制器;所述第二操作系统通过所述第二总线控制所述目标硬件控制器运行所述目标操作业务。
- 根据权利要求2所述的嵌入式系统,其特征在于,所述第二操作系统通过所述第一总线向所述第一操作系统发送第一中断请求,其中,所述第一中断请求用于请求接管所述目标硬件控制器;所述第一操作系统响应所述第一中断请求通过所述第二总线释放所述目标硬件控制器;或者,所述第一操作系统在所述目标操作业务的业务属性达到目标业务属性时通过所述第二总线释放所述目标硬件控制器。
- 根据权利要求3所述的嵌入式系统,其特征在于,所述第一操作系统响应所述第一中断请求确定是否由所述第二操作系统接管所述目标硬件控制器;所述第一操作系统在由所述第二操作系统接管所述目标硬件控制器的情况下通过所述第二总线释放所述目标硬件控制器。
- 根据权利要求4所述的嵌入式系统,其特征在于,所述第一操作系统在不由所述第二操作系统接管所述目标硬件控制器的情况下通过所述第一总线向所述第二操作系统发送第二中断请求,其中,所述第二中断请求用于指示拒绝所述第二操作系统接管所述目标硬件控制器。
- 根据权利要求2所述的嵌入式系统,其特征在于,所述第一操作系统向所述第二操作系统发送第三中断请求,其中,所述第三中断请求用于指示所述第一操作系统已释放所述目标硬件控制器;所述第二操作系统响应所述第三中断请求通过所述第二总线控制所述目标硬件控制器运行所述目标操作业务。
- 根据权利要求1至6中任一项所述的嵌入式系统,其特征在于,所述至少两个操作系统包括第一操作系统和第二操作系统,其中,所述第一操作系统基于所述处理器中的目标处理器核心运行;所述第一操作系统在运行到目标系统状态时释放所述目标处理器核心;所述第二操作系统将所述目标处理器核心添加至所述第二操作系统的调度资源池中,其中,所述调度资源池中包括所述处理器中为所述第二操作系统分配的处理器核心。
- 根据权利要求7所述的嵌入式系统,其特征在于,所述第二操作系统通过所述第一总线向所述第一操作系统发送第四中断请求,其中,所述第四中断请求用于请求占用所述目标处理器核心;所述第一操作系统响应所述第四中断请求释放所述目标处理器核心;或者,所述第一操作系统在系统属性达到目标系统属性时释放所述目标处理器核心。
- 根据权利要求8所述的嵌入式系统,其特征在于,所述第一操作系统响应所述第四中断请求确定是否由所述第二操作系统占用所述目标处理器核心;所述第一操作系统在由所述第二操作系统占用所述目标处理器核心的情况下释放所述目标处理器核心。
- 根据权利要求9所述的嵌入式系统,其特征在于,所述第一操作系统在不由所述第二操作系统占用所述目标处理器核心的情况下通过所述第一总线向所述第二操作系统发送第五中断请求,其中,所述第五中断请求用于指示拒绝所述第二操作系统占用所述目标处理器核心。
- 根据权利要求7所述的嵌入式系统,其特征在于,所述第一操作系统向所述第二操作系统发送第六中断请求,其中,所述第六中断请求用于指示所述第一操作系统已释放所述目标处理器核心;所述第二操作系统响应所述第六中断请求将所述目标处理器核心添加至所述调度资源池中。
- 根据权利要求1所述的嵌入式系统,其特征在于,所述至少两个操作系统包括第一操作系统和第二操作系统,其中,所述处理器中的目标处理器核心已被添加至所述第二操作系统的调度资源池中,其中,所述调度资源池中包括所述处理器中为所述第二操作系统分配的处理器核心;所述第二操作系统在所述第一操作系统被唤醒时释放所述目标处理器核心;所述第一操作系统基于所述目标处理器核心运行。
- 根据权利要求12所述的嵌入式系统,其特征在于,所述第二操作系统在检测到所述第一操作系统被唤醒时释放所述目标处理器核心;或者,所述第一操作系统在被唤醒时向所述第二操作系统发送第七中断请求,其中,所述第七中断请求用于请求所述第二操作系统释放所述目标处理器核心;所述第二操作系统响应所述第七中断请求释放所述目标处理器核心。
- 根据权利要求1所述的嵌入式系统,其特征在于,所述至少两个操作系统包括第一操作系统和第二操作系统,所述芯片还包括存储空间,所述至少两个操作系统通过所述第一总线控制所述存储空间,其中,所述第一操作系统基于所述处理器运行的过程中产生业务数据;所述第一操作系统通过所述第一总线将所述业务数据存储至所述存储空间,并通过所述第一总线向所述第二操作系统发送第八中断请求,其中,所述第八中断请求用于请求所述第二操作系统从所述存储空间读取所述业务数据;所述第二操作系统响应所述第八中断请求从所述存储空间读取所述业务数据。
- 根据权利要求1所述的嵌入式系统,其特征在于,所述第一操作系统基于所述处理器周期性运行;或者,所述第一操作系统响应接收到的唤醒请求基于所述处理器运行;或者,所述第一操作系统根据所述处理器上所产生的当前操作业务与所述第一操作系统之间的匹配度基于所述处理器运行。
- 根据权利要求15所述的嵌入式系统,其特征在于,所述第一操作系统在运行结束后进行休眠;所述第二操作系统在所述第一操作系统休眠期间将所述第一操作系统所使用的目标处理器核心添加至所述第二操作系统的调度资源池中,其中,所述调度资源池中包括所述处理器中除所述目标处理器核心以外的其他处理器。
- 根据权利要求1所述的嵌入式系统,其特征在于,所述至少两个操作系统通过所述第一总线部署的通信协议进行通信;或者,所述至少两个操作系统通过所述第一总线,所述第二总线和所述硬件控制器中的通信硬件控制器进行通信。
- 根据权利要求17所述的嵌入式系统,其特征在于,所述至少两个操作系统通过所述第一总线发送处理器间中断请求进行通信;或者,所述至少两个操作系统中的一个操作系统向所述第一总线发送系统中断请求;所述第一总线将所述系统中断请求转发至所述第二总线;所述第二总线向所述通信硬件控制器所控制的邮箱硬件模块发送系统中断请求;所述邮箱硬件模块通过所述第二总线和所述第一总线将所述系统中断请求发送至所述至少两个操作系统中的另一个操作系统。
- 根据权利要求1所述的嵌入式系统,其特征在于,所述至少两个操作系统包括第一操作系统和第二操作系统,其中,所述第一操作系统通过所述第一总线监控所述第二操作系统上执行的操作业务;所述第一操作系统在所述第二操作系统上执行的操作业务中存在异常操作业务时,通过所述第一总线接管所述异常操作业务。
- 根据权利要求19所述的嵌入式系统,其特征在于,所述第一操作系统通过所述第一总线接收所述第二操作系统上执行的操作业务的心跳信号;所述第一操作系统通过所述第一总线将所述心跳信号的频率不符合所对应的目标频率的操作业务作为所述异常操作业务接管。
- 根据权利要求19所述的嵌入式系统,其特征在于,所述第一操作系统在接管所述异常操作业务之后,通过所述第一总线向所述第二操作系统发送重启指令,其中,所述重启指令用于指示重启所述异常操作业务。
- 根据权利要求1所述的嵌入式系统,其特征在于,所述芯片还包括:存储器,所述存储器中存储了启动引导模块,所述芯片上电后运行所述启动引导模块引导所述至少两个操作系统中的一个操作系统启动,所述启动引导模块引导所述至少两个操作系统中的其他操作系统启动。
- 根据权利要求1所述的嵌入式系统,其特征在于,所述至少两个操作系统包括第一操作系统和第二操作系统,其中,所述第一操作系统基于所述处理器控制目标硬件控制器运行目标操作业务;所述第一操作系统在所述目标操作业务运行到目标业务状态时通过所述第二总线释放所述目标硬件控制器;所述第二操作系统通过所述第二总线控制所述目标硬件控制器运行所述目标操作业务;所述第一操作系统基于所述处理器中的目标处理器核心运行;所述第一操作系统在运行到目标系统状态时释放所述目标处理器核心;所述第二操作系统将所述目标处理器核心添加至所述第二操作系统的调度资源池中,其中,所述调度资源池中包括所述处理器中为所述第二操作系统分配的处理器核心;所述芯片还包括存储空间,所述至少两个操作系统通过所述第一总线控制所述存储空间,其中,所述第一操作系统基于所述处理器运行的过程中产生业务数据;所述第一操作系统通过所述第一总线将所述业务数据存储至所述存储空间,并通过所述第一总线向所述第二操作系统发送第八中断请求,其中,所述第八中断请求用于请求所述第二操作系统从所述存储空间读取所述业务数据;所述第二操作系统响应所述第八中断请求从所述存储空间读取所述业务数据。
- 根据权利要求1所述的嵌入式系统,其特征在于,所述至少两个操作系统包括第一操作系统和第二操作系统,其中,所述芯片将通信值装载至所述第一总线,所述第一总线将携带有所述通信值的通信信号发送至所述第 二操作系统对应的通信寄存器,以实现所述第一操作系统和所述第二操作系统之间的通信,其中,所述通信值用于指示所述第一操作系统和所述第二操作系统之间的通信内容。
- 根据权利要求1所述的嵌入式系统,其特征在于,所述芯片将控制值装载至所述第二总线,所述第二总线将携带有所述控制值的控制信号发送至所述硬件控制器对应的寄存器,以实现操作系统对所述硬件控制器的控制,其中,所述控制值用于指示操作系统对所述硬件控制器的控制内容。
- 一种嵌入式系统,其特征在于,包括:第一操作系统,第二操作系统,控制器和处理器,其中,所述第一操作系统和所述第二操作系统基于所述处理器运行,所述控制器用于检测所述第一操作系统在运行过程中的运行状态,并根据所述运行状态控制所述第一操作系统所使用的处理器资源。
- 根据权利要求26所述的嵌入式系统,其特征在于,所述控制器,用于以下至少之一:检测所述第一操作系统基于所述处理器所运行的目标操作业务的业务状态,其中,所述运行状态包括所述业务状态;检测所述第一操作系统的系统状态,其中,所述运行状态包括所述系统状态,所述第一操作系统基于所述处理器中的目标处理器核心运行。
- 根据权利要求27所述的嵌入式系统,其特征在于,所述控制器,用于在检测到所述业务状态为目标业务状态的情况下,释放所述目标操作业务,其中,所述处理器资源包括所述目标操作业务;所述第二操作系统,用于运行所述目标操作业务;和/或,所述控制器,用于在检测到所述系统状态为目标系统状态的情况下,释放所述目标处理器核心,其中,所述处理器资源包括所述目标处理器核心;所述第二操作系统,用于将所述目标处理器核心添加至所述第二操作系统的调度资源池中,所述调度资源池中包括所述处理器中为所述第二操作系统分配的处理器核心。
- 根据权利要求28所述的嵌入式系统,其特征在于,所述嵌入式系统还包括:运行于所述第一操作系统上的第一业务交互线程,运行于所述第二操作系统上的第二业务交互线程,其中,所述控制器,用于在获取到所述第二业务交互线程向所述第一业务交互线程发送的第一中断请求的情况下,确定检测到所述业务状态为所述目标业务状态,其中,所述第一中断请求用于请求接管所述目标操作业务;或者,所述控制器,用于在所述目标操作业务的业务属性达到目标业务属性的情况下,确定检测到所述业务状态为所述目标业务状态。
- 根据权利要求29所述的嵌入式系统,其特征在于,所述控制器,用于:响应所述第一中断请求,确定是否由所述第二操作系统接管所述目标操作业务;在由所述第二操作系统接管所述目标操作业务的情况下,释放所述目标操作业务。
- 根据权利要求30所述的嵌入式系统,其特征在于,所述嵌入式系统还包括:运行于所述第一操作系统上的第一业务交互线程,运行于所述第二操作系统上的第二业务交互线程,其中,所述第一业务交互线程,用于在不由所述第二操作系统接管所述目标操作业务的情况下,向所述第二业务交互线程发送第二中断请求,其中,所述第二中断请求用于指示拒绝所述第二操作系统接管所述目标操作业务。
- 根据权利要求28所述的嵌入式系统,其特征在于,所述嵌入式系统还包括:运行于所述第一操作系统上的第一业务交互线程,运行于所述第二操作系统上的第二业务交互线程,其中,所述第一业务交互线程,用于向所述第二业务交互线程发送第三中断请求,其中,所述第三中断请求用于指示已释放所述目标硬件控制器;所述第二操作系统,用于响应所述第三中断请求控制所述目标硬件控制器运行所述目标操作业务。
- 根据权利要求28所述的嵌入式系统,其特征在于,所述嵌入式系统还包括:运行于所述第一操作系统上的第一业务交互线程,运行于所述第二操作系统上的第二业务交互线程,其中,所述控制器,用于在获取到所述第二业务交互线程向所述第一业务交互线程发送的第四中断请求的情况下,确定检测到所述系统状态为所述目标系统状态,其中,所述第四中断请求用于请求占用所述目标处理器核心;或者,所述控制器,用于在所述第一操作系统的系统属性达到目标系统属性的情况下,确定检测到所述系统状态为所述目标系统状态。
- 根据权利要求33所述的嵌入式系统,其特征在于,所述控制器,用于:响应所述第四中断请求,确定是否由所述第二操作系统占用所述目标处理器核心;在由所述第二操作系统占用所述目标处理器核心的情况下,释放所述目标处理器核心。
- 根据权利要求34所述的嵌入式系统,其特征在于,所述嵌入式系统还包括:运行于所述第一操作系统上的第一业务交互线程,运行于所述第二操作系统上的第二业务交互线程,其中,所述第一业务交互线程,用于在不由所述第二操作系统占用所述目标处理器核心的情况下,向所述第二业务交互线程发送第五中断请求,其中,所述第五中断请求用于指示拒绝所述第二操作系统占用所述目标处理器核心。
- 根据权利要求28所述的嵌入式系统,其特征在于,所述嵌入式系统还包括:运行于所述第一操作系统上的第一业务交互线程,运行于所述第二操作系统上的第二业务交互线程,其中,所述第一业务交互线程,用于向所述第二业务交互线程发送第六中断请求,其中,所述第六中断请求用于指示所述第一操作系统已释放所述目标处理器核心;所述第二操作系统,用于响应所述第八中断请求将所述目标处理器核心添加至所述调度资源池中。
- 根据权利要求26所述的嵌入式系统,其特征在于,所述控制器,还用于:在所述处理器中的目标处理器核心已被添加至所述第二操作系统的调度资源池中,且,所述第一操作系统被唤醒运行的情况下,检测所述目标处理器核心是否被释放,其中,所述调度资源池中包括所述处理器中为所述第二操作系统分配的处理器核心;在检测到所述第二操作系统在所述第一操作系统被唤醒时已释放所述目标处理器核心的情况下,基于所述目标处理器核心运行所述第一操作系统。
- 根据权利要求37所述的嵌入式系统,其特征在于,所述嵌入式系统还包括:运行于所述第一操作系统上的第一业务交互线程,运行于所述第二操作系统上的第二业务交互线程,其中,所述第一业务交互线程,用于在检测到所述目标处理器核心未被释放的情况下,向所述第二业务交互线程发送第七中断请求,其中,所述第七中断请求用于请求所述第二操作系统释放所述目标处理器核心;所述第二操作系统,用于响应所述第七中断请求释放所述目标处理器核心。
- 根据权利要求26至38中任一项所述的嵌入式系统,其特征在于,所述嵌入式系统还包括:运行于所述第一操作系统上的第一业务交互线程,运行于所述第二操作系统上的第二业务交互线程,其中,所述第一业务交互线程,用于获取所述第一操作系统基于所述处理器运行的过程中产生的业务数据;将所述业务数据存储至所述处理器上的存储空间;向所述第二业务交互线程发送第八中断请求,其中,所述第八中断请求用于请求所述第二操作系统从所述存储空间读取所述业务数据;所述第二操作系统,用于响应所述第八中断请求从所述存储空间读取所述业务数据。
- 根据权利要求26所述的嵌入式系统,其特征在于,所述控制器还用于:控制所述第一操作系统基于所述处理器周期性运行;或者,响应接收到的唤醒请求,控制所述第一操作系统基于所述处理器运行;或者,根据所述处理器上所产生的操作业务与所述第一操作系统之间的匹配度,控制所述第一操作系统基于所述处理器运行。
- 根据权利要求40所述的嵌入式系统,其特征在于,所述控制器,用于:检测所述处理器上所产生的当前操作业务的业务信息;在检测到所述业务信息与所述第一操作系统之间的所述匹配度高于匹配度阈值的情况下,控制所述第一操作系统基于所述处理器运行所述当前操作业务。
- 根据权利要求41所述的嵌入式系统,其特征在于,所述控制器,用于:检测所述当前操作业务的目标响应速度,和/或,目标资源占用量,其中,所述业务信息包括:目标响应速度,和/或,资源占用量,所述目标响应速度是所述当前操作业务需要所述处理器达到的响应速度,所述目标资源占用量是所述当前操作业务需要所述处理器提供的资源量;在所述目标响应速度小于或者等于速度阈值,和/或,所述目标资源占用量小于或者等于占用量阈值的情况下,确定所述业务信息与所述第一操作系统之间的所述匹配度高于所述匹配度阈值。
- 根据权利要求40所述的嵌入式系统,其特征在于,所述控制器,还用于:控制所述第一操作系统在运行结束后进行休眠。
- The embedded system according to claim 43, wherein the embedded system further comprises: a first service interaction thread running on the first operating system and a second service interaction thread running on the second operating system, wherein the first service interaction thread is configured to notify the second service interaction thread that occupation of the processor core used by the first operating system is allowed; and the second operating system is configured to add, during hibernation of the first operating system, the target processor core used by the first operating system to a scheduling resource pool of the second operating system, the scheduling resource pool comprising processor cores in the processor other than the target processor core.
- The embedded system according to claim 26, wherein the embedded system further comprises: a service takeover thread running on the first operating system, wherein the service takeover thread is configured to: monitor operation services executed on the second operating system; and take over an abnormal operation service in a case where it is detected through monitoring that the abnormal operation service exists among the operation services executed on the second operating system.
- The embedded system according to claim 45, wherein the service takeover thread is configured to: receive a heartbeat signal of each operation service executed on the second operating system; and determine an operation service whose heartbeat signal frequency does not conform to the corresponding target frequency as the abnormal operation service.
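The claim above identifies abnormal services by their heartbeat: a service whose observed heartbeat rate strays from its target frequency is flagged and taken over. A minimal detector is sketched below; the tolerance, periods, and service names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical heartbeat check for the service takeover thread. */

typedef struct {
    const char *name;
    uint32_t target_period_ms;   /* expected heartbeat period */
    uint32_t observed_period_ms; /* measured from recent heartbeats */
} service_hb_t;

#define TOLERANCE_PCT 50u        /* assumed allowed deviation */

static bool is_abnormal(const service_hb_t *s)
{
    uint32_t tol  = s->target_period_ms * TOLERANCE_PCT / 100u;
    uint32_t diff = s->observed_period_ms > s->target_period_ms
                  ? s->observed_period_ms - s->target_period_ms
                  : s->target_period_ms - s->observed_period_ms;
    return diff > tol;   /* heartbeat frequency does not match its target */
}

int main(void)
{
    service_hb_t svc[] = {
        { "fan-ctl",  100, 105 },
        { "log-sync", 500, 2000 },
    };
    for (unsigned i = 0; i < sizeof svc / sizeof svc[0]; i++)
        printf("%s abnormal: %s\n", svc[i].name, is_abnormal(&svc[i]) ? "yes" : "no");
    return 0;
}
```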
- The embedded system according to claim 45, wherein the service takeover thread is further configured to: send a restart instruction to the second operating system after the abnormal operation service is taken over by the first operating system, wherein the restart instruction is used for instructing to restart the abnormal operation service.
- The embedded system according to claim 26, wherein the embedded system further comprises a boot guide module, wherein the boot guide module is configured to: guide the first operating system to start; and guide the second operating system to start.
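The claim above adds a boot guide module that brings up both operating systems. The sketch below shows one possible ordering, first operating system first and second operating system afterwards; the ordering and the loader helpers are assumptions, since the claim itself only requires that both systems be guided to start.

```c
#include <stdio.h>

/* Hypothetical boot-guide sketch; the helpers are illustrative placeholders. */

static void load_and_start_os1(void) { printf("boot first operating system\n"); }
static void load_and_start_os2(void) { printf("boot second operating system\n"); }

static void boot_guide(void)
{
    load_and_start_os1();   /* guide the first operating system to start */
    load_and_start_os2();   /* guide the second operating system to start */
}

int main(void) { boot_guide(); return 0; }
```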
- A method for controlling running of an operating system, comprising: detecting a running state of a first operating system during running, wherein the first operating system and a second operating system run based on a processor; and controlling, according to the running state, processor resources used by the first operating system.
- The method according to claim 49, wherein detecting the running state of the first operating system during running comprises at least one of the following: detecting a service state of a target operation service run by the first operating system based on the processor, wherein the running state comprises the service state; and detecting a system state of the first operating system, wherein the running state comprises the system state, and the first operating system runs based on a target processor core in the processor.
- The method according to claim 50, wherein controlling, according to the running state, the processor resources used by the first operating system comprises at least one of the following: releasing the target operation service in a case where it is detected that the service state is a target service state, wherein the processor resources comprise the target operation service, and the second operating system is configured to run the target operation service; and releasing the target processor core in a case where it is detected that the system state is a target system state, wherein the processor resources comprise the target processor core, the second operating system is configured to add the target processor core to a scheduling resource pool of the second operating system, and the scheduling resource pool comprises processor cores in the processor that are allocated to the second operating system.
- The method according to claim 51, wherein the method further comprises: determining that the service state is detected to be the target service state in a case where a first interrupt request sent by the second operating system to the first operating system is acquired, wherein the first interrupt request is used for requesting to take over the target operation service; or determining that the service state is detected to be the target service state in a case where a service attribute of the target operation service reaches a target service attribute.
- The method according to claim 52, wherein releasing the target operation service in a case where it is detected that the service state is the target service state comprises: determining, in response to the first interrupt request, whether the target operation service is to be taken over by the second operating system; and releasing the target operation service in a case where the target operation service is to be taken over by the second operating system.
- The method according to claim 51, wherein the method further comprises: determining that the system state is detected to be the target system state in a case where a fourth interrupt request sent by the second operating system to the first operating system is acquired, wherein the fourth interrupt request is used for requesting to occupy the target processor core; or determining that the system state is detected to be the target system state in a case where a system attribute of the first operating system reaches a target system attribute.
- The method according to claim 54, wherein releasing the target processor core in a case where it is detected that the system state is the target system state comprises: determining, in response to the fourth interrupt request, whether the target processor core is to be occupied by the second operating system; and releasing the target processor core in a case where the target processor core is to be occupied by the second operating system.
- The method according to any one of claims 49 to 55, wherein the method further comprises: acquiring service data generated by the first operating system in a process of running based on the processor; storing the service data into a storage space on the processor; and sending an eighth interrupt request to the second operating system, wherein the eighth interrupt request is used for requesting the second operating system to read the service data from the storage space, and the second operating system is configured to read the service data from the storage space in response to the eighth interrupt request.
- The method according to claim 49, wherein the method further comprises: controlling the first operating system to run periodically based on the processor; or controlling, in response to a received wake-up request, the first operating system to run based on the processor; or controlling, according to a matching degree between an operation service generated on the processor and the first operating system, the first operating system to run based on the processor.
- The method according to claim 57, wherein controlling, according to the matching degree between the operation service generated on the processor and the first operating system, the first operating system to run based on the processor comprises: detecting service information of a current operation service generated on the processor; and controlling the first operating system to run the current operation service based on the processor in a case where it is detected that the matching degree between the service information and the first operating system is higher than a matching degree threshold.
- The method according to claim 49, wherein the method further comprises: monitoring operation services executed on the second operating system; and taking over, by the first operating system, an abnormal operation service in a case where it is detected through monitoring that the abnormal operation service exists among the operation services executed on the second operating system.
- An apparatus for controlling running of an operating system, comprising: a first detection module configured to detect a running state of a first operating system during running, wherein the first operating system and a second operating system run based on a processor; and a control module configured to control, according to the running state, processor resources used by the first operating system.
- A chip, wherein the chip comprises at least one of a programmable logic circuit and executable instructions, and the chip runs in an electronic device and is configured to implement the method according to any one of claims 49 to 59.
- A BMC chip, comprising: a storage unit and a processing unit connected to the storage unit, wherein the storage unit is configured to store a program, and the processing unit is configured to run the program so as to perform the method according to any one of claims 49 to 59.
- A mainboard, comprising: at least one processor; and at least one memory configured to store at least one program, wherein when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the method according to any one of claims 49 to 59.
- A server, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus; the memory is configured to store a computer program; and the processor is configured to implement the method according to any one of claims 49 to 59 when executing the program stored in the memory.
- A non-volatile readable storage medium, wherein a computer program is stored in the non-volatile readable storage medium, and when the computer program is executed by a processor, the steps of the method according to any one of claims 49 to 59 are implemented.
- An electronic device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the steps of the method according to any one of claims 49 to 59 when executing the computer program.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020237045143A KR20240159791A (ko) | 2023-04-28 | 2023-04-28 | Method and apparatus for controlling running of operating system, and embedded system and chip |
US18/549,718 US20250036463A1 (en) | 2023-04-28 | 2023-04-28 | Method and apparatus for controlling running of operating system, and embedded system and chip |
CN202380009034.4A CN116868167A (zh) | 2023-04-28 | 2023-04-28 | Method and apparatus for controlling running of operating system, and embedded system and chip |
PCT/CN2023/091864 WO2024221465A1 (zh) | 2023-04-28 | 2023-04-28 | Method and apparatus for controlling running of operating system, and embedded system and chip |
EP23817286.0A EP4478184A4 (en) | 2023-04-28 | 2023-04-28 | OPERATION CONTROL METHOD AND APPARATUS FOR OPERATING SYSTEM, EMBEDDED SYSTEM AND CHIP |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2023/091864 WO2024221465A1 (zh) | 2023-04-28 | 2023-04-28 | Method and apparatus for controlling running of operating system, and embedded system and chip |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024221465A1 true WO2024221465A1 (zh) | 2024-10-31 |
Family
ID=88234550
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/091864 WO2024221465A1 (zh) | 2023-04-28 | 2023-04-28 | 操作系统的运行控制方法和装置,以及嵌入式系统和芯片 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20250036463A1 (zh) |
EP (1) | EP4478184A4 (zh) |
KR (1) | KR20240159791A (zh) |
CN (1) | CN116868167A (zh) |
WO (1) | WO2024221465A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112380025A (zh) * | 2020-12-03 | 2021-02-19 | 王志平 | Process synchronization implementation method based on HSC |
CN117472588B (zh) * | 2023-12-27 | 2024-04-09 | 山东方寸微电子科技有限公司 | Hybrid software architecture for a network cryptographic device, and cryptographic device |
CN117555760B (zh) * | 2023-12-29 | 2024-04-12 | 苏州元脑智能科技有限公司 | Server monitoring method and apparatus, baseboard controller, and embedded system |
CN118885061A (zh) * | 2024-09-29 | 2024-11-01 | 山东云海国创云计算装备产业创新中心有限公司 | Control method and system for heat dissipation device, program product, and storage medium |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5473771A (en) * | 1993-09-01 | 1995-12-05 | At&T Corp. | Fault-tolerant processing system architecture |
US5706424A (en) * | 1995-10-23 | 1998-01-06 | Unisys Corporation | System for fast read and verification of microcode RAM |
US5904733A (en) * | 1997-07-31 | 1999-05-18 | Intel Corporation | Bootstrap processor selection architecture in SMP systems |
US6286110B1 (en) * | 1998-07-30 | 2001-09-04 | Compaq Computer Corporation | Fault-tolerant transaction processing in a distributed system using explicit resource information for fault determination |
US20020193989A1 (en) * | 1999-05-21 | 2002-12-19 | Michael Geilhufe | Method and apparatus for identifying voice controlled devices |
JP2005250833A (ja) * | 2004-03-04 | 2005-09-15 | Nec Electronics Corp | Bus system and access control method |
US7281082B1 (en) * | 2004-03-26 | 2007-10-09 | Xilinx, Inc. | Flexible scheme for configuring programmable semiconductor devices using or loading programs from SPI-based serial flash memories that support multiple SPI flash vendors and device families |
US20050240669A1 (en) * | 2004-03-29 | 2005-10-27 | Rahul Khanna | BIOS framework for accommodating multiple service processors on a single server to facilitate distributed/scalable server management |
US20050251806A1 (en) * | 2004-05-10 | 2005-11-10 | Auslander Marc A | Enhancement of real-time operating system functionality using a hypervisor |
US7305510B2 (en) * | 2004-06-25 | 2007-12-04 | Via Technologies, Inc. | Multiple master buses and slave buses transmitting simultaneously |
US8933941B2 (en) * | 2004-08-23 | 2015-01-13 | Hewlett-Packard Development Company, L.P. | Method and apparatus for redirection of video data |
US7437618B2 (en) * | 2005-02-11 | 2008-10-14 | International Business Machines Corporation | Method in a processor for dynamically during runtime allocating memory for in-memory hardware tracing |
JP2007220086A (ja) * | 2006-01-17 | 2007-08-30 | Ntt Docomo Inc | Input/output control device, input/output control system, and input/output control method |
US7673114B2 (en) * | 2006-01-19 | 2010-03-02 | International Business Machines Corporation | Dynamically improving memory affinity of logical partitions |
WO2008113418A1 (en) * | 2007-03-22 | 2008-09-25 | Telefonaktiebolaget Lm Ericsson (Publ) | A system and method of reporting in-service performance statistics in layered networks |
US7913009B2 (en) * | 2007-06-20 | 2011-03-22 | Microsoft Corporation | Monitored notification facility for reducing inter-process/inter-partition interrupts |
US8321579B2 (en) * | 2007-07-26 | 2012-11-27 | International Business Machines Corporation | System and method for analyzing streams and counting stream items on multi-core processors |
US8069344B2 (en) * | 2007-09-14 | 2011-11-29 | Dell Products L.P. | System and method for analyzing CPU performance from a serial link front side bus |
US8271700B1 (en) * | 2007-11-23 | 2012-09-18 | Pmc-Sierra Us, Inc. | Logical address direct memory access with multiple concurrent physical ports and internal switching |
US8205209B2 (en) * | 2008-03-18 | 2012-06-19 | International Business Machines Corporation | Selecting a number of processing resources to run an application effectively while saving power |
JP5469940B2 (ja) * | 2009-07-13 | 2014-04-16 | 株式会社日立製作所 | Computer system, virtual machine monitor, and scheduling method for virtual machine monitor |
KR20110072023A (ko) * | 2009-12-22 | 2011-06-29 | 삼성전자주식회사 | Method and apparatus for inter-processor data communication in a portable terminal |
US8429276B1 (en) * | 2010-10-25 | 2013-04-23 | Juniper Networks, Inc. | Dynamic resource allocation in virtual environments |
US8468169B2 (en) * | 2010-12-01 | 2013-06-18 | Microsoft Corporation | Hierarchical software locking |
US9619415B2 (en) * | 2012-11-30 | 2017-04-11 | Dell Products, Lp | System and method for intelligent platform management interface keyboard controller style interface multiplexing |
KR102053360B1 (ko) * | 2013-07-31 | 2019-12-06 | 삼성전자주식회사 | System interconnect, system-on-chip including the same, and method of driving the same |
US9940483B2 (en) * | 2016-01-25 | 2018-04-10 | Raytheon Company | Firmware security interface for field programmable gate arrays |
US10198275B2 (en) * | 2016-05-31 | 2019-02-05 | American Megatrends, Inc. | Protecting firmware flashing from power operations |
US20190286590A1 (en) * | 2018-03-14 | 2019-09-19 | Quanta Computer Inc. | Cpld cache application in a multi-master topology system |
- 2023-04-28 CN CN202380009034.4A patent/CN116868167A/zh active Pending
- 2023-04-28 KR KR1020237045143A patent/KR20240159791A/ko active Pending
- 2023-04-28 EP EP23817286.0A patent/EP4478184A4/en active Pending
- 2023-04-28 US US18/549,718 patent/US20250036463A1/en active Pending
- 2023-04-28 WO PCT/CN2023/091864 patent/WO2024221465A1/zh unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1791862A (zh) * | 2003-04-09 | 2006-06-21 | 扎鲁纳股份有限公司 | Operating system |
US20070050770A1 (en) * | 2005-08-30 | 2007-03-01 | Geisinger Nile J | Method and apparatus for uniformly integrating operating system resources |
CN115379512A (zh) * | 2021-05-19 | 2022-11-22 | Oppo广东移动通信有限公司 | Bandwidth adjustment method and apparatus, electronic device, and computer-readable storage medium |
CN115237480A (zh) * | 2021-11-05 | 2022-10-25 | 科东(广州)软件科技有限公司 | Startup method and apparatus for embedded device, embedded device, and storage medium |
CN115421871A (zh) * | 2022-09-26 | 2022-12-02 | 科东(广州)软件科技有限公司 | Method and apparatus for dynamic allocation of hardware resources of a system, and computing device |
Non-Patent Citations (1)
Title |
---|
See also references of EP4478184A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119311463A (zh) * | 2024-12-17 | 2025-01-14 | 麒麟软件有限公司 | Method and apparatus for troubleshooting abnormal devices in a Linux system, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN116868167A (zh) | 2023-10-10 |
EP4478184A1 (en) | 2024-12-18 |
KR20240159791A (ko) | 2024-11-06 |
US20250036463A1 (en) | 2025-01-30 |
EP4478184A4 (en) | 2024-12-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP4481563A1 (en) | Start control method and apparatus for embedded system, and storage medium and electronic device | |
US20250036463A1 (en) | Method and apparatus for controlling running of operating system, and embedded system and chip | |
CN116244229B (zh) | 硬件控制器的访问方法、装置、存储介质和电子设备 | |
CN116257364B (zh) | 系统间的资源占用方法、装置、存储介质及电子装置 | |
CN116541227B (zh) | 故障诊断方法、装置、存储介质、电子装置及bmc芯片 | |
CN116243995B (zh) | 通信方法、装置、计算机可读存储介质以及电子设备 | |
CN116243996B (zh) | 业务的运行切换方法、装置、存储介质及电子装置 | |
CN116302617B (zh) | 共享内存的方法、通信方法、嵌入式系统以及电子设备 | |
US20240362083A1 (en) | Embedded system running method and apparatus, and embedded system and chip | |
CN116627520B (zh) | 基板管理控制器的系统运行方法以及基板管理控制器 | |
CN116302141B (zh) | 串口切换方法、芯片及串口切换系统 | |
CN116521209B (zh) | 操作系统的升级方法及装置、存储介质及电子设备 | |
WO2012016472A1 (zh) | 多核CPU加载Linux操作系统的方法及系统 | |
CN116521324B (zh) | 中断虚拟化处理方法、装置及电子设备 | |
CN118885307A (zh) | 共享资源的访问控制方法及装置、存储介质及电子设备 | |
CN112306652A (zh) | 带有上下文提示的功能的唤醒和调度 | |
WO2022204873A1 (zh) | 电子装置、系统级芯片和物理核分配方法 | |
EP4478204A1 (en) | Hardware interface signal generation method and apparatus and electronic device | |
CN117149472B (zh) | 通信方法、装置、计算机可读存储介质以及电子设备 | |
CN117149471B (zh) | 通信方法、装置、嵌入式系统、存储介质以及电子设备 | |
WO2022204897A1 (zh) | 一种闪存访问方法及装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| ENP | Entry into the national phase | Ref document number: 2023817286; Country of ref document: EP; Effective date: 20231212 |