Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by a person skilled in the art without any inventive effort, are intended to be within the scope of the present application based on the embodiments of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The embodiment of the application provides a resource allocation method. The method may be executed by a resource allocation apparatus provided by the embodiment of the application, or by an electronic device integrating the resource allocation apparatus, where the resource allocation apparatus may be implemented in hardware or in software. The electronic device may be a smart phone, a tablet computer, a palmtop computer, a notebook computer, or a desktop computer.
Referring to fig. 1, fig. 1 is a schematic flow chart of a resource allocation method according to an embodiment of the application. The specific flow of the resource allocation method provided by the embodiment of the application can be as follows:
In 101, when an acceleration start prompt message is received, determining a target thread indicated by the acceleration start prompt message, and marking the target thread as a thread of a preset type.
In the embodiment of the application, the operating system of the electronic device may be a system based on the Linux kernel, for example, the Android operating system. When an application installed in the electronic device is running, the system creates a process for it and allocates resources to that process.
A thread is an execution path within a process; it is the smallest unit of program execution and the basic unit of CPU scheduling and dispatch. A process may have multiple threads, but has at least one. In the embodiment of the application, when a process has a task to be executed, a new thread is created to execute the task.
For threads with different priorities, thread scheduling is performed according to different thread scheduling rules, for example, the CFS (Completely Fair Scheduler) scheduling rules. In this embodiment, a thread scheduled using the CFS scheduling rules is referred to as a CFS thread, and the scheduling rules of the CFS scheduler are optimized to improve the execution efficiency of the threads related to a preset event.
If the processor of the electronic device is a multi-core processor, each processor core may be considered an independent processing unit. For example, if the electronic device has an eight-core processor, each core is a separate processing unit. Each processing unit has its own task queue containing the tasks assigned to that processing unit, and each task is executed by a corresponding thread.
One or more programs may be running simultaneously in an electronic device, each program having at least one corresponding process, and each process having at least one thread executing tasks. Thus, the electronic device may have multiple threads to execute, and CPU resources need to be allocated for the execution of these threads. Under the main scheduling policy of the CFS scheduler, processing units are allocated to threads according to a corresponding scheduling mechanism, and the optimal thread is selected to preempt processor resources. After a thread is allocated a processing unit, if the thread enters the ready state, it is added to the task queue of the allocated processing unit and waits for execution.
The life cycle of a thread can be divided into five states. New state (New): after a thread object is created using the new keyword and the Thread class or a subclass thereof, the thread object is in the new state; it remains in this state until the program calls the thread's start() method. Ready state (Runnable): when the thread object calls the start() method, the thread enters the ready state; a thread in the ready state is added to the ready queue and waits for scheduling by the thread scheduler. Running state (Running): if a thread in the ready state acquires CPU resources, it can execute the run() method, and the thread is then in the running state; a thread in the running state can change to the blocked state, the ready state, or the dead state. Blocked state (Blocked): if a thread executes a method such as sleep or suspend, it loses its occupied resources and enters the blocked state from the running state; it may re-enter the ready state after the sleep time has elapsed or the resources are re-acquired. Dead state (Dead): when a thread in the running state completes its task or another termination condition occurs, the thread switches to the dead state.
The default scheduling rules of the CFS scheduler are as follows: processing units are allocated to threads according to a load-balancing principle, and after a processing unit has been allocated, CPU service time is allocated to the threads according to their priorities. For example, if two threads with the same priority run on one CPU, each thread is allocated 50% of the CPU running time, i.e., fair scheduling is achieved. When the priorities of the threads differ, the CPU running time is distributed according to the weight proportion of the threads, where the weight represents the priority of the thread: the higher the weight, the higher the priority, and the greater the proportion of CPU running time assigned. The thread weight is typically represented by a nice value, which is a specific number, usually within a predetermined range; a smaller nice value represents a higher priority and thus a greater weight.
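By way of illustration only, the following user-space C sketch shows how the weight-based division of CPU running time described above can be computed from nice values. The weight figures follow the roughly 1.25x-per-nice-level progression used by CFS, but the trimmed table and the simplified share formula are assumptions for explanation and are not kernel code.

```c
/* Illustrative only: approximates how CFS divides CPU running time by weight.
 * The weights follow the ~1.25x-per-nice-level progression used by the kernel,
 * but this trimmed table and the share formula are assumptions, not kernel code. */
#include <stdio.h>

/* assumed weights for nice levels -2..+2 (nice 0 -> 1024) */
static const int nice_to_weight[] = { 1586, 1277, 1024, 820, 655 };

static int weight_of(int nice) { return nice_to_weight[nice + 2]; }

int main(void) {
    int nice_a = 0, nice_b = 2;               /* thread A has the higher priority */
    int wa = weight_of(nice_a), wb = weight_of(nice_b);
    double share_a = (double)wa / (wa + wb);  /* fraction of CPU running time */
    double share_b = (double)wb / (wa + wb);  /* equal weights would give 50% each */
    printf("thread A: %.0f%%, thread B: %.0f%% of CPU running time\n",
           100 * share_a, 100 * share_b);
    return 0;
}
```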
If the CFS scheduler always schedules CFS threads according to this rule, the allocation of device resources cannot match the usage scenario. For example, when an application is started (a cold start is taken as an example here), the Linux kernel creates several threads related to the starting process of the application. These threads follow the commands of the kernel thread scheduler and compete fairly for resources with other threads, and the resource allocation is determined according to the relevant priority and load. That is, the kernel treats the start-related threads the same as other background threads and does not allocate more resources to them, and the overall scheduling and frequency-modulation strategy takes no other measures. As a result, when the application start load is heavy, the application starts slowly, which affects the user experience.
When an application is started and no background process of the application exists, the system needs to create a new process and allocate it to the application; this starting mode of the application is a cold start.
In some embodiments, when a preset event is detected to be executed, a target thread created by executing the preset event is determined, and acceleration start prompt information is generated based on the target thread.
For example, the preset event includes, but is not limited to, a mobile phone cold start process, a mobile phone hot start process, an application program installation process, a touch response process, a payment process, an application program start process, and the like, wherein the application program start process includes an application hot start process, an application warm start process, and an application cold start process.
In embodiments of the present application, the threads involved when the electronic device executes certain events may be marked as core threads, and the original scheduling rules of the CFS scheduler are optimized for these core threads, so that the CFS scheduler can allocate resources to them faster and in greater quantity, where the resources include, but are not limited to, CPU resources (i.e., processor resources). For example, which threads are marked as core threads is determined by monitoring the events performed by the electronic device. For example, to accelerate the application cold start process, when an application cold start is detected, the threads created by executing the application cold start are determined and marked as core threads. Since whether the system stutters directly affects the user experience, the target thread created by executing the preset event is denoted as a ux (user experience) thread.
In some embodiments, the target thread may be marked as a thread of a preset type by adding a preset tag to the target thread; for example, a ux tag is added to a thread to mark it as a ux thread.
In some embodiments, when the framework layer of the system architecture detects a preset event, the target thread is determined, and acceleration start prompt information is sent to the kernel layer; after receiving the prompt information, the kernel layer marks the corresponding target thread as a ux thread.
After this operation is performed, from the perspective of the CFS scheduler, at least two types of threads exist among all CFS threads, namely ux threads and non-ux threads, where the importance of a ux thread is higher than that of a non-ux thread. When the kernel performs resource allocation, it serves the ux threads preferentially; that is, resources are tilted toward the ux threads.
In 102, when processor resources need to be allocated to a thread to be scheduled, it is determined whether the thread to be scheduled is a thread of a preset type.
And in 103, when the thread to be scheduled is a thread of the preset type, allocating processor resources to the thread to be scheduled according to a first rule.
And in 104, when the thread to be scheduled is not a thread of the preset type, allocating processor resources to the thread to be scheduled according to a second rule, wherein the speed or amount of processor resources allocated to the thread based on the first rule is greater than the speed or amount of processor resources allocated to the thread based on the second rule.
In the embodiment of the application, two different rules are adopted for processor resource allocation to ux threads and non-ux threads: ux threads are allocated resources according to the first rule, and non-ux threads according to the second rule, where the speed or amount of processor resources allocated to a thread based on the first rule is greater than the speed or amount of processor resources allocated to a thread based on the second rule. That is, compared with the second rule, the first rule allows resources to be allocated to a thread more quickly or in greater quantity.
When it is determined, according to the main scheduling policy of the CFS scheduler, that a processing unit needs to be allocated to the thread to be scheduled, that core migration needs to be performed on the thread to be scheduled, or that the current working frequency of the processing unit where the thread resides needs to be raised, it may be determined that processor resources need to be allocated to the thread to be scheduled. Core migration refers to migrating a thread from the core where it currently resides to another core. Taking an eight-core CPU as an example, the eight cores comprise four big cores and four little cores, with the four big cores in one group and the four little cores in another; migration may be within a group or between groups.
For example, when a processing unit needs to be allocated for a ux thread, a lightly loaded core or a big core may be preferentially selected; or, when the ux thread needs to migrate cores, a big core is preferentially selected as the migration target. For non-ux threads, resources can be allocated according to the default load-balancing rules of the CFS scheduler. For example, allocating processor resources to the thread to be scheduled according to the second rule may include allocating processor resources to the thread to be scheduled according to a preset thread scheduling rule.
In particular, the application is not limited by the order of execution of the steps described, as some of the steps may be performed in other orders or concurrently without conflict.
In some embodiments, the allocating of processor resources is changing the processing unit, and allocating processor resources to the thread to be scheduled according to the first rule includes determining, from candidate processing units, the candidate processing unit with the largest computing capability as a target processing unit, and migrating the thread to be scheduled from the processing unit where it currently resides to the target processing unit for execution.
In this embodiment, the CFS scheduler may determine that a thread needs to change its processing unit when it detects, for example, that the current computing capability of the processing unit where the thread resides is insufficient to execute the thread, that the load of that processing unit is too high, or that migration needs to be performed after the thread is woken up; this is commonly called core migration. At this time, a processing unit with higher performance or better computing capability needs to be selected from the other processing units, and the thread is migrated from the processing unit where it currently resides to the newly selected one. The thread is first taken as the thread to be scheduled, and it is determined whether it is a ux thread, for example, by checking whether it carries a ux tag. If the thread carries the ux tag, then after determining candidate processing units from the processing units other than the current one, the candidate processing unit with the highest performance or best computing capability is selected as the target processing unit, and the thread is migrated from the current processing unit to the newly selected one, so that the ux thread can obtain more processor resources, execute quickly, and thereby accelerate the cold start of the application. If the thread does not carry the ux tag, the CFS scheduler selects a target processing unit from the processing units other than the current one according to the default load-balancing rules and performs the core migration, as sketched below.
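The following user-space C sketch illustrates the selection logic described above under stated assumptions: the structure fields, the helper names, and the stubbed "least-loaded" stand-in for the default load-balancing rule are illustrative only and do not reproduce the CFS scheduler's actual interfaces.

```c
/* Illustrative only: choosing a migration target for a thread to be scheduled.
 * Structure fields and helpers are assumptions, not CFS scheduler interfaces.
 * All helpers assume at least one candidate (n >= 1). */
#include <stdbool.h>
#include <stddef.h>

struct cpu_info     { int id; int capacity; int load; };
struct sched_thread { int tid; bool is_ux; int cur_cpu; };

/* First rule: pick the candidate processing unit with the largest computing capability. */
static int pick_max_capacity_cpu(const struct cpu_info *cands, size_t n) {
    int best = cands[0].id, best_cap = cands[0].capacity;
    for (size_t i = 1; i < n; i++) {
        if (cands[i].capacity > best_cap) {
            best_cap = cands[i].capacity;
            best = cands[i].id;
        }
    }
    return best;
}

/* Second rule, stubbed here as "least-loaded candidate"; the real default
 * load-balancing rule of the CFS scheduler is considerably more involved. */
static int pick_by_load_balance(const struct cpu_info *cands, size_t n) {
    int best = cands[0].id, best_load = cands[0].load;
    for (size_t i = 1; i < n; i++) {
        if (cands[i].load < best_load) {
            best_load = cands[i].load;
            best = cands[i].id;
        }
    }
    return best;
}

/* Candidates are the processing units other than the one the thread currently occupies. */
int select_migration_target(const struct sched_thread *t,
                            const struct cpu_info *cands, size_t n) {
    return t->is_ux ? pick_max_capacity_cpu(cands, n)
                    : pick_by_load_balance(cands, n);
}
```

On an assumed eight-core part with four big cores of higher capacity and four little cores of lower capacity, a ux thread would thus be steered toward a big core, while an ordinary thread would follow the stubbed default path.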
Linux describes and records threads using the task_struct structure, and each thread has its own task_struct. The task_struct records information such as the thread's identifier, state, priority, memory pointers, and context data. Each task_struct contains a sched_entity structure, which stores the thread's virtual running time (the virtual running time recorded by the CFS scheduler, derived from the actual running time according to a certain rule) and weight. The kernel may add a ux tag to the task_struct data to mark the thread as a ux thread.
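As a hypothetical illustration of carrying such a tag in the per-thread structure, the fragment below defines a simplified structure in kernel-style C; the ux_flag field and the helper functions are assumptions for explanation and are not part of the mainline task_struct.

```c
/* Hypothetical sketch: a simplified per-thread structure carrying a ux tag.
 * The ux_flag field and helpers are illustrative; they are not fields of the
 * mainline Linux task_struct. */
struct task_struct_ext {
    int pid;                  /* thread identifier */
    int prio;                 /* priority */
    unsigned int ux_flag;     /* 1 = ux thread, 0 = ordinary thread */
    /* ... state, memory pointers, context data, sched_entity, etc. ... */
};

static inline void mark_ux_thread(struct task_struct_ext *p)     { p->ux_flag = 1; }
static inline void unmark_ux_thread(struct task_struct_ext *p)   { p->ux_flag = 0; }
static inline int  is_ux_thread(const struct task_struct_ext *p) { return p->ux_flag != 0; }
```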
In some embodiments, the allocating of processor resources is increasing the working frequency of the processing unit, and allocating processor resources to the thread to be scheduled according to the first rule includes obtaining a frequency adjustment value corresponding to the thread to be scheduled, and increasing, according to the frequency adjustment value, the working frequency of the processing unit where the thread to be scheduled currently resides.
In this embodiment, the CFS scheduler sets a corresponding frequency adjustment value (boost value) for each ux thread; the ux threads may share the same frequency adjustment value, or different boost values may be set according to the different priorities of the ux threads. When the CFS scheduler detects that the working frequency of the processing unit where a certain thread resides needs to be increased (commonly called frequency raising), the thread is taken as the thread to be scheduled, and it is first determined whether it is a ux thread. If it is a ux thread, the frequency adjustment value corresponding to the ux thread is obtained, and the working frequency of the processing unit where the ux thread resides is increased according to the frequency adjustment value. If the thread is not a ux thread, the CFS scheduler raises the frequency of the processing unit where the thread currently resides according to the default load-balancing rules.
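A minimal user-space C sketch of the frequency-raising idea follows, assuming the boost value is expressed as a percentage and the result is clamped to the processing unit's maximum frequency; both assumptions are for illustration and are not taken from an actual governor implementation.

```c
/* Illustrative only: raising a processing unit's working frequency for a ux
 * thread by a per-thread boost value, expressed here as a percentage. */
struct cpu_freq { unsigned int cur_khz; unsigned int max_khz; };

static unsigned int apply_boost(unsigned int freq_khz, unsigned int boost_pct) {
    return freq_khz + freq_khz / 100 * boost_pct;   /* e.g. boost_pct = 20 -> +20% */
}

unsigned int next_frequency(const struct cpu_freq *cpu, int thread_is_ux,
                            unsigned int boost_pct) {
    unsigned int f = cpu->cur_khz;
    if (thread_is_ux)
        f = apply_boost(f, boost_pct);   /* first rule: raise the working frequency */
    if (f > cpu->max_khz)
        f = cpu->max_khz;                /* never exceed the hardware maximum */
    return f;
}
```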
In some embodiments, the allocating of processor resources is allocating a processing unit, and allocating processor resources to the thread to be scheduled according to the first rule includes determining, from candidate processing units, the candidate processing unit with the largest computing capability as a target processing unit, and allocating the thread to be scheduled to the target processing unit for execution.
In this embodiment, when the CFS scheduler needs to allocate a processing unit to a thread in the ready state, the thread is taken as the thread to be scheduled, and it is first determined whether it is a ux thread. If it is a ux thread, all processing units are taken as candidate processing units, the candidate processing unit with the largest computing capability is determined from them as the target processing unit, and the ux thread is allocated to the target processing unit for execution. If the thread is not a ux thread, the CFS scheduler allocates a processing unit to the thread according to the default load-balancing rules.
As can be seen from the above, in the resource allocation method provided by the embodiment of the present application, when the acceleration start prompt information is received, the target thread indicated by the acceleration start prompt information is determined, and the target thread is marked as a thread of a preset type. When processor resources need to be allocated to a thread to be scheduled, it is determined whether the thread to be scheduled is a thread of the preset type; if so, processor resources are allocated to the thread to be scheduled according to a first rule, and if not, according to a second rule, where the speed or amount of processor resources allocated to a thread based on the first rule is greater than the speed or amount of processor resources allocated to a thread based on the second rule. In this way, some core threads are marked as threads of the preset type according to the acceleration start prompt information, so as to be distinguished from other non-core threads. Moreover, compared with non-core threads, more processor resources are allocated more quickly to the core threads during the acceleration period, so that these threads can execute tasks more efficiently and stuttering of the electronic device is reduced.
The method described in the previous examples is described in further detail below by way of example.
Referring to fig. 2, fig. 2 is a schematic diagram of a second flow of a resource allocation method according to an embodiment of the present application.
In this embodiment, the system architecture of the electronic device includes at least an application framework layer and a kernel layer, wherein the application framework layer executes 201 and 202 and the kernel layer executes 203 to 207. The method comprises the following steps:
In 201, when execution of a preset event is detected, determining a target thread created by executing the preset event, and generating acceleration start prompt information based on the target thread.
In 202, when it is detected that the execution of the preset event is completed, acceleration termination prompt information is generated based on the target thread.
The preset event includes, but is not limited to, a mobile phone cold start process, a mobile phone hot start process, an application program installation process, a touch response process, a payment process, an application program start process, and the like, wherein the application program start process includes an application hot start process, an application warm start process, and an application cold start process.
When detecting that the electronic device executes these events, the application framework layer determines the target thread created by the electronic device executing the preset event, generates acceleration start prompt information based on the target thread, and sends the acceleration start prompt information to the kernel layer. When detecting that the electronic device has finished executing the preset event, the application framework layer generates acceleration termination prompt information and sends it to the kernel layer.
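One possible way for the framework layer to pass such hints down to the kernel layer is through a writable kernel node; the sketch below assumes a hypothetical /proc node and message format purely for illustration, since the embodiment does not prescribe the specific interface.

```c
/* Purely hypothetical interface: the node path and message format are assumptions
 * used to illustrate how a framework layer could pass an acceleration-start or
 * acceleration-termination hint for a given tid down to the kernel layer. */
#include <stdio.h>
#include <sys/types.h>

static int send_ux_hint(pid_t tid, int start) {
    FILE *f = fopen("/proc/ux_sched/hint", "w");   /* assumed node, not a real path */
    if (f == NULL)
        return -1;
    fprintf(f, "%d %s\n", (int)tid, start ? "start" : "stop");
    fclose(f);
    return 0;
}
```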
In 203, when the acceleration start prompt information is received, determining a target thread indicated by the acceleration start prompt information, and marking the target thread as a thread of a preset type.
After receiving the prompt information, the kernel layer marks the corresponding target thread as a ux thread.
In 204, when processor resources need to be allocated to a thread to be scheduled, it is determined whether the thread to be scheduled is a thread of a preset type.
Executing 205 when the thread to be scheduled is the thread of the preset type, and executing 206 when the thread to be scheduled is not the thread of the preset type.
At 205, processor resources are allocated for the threads to be scheduled according to a first rule.
At 206, processor resources are allocated to the thread to be scheduled according to a second rule, wherein the speed or amount of processor resources allocated to the thread based on the first rule is greater than the speed or amount of processor resources allocated to the thread based on the second rule.
In the embodiment of the application, two different rules are adopted for processor resource allocation to ux threads and non-ux threads: ux threads are allocated resources according to the first rule, and non-ux threads according to the second rule, where the speed or amount of processor resources allocated to a thread based on the first rule is greater than the speed or amount of processor resources allocated to a thread based on the second rule. That is, compared with the second rule, the first rule allows resources to be allocated to a thread more quickly or in greater quantity.
When it is determined, according to the main scheduling policy of the CFS scheduler, that a processing unit needs to be allocated to the thread to be scheduled, that core migration needs to be performed on the thread to be scheduled, or that the working frequency of the processing unit where the thread to be scheduled currently resides needs to be raised, it may be determined that processor resources need to be allocated to the thread to be scheduled. In these cases, resource allocation may be performed according to the corresponding first rule. For the specific allocation, reference is made to the above embodiments, and details are not repeated here.
In 207, when the acceleration termination prompt information is received, determining a target thread indicated by the acceleration termination prompt information, and deleting a preset tag of the target thread.
When the execution of the preset event is completed, the application framework layer may send acceleration termination prompt information to the kernel layer; after receiving the prompt information, the kernel layer cancels the preset-type mark of the corresponding target thread, for example, deletes the ux tag, so that the thread becomes an ordinary thread again.
The CFS scheduler performs periodic scheduling; for example, the scheduling period is denoted as a tick, and the CFS scheduler performs scheduling once every tick. At each tick, the CFS scheduler performs scheduling management according to the execution condition of the threads and the loads of the processing units.
After a ux thread becomes a non-ux thread during execution, the CFS scheduler reverts to the default load-balancing rules to schedule the thread in the next scheduling period.
As can be seen from the above, in the resource allocation method provided by the embodiment of the present application, when the application framework layer detects a specific event, it sends an acceleration notification to the kernel based on the specific event, and the kernel marks the threads related to the specific event as threads of a preset type so as to distinguish them from other non-core threads. Compared with non-core threads, more processor resources are allocated more quickly to the core threads during the acceleration period, so that these threads can execute tasks more efficiently and stuttering of the electronic device is reduced.
In one embodiment, a resource allocation apparatus is also provided. Referring to fig. 3, fig. 3 is a schematic structural diagram of a resource allocation apparatus 300 according to an embodiment of the application. Wherein the resource allocation apparatus 300 is applied to an electronic device, the resource allocation apparatus 300 includes a thread marking module 301, a thread judging module 302, and a resource allocation module 303, as follows:
The thread marking module 301 is configured to determine a target thread indicated by the acceleration start prompt message when the acceleration start prompt message is received, and mark the target thread as a thread of a preset type;
the thread judging module 302 is configured to judge whether a thread to be scheduled is a thread of a preset type when a processor resource needs to be allocated to the thread to be scheduled;
the resource allocation module 303 is configured to allocate processor resources to the thread to be scheduled according to a first rule when the thread to be scheduled is the thread of the preset type;
And when the thread to be scheduled is not a thread of the preset type, allocate processor resources to the thread to be scheduled according to a second rule, wherein the speed or amount of processor resources allocated to the thread based on the first rule is greater than the speed or amount of processor resources allocated to the thread based on the second rule.
In some embodiments, the allocating of processor resources is changing the processing unit, and the resource allocation module 303 is further configured to determine, from candidate processing units, the candidate processing unit with the largest computing capability as a target processing unit, and migrate the thread to be scheduled from the processing unit where it currently resides to the target processing unit for execution.
In some embodiments, the allocating the processor resource is increasing the operating frequency of the processing unit, and the resource allocation module 303 is further configured to obtain a frequency adjustment value corresponding to the thread to be scheduled, and increase the operating frequency of the processing unit where the thread to be scheduled is currently located according to the frequency adjustment value.
In some embodiments, the allocating of processor resources is allocating a processing unit, and the resource allocation module 303 is further configured to determine, from candidate processing units, the candidate processing unit with the largest computing capability as a target processing unit, and allocate the thread to be scheduled to the target processing unit for execution.
In some embodiments, the resource allocation apparatus 300 further comprises:
and the information generation module is used for determining a target thread created by executing the preset event when detecting the execution of the preset event and generating acceleration start prompt information based on the target thread.
In some embodiments, the apparatus is applied to an electronic device, and the preset event is the electronic device start, application start or application installation.
In some embodiments, the thread marking module 301 is further configured to add a preset tag to the target thread to mark the target thread as a thread of a preset type;
and when receiving the acceleration termination prompt information, determining a target thread indicated by the acceleration termination prompt information, and deleting a preset label of the target thread.
In some embodiments, the resource allocation module 303 is further to:
And allocating processor resources for the threads to be scheduled according to a preset thread scheduling rule.
In implementation, each of the above modules may be implemented as an independent entity, or may be combined arbitrarily and implemented as the same entity or several entities; for the specific implementation of each module, reference may be made to the foregoing method embodiments, and details are not repeated here.
It should be noted that, the resource allocation device provided in the embodiment of the present application and the resource allocation method in the foregoing embodiment belong to the same concept, and any method provided in the embodiment of the resource allocation method may be run on the resource allocation device, and the specific implementation process of the method is detailed in the embodiment of the resource allocation method, which is not described herein again.
As can be seen from the above, the resource allocation apparatus provided in the embodiment of the present application includes a thread marking module 301, a thread judging module 302, and a resource allocation module 303. When the acceleration start prompt information is received, the thread marking module 301 determines the target thread indicated by the acceleration start prompt information and marks the target thread as a thread of a preset type. When processor resources need to be allocated to a thread to be scheduled, the thread judging module 302 judges whether the thread to be scheduled is a thread of the preset type; if so, the resource allocation module 303 allocates processor resources to the thread to be scheduled according to a first rule, and if not, according to a second rule, wherein the speed or amount of processor resources allocated to the thread based on the first rule is greater than the speed or amount of processor resources allocated to the thread based on the second rule. In this way, some core threads are marked as threads of the preset type according to the acceleration start prompt information, so as to be distinguished from other non-core threads. Moreover, compared with non-core threads, more processor resources are allocated more quickly to the core threads during the acceleration period, so that these threads can execute tasks more efficiently and stuttering of the electronic device is reduced.
The embodiment of the application also provides electronic equipment. The electronic equipment can be a smart phone, a tablet personal computer and other equipment. Referring to fig. 4, fig. 4 is a schematic diagram of a first structure of an electronic device according to an embodiment of the application. The electronic device 400 comprises a processor 401 and a memory 402. The processor 401 is electrically connected to the memory 402.
The processor 401 is a control center of the electronic device 400, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or calling computer programs stored in the memory 402, and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
Memory 402 may be used to store computer programs and data. The memory 402 stores a computer program having instructions executable in a processor. The computer program may constitute various functional modules. The processor 401 executes various functional applications and data processing by calling a computer program stored in the memory 402.
In this embodiment, the processor 401 in the electronic device 400 loads the instructions corresponding to the processes of one or more computer programs into the memory 402 according to the following steps, and the processor 401 executes the computer programs stored in the memory 402, so as to implement various functions:
when acceleration start prompt information is received, determining a target thread indicated by the acceleration start prompt information, and marking the target thread as a thread of a preset type;
When the processor resources are required to be allocated to the thread to be scheduled, judging whether the thread to be scheduled is a thread of a preset type or not;
when the thread to be scheduled is the thread of the preset type, allocating processor resources for the thread to be scheduled according to a first rule;
and when the thread to be scheduled is not a thread of the preset type, allocating processor resources to the thread to be scheduled according to a second rule, wherein the speed or amount of processor resources allocated to the thread based on the first rule is greater than the speed or amount of processor resources allocated to the thread based on the second rule.
In some embodiments, referring to fig. 5, fig. 5 is a schematic diagram of a second structure of an electronic device according to an embodiment of the application. The electronic device 400 further comprises a radio frequency circuit 403, a display 404, a control circuit 405, an input unit 406, an audio circuit 407, a sensor 408 and a power supply 409. The processor 401 is electrically connected to the radio frequency circuit 403, the display 404, the control circuit 405, the input unit 406, the audio circuit 407, the sensor 408, and the power supply 409, respectively.
The radio frequency circuit 403 is used to transmit and receive radio frequency signals to communicate with a network device or other electronic device through wireless communication.
The display 404 may be used to display information entered by a user or provided to a user as well as various graphical user interfaces of the electronic device, which may be composed of images, text, icons, video, and any combination thereof.
The control circuit 405 is electrically connected to the display screen 404, and is used for controlling the display screen 404 to display information.
The input unit 406 may be used to receive entered numbers, character information, or user characteristic information (e.g., fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 406 may include a fingerprint recognition module.
The audio circuit 407 may provide an audio interface between the user and the electronic device through a speaker, microphone. Wherein the audio circuit 407 comprises a microphone. The microphone is electrically connected to the processor 401. The microphone is used for receiving voice information input by a user.
The sensor 408 is used to collect external environmental information. The sensor 408 may include one or more of an ambient brightness sensor, an acceleration sensor, a gyroscope, and the like.
The power supply 409 is used to power the various components of the electronic device 400. In some embodiments, power supply 409 may be logically connected to processor 401 through a power management system, thereby performing functions such as managing charging, discharging, and power consumption through the power management system.
Although not shown in the drawings, the electronic device 400 may further include a camera, a bluetooth module, etc., which will not be described herein.
In this embodiment, the processor 401 in the electronic device 400 loads the instructions corresponding to the processes of one or more computer programs into the memory 402 according to the following steps, and the processor 401 executes the computer programs stored in the memory 402, so as to implement various functions:
when acceleration start prompt information is received, determining a target thread indicated by the acceleration start prompt information, and marking the target thread as a thread of a preset type;
When the processor resources are required to be allocated to the thread to be scheduled, judging whether the thread to be scheduled is a thread of a preset type or not;
when the thread to be scheduled is the thread of the preset type, allocating processor resources for the thread to be scheduled according to a first rule;
and when the thread to be scheduled is not a thread of the preset type, allocating processor resources to the thread to be scheduled according to a second rule, wherein the speed or amount of processor resources allocated to the thread based on the first rule is greater than the speed or amount of processor resources allocated to the thread based on the second rule.
In some embodiments, the allocating of processor resources is changing the processing unit, and when allocating processor resources to the thread to be scheduled according to the first rule, the processor 401 performs:
Determining a candidate processing unit with the maximum computing capacity from the candidate processing units as a target processing unit; and migrating the thread to be scheduled from the current processing unit to the target processing unit for execution.
In some embodiments, the allocating the processor resource is increasing the operating frequency of the processing unit, and when allocating the processor resource for the thread to be scheduled according to the first rule, the processor 401 performs:
Obtaining a frequency adjustment value corresponding to the thread to be scheduled; and increasing the working frequency of the processing unit where the thread to be scheduled currently resides according to the frequency adjustment value.
In some embodiments, the allocating of processor resources is allocating a processing unit, and when allocating processor resources to the thread to be scheduled according to the first rule, the processor 401 performs:
determining a candidate processing unit with the maximum computing capacity from the candidate processing units as a target processing unit; and distributing the thread to be scheduled to the target processing unit for execution.
In some embodiments, when the acceleration start prompt is received, before determining the target thread indicated by the acceleration start prompt and marking the target thread as a thread of a preset type, the processor 401 further executes:
When the execution of a preset event is detected, determining a target thread created by executing the preset event, and generating acceleration start prompt information based on the target thread.
In some embodiments, the preset event is the electronic device start, application start, or application installation.
In some embodiments, the processor 401 further performs adding a preset tag to the target thread to mark the target thread as a thread of a preset type;
when the acceleration termination prompt information is received, determining a target thread indicated by the acceleration termination prompt information, and deleting a preset label of the target thread.
As can be seen from the foregoing, the embodiment of the present application provides an electronic device which, when receiving acceleration start prompt information, determines the target thread indicated by the acceleration start prompt information and marks the target thread as a thread of a preset type. When processor resources need to be allocated to a thread to be scheduled, it is determined whether the thread to be scheduled is a thread of the preset type; if so, processor resources are allocated to the thread to be scheduled according to a first rule, and if not, according to a second rule, where the speed or amount of processor resources allocated to a thread based on the first rule is greater than the speed or amount of processor resources allocated to a thread based on the second rule. In this way, some core threads are marked as threads of the preset type according to the acceleration start prompt information, so as to be distinguished from other non-core threads. Moreover, compared with non-core threads, more processor resources are allocated more quickly to the core threads during the acceleration period, so that these threads can execute tasks more efficiently and stuttering of the electronic device is reduced.
The embodiment of the application also provides a storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer executes the resource allocation method according to any one of the embodiments.
It should be noted that all or part of the steps in the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium, and the storage medium may include, but is not limited to, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Furthermore, the terms "first," "second," and "third," and the like, herein, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to the particular steps or modules listed and certain embodiments may include additional steps or modules not listed or inherent to such process, method, article, or apparatus.
The resource allocation method, the apparatus, the storage medium, and the electronic device provided by the embodiments of the present application are described in detail above. The principles and embodiments of the present application are explained herein with specific examples, and the description of the above embodiments is only intended to help understand the principles and concepts underlying the application. Those skilled in the art may, in light of these teachings, make changes to the specific embodiments and the application scope; in summary, the content of this specification should not be construed as limiting the present application.