
CN111666153B - Cache task management method, terminal device and storage medium


Info

Publication number: CN111666153B
Application number: CN202010453097.7A
Authority: CN (China)
Other versions: CN111666153A (Chinese, zh)
Prior art keywords: cache, task, cache task, memory, priority
Inventor: 张航志
Applicant and current assignee: Shenzhen TCL New Technology Co Ltd
Filing / priority date: 2020-05-25
Publication of CN111666153A: 2020-09-15
Publication of CN111666153B (application granted): 2024-07-05
Legal status: Active

Classifications

    • G06F9/5016 Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being the memory
    • G06F9/5022 Mechanisms to release resources
    • G06F2209/5021 Indexing scheme relating to G06F9/50: Priority
    • G06F2209/509 Indexing scheme relating to G06F9/50: Offload

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a cache task management method, which comprises the following steps: acquiring use information and/or operation parameters of a system corresponding to a current cache task, and acquiring a residual memory of the system; determining the priority corresponding to the cache task according to the use information and/or the operation parameters of the system; and cleaning the cache task according to the priority and the residual memory. The invention also discloses a terminal device and a computer readable storage medium, which achieve the effect of improving the application switching speed.

Description

Cache task management method, terminal device and storage medium
Technical Field
The present invention relates to the field of electrical appliances, and in particular, to a method for managing a cache task, a terminal device, and a computer readable storage medium.
Background
With the gradual development of television production technology, the functions of televisions keep increasing. At present, televisions based on the android system have become common household appliances. In order to increase the starting speed of applications, a television based on the android system keeps program processes in the background, so that when a user needs to switch to a program running in the background, the corresponding program can be started quickly.
In the traditional television background application management process, background processes are cleaned only according to a pre-stored cleaning sequence. Because the cleaning sequence is a pre-configured fixed parameter, processes that the user has only temporarily placed in the background are often cleaned during process cleaning, and when the user switches back to such an application, starting operations such as initializing the application have to be performed again. This results in a slow application switching speed.
Disclosure of Invention
The invention mainly aims to provide a cache task management method, terminal equipment and a computer readable storage medium, aiming at achieving the effect of improving the application switching speed.
In order to achieve the above object, the present invention provides a method for managing a cache task, the method for managing a cache task comprising the steps of:
Acquiring use information and/or operation parameters of a system corresponding to a current cache task, and acquiring a residual memory of the system;
determining the priority corresponding to the cache task according to the use information and/or the operation parameters of the system;
And cleaning the cache task according to the priority and the residual memory.
Optionally, the step of cleaning the cache task according to the priority and the remaining memory includes:
Acquiring a memory occupation value corresponding to a cache task;
And selecting at least one cache task as a target cache task according to the priority, and cleaning other cache tasks except the target cache task in the cache tasks, wherein the sum of the memory occupation values of the target cache tasks is smaller than or equal to the residual memory.
Optionally, after the step of cleaning the cache task according to the priority and the remaining memory, the method further includes:
When the starting of a target application is detected, a triggering mode of the target application is obtained, wherein after a cache task is cleaned, the application corresponding to the cleaned cache task is marked as the target application;
and closing the target application when the triggering mode of the target application is background triggering.
Optionally, the step of obtaining the remaining memory of the system includes:
Acquiring a total memory value and a memory calculation factor of a system;
and determining the residual memory according to the total system memory value and the memory calculation factor.
Optionally, the usage information includes a front end display duration and/or a trigger number corresponding to the cache task; the operating parameters of the system include the mode of operation, the usage scenario and/or the current point in time.
Optionally, the step of determining the priority corresponding to the cache task according to the usage information and/or the operation parameter of the system includes:
Setting the priority of the cache task associated with the working mode, the use scene and/or the current time point in the cache task as a target priority;
the step of cleaning the cache task according to the priority and the residual memory comprises the following steps:
selecting at least one cache task with the priority as a target cache task, and cleaning other cache tasks except the target cache task in the cache tasks, wherein the sum of memory occupation values of the target cache tasks is smaller than the residual memory.
Optionally, the step of obtaining the usage information and/or the operation parameters of the system corresponding to the current cache task includes:
acquiring system positioning information and/or system parameters;
and determining the use scene according to the positioning information and/or the system parameters.
Optionally, the step of determining the priority corresponding to the cache task according to the usage information and/or the operation parameter of the system includes:
acquiring the weight corresponding to the front-end display duration and the triggering times;
and determining the priority corresponding to the cache task according to the weight, the front-end display duration and the triggering times.
In addition, to achieve the above object, the present invention also provides a terminal device, where the terminal device includes a memory, a processor, and a cache task management program stored on the memory and executable on the processor, and the cache task management program when executed by the processor implements the steps of the cache task management method as described above.
Optionally, the terminal device is a television.
In addition, in order to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a cache task management program which, when executed by a processor, implements the steps of the cache task management method as described above.
The embodiment of the invention provides a cache task management method, a terminal device and a computer readable storage medium, which acquire the use information and/or system operation parameters corresponding to the current cache task, acquire the residual memory of the system, determine the priority corresponding to the cache task according to the use information and/or the system operation parameters, and clean the cache task according to the priority and the residual memory. Because the priority corresponding to the cache task can be determined according to the use information and/or the operation parameters of the system, and the cache task is cleaned according to the residual memory and the priority, the rationality of cache task cleaning is improved, the phenomenon that a process the user has only temporarily placed in the background is cleaned is avoided, and the application corresponding to the cache task does not need to be restarted when the user switches back to a cache task temporarily placed in the background, so that the effect of improving the application switching speed is achieved.
Drawings
FIG. 1 is a schematic diagram of a terminal structure of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an embodiment of a method for managing a cache task according to the present invention;
FIG. 3 is a flowchart illustrating a cache task management method according to another embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the traditional television background application management process, background processes are cleaned only according to a pre-stored cleaning sequence. Because the cleaning sequence is a pre-configured fixed parameter, processes that the user has only temporarily placed in the background are often cleaned during process cleaning, and when the user switches back to such an application, starting operations such as initializing the application have to be performed again. This results in a slow application switching speed.
The embodiment of the invention provides a management method of a cache task, terminal equipment and a computer readable storage medium, wherein the main solution of the management method of the cache task is as follows:
Acquiring use information and/or operation parameters of a system corresponding to a current cache task, and acquiring a residual memory of the system;
determining the priority corresponding to the cache task according to the use information and/or the operation parameters of the system;
And cleaning the cache task according to the priority and the residual memory.
Because the priority corresponding to the cache task can be determined according to the use information and/or the operation parameters of the system, and the cache task is cleaned according to the residual memory and the priority, the rationality of cache task cleaning is improved, the phenomenon that a process the user has only temporarily placed in the background is cleaned is avoided, and the application corresponding to the cache task does not need to be restarted when the user switches back to a cache task temporarily placed in the background, so that the effect of improving the application switching speed is achieved. The residual memory refers to the difference between the total memory and the occupied memory of the system, i.e., the size of the memory that can currently be used.
As shown in fig. 1, fig. 1 is a schematic diagram of a terminal structure of a hardware running environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention can be terminal equipment such as a television.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as a disk memory. The memory 1005 may optionally also be a storage device separate from the processor 1001.
It will be appreciated by those skilled in the art that the terminal structure shown in fig. 1 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a cache task management program may be included in the memory 1005, which is a type of computer storage medium.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the processor 1001 may be configured to call a cache task management program stored in the memory 1005 and perform the following operations:
Acquiring use information and/or operation parameters of a system corresponding to a current cache task, and acquiring a residual memory of the system;
determining the priority corresponding to the cache task according to the use information and/or the operation parameters of the system;
And cleaning the cache task according to the priority and the residual memory.
Further, the processor 1001 may call the cache task management program stored in the memory 1005, and further perform the following operations:
Acquiring a memory occupation value corresponding to a cache task;
And selecting at least one cache task as a target cache task according to the priority, and cleaning other cache tasks except the target cache task in the cache tasks, wherein the sum of the memory occupation values of the target cache tasks is smaller than or equal to the residual memory.
Further, the processor 1001 may call the cache task management program stored in the memory 1005, and further perform the following operations:
When the starting of a target application is detected, a triggering mode of the target application is obtained, wherein after a cache task is cleaned, the application corresponding to the cleaned cache task is marked as the target application;
and closing the target application when the triggering mode of the target application is background triggering.
Further, the processor 1001 may call the cache task management program stored in the memory 1005, and further perform the following operations:
Acquiring a total memory value and a memory calculation factor of a system;
and determining the residual memory according to the total system memory value and the memory calculation factor.
Further, the processor 1001 may call the cache task management program stored in the memory 1005, and further perform the following operations:
Setting the priority of the cache task associated with the working mode, the use scene and/or the current time point in the cache task as a target priority;
the step of cleaning the cache task according to the priority and the residual memory comprises the following steps:
selecting at least one cache task with the priority as a target cache task, and cleaning other cache tasks except the target cache task in the cache tasks, wherein the sum of memory occupation values of the target cache tasks is smaller than the residual memory.
Further, the processor 1001 may call the cache task management program stored in the memory 1005, and further perform the following operations:
acquiring system positioning information and/or system parameters;
and determining the use scene according to the positioning information and/or the system parameters.
Further, the processor 1001 may call the cache task management program stored in the memory 1005, and further perform the following operations:
acquiring the weight corresponding to the front-end display duration and the triggering times;
and determining the priority corresponding to the cache task according to the weight, the front-end display duration and the triggering times.
Referring to fig. 2, in an embodiment of the present invention, the method for managing a cache task includes the following steps:
step S10, obtaining use information and/or operation parameters of a system corresponding to a current cache task, and obtaining a residual memory of the system;
step S20, determining the priority corresponding to the cache task according to the use information and/or the operation parameters of the system;
And step S30, cleaning the cache task according to the priority and the residual memory.
In this embodiment, when the terminal opens a plurality of applications at the same time, only the front-end display screens of a preset number of applications can be displayed at the front end, and the other applications that are not displayed at the front end run in the background. This causes the amount of occupied system memory to increase. When the occupied amount of the system memory rises to a preset threshold value, or the system currently meets the triggering conditions of another system resource recovery mechanism, the system resource recovery mechanism is triggered. When the system resource recovery mechanism is triggered, the use information and/or the operation parameters of the system corresponding to the system cache tasks are obtained. The use information may include the front-end display times and/or the front-end display duration corresponding to the cache task. It is understood that the front-end display times and the front-end display duration may be counted by the system when the front-end display screen of each application is displayed. For example, when a user opens a video application, the terminal displays the front-end display screen of that application at the front end. When the system starts to display this front-end display screen, the number of front-end display times corresponding to the application is increased by one, and/or a timer can be started; the timer stops when the front-end display screen of the application is no longer displayed at the front end, so that the front-end display duration corresponding to the application is recorded by the timer. It should be noted that the timer value may be cleared after shutdown, so that the front-end display duration is the front-end display duration of the cache task in the current use session; this reduces the data storage burden of the system. Alternatively, the timer value may not be cleared, so that it records the accumulated front-end display duration of a cache task over multiple use sessions, which improves the accuracy with which the terminal judges the user's usage habits. In this embodiment, whether the timer is cleared at shutdown is not limited, and the device manufacturer or the user can customize this setting as required. In addition, the operation parameters of the system may be the current working mode, the usage scenario and/or the current time point of the system.
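As an illustration only, the following minimal Java sketch counts front-end display times and accumulates front-end display duration per cache task. It is not part of the patent disclosure; the class and method names (UsageTracker, onEnterForeground, onLeaveForeground) are assumptions for illustration.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch: per-task usage information (front-end display times and duration).
public class UsageTracker {
    public static class Usage {
        public int displayTimes;        // N: front-end display times
        public long displayDurationMs;  // Q: accumulated front-end display duration
        private long shownSinceMs = -1; // start of the current front-end display, -1 if hidden
    }

    private final Map<String, Usage> usageByTask = new HashMap<>();

    // Called when a task's screen starts to be displayed at the front end.
    public void onEnterForeground(String taskId, long nowMs) {
        Usage u = usageByTask.computeIfAbsent(taskId, k -> new Usage());
        u.displayTimes++;       // front-end display times increased by one
        u.shownSinceMs = nowMs; // start the "timer" for this display
    }

    // Called when the task's screen stops being displayed at the front end.
    public void onLeaveForeground(String taskId, long nowMs) {
        Usage u = usageByTask.get(taskId);
        if (u != null && u.shownSinceMs >= 0) {
            u.displayDurationMs += nowMs - u.shownSinceMs; // accumulate the display duration
            u.shownSinceMs = -1;
        }
    }

    public Usage getUsage(String taskId) {
        return usageByTask.getOrDefault(taskId, new Usage());
    }
}

Whether the accumulated duration is cleared at shutdown, as discussed above, would simply determine whether usageByTask is persisted or discarded between sessions.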
When the residual memory is acquired, a system memory total value and a memory calculation factor can be acquired first, and the residual memory is determined according to the system memory total value and the memory calculation factor.
Specifically, the hardware information of the terminal may be read directly and the total memory value determined from it; or the model information or another identification of the terminal may be read first and a pre-stored total memory value associated with that model information or identification obtained; or the model information or identification may be sent to a server, so that when the server receives it, the server feeds the total memory value back to the terminal. Further, the pre-stored memory calculation factor may be obtained, and the remaining memory K may then be calculated from the memory calculation factor j and the total memory value Z, where the calculation formula is as follows:
K=j·Z
Since the available memory can be calculated from the memory calculation factor and the total memory value, any terminal on which the cache task management method described in this embodiment is deployed can determine the remaining memory in this manner, which improves the applicability of the cache task management method described in this embodiment to different terminals.
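For illustration, the formula K = j·Z can be computed as in the short sketch below. This is a minimal sketch; the example values of Z and j are placeholders, not values specified by the patent.

// Minimal sketch of K = j * Z, assuming Z and j have already been obtained
// (e.g. from hardware information, a pre-stored table, or a server).
public final class RemainingMemory {
    private RemainingMemory() {}

    // totalMemoryBytes is the total memory value Z, factor is the memory calculation factor j.
    public static long compute(long totalMemoryBytes, double factor) {
        return (long) (factor * totalMemoryBytes); // K = j * Z
    }

    public static void main(String[] args) {
        long z = 2L * 1024 * 1024 * 1024; // example: 2 GB total memory (illustrative)
        double j = 0.25;                  // example memory calculation factor (illustrative)
        System.out.println("Remaining memory K = " + compute(z, j) + " bytes");
    }
}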
After the use information and/or the operation parameters of the system are acquired, the priority corresponding to each cache task in the current system is determined according to the use information and/or the operation parameters of the system.
In example 1, when the front-end display duration is obtained, the priority corresponding to each cache task may be determined according to the front-end display duration; for example, the longer the front-end display duration, the higher the priority.
In example 2, when the front-end display times are obtained, the priority of each cache task may be determined according to the front-end display times, where the greater the front-end display times, the higher the corresponding priority.
In example 3, when the front-end display times and the front-end display duration are obtained, the priority corresponding to the cache task may be determined according to the front-end display times corresponding to the cache task: the greater the front-end display times, the higher the corresponding priority. When the front-end display times are the same, the priorities of the cache tasks with the same front-end display times are adjusted according to the front-end display duration; among these cache tasks, the longer the front-end display duration, the higher the priority. It can be understood that, alternatively, the priority of the cache task may first be determined according to the front-end display duration, and then, when cache tasks with the same front-end display duration exist, their priorities are adjusted according to the front-end display times.
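One possible ordering corresponding to example 3 is sketched below: cache tasks are sorted first by front-end display times and ties are broken by front-end display duration. The CacheTask fields are assumed names for illustration only, not structures defined by the patent.

import java.util.Comparator;
import java.util.List;

// Sketch of example 3: higher display count first; ties broken by longer display duration.
public class PriorityByUsage {
    public static class CacheTask {
        public String name;
        public int displayTimes;       // front-end display times
        public long displayDurationMs; // front-end display duration
        public CacheTask(String name, int times, long durationMs) {
            this.name = name; this.displayTimes = times; this.displayDurationMs = durationMs;
        }
    }

    // Sort so that the highest-priority cache task comes first.
    public static void sortByPriority(List<CacheTask> tasks) {
        tasks.sort(Comparator
                .comparingInt((CacheTask t) -> t.displayTimes).reversed()
                .thenComparing(Comparator.comparingLong((CacheTask t) -> t.displayDurationMs).reversed()));
    }
}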
In example 4, when the front-end display times and the front-end display duration are obtained, the weights corresponding to the front-end display times and the front-end display duration may be obtained, and the priority corresponding to the cache task is determined according to the weights, the front-end display times and the front-end display duration. When determining the priority in this way, the score M corresponding to each cache task may be calculated first, where M may be calculated according to the following formula:
M=p1·N+p2·Q
Wherein p1 and p2 are respectively the weights corresponding to the front-end display times N and the front-end display duration Q. The priority of each cache task is then determined according to the scores; for example, a higher score may be assigned a higher priority.
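The weighted score M = p1·N + p2·Q from example 4 could be computed as in the sketch below. The weight values shown are illustrative assumptions; the patent does not specify concrete weights.

// Sketch of example 4: score M = p1 * N + p2 * Q, higher score means higher priority.
public final class WeightedScore {
    private WeightedScore() {}

    // n: front-end display times N, qMs: front-end display duration Q (milliseconds),
    // p1/p2: weights for N and Q respectively.
    public static double score(int n, long qMs, double p1, double p2) {
        return p1 * n + p2 * qMs;
    }

    public static void main(String[] args) {
        // Illustrative weights only; real weights would be configured on the terminal.
        double p1 = 10.0, p2 = 0.001;
        System.out.println("M = " + score(5, 600_000L, p1, p2)); // 5 displays, 10 minutes shown
    }
}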
In example 5, when the operation parameters of the system are acquired, the working mode, the usage scenario and/or the current time point may be acquired; the priority of the cache tasks associated with the working mode, the usage scenario and/or the current time point may be set as the target priority (hereinafter described as the first priority), and the priority of the cache tasks not associated with the working mode, the usage scenario and/or the current time point may be set as another priority (hereinafter described as the second priority).
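Example 5 can be sketched as a simple two-level assignment. The AssociationRule predicate is a hypothetical placeholder for however association with the current working mode, usage scenario or time point is actually decided; it is not an API defined by the patent.

import java.util.Objects;

// Sketch of example 5: associated tasks get the target (first) priority, others the second.
public class PriorityByOperationParams {
    public static final int FIRST_PRIORITY = 1;  // target priority
    public static final int SECOND_PRIORITY = 2;

    public interface AssociationRule {
        // Hypothetical predicate: is this cache task associated with the current
        // working mode, usage scenario and/or current time point?
        boolean isAssociated(String taskName);
    }

    public static int priorityOf(String taskName, AssociationRule rule) {
        Objects.requireNonNull(taskName);
        return rule.isAssociated(taskName) ? FIRST_PRIORITY : SECOND_PRIORITY;
    }
}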
In example 6, after the operation parameters and the use information are obtained, the priority of the cache task may be determined according to the use information; then, when cache tasks with the same priority exist, the priority of those cache tasks is adjusted according to the operation parameters, and the adjusted priority is used as the final priority of the cache task. Alternatively, the priority of the cache task may be determined according to the operation parameters first, and then, when cache tasks with the same priority exist, their final priority may be adjusted according to the use information.
After the priority of each cache task has been determined, the cache tasks can be cleaned according to the priority and the residual memory.
Specifically, after determining the priority of each cache task, at least one or more cache tasks may be selected as target cache tasks in descending order of priority. When the target cache tasks are selected, the memory occupation value of each cache task may also be obtained, so that the sum of the memory occupation values of the selected target cache tasks is less than or equal to the residual memory.
After the target cache task is determined, other cache tasks except the target cache task in the cache tasks can be cleaned. Thus, the memory is released, and the current memory occupation amount of the system is reduced.
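Putting the selection rule together, the sketch below picks target cache tasks in descending priority order until adding another task would exceed the residual memory, then cleans all remaining tasks. This is a minimal sketch under stated assumptions; the clean callback stands in for whatever the system actually does to release a cached process.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.Consumer;

// Sketch: keep the highest-priority cache tasks whose memory footprints fit into the
// residual memory; clean every other cache task.
public class CacheCleaner {
    public static class CacheTask {
        public String name;
        public int priority;     // larger value means higher priority (assumption)
        public long memoryBytes; // memory occupation value of this cache task
        public CacheTask(String name, int priority, long memoryBytes) {
            this.name = name; this.priority = priority; this.memoryBytes = memoryBytes;
        }
    }

    public static List<CacheTask> clean(List<CacheTask> tasks, long residualMemory,
                                        Consumer<CacheTask> cleaner) {
        List<CacheTask> sorted = new ArrayList<>(tasks);
        sorted.sort(Comparator.comparingInt((CacheTask t) -> t.priority).reversed());

        List<CacheTask> targets = new ArrayList<>();
        long used = 0;
        for (CacheTask t : sorted) {
            if (used + t.memoryBytes > residualMemory) {
                break; // stop once the next task no longer fits into the residual memory
            }
            targets.add(t); // sum of target footprints stays <= residual memory
            used += t.memoryBytes;
        }
        for (CacheTask t : sorted) {
            if (!targets.contains(t)) {
                cleaner.accept(t); // clean (release) the non-target cache tasks
            }
        }
        return targets;
    }
}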
In the technical scheme disclosed in this embodiment, the use information and/or the operation parameters of the system corresponding to the current cache task are obtained, the residual memory of the system is obtained, the priority corresponding to the cache task is determined according to the use information and/or the operation parameters of the system, and the cache task is then cleaned according to the priority and the residual memory. Because the priority corresponding to the cache task can be determined according to the use information and/or the operation parameters of the system, and the cache task is cleaned according to the residual memory and the priority, the rationality of cache task cleaning is improved, the phenomenon that a process the user has only temporarily placed in the background is cleaned is avoided, and the application corresponding to the cache task does not need to be restarted when the user switches back to a cache task temporarily placed in the background, so that the effect of improving the application switching speed is achieved.
Referring to fig. 3, based on the above embodiment, in another embodiment, after the step S30, the method further includes:
Step S40, when the starting of the target application is detected, a triggering mode of the target application is obtained, wherein after a cache task is cleaned, the application corresponding to the cleaned cache task is marked as the target application;
And step S50, closing the target application when the triggering mode of the target application is background triggering.
In this embodiment, after a cache task is cleaned, the application corresponding to the cache task may be marked, and the marked application is taken as a target application. When the system detects that an application is currently being started, it may first judge whether the currently started application is a target application. When the currently started application is not a target application, it is started directly. When the currently started application is a target application, the triggering mode of the target application is acquired. When the triggering mode of the target application is front-end triggering, the target application is started directly. When the triggering mode of the target application is background triggering, the starting of the target application is blocked, that is, the target application currently triggered by the background is closed.
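The post-cleaning behaviour described above can be illustrated with the hypothetical sketch below: cleaned applications are marked as target applications, and a later start request is allowed only when it is triggered from the front end. The TriggerMode values and the marker set are illustrative names, not APIs from the patent or from Android.

import java.util.HashSet;
import java.util.Set;

// Sketch: block background-triggered starts of applications whose cache task was cleaned.
public class TargetAppGate {
    public enum TriggerMode { FOREGROUND, BACKGROUND }

    private final Set<String> targetApps = new HashSet<>();

    // Called right after a cache task is cleaned: mark its application as a target application.
    public void markCleaned(String packageName) {
        targetApps.add(packageName);
    }

    // Returns true if the start should proceed, false if the target application must be closed.
    public boolean shouldAllowStart(String packageName, TriggerMode trigger) {
        if (!targetApps.contains(packageName)) {
            return true;                          // not a target application: start directly
        }
        return trigger == TriggerMode.FOREGROUND; // background-triggered starts are blocked
    }
}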
It should be noted that the cache task management method may also be applied to the scenario of system startup (e.g., startup of an android system): when the system is started, the background cache process configuration is initialized, and the flow then proceeds to the cache task management process beginning at step S10.
In the technical scheme disclosed in this embodiment, when it is detected that a target application is started, the triggering mode of the target application is obtained, wherein, after a cache task is cleaned, the application corresponding to the cleaned cache task is marked as the target application, and the target application is closed when its triggering mode is background triggering. In this way, memory consumption and CPU resource occupation are reduced, and frequent triggering of the memory recovery mechanism is avoided.
In addition, the embodiment of the invention also provides a terminal device, which comprises a memory, a processor and a cache task management program stored on the memory and capable of running on the processor, wherein the cache task management program realizes the steps of the cache task management method in each embodiment when being executed by the processor.
Optionally, the terminal device is a television.
In addition, the embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a cache task management program, and the cache task management program realizes the steps of the cache task management method in each embodiment when being executed by a processor.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a television or the like) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (8)

1. A cache task management method, characterized by comprising the following steps:
Acquiring use information and/or system operation parameters corresponding to a current cache task, and acquiring a total memory value and a memory calculation factor of the system, wherein the system operation parameters comprise a working mode, a use scene and/or a current time point;
determining a residual memory according to the total system memory value and the memory calculation factor;
determining the priority corresponding to the cache task according to the use information;
When cache tasks with the same priority are present, setting the priority of the cache task associated with the working mode, the use scene and/or the current time point in the cache tasks as a target priority;
Selecting at least one cache task with the priority as a target cache task, and cleaning other cache tasks except the target cache task in the cache tasks, wherein the sum of memory occupation values of the target cache tasks is smaller than the residual memory;
When the starting of a target application is detected, a triggering mode of the target application is obtained, wherein after a cache task is cleaned, the application corresponding to the cleaned cache task is marked as the target application;
When the triggering mode of the target application is front-end triggering, starting the target application;
and closing the target application when the triggering mode of the target application is background triggering.
2. The method for managing cache tasks as claimed in claim 1, wherein said step of selecting at least one of said cache tasks having said priority as a target cache task and cleaning other cache tasks than said target cache task among said cache tasks comprises:
Acquiring a memory occupation value corresponding to a cache task;
And selecting at least one cache task as a target cache task according to the priority, and cleaning other cache tasks except the target cache task in the cache tasks, wherein the sum of the memory occupation values of the target cache tasks is smaller than or equal to the residual memory.
3. The method for managing a cache task according to claim 1, wherein the usage information includes a front-end presentation duration and/or a trigger number corresponding to the cache task.
4. The method for managing a cache task according to claim 1, wherein the step of obtaining usage information and/or operation parameters of the system corresponding to the current cache task comprises:
acquiring system positioning information and/or system parameters;
and determining the use scene according to the positioning information and/or the system parameters.
5. The method for managing a cache task according to claim 3, wherein said step of determining a priority corresponding to said cache task based on said usage information comprises:
acquiring the weight corresponding to the front-end display duration and the triggering times;
and determining the priority corresponding to the cache task according to the weight, the front-end display duration and the triggering times.
6. A terminal device, characterized in that the terminal device comprises: memory, a processor and a cache task management program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the cache task management method according to any of claims 1 to 5.
7. The terminal device of claim 6, wherein the terminal device is a television.
8. A computer readable storage medium, wherein a cache task management program is stored on the computer readable storage medium, and the cache task management program when executed by a processor implements the steps of the cache task management method according to any one of claims 1 to 5.

Priority Applications (1)

Application number: CN202010453097.7A (granted as CN111666153B)
Priority date / filing date: 2020-05-25
Title: Cache task management method, terminal device and storage medium

Publications (2)

Publication number / Publication date
CN111666153A / 2020-09-15
CN111666153B / 2024-07-05

Family

ID: 72384598

Family Applications (1)

CN202010453097.7A (priority and filing date 2020-05-25): Cache task management method, terminal device and storage medium; status Active, granted as CN111666153B.

Country Status (1)

CN: CN111666153B

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number / Priority date / Publication date / Assignee / Title
CN112948073A * / 2021-01-29 / 2021-06-11 / 京东方科技集团股份有限公司 / Optimization method and device for running memory and storage medium
CN112988078B * / 2021-04-27 / 2023-07-14 / 山东英信计算机技术有限公司 / Method and device for managing cache memory occupation in distributed storage applications

Patent Citations (2)

Publication number / Priority date / Publication date / Assignee / Title
CN110221921A * / 2019-06-13 / 2019-09-10 / 深圳Tcl新技术有限公司 / EMS memory management process, terminal and computer readable storage medium
CN111124668A * / 2019-11-28 / 2020-05-08 / 宇龙计算机通信科技(深圳)有限公司 / Memory release method and device, storage medium and terminal

Family Cites Families (9)

Publication number / Priority date / Publication date / Assignee / Title
CN103632096B * / 2013-11-29 / 2018-01-16 / 北京奇虎科技有限公司 / A kind of method and apparatus that safety detection is carried out to equipment
CN103714016B * / 2014-01-14 / 2017-10-27 / 北京猎豹移动科技有限公司 / Method for cleaning, device and the client of caching
CN107133094B * / 2017-06-05 / 2021-11-02 / 努比亚技术有限公司 / Application management method, mobile terminal and computer readable storage medium
CN107608785A * / 2017-08-15 / 2018-01-19 / 深圳天珑无线科技有限公司 / Process management method, mobile terminal and readable storage medium
CN107665147B * / 2017-09-26 / 2019-12-06 / 厦门美图移动科技有限公司 / System cleaning method of mobile equipment and mobile equipment
CN110324698A * / 2018-03-28 / 2019-10-11 / 努比亚技术有限公司 / A kind of video recording method, terminal and computer readable storage medium
CN108804231B * / 2018-06-13 / 2020-10-30 / 奇酷互联网络科技(深圳)有限公司 / Memory optimization method and device, readable storage medium and mobile terminal
CN110138959B * / 2019-04-10 / 2022-02-15 / 荣耀终端有限公司 / Method for displaying prompt of human-computer interaction instruction and electronic equipment
CN110704187B * / 2019-09-25 / 2024-10-01 / 深圳传音控股股份有限公司 / System resource adjusting method and device and readable storage medium

Also Published As

CN111666153A, published 2020-09-15


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant