
CN118897654B - File processing method, device, equipment, medium and product - Google Patents


Info

Publication number
CN118897654B
Authority
CN
China
Prior art keywords
file
data
target
file processing
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411355308.8A
Other languages
Chinese (zh)
Other versions
CN118897654A (en)
Inventor
徐飞
苏志远
甄鹏
魏志伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Metabrain Intelligent Technology Co Ltd
Original Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Metabrain Intelligent Technology Co Ltd filed Critical Suzhou Metabrain Intelligent Technology Co Ltd
Priority to CN202411355308.8A
Publication of CN118897654A
Application granted
Publication of CN118897654B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604: Improving or facilitating administration, e.g. storage management
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10: File systems; File servers
    • G06F 16/16: File or folder operations, e.g. details of user interfaces specifically adapted to file systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10: File systems; File servers
    • G06F 16/17: Details of further file system functions
    • G06F 16/172: Caching, prefetching or hoarding of files
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061: Improving I/O performance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/062: Securing storage systems
    • G06F 3/0622: Securing storage systems in relation to access
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656: Data buffering arrangements
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract


The embodiments of the present application provide a file processing method, apparatus, device, medium and product, including: in the case of detecting a file processing request initiated by a user process, triggering a target eBPF function mounted on a kernel function based on the file processing request, and determining a target decision mode based on the type of the target eBPF function, and obtaining the current file processing data of the virtual file system; determining whether to enable a page cache processing mechanism for the pending file data corresponding to the file processing request based on the target decision mode and the file processing data, wherein the page cache processing mechanism processes the pending file data from memory. The present application can use the eBPF function to perform dynamic judgment of cache reading before reading the file cache in kernel mode, supports defining the cache in a customized configuration according to the actual scenario, and avoids a large number of low-heat files occupying system resources.

Description

File processing method, device, equipment, medium and product
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method, an apparatus, a device, a medium, and a product for processing a file.
Background
With the rapid development of computer technology, data storage and processing capabilities have improved greatly. However, the performance gap between memory (RAM) and disk (HDD/SSD) remains large, so data access speed becomes a performance bottleneck in many application scenarios, such as big data processing, cloud computing, and real-time systems. To alleviate this problem, modern operating systems commonly employ a page cache mechanism that uses part of physical memory to store frequently accessed data, reducing disk I/O operations and thereby increasing data access speed.
Although the page cache mechanism greatly improves system performance, its conventional implementations still have limitations. For example, the commonly used least recently used (LRU) algorithm evicts pages according to how recently they were used, which works well in many cases, but this simple strategy may not achieve optimal performance under diverse, dynamically changing workloads. In addition, the LRU algorithm does not take into account factors such as a page's actual access frequency and data locality, which may lead to cache pollution and premature eviction of important data.
Currently, the usual way to address these problems is to improve the cache replacement algorithm, for example with the LFU or ARC algorithms. However, these algorithms typically require modifying the operating system kernel, which not only increases implementation complexity but may also introduce stability and security issues. In addition, such algorithms tend to be optimized for specific types of application scenarios, lack sufficient generality, and cannot support user-mode self-configuration.
Disclosure of Invention
The embodiment of the application aims to provide a file processing method, a device, equipment, a medium and a product, and the specific technical scheme is as follows:
In a first aspect of the present application, there is provided a file processing method, including:
Triggering a target eBPF function mounted on a kernel function based on the file processing request under the condition that the file processing request initiated by a user process is detected to be received, determining a target decision mode based on the type of the target eBPF function, and acquiring current file processing data of a virtual file system;
And determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data, wherein the page cache processing mechanism is used for processing the file data to be processed from a memory.
Optionally, the target decision mode includes a first decision mode and a second decision mode, the file processing request includes a target decision configuration identifier, the triggering the target eBPF function mounted on the kernel function based on the file processing request, and determining the target decision mode based on the type of the target eBPF function includes:
Triggering a target eBPF function mounted on a kernel function based on the file processing request, wherein the target eBPF function comprises preset user state configuration information corresponding to a first decision mode or historical system performance data corresponding to a second decision mode;
determining a target eBPF function based on the target decision configuration identifier carried by the file processing request;
A target decision mode is determined based on the type of the target eBPF function.
Optionally, the first decision mode includes a preset policy mode, and determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data includes:
if the target decision mode is the preset strategy mode, acquiring preset user mode configuration information based on the preset strategy mode, wherein the preset user mode configuration information comprises target file processing data for page cache processing, which correspond to each first file processing data respectively;
And determining whether to start a page cache processing mechanism for file data to be processed corresponding to the file processing request based on the current first file processing data of the virtual file system and the target file processing data.
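As a rough sketch of how such a preset-policy check could behave (the configuration keys, values, and matching rule below are hypothetical illustrations, not taken from the application), the decision amounts to matching the request's current file processing data against the user-configured targets:

```python
# Hypothetical preset user-mode configuration: for each kind of first
# file processing data, the target values that should get page caching.
preset_config = {
    "file_extension": {".db", ".log"},   # extensions worth caching
    "file_type": {"regular"},            # only regular files
}

def enable_page_cache(current):
    """Return True when every configured attribute of the current
    request matches a target value in the preset configuration."""
    return all(current.get(key) in targets
               for key, targets in preset_config.items())

request = {"file_extension": ".db", "file_type": "regular"}
print(enable_page_cache(request))  # True -> serve from the page cache
```

In this sketch a request falls back to the ordinary disk path as soon as any attribute misses its configured target set.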
Optionally, the second decision mode includes a self-learning mode, and determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data includes:
If the target decision mode is the self-learning mode, determining second file processing data corresponding to the self-learning mode;
And determining whether to start a page cache processing mechanism for file data to be processed corresponding to the file processing request or not based on the current second file processing data of the virtual file system, preset scoring functions corresponding to the second file processing data and weights corresponding to the second file processing data.
Optionally, the determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the current second file processing data of the virtual file system, a preset scoring function corresponding to each second file processing data, and a weight corresponding to each second file processing data includes:
generating a target score corresponding to each second file processing data based on the current second file processing data of the virtual file system and a preset scoring function corresponding to each second file processing data;
Summing the target scores corresponding to the second file processing data and the weights corresponding to the second file processing data to generate target total scores;
And determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target total score and a preset threshold value.
Optionally, the determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target total score and a preset threshold includes:
And under the condition that the target total score is detected to be larger than a preset threshold value, carrying out page caching processing on the file data to be processed corresponding to the file processing request.
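The weighted scoring decision described in the steps above can be sketched as follows; the feature names, scores, weights, and threshold are invented for illustration and are not values from the application:

```python
# Hypothetical per-feature target scores and weights for the
# self-learning mode (each score would come from its own preset
# scoring function over the second file processing data).
scores  = {"file_path": 0.8, "file_type": 0.5, "update_time": 0.9}
weights = {"file_path": 0.5, "file_type": 0.2, "update_time": 0.3}
THRESHOLD = 0.6  # preset threshold

def target_total_score(scores, weights):
    # Weighted sum of the per-feature target scores.
    return sum(scores[k] * weights[k] for k in scores)

total = target_total_score(scores, weights)
# Page caching is enabled only when the total exceeds the threshold.
print(round(total, 2), "cache on" if total > THRESHOLD else "cache off")
```

With these example numbers the total is 0.77, which exceeds the threshold, so the pending file data would be served through the page cache.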
Optionally, the second file processing data includes a file path, a file type, a file extension, a file descriptor, a file size, a file update time, and file source information.
Optionally, when the second file processing data is a file path, the preset scoring function is a file path scoring function, and the file path scoring function is set based on the file path depth and the matching degree of the target directory.
Optionally, when the second file processing data is a file type, the preset scoring function is a file type scoring function, and the file type scoring function is set based on preset priorities and file access frequencies corresponding to the file types.
Optionally, when the second file processing data is a file extension, the preset scoring function is a file extension scoring function, and the file extension scoring function is set based on popularity of the file extension and relevance of the file extension to the current context.
Optionally, when the second file processing data is file source information, the preset scoring function is a file source information scoring function, and the file source information scoring function is set based on a user role score and a user authority score.
Optionally, when the second file processing data is a file size, the preset scoring function is a file size function, and the file size function is set based on a file size and a preset file size maximum value.
Optionally, when the second file processing data is a file update time, the preset scoring function is a file update time scoring function, and the file update time scoring function is set based on a current time, a file download time and a preset decay rate.
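One plausible shape for such a decay-based score is exponential decay. The application only states that the function depends on the current time, the file download time, and a preset decay rate, so the exponential form and the default rate below are assumptions:

```python
import math

def update_time_score(now, file_time, decay_rate=0.1):
    """Score decays toward 0 as the file's last update recedes into
    the past; a just-updated file scores 1.0. decay_rate is the
    hypothetical preset decay rate (per second of file age)."""
    age = max(0.0, now - file_time)      # seconds since the update
    return math.exp(-decay_rate * age)   # assumed exponential decay

fresh = update_time_score(100.0, 100.0)   # file updated right now
stale = update_time_score(100.0, 50.0)    # file updated 50 s ago
print(fresh, stale)
```

A larger decay rate makes the cache decision favor only very recently updated files; a smaller one keeps older files attractive for caching longer.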
Optionally, after the step of determining, based on the target decision mode and the file processing data, whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request, the method includes:
When it is detected that the page cache processing mechanism is enabled for the file data to be processed corresponding to the file processing request, the file data to be processed is read from the memory.
Optionally, after the step of determining, based on the target decision mode and the file processing data, whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request, the method includes:
When it is detected that the page cache processing mechanism is disabled for the file data to be processed corresponding to the file processing request, the file data to be processed is read from a target disk.
Optionally, the reading the file data to be processed in the target disk based on the file data to be processed corresponding to the file processing request includes:
and reading the file data to be processed from the target disk in a preset reading mode.
In a second aspect of the present application, there is also provided a file processing apparatus for implementing the file processing method of the first aspect, the apparatus including:
The determining module is used for triggering a target eBPF function mounted on the kernel function based on the file processing request under the condition that the file processing request initiated by the user process is detected to be received, determining a target decision mode based on the type of the target eBPF function, and acquiring current file processing data of the virtual file system;
and the processing module is used for determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data, wherein the page cache processing mechanism is used for processing the file data to be processed from a memory.
In a third aspect of the present application, there is also provided a communication device comprising a transceiver, a memory, a processor and a program stored on the memory and executable on the processor;
The processor is configured to read a program in the memory to implement the file processing method according to any one of the first aspect.
In a fourth aspect of the present application, there is also provided a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to implement the file processing method according to any one of the first aspect.
In a fifth aspect of the present application, there is also provided a computer program product comprising a computer program/instructions which, when executed by a processor, implements the file processing method according to any one of the first aspect.
According to the file processing method provided by the embodiments of the present application, when a file processing request initiated by a user process is detected, a target eBPF function mounted on a kernel function is triggered based on the file processing request, a target decision mode is determined based on the type of the target eBPF function, the current file processing data of the virtual file system is acquired, and whether to enable a page cache processing mechanism for the file data to be processed corresponding to the file processing request is determined based on the target decision mode and the file processing data, where the page cache processing mechanism processes the file data to be processed from memory. In the embodiments of the present application, the allocation strategy of the page cache mechanism is dynamically adjusted according to the real-time state of the virtual file system and the application requirements: an eBPF program loaded into the kernel can monitor and acquire the current file processing data of the virtual file system in real time, determine a target decision mode based on this information, decide how to handle a file processing request initiated by a user process under different target decision modes, and dynamically compute and adjust parameters such as the page cache threshold and the replacement policy to achieve optimal allocation of cache resources. The method can use eBPF functions to make a dynamic cache-read decision before the kernel reads the file cache, supports defining the cache through configuration customized to the actual scenario, and avoids having a large number of low-heat files occupy system resources.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flowchart illustrating steps of a method for processing a file according to an embodiment of the present application;
FIG. 2 is a block diagram of a document processing apparatus according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a communication device according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of a dynamic page buffer mechanism according to an embodiment of the present application;
fig. 5 is a schematic diagram of a dynamic page buffer mechanism according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the following detailed description of the embodiments of the present application will be given with reference to the accompanying drawings. However, those of ordinary skill in the art will understand that in various embodiments of the present application, numerous technical details have been set forth in order to provide a better understanding of the present application. The claimed application may be practiced without these specific details and with various changes and modifications based on the following embodiments. The following embodiments are divided for convenience of description, and should not be construed as limiting the specific implementation of the present application, and the embodiments can be mutually combined and referred to without contradiction.
Referring to fig. 1, a flowchart illustrating steps of a file processing method according to an embodiment of the present application may include:
step 101, under the condition that a file processing request initiated by a user process is detected to be received, triggering a target eBPF function mounted on a kernel function based on the file processing request, determining a target decision mode based on the type of the target eBPF function, and acquiring current file processing data of a virtual file system.
It should be noted that, in the embodiment of the present application, reference may be made to fig. 5, a schematic diagram of the dynamic page cache mechanism. It can be seen that, in user mode, a user process may initiate a file processing request, where the file processing request may include a file read request, a file write request, and so on.
To help those skilled in the art understand the technical solution of the present application, file reading is described below as an example.
First, it should be noted that eBPF (extended Berkeley Packet Filter) is a powerful programming framework for securely running sandboxed programs in the Linux kernel without modifying kernel code. eBPF was originally developed for Linux, which remains the most mature and widely used platform for this technology. It is a revolutionary technique that can run sandboxed programs in the Linux kernel without modifying kernel source code or loading kernel modules. eBPF is a register-based virtual machine with a custom 64-bit RISC instruction set; it can run just-in-time compiled BPF programs inside the Linux kernel and can access a subset of kernel functions and memory.
Therefore, in the embodiments of the present application, monitoring of overall system performance information can be implemented through eBPF program code; for example, the usage of the page cache, the system load, and application access patterns can be monitored in real time by eBPF programs loaded into the kernel.
In addition, eBPF programs can essentially be thought of as eBPF functions that run in the Linux kernel and perform specific tasks or operations; that is, an eBPF program is essentially a set of functions executing in the kernel that interact with the kernel through specific hooks. The safety and performance of these programs are guaranteed by the kernel's verifier and JIT compiler.
Thus, in the embodiment of the application, in the case of detecting that a file processing request initiated by a user process is received, the target eBPF function mounted on the kernel function is triggered based on the file processing request.
Specifically, the user process initiates a file read request, for example through a system call such as read() or pread(), which triggers the file read operation of the operating system kernel. When the kernel receives the file read request, the eBPF function mounted on the kernel function is triggered; further, a target decision mode is determined based on the type of the target eBPF function, and the current file processing data of the virtual file system is acquired.
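For reference, the user-space side of this flow is an ordinary read system call; for example, in Python a pread-style read looks like the following (purely illustrative; nothing here touches eBPF, it merely shows the call that would enter the kernel's VFS read path):

```python
import os
import tempfile

# Create a scratch file and read it back with pread(fd, count, offset).
# On Linux, this read enters the kernel's VFS read path, which is
# where the application's eBPF hook would fire.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
data = os.pread(fd, 5, 0)  # read 5 bytes starting at offset 0
os.close(fd)
os.remove(path)
print(data)  # b'hello'
```

The same path is taken by read() itself; pread() simply adds an explicit offset instead of using the file position.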
Further, the target decision mode includes a first decision mode and a second decision mode, the file processing request includes a target decision configuration identifier, and the triggering of the target eBPF function mounted on the kernel function based on the file processing request and determining the target decision mode based on the type of the target eBPF function include:
Triggering a target eBPF function mounted on a kernel function based on the file processing request, wherein the target eBPF function comprises preset user state configuration information corresponding to a first decision mode or historical system performance data corresponding to a second decision mode;
determining a target eBPF function based on the target decision configuration identifier carried by the file processing request;
A target decision mode is determined based on the type of the target eBPF function.
It should be noted that, in the embodiment of the present application, referring to fig. 5, it can be seen that fig. 5 includes an eBPF management module, which manages the eBPF attachment points according to the different modes; different eBPF functions are attached in different modes.
The different modes comprise the first decision mode and the second decision mode, where the first decision mode is a preset strategy mode and the second decision mode is a self-learning mode. In the preset strategy mode, an eBPF function is mounted on the file operation function; in the self-learning mode, an eBPF function is mounted on the kernel functions that allocate and release memory pages.
Further, the first decision mode comprises a preset strategy mode, and determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data includes: if the target decision mode is the preset strategy mode, acquiring preset user-mode configuration information based on the preset strategy mode, where the preset user-mode configuration information includes the target file processing data for page cache processing corresponding to each item of first file processing data; and determining whether to start the page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the current first file processing data of the virtual file system and the target file processing data.
Further, the second decision mode includes a self-learning mode, and determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data includes: if the target decision mode is the self-learning mode, determining the second file processing data corresponding to the self-learning mode; and determining whether to open the page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the current second file processing data of the virtual file system, the preset scoring function corresponding to each second file processing data, and the weight corresponding to each second file processing data.
It should be noted that, in the embodiment of the present application, eBPF maps store the policies read by the policy module or the collected system performance data, and different eBPF functions may be included for different target decision modes.
The policy read by the policy module is used in the preset strategy mode; the policy content supports deciding whether to read the page cache based on the system file path, file type, file extension, specific file descriptors, a file size threshold, the file update time, the file owner, and so on.
The collected system performance data, i.e., the historical system performance data, is used in the self-learning mode and includes the current memory footprint of the system, the recent read frequency of the file currently being read, and the reference count.
Step 102, determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data, where the page cache processing mechanism processes the file data to be processed from memory.
It should be noted that, once the target decision mode has been determined and the file processing data acquired in step 101, whether to open the page cache processing mechanism for the file data to be processed corresponding to the file processing request can be determined under the different target decision modes.
Specifically, based on the above policies and data, the eBPF function decides whether to read through the page cache mechanism.
In the embodiment of the application, the page cache mechanism keeps file data in memory so that repeated reads do not need to access the disk, thereby reducing frequent disk accesses and improving the read/write performance of the system.
Specifically, the disk serves as the long-term storage medium and is responsible for the durability and final consistency of data, while memory serves as a cache and is responsible for improving file read/write performance. The page cache mechanism reduces frequent disk accesses by caching disk data in memory, thereby remarkably improving overall system performance.
In addition, in the embodiment of the present application, whether to open the page cache mechanism for a file processing request is determined dynamically by mounting eBPF functions on the VFS. Specifically, referring to fig. 4, a rule-based dynamic cache decision is mounted on the VFS. The virtual file system (Virtual File System, abbreviated VFS) is an abstraction layer in the operating system kernel that provides a unified interface for different file systems; VFS allows applications to access different types of file systems in a consistent manner without concern for the specific implementation of the underlying file system.
Further, the determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the current second file processing data of the virtual file system, the preset scoring function corresponding to each second file processing data, and the weight corresponding to each second file processing data includes:
generating a target score corresponding to each second file processing data based on the current second file processing data of the virtual file system and a preset scoring function corresponding to each second file processing data;
Multiplying the target score corresponding to each second file processing data by the weight corresponding to that second file processing data and summing the products to generate a target total score;
And determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target total score and a preset threshold value.
Further, before the handling of a file processing request in the above-mentioned second decision mode is elucidated, it should be confirmed that in the second decision mode, i.e., the self-learning mode, the key factors influencing the decision include: the file path (FilePath), i.e., the path of the system file; the file type (FileType), e.g., text, picture, video; the file extension (FileExt), e.g., txt, jpg, mp4; the specific file descriptor (FileDescriptor), i.e., the identifier of a specific file; the file size threshold (FileSizeThreshold), i.e., the threshold of the file size; the file update time (FileUpdateTime), i.e., the time of the last update of the file; and the file owner (FileOwner), i.e., the owner of the file. It can therefore be confirmed that the second file processing data to be acquired includes the file path, file type, file extension, file descriptor, file capacity, file update time, and file source information.
Thus, first, a weight is defined for each of the above-described second file processing data, and these weights can be adjusted according to actual conditions.
Specifically, the weights may include WFP (file path weight), WFT (file type weight), WFE (file extension weight), WFD (specific file descriptor weight), WFS (file size weight), WFU (file update time weight), and WFO (file owner weight).
Further, a scoring function needs to be defined for each factor; for example, the scoring range may be set from 0 to 1. The final total score is obtained by multiplying each factor's score by its corresponding weight and summing the results, and whether to read the file through the page cache mechanism is determined according to whether the total score is greater than a threshold.
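The weighted-sum decision described above can be sketched in Python. The specific weight values and the 0.5 threshold below are illustrative assumptions, not values fixed by this embodiment:

```python
# Illustrative weights for the seven factors (WFP, WFT, WFE, WFD, WFS, WFU, WFO).
# These values are assumptions; the text says weights are adjusted per actual conditions.
WEIGHTS = {
    "file_path": 0.2,    # WFP
    "file_type": 0.2,    # WFT
    "file_ext": 0.1,     # WFE
    "file_desc": 0.1,    # WFD
    "file_size": 0.15,   # WFS
    "update_time": 0.15, # WFU
    "owner": 0.1,        # WFO
}

def total_score(scores: dict) -> float:
    """Sum each factor's 0-to-1 score multiplied by its weight."""
    return sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)

def use_page_cache(scores: dict, threshold: float = 0.5) -> bool:
    """Read through the page cache only when the total score exceeds the threshold."""
    return total_score(scores) > threshold
```

A factor missing from the input simply contributes zero, mirroring the 0-to-1 scoring range described above.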
Further, determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target total score and a preset threshold value includes performing page cache processing for the file data to be processed corresponding to the file processing request when the target total score is detected to be greater than the preset threshold value.
Further, when the second file processing data is a file path, the preset scoring function is a file path scoring function, and the file path scoring function is set based on the file path depth and the matching degree of the target directory.
Further, when the second file processing data is a file type, the preset scoring function is a file type scoring function, and the file type scoring function is set based on preset priorities and file access frequencies corresponding to the file types.
Further, when the second file processing data is a file extension, the preset scoring function is a file extension scoring function, and the file extension scoring function is set based on popularity of the file extension and relevance of the file extension to the current context.
Further, when the second document processing data is document source information, the preset scoring function is a document source information scoring function, and the document source information scoring function is set based on a user role score and a user authority score.
Further, when the second file processing data is a file size, the preset scoring function is a file size function, and the file size function is set based on a file size and a preset file size maximum value.
Further, when the second file processing data is a file update time, the preset scoring function is a file update time scoring function, and the file update time scoring function is set based on the current time, the file download time and a preset decay rate.
In particular, with reference to the above description, it can be seen that the scoring function may include the following:
first, file path scoring function (S_FP)
Considering that a file path may contain multiple subdirectories, the path depth and the degree of matching with a particular directory can be used to design a scoring function:
S_FP = matchScore(FilePath, ImportantDir) / depth(FilePath) (Equation 1)
Wherein depth(FilePath) in Equation 1 represents the depth of the path, and matchScore is a function used to calculate the degree of matching between the file path FilePath and an important directory ImportantDir.
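The path factor can be illustrated with a minimal Python sketch. The hot-directory list, the binary matchScore, and the depth discount below are assumptions for illustration only:

```python
# Illustrative "important directory" list; a real deployment would configure this.
HOT_DIRS = ["/var/lib/app/hot"]

def depth(path: str) -> int:
    """Number of path components, e.g. /a/b/c.txt -> 3."""
    return len([p for p in path.strip("/").split("/") if p])

def match_score(path: str, hot_dirs=HOT_DIRS) -> float:
    """1.0 if the path lies under an important directory, else 0.0 (a simple assumption)."""
    return 1.0 if any(path.startswith(d.rstrip("/") + "/") for d in hot_dirs) else 0.0

def s_fp(path: str) -> float:
    """Match score discounted by path depth, so deeply nested files score lower."""
    return match_score(path) / max(depth(path), 1)
```

A graded matchScore (e.g. counting shared path prefixes) would slot into the same structure.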
Second, file type scoring function (S_FT)
The file types have different priorities, and the scoring function can be designed according to the purposes and the access frequency of the files:
S_FT = freq(FileType_i) × importance(FileType_i) (Equation 2)
In the above Equation 2, freq(FileType_i) represents the access frequency of file type i, and importance(FileType_i) represents the importance of file type i; specifically, a priority corresponding to each preset file type may be set.
Third, file extension scoring function (S_FE)
Scoring of file extensions may be based on popularity and relevance of the extensions:
S_FE = popularity(FileExt) × relevance(FileExt, CurrentContext) (Equation 3)
Wherein popularity(FileExt) denotes the popularity of the file extension, and relevance(FileExt, CurrentContext) denotes the relevance of the file extension FileExt to the current context CurrentContext.
Fourth, a specific file descriptor scoring function (S_FD)
A score for a particular file descriptor may be set.
Fifth, file capacity scoring function (S_FS)
The file capacity is the size of the file. A nonlinear scoring function can be set to distinguish the influence of files of different sizes on the page cache; specifically, refer to Equation 4 below, wherein FileSize is the file capacity and MaxFileSize is the maximum value of the file capacity.
S_FS = log(1 + FileSize) / log(1 + MaxFileSize) (Equation 4)
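One possible nonlinear size score is a logarithmic normalization into [0, 1]. The choice of log1p, the 1 GiB default maximum, and the direction (larger files scoring higher) are all assumptions for illustration:

```python
import math

def s_fs(file_size: int, max_size: int = 1 << 30) -> float:
    """Logarithmically normalized size score in [0, 1]; max_size (1 GiB here) is assumed."""
    size = min(max(file_size, 0), max_size)  # clamp so the score stays in [0, 1]
    return math.log1p(size) / math.log1p(max_size)
```

The log compresses the range so that a 1 MiB and a 100 MiB file differ far less than a linear scale would suggest, which is one way to realize the "nonlinear" requirement in the text.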
Sixth, file update time scoring function (S_FU)
The update time of a file may employ an exponential decay model to reflect the freshness of the file.
S_FU = e^(−λ·(t_now − t_download)) (Equation 5)
Wherein λ is a preset decay rate that can be adjusted according to the needs of the system, t_now is the current time, and t_download is the file download time.
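The exponential decay model above has a direct sketch; the default decay rate below is an illustrative assumption:

```python
import math

def s_fu(now: float, download_time: float, decay_rate: float = 1e-6) -> float:
    """Freshness score: 1.0 for a just-downloaded file, decaying exponentially with age.

    decay_rate plays the role of the preset decay rate in Equation 5 and is an
    assumed default; times are in seconds (e.g. Unix timestamps).
    """
    age = max(now - download_time, 0.0)  # guard against clock skew
    return math.exp(-decay_rate * age)
```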
Seventh, file source information scoring function
The file source information is the file owner; thus, its score may be constructed based on the role and authority of the user, specifically with reference to Equation 6 below.
S_FO = roleScore(FileOwner) × permissionScore(FileOwner) (Equation 6)
Wherein roleScore and permissionScore are scoring functions for user roles and permissions, respectively.
Optionally, after the step of determining, based on the target decision mode and the file processing data, whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request, the method includes:
If so, reading the file data to be processed in a page cache (memory) based on the file data to be processed corresponding to the file processing request.
Optionally, after the step of determining, based on the target decision mode and the file processing data, whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request, the method includes:
If not, reading the file data to be processed in a target disk based on the file data to be processed corresponding to the file processing request.
Optionally, the reading the file data to be processed in the target disk based on the file data to be processed corresponding to the file processing request includes:
and reading the file data to be processed from the target disk in a preset reading mode.
It should be noted that, in the embodiment of the present application, after determining whether to process the file processing request through the page cache mechanism, the kernel processes the file read request.
Specifically, based on the decision of the dynamic cache eBPF function, the read scheduling module triggers the actual read operation. When the decision is to use the page cache, the page cache data is read preferentially according to the system's original read flow; when the decision is not to use the page cache, an O_DIRECT flag is added to the flags parameter of open(), and the data is read directly from the disk in direct I/O mode, bypassing the page cache.
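The flag selection step can be sketched as follows. os.O_DIRECT exists only on Linux, so the fallback constant and the helper name are assumptions for illustration; real direct I/O additionally requires block-aligned buffers, which this sketch does not show:

```python
import os

# os.O_DIRECT is Linux-only; fall back to 0 elsewhere so the sketch stays importable.
O_DIRECT = getattr(os, "O_DIRECT", 0)

def open_flags(use_page_cache: bool) -> int:
    """Choose open(2) flags: buffered read through the page cache, or direct I/O."""
    flags = os.O_RDONLY
    if not use_page_cache:
        flags |= O_DIRECT  # bypass the page cache; reads must use aligned buffers
    return flags
```

The caller would then pass the result to os.open(path, open_flags(decision)) on the read path.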
According to the file processing method provided by the embodiment of the application, when a file processing request initiated by a user process is detected, a target eBPF function mounted on a kernel function is triggered based on the request, a target decision mode is determined based on the type of the target eBPF function, current file processing data of the virtual file system is obtained, and whether to open a page cache processing mechanism for the file data to be processed corresponding to the request is determined based on the target decision mode and the file processing data, wherein the page cache processing mechanism is used for processing the file data to be processed from memory. In the embodiment of the application, the allocation strategy of the page cache mechanism is dynamically adjusted according to the real-time state of the virtual file system and the application requirements. The current file processing data of the virtual file system can be monitored and acquired in real time through eBPF programs loaded into the kernel; an eBPF program can determine a target decision mode based on this information and decide how to process a file processing request initiated by a user process based on the different target decision modes. eBPF programs can also dynamically calculate and adjust parameters such as the page cache threshold and the replacement strategy to achieve optimal allocation of cache resources. The method can use eBPF functions to dynamically decide on cache reads in kernel mode before the file is cached, supports defining caching behavior through custom configuration for actual scenarios, and avoids a large number of low-heat files occupying system resources.
In another embodiment, referring to fig. 5, a file processing system may include an eBPF management module, a configuration parsing module, and a policy management module in user mode, together with a memory monitoring module and a file monitoring module based on filter eBPF functions, and a policy matching and read scheduling module, in kernel mode; each module is described below.
The configuration parsing module is used in the preset policy mode and is responsible for loading the user-mode configuration; the configuration content supports configuring whether to read through the page cache based on the system file path, file type, file extension, specific file descriptor, file size threshold, file update time, and file owner.
The policy management module is responsible for processing update actions on the user-mode configuration and notifying the kernel-mode eBPF functions in real time when the configuration is refreshed.
The filter program management module manages the eBPF mount points according to the different modes (the preset policy mode or the self-learning mode) and mounts different filter eBPF functions in different modes: in the preset policy mode, eBPF functions are mounted on file operation functions; in the self-learning mode, eBPF functions are mounted on the kernel's core functions for allocating and releasing memory pages.
The memory monitoring module obtains the system's memory occupation in real time by monitoring the kernel functions that allocate and release memory pages; the memory occupation serves as an important reference for the system's current performance state. Based on this information, the resource requirements of the system can be assessed more accurately.
The file monitoring module counts the read frequency of specific files and records file heat and file access patterns. By monitoring and analyzing these data in real time, a reasonable decision can be made dynamically in combination with the current performance state of the system.
Policy matching: based on preloaded configuration files or monitoring data collected in the self-learning mode, the system can intelligently identify the current environment and needs. By analyzing and comparing these data, the system can match the most suitable decision result, achieving optimal configuration and efficient utilization of resources.
Read scheduling: responsible for intelligently choosing to read data from the cache or from the I/O device according to the decision result. If the page cache decision is yes, the page cache data is read preferentially according to the system's original read flow; if the decision is no, an O_DIRECT flag is added to the flags parameter of open(), and the data is read directly from the disk in direct I/O mode, bypassing the page cache.
Referring to fig. 2, fig. 2 is a block diagram of a file processing apparatus according to an embodiment of the present application, where the apparatus includes:
The determining module 201 is configured to trigger, based on a file processing request initiated by a user process, a target eBPF function mounted on a kernel function, determine a target decision mode based on a type of the target eBPF function, and obtain current file processing data of a virtual file system when detecting that the file processing request is received;
the processing module 202 is configured to determine, based on the target decision mode and the file processing data, whether to open a page cache processing mechanism for file data to be processed corresponding to the file processing request, where the page cache processing mechanism is configured to process the file data to be processed from a memory.
The embodiment of the present application also provides a communication device, as shown in fig. 3, including a processor 701, a communication interface 702, a memory 703 and a communication bus 704, where the processor 701, the communication interface 702, and the memory 703 perform communication with each other through the communication bus 704,
A memory 703 for storing a computer program;
the processor 701, when executing the program stored in the memory 703, may implement the following steps:
Triggering a target eBPF function mounted on a kernel function based on the file processing request under the condition that the file processing request initiated by a user process is detected to be received, determining a target decision mode based on the type of the target eBPF function, and acquiring current file processing data of a virtual file system;
And determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data, wherein the page cache processing mechanism is used for processing the file data to be processed from a memory.
Where the memory and the processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting the various circuits of the one or more processors and the memory together. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or may be a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor may be transmitted over a wired medium or through an antenna on a wireless medium, and the antenna further receives and transmits data to the processor. The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory may be used to store data used by the processor in performing operations.
The communication bus mentioned for the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, the bus is shown with only one bold line in the figures, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other devices.
The memory may include Random Access Memory (RAM) or non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present application, a computer readable storage medium is provided, in which instructions are stored; when run on a computer, the instructions cause the computer to execute the file processing method of any one of the foregoing embodiments.
In yet another embodiment of the present application, a computer program product containing instructions is also provided, which, when run on a computer, causes the computer to perform the file processing method of any one of the foregoing embodiments.
In the above embodiments, the method may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, the instructions produce, in whole or in part, the flow or functions described in the embodiments of the present application. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), etc.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (19)

1. A method of processing a document, the method comprising:
Triggering a target eBPF function mounted on a kernel function based on the file processing request under the condition that the file processing request initiated by a user process is detected to be received, determining a target decision mode based on the type of the target eBPF function, and acquiring current file processing data of a virtual file system, wherein the target decision mode comprises a first decision mode and a second decision mode, and the file processing request comprises a target decision configuration identifier;
Determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data, wherein the page cache processing mechanism is used for processing the file data to be processed from a memory;
the triggering the target eBPF function mounted on the kernel function based on the file processing request, and determining the target decision mode based on the type of the target eBPF function includes:
Triggering a target eBPF function mounted on a kernel function based on the file processing request, wherein the target eBPF function comprises preset user state configuration information corresponding to a first decision mode or historical system performance data corresponding to a second decision mode;
determining a target eBPF function based on the target decision configuration identifier carried by the file processing request;
A target decision mode is determined based on the type of the target eBPF function.
2. The method of claim 1, wherein the first decision mode comprises a preset policy mode, and wherein determining whether to open a page cache processing mechanism for the to-be-processed file data corresponding to the file processing request based on the target decision mode and the file processing data comprises:
if the target decision mode is the preset strategy mode, acquiring preset user mode configuration information based on the preset strategy mode, wherein the preset user mode configuration information comprises target file processing data for page cache processing, which correspond to each first file processing data respectively;
And determining whether to start a page cache processing mechanism for file data to be processed corresponding to the file processing request based on the current first file processing data of the virtual file system and the target file processing data.
3. The method of claim 1, wherein the second decision mode comprises a self-learning mode, and wherein determining whether to turn on a page cache processing mechanism for the to-be-processed file data corresponding to the file processing request based on the target decision mode and the file processing data comprises:
If the target decision mode is the self-learning mode, determining second file processing data corresponding to the self-learning mode;
And determining whether to start a page cache processing mechanism for file data to be processed corresponding to the file processing request or not based on the current second file processing data of the virtual file system, preset scoring functions corresponding to the second file processing data and weights corresponding to the second file processing data.
4. The method of claim 3, wherein the determining whether to turn on a page cache processing mechanism for the pending file data corresponding to the file processing request based on the current second file processing data of the virtual file system, the preset scoring function corresponding to each of the second file processing data, and the weight corresponding to each of the second file processing data comprises:
generating a target score corresponding to each second file processing data based on the current second file processing data of the virtual file system and a preset scoring function corresponding to each second file processing data;
Summing the target scores corresponding to the second file processing data and the weights corresponding to the second file processing data to generate target total scores;
And determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target total score and a preset threshold value.
5. The method of claim 4, wherein determining whether to open a page cache processing mechanism for the pending file data corresponding to the file processing request based on the target total score and a preset threshold comprises:
And under the condition that the target total score is detected to be larger than a preset threshold value, carrying out page caching processing on the file data to be processed corresponding to the file processing request.
6. The method of claim 4 or 5, wherein the second file processing data includes a file path, a file type, a file extension, a file descriptor, a file capacity, a file update time, and file source information.
7. The method of claim 6, wherein when the second document processing data is a document path, the pre-set scoring function is a document path scoring function, the document path scoring function being set based on a document path depth and a matching degree of a target directory.
8. The method of claim 6, wherein when the second document processing data is a document type, the pre-set scoring function is a document type scoring function, the document type scoring function being set based on a pre-set priority and a document access frequency corresponding to each document type.
9. The method of claim 6, wherein when the second file processing data is a file extension, the pre-set scoring function is a file extension scoring function, the file extension scoring function being set based on popularity of the file extension and relevance of the file extension to a current context.
10. The method of claim 6, wherein when the second document processing data is document source information, the pre-set scoring function is a document source information scoring function, the document source information scoring function being set based on a user role score and a user rights score.
11. The method of claim 6, wherein when the second file processing data is a file size, the pre-set scoring function is a file size function, the file size function being set based on a file size and a pre-set file size maximum.
12. The method of claim 6, wherein when the second file processing data is a file update time, the pre-set scoring function is a file update time scoring function, the file update time scoring function being set based on a current time, a file download time, and a pre-set decay rate.
13. The method according to claim 1, wherein after the step of determining whether to turn on a page cache processing mechanism for pending file data corresponding to the file processing request based on the target decision mode and the file processing data, the method comprises:
And under the condition that a page cache processing mechanism is started for the file data to be processed corresponding to the file processing request is detected, reading the file data to be processed in a memory based on the file data to be processed corresponding to the file processing request.
14. The method according to claim 1, wherein after the step of determining whether to turn on a page cache processing mechanism for pending file data corresponding to the file processing request based on the target decision mode and the file processing data, the method comprises:
And under the condition that a page cache processing mechanism is closed for the file data to be processed corresponding to the file processing request is detected, reading the file data to be processed in a target disk based on the file data to be processed corresponding to the file processing request.
15. The method of claim 14, wherein the reading the pending file data in a target disk based on the pending file data corresponding to the file processing request comprises:
and reading the file data to be processed from the target disk in a preset reading mode.
16. A file processing apparatus, the apparatus comprising:
a determining module, configured to: in a case where it is detected that a file processing request initiated by a user process is received, trigger a target eBPF function mounted on a kernel function based on the file processing request, determine a target decision mode based on a type of the target eBPF function, and acquire current file processing data of a virtual file system, wherein the target decision mode comprises a first decision mode and a second decision mode, and the file processing request comprises a target decision configuration identifier; and
a processing module, configured to determine, based on the target decision mode and the file processing data, whether to enable a page cache processing mechanism for the file data to be processed corresponding to the file processing request, wherein the page cache processing mechanism is used for processing the file data to be processed from a memory;
wherein the determining module is specifically configured to:
trigger the target eBPF function mounted on the kernel function based on the file processing request, wherein the target eBPF function comprises preset user-state configuration information corresponding to the first decision mode or historical system performance data corresponding to the second decision mode;
determine the target eBPF function based on the target decision configuration identifier carried by the file processing request; and
determine the target decision mode based on the type of the target eBPF function.
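The determining module of claim 16 maps the decision configuration identifier carried by the request to a target eBPF function, whose type then fixes the decision mode. A minimal sketch of that mapping follows; the identifiers, function names, and table below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the determining module in claim 16: the decision
# configuration identifier selects a target eBPF function, and the function's
# type selects between the first and second decision modes.

EBPF_FUNCTIONS = {
    # identifier -> (eBPF function name, function type)
    "cfg_user": ("ebpf_user_config", "first"),    # preset user-state configuration
    "cfg_perf": ("ebpf_perf_history", "second"),  # historical system performance data
}

def determine_decision_mode(request):
    ident = request["decision_config_id"]
    func_name, func_type = EBPF_FUNCTIONS[ident]
    mode = "first_decision_mode" if func_type == "first" else "second_decision_mode"
    return func_name, mode

func, mode = determine_decision_mode({"decision_config_id": "cfg_perf"})
```

In the claimed design the lookup would happen inside an eBPF program attached to a kernel function on the VFS path; the dictionary here stands in for whatever map the kernel-side program would consult.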
17. A communication device, comprising a transceiver, a memory, a processor, and a program stored on the memory and executable on the processor,
wherein the processor is configured to read the program in the memory to implement the file processing method according to any one of claims 1 to 15.
18. A readable storage medium storing a program, wherein the program, when executed by a processor, implements the file processing method according to any one of claims 1 to 15.
19. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the file processing method according to any one of claims 1 to 15.
CN202411355308.8A 2024-09-27 2024-09-27 File processing method, device, equipment, medium and product Active CN118897654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411355308.8A CN118897654B (en) 2024-09-27 2024-09-27 File processing method, device, equipment, medium and product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411355308.8A CN118897654B (en) 2024-09-27 2024-09-27 File processing method, device, equipment, medium and product

Publications (2)

Publication Number Publication Date
CN118897654A CN118897654A (en) 2024-11-05
CN118897654B true CN118897654B (en) 2024-12-20

Family

ID=93269663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411355308.8A Active CN118897654B (en) 2024-09-27 2024-09-27 File processing method, device, equipment, medium and product

Country Status (1)

Country Link
CN (1) CN118897654B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119473564B (en) * 2025-01-14 2025-05-06 麒麟软件有限公司 Scheduling method and system for Direct IO intensive tasks under NUMA architecture
CN119668525B (en) * 2025-02-20 2025-05-13 浪潮云信息技术股份公司 Block storage caching method, device, equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110955486A (en) * 2018-09-26 2020-04-03 Oppo广东移动通信有限公司 Method, device, storage medium and terminal for tracking file cache efficiency
CN116244540A (en) * 2022-12-27 2023-06-09 南方电网数字平台科技(广东)有限公司 Intelligent cache management and control method and device for page data

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US12182022B2 (en) * 2022-05-10 2024-12-31 Western Digital Technologies, Inc. In-kernel cache request queuing for distributed cache
CN116684385A (en) * 2023-07-17 2023-09-01 浙江大学 A DNS caching method based on eBPF at the kernel level
CN118312227A (en) * 2024-02-28 2024-07-09 苏州元脑智能科技有限公司 EBPF-based ELF file reliability verification method and device, storage medium and electronic equipment

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN110955486A (en) * 2018-09-26 2020-04-03 Oppo广东移动通信有限公司 Method, device, storage medium and terminal for tracking file cache efficiency
CN116244540A (en) * 2022-12-27 2023-06-09 南方电网数字平台科技(广东)有限公司 Intelligent cache management and control method and device for page data

Also Published As

Publication number Publication date
CN118897654A (en) 2024-11-05

Similar Documents

Publication Publication Date Title
CN118897654B (en) File processing method, device, equipment, medium and product
US10831715B2 (en) Selective downloading of shared content items in a constrained synchronization system
US9519585B2 (en) Methods and systems for implementing transcendent page caching
US9058212B2 (en) Combining memory pages having identical content
US20170208125A1 (en) Method and apparatus for data prefetch in cloud based storage system
US20170206218A1 (en) Method and apparatus for data deduplication in cloud based storage system
US20170208052A1 (en) Hybrid cloud file system and cloud based storage system having such file system therein
JP6475295B2 (en) Storage constrained synchronization of shared content items
CN110837479B (en) Data processing method, related equipment and computer storage medium
US8769205B2 (en) Methods and systems for implementing transcendent page caching
US9069876B2 (en) Memory caching for browser processes
CN103294609B (en) Signal conditioning package and storage management method
CN113835616B (en) Application data management method, system and computer device
US9563638B2 (en) Selective downloading of shared content items in a constrained synchronization system
US8510510B1 (en) File cache optimization using element prioritization
US6317818B1 (en) Pre-fetching of pages prior to a hard page fault sequence
TW202314498A (en) Computing system for memory management opportunities and memory swapping tasks and method of managing the same
CN109495432B (en) An authentication method and server for an anonymous account
CN112114962A (en) Method and device for allocating memory
CN114003374B (en) Node scheduling method and device based on cloud platform, electronic equipment and storage medium
US8433694B1 (en) File cache optimization using element de-prioritization
JP6636623B2 (en) Selective download of shared content items in a constrained synchronization system
US11467730B1 (en) Method and system for managing data storage on non-volatile memory media
CN119292764A (en) Data processing method, device, electronic device, and computer-readable storage medium
CN117687975A (en) Method and device for judging credibility of cache file of operating system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant