Disclosure of Invention
Embodiments of the present application aim to provide a file processing method, apparatus, device, medium, and product. The specific technical solutions are as follows:
in a first aspect of the present application, there is provided a file processing method, including:
Triggering a target eBPF function mounted on a kernel function based on the file processing request under the condition that the file processing request initiated by a user process is detected to be received, determining a target decision mode based on the type of the target eBPF function, and acquiring current file processing data of a virtual file system;
And determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data, wherein the page cache processing mechanism is used for processing the file data to be processed from a memory.
Optionally, the target decision mode includes a first decision mode and a second decision mode, the file processing request includes a target decision configuration identifier, the triggering the target eBPF function mounted on the kernel function based on the file processing request, and determining the target decision mode based on the type of the target eBPF function includes:
Triggering a target eBPF function mounted on a kernel function based on the file processing request, wherein the target eBPF function comprises preset user state configuration information corresponding to a first decision mode or historical system performance data corresponding to a second decision mode;
determining a target eBPF function based on the target decision configuration identifier carried by the file processing request;
A target decision mode is determined based on the type of the target eBPF function.
Optionally, the first decision mode includes a preset policy mode, and determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data includes:
if the target decision mode is the preset strategy mode, acquiring preset user mode configuration information based on the preset strategy mode, wherein the preset user mode configuration information comprises target file processing data for page cache processing, which correspond to each first file processing data respectively;
And determining whether to start a page cache processing mechanism for file data to be processed corresponding to the file processing request based on the current first file processing data of the virtual file system and the target file processing data.
Optionally, the second decision mode includes a self-learning mode, and determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data includes:
If the target decision mode is the self-learning mode, determining second file processing data corresponding to the self-learning mode;
And determining whether to start a page cache processing mechanism for file data to be processed corresponding to the file processing request or not based on the current second file processing data of the virtual file system, preset scoring functions corresponding to the second file processing data and weights corresponding to the second file processing data.
Optionally, the determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the current second file processing data of the virtual file system, a preset scoring function corresponding to each second file processing data, and a weight corresponding to each second file processing data includes:
generating a target score corresponding to each second file processing data based on the current second file processing data of the virtual file system and a preset scoring function corresponding to each second file processing data;
Summing the target scores corresponding to the second file processing data and the weights corresponding to the second file processing data to generate target total scores;
And determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target total score and a preset threshold value.
Optionally, the determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target total score and a preset threshold includes:
And under the condition that the target total score is detected to be larger than a preset threshold value, carrying out page caching processing on the file data to be processed corresponding to the file processing request.
Optionally, the second file processing data includes a file path, a file type, a file extension, a file descriptor, a file capacity, a file update time, and file source information.
Optionally, when the second file processing data is a file path, the preset scoring function is a file path scoring function, and the file path scoring function is set based on the file path depth and the matching degree of the target directory.
Optionally, when the second file processing data is a file type, the preset scoring function is a file type scoring function, and the file type scoring function is set based on preset priorities and file access frequencies corresponding to the file types.
Optionally, when the second file processing data is a file extension, the preset scoring function is a file extension scoring function, and the file extension scoring function is set based on popularity of the file extension and relevance of the file extension to the current context.
Optionally, when the second document processing data is document source information, the preset scoring function is a document source information scoring function, and the document source information scoring function is set based on a user role score and a user authority score.
Optionally, when the second file processing data is a file size, the preset scoring function is a file size function, and the file size function is set based on a file size and a preset file size maximum value.
Optionally, when the second file processing data is a file update time, the preset scoring function is a file update time scoring function, and the file update time scoring function is set based on a current time, a file download time and a preset decay rate.
Optionally, after the step of determining, based on the target decision mode and the file processing data, whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request, the method includes:
And under the condition that a page cache processing mechanism is started for the file data to be processed corresponding to the file processing request is detected, reading the file data to be processed in a memory based on the file data to be processed corresponding to the file processing request.
Optionally, after the step of determining, based on the target decision mode and the file processing data, whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request, the method includes:
And under the condition that a page cache processing mechanism is closed for the file data to be processed corresponding to the file processing request is detected, reading the file data to be processed in a target disk based on the file data to be processed corresponding to the file processing request.
Optionally, the reading the file data to be processed in the target disk based on the file data to be processed corresponding to the file processing request includes:
and reading the file data to be processed from the target disk in a preset reading mode.
In a second aspect of the present application, there is also provided a file processing apparatus for implementing the file processing method of the first aspect, the apparatus including:
The determining module is used for triggering a target eBPF function mounted on the kernel function based on the file processing request under the condition that the file processing request initiated by the user process is detected to be received, determining a target decision mode based on the type of the target eBPF function, and acquiring current file processing data of the virtual file system;
and the processing module is used for determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data, wherein the page cache processing mechanism is used for processing the file data to be processed from a memory.
In a third aspect of the present application, there is also provided a communication device comprising a transceiver, a memory, a processor and a program stored on the memory and executable on the processor;
The processor is configured to read the program in the memory to implement the file processing method according to any one of the first aspect.
In a fourth aspect of the present application, there is also provided a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to implement the file processing method according to any one of the first aspect.
In a fifth aspect of the present application, there is also provided a computer program product comprising a computer program/instruction which, when executed by a processor, implements the file processing method according to any one of the first aspect.
According to the file processing method provided by the embodiments of the present application, when it is detected that a file processing request initiated by a user process is received, a target eBPF function mounted on a kernel function is triggered based on the file processing request, a target decision mode is determined based on the type of the target eBPF function, and current file processing data of the virtual file system is acquired; whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request is then determined based on the target decision mode and the file processing data, wherein the page cache processing mechanism is used for processing the file data to be processed from a memory. In the embodiments of the present application, the allocation strategy of the page cache mechanism is dynamically adjusted according to the real-time state and the application requirements of the virtual file system. The current file processing data of the virtual file system can be monitored and acquired in real time through eBPF programs loaded into the kernel; an eBPF program can determine a target decision mode based on this information, determine how to process a file processing request initiated by a user process according to the different target decision modes, and dynamically calculate and adjust parameters such as the page cache threshold and the replacement strategy, so as to achieve optimal allocation of cache resources. The method can use eBPF functions in the kernel state to dynamically decide, before a file is cached, whether the read should go through the cache, supports customizing the cache behavior according to the configuration of the actual scenario, and avoids a large number of rarely accessed (low-heat) files occupying system resources.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the following detailed description of the embodiments of the present application will be given with reference to the accompanying drawings. However, those of ordinary skill in the art will understand that in various embodiments of the present application, numerous technical details have been set forth in order to provide a better understanding of the present application. The claimed application may be practiced without these specific details and with various changes and modifications based on the following embodiments. The following embodiments are divided for convenience of description, and should not be construed as limiting the specific implementation of the present application, and the embodiments can be mutually combined and referred to without contradiction.
Referring to fig. 1, which shows a flowchart of steps of a file processing method according to an embodiment of the present application, the method may include:
step 101, under the condition that a file processing request initiated by a user process is detected to be received, triggering a target eBPF function mounted on a kernel function based on the file processing request, determining a target decision mode based on the type of the target eBPF function, and acquiring current file processing data of a virtual file system.
It should be noted that, in the embodiment of the present application, reference may be made to fig. 5, which is a schematic flow diagram of a dynamic page cache mechanism. It can be seen that, in the user state, a user process may initiate a file processing request, where the file processing request may include a file read request, a file write request, and so on.
In order to facilitate understanding of the technical solution of the present application by those skilled in the art, file reading will be described as an example.
First, it should be noted that the filter program here is an eBPF (Extended Berkeley Packet Filter) program. eBPF is a powerful programming framework for securely running sandboxed programs in the Linux kernel without modifying kernel code. eBPF was originally developed for Linux and is still the most mature and widely used platform in this technical field. eBPF is a revolutionary technology that can run sandboxed programs in the Linux kernel without modifying the kernel source code or loading kernel modules. eBPF is a register-based virtual machine that uses a custom 64-bit RISC instruction set, can run just-in-time, natively compiled BPF programs inside the Linux kernel, and can access a subset of kernel functions and memory.
Therefore, in the embodiment of the present application, monitoring of the overall performance information of the system can be realized through eBPF program code; for example, the usage of the page cache, the system load, and application access patterns can be monitored in real time through eBPF programs loaded into the kernel.
In addition, eBPF programs can essentially be thought of as eBPF functions: eBPF functions run in the Linux kernel and are used to perform specific tasks or operations. In other words, an eBPF program is essentially a set of functions executing in the kernel that interact with the kernel through specific hooks to perform various tasks. The security and performance of these programs are guaranteed by the kernel's verifier and the JIT compiler.
Thus, in the embodiment of the application, in the case of detecting that a file processing request initiated by a user process is received, the target eBPF function mounted on the kernel function is triggered based on the file processing request.
Specifically, the user process initiates a file read request, for example through a system call (such as read() or pread()), and a file read operation of the operating system kernel is triggered based on the file read request. Thus, when the kernel receives the file read request, an eBPF function mounted on the kernel function is triggered; further, a target decision mode is determined based on the type of the target eBPF function, and current file processing data of the virtual file system is acquired.
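By way of a non-limiting illustration (the hook point, program name, and map layout below are assumptions made for this sketch and do not describe the application's actual implementation), an eBPF function mounted on the VFS read path through a kprobe could be sketched roughly as follows:

```c
// Minimal sketch, assuming a kprobe on vfs_read as the mount point
// (built with clang/libbpf); not the application's actual implementation.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

// Hypothetical decision map consulted by the read path:
// key = pid_tgid of the requesting process, value = 1 (use page cache) or 0 (bypass).
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, u64);
    __type(value, u32);
} cache_decision SEC(".maps");

SEC("kprobe/vfs_read")
int BPF_KPROBE(on_vfs_read, struct file *file, char *buf, size_t count)
{
    u64 key = bpf_get_current_pid_tgid();
    u32 use_cache = 1; // placeholder; the real value would come from the policy/scoring logic
    bpf_map_update_elem(&cache_decision, &key, &use_cache, BPF_ANY);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```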
Further, the target decision mode includes a first decision mode and a second decision mode, the file processing request includes a target decision configuration identifier, and the triggering of the target eBPF function mounted on the kernel function based on the file processing request and the determining of the target decision mode based on the type of the target eBPF function include:
Triggering a target eBPF function mounted on a kernel function based on the file processing request, wherein the target eBPF function comprises preset user state configuration information corresponding to a first decision mode or historical system performance data corresponding to a second decision mode;
determining a target eBPF function based on the target decision configuration identifier carried by the file processing request;
A target decision mode is determined based on the type of the target eBPF function.
It should be noted that, in the embodiment of the present application, referring to fig. 5, it can be seen that fig. 5 includes an eBPF management module, where the eBPF management module manages the eBPF attachment points according to the different modes, and different eBPF functions are attached in different modes.
The different modes include a first decision mode and a second decision mode, where the first decision mode is a preset policy mode and the second decision mode is a self-learning mode. In the preset policy mode, an eBPF function is mounted on the file operation functions; in the self-learning mode, an eBPF function is mounted on the kernel core functions for allocating and releasing memory pages.
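Purely as an illustrative sketch of this mode-dependent mounting (the program names, kernel symbols, and libbpf-based loading flow are assumptions, not the application's actual management module), the user-state selection of an attachment point per mode might resemble:

```c
// Hypothetical user-state sketch: the management module picks which eBPF
// program to attach depending on the decision mode (names are assumed).
#include <bpf/libbpf.h>

enum decision_mode { PRESET_POLICY_MODE, SELF_LEARNING_MODE };

static int attach_for_mode(struct bpf_object *obj, enum decision_mode mode)
{
    // In the preset policy mode, mount on a file operation function;
    // in the self-learning mode, mount on the memory page alloc/free functions.
    const char *prog_name = (mode == PRESET_POLICY_MODE) ? "on_vfs_read"
                                                         : "on_page_alloc";
    struct bpf_program *prog = bpf_object__find_program_by_name(obj, prog_name);
    if (!prog)
        return -1;

    // libbpf attaches according to the program's SEC() definition.
    struct bpf_link *link = bpf_program__attach(prog);
    return link ? 0 : -1;
}
```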
Further, the first decision mode includes a preset policy mode. Determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data includes: if the target decision mode is the preset policy mode, acquiring preset user-mode configuration information based on the preset policy mode, where the preset user-mode configuration information includes target file processing data, corresponding to each piece of first file processing data, for which page cache processing is to be performed; and determining whether to start the page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the current first file processing data of the virtual file system and the target file processing data.
Further, the second decision mode includes a self-learning mode. Determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data includes: if the target decision mode is the self-learning mode, determining second file processing data corresponding to the self-learning mode; and determining whether to open the page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the current second file processing data of the virtual file system, the preset scoring function corresponding to each piece of second file processing data, and the weight corresponding to each piece of second file processing data.
It should be noted that, in the embodiment of the present application, eBPF maps store the policies read by the policy module or the collected system performance data, and different eBPF functions may be included for different target decision modes.
The policy read by the policy module is used in the preset policy mode; the policy content supports configuring whether to read the page cache based on the system file path, file type, file extension, specific file descriptor, file size threshold, file update time, file owner, and the like.
The collected system performance data, i.e., historical system performance data, is used in a self-learning mode, including the current memory footprint of the system, the most recent read frequency of the current read file, and the reference count.
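As an illustrative sketch only (the map layouts and field names are assumptions and are not the application's data structures), the policy entries and collected performance data described above could be held in eBPF maps roughly as follows:

```c
// Hypothetical eBPF map layouts for the two decision modes (field names assumed).
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

// Preset policy mode: one entry per policy rule loaded from user space.
struct policy_entry {
    char file_path_prefix[64];
    u32  file_type;
    u64  file_size_threshold;
    u32  read_page_cache;          // 1 = read page cache, 0 = bypass
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 256);
    __type(key, u32);              // rule index
    __type(value, struct policy_entry);
} policy_map SEC(".maps");

// Self-learning mode: collected system performance data.
struct perf_sample {
    u64 mem_in_use_bytes;          // current memory footprint
    u64 recent_read_freq;          // recent read frequency of the current file
    u64 ref_count;                 // reference count
};

struct {
    __uint(type, BPF_MAP_TYPE_LRU_HASH);
    __uint(max_entries, 4096);
    __type(key, u64);              // inode number of the file (assumed key)
    __type(value, struct perf_sample);
} perf_map SEC(".maps");
```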
Step 102, determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data, wherein the page cache processing mechanism is used for processing the file data to be processed from a memory.
It should be noted that, in the case where the target decision mode has been determined and the file processing data has been acquired in step 101, whether to start the page cache processing mechanism for the file data to be processed corresponding to the file processing request may be determined based on the different target decision modes.
Specifically, based on the above policies and data, the eBPF function decides whether to read through the page cache mechanism.
In the embodiment of the present application, the page cache mechanism keeps data in memory so that it does not need to be read from the disk each time, thereby reducing frequent disk accesses and improving the read-write performance of the system.
Specifically, the disk is used as a long-term storage medium and is responsible for the durability and final consistency of data, and the memory is used as a cache and is responsible for improving the read-write performance of files. The page caching mechanism reduces frequent access of the disk by caching the disk data into the memory, thereby remarkably improving the overall performance of the system.
In addition, in the embodiment of the present application, whether to start the page cache mechanism for a file processing request is dynamically determined by mounting eBPF functions on the VFS (Virtual File System). Specifically, referring to fig. 4, it can be seen that a rule-based dynamic cache determination is mounted on the VFS. The Virtual File System (VFS) is an abstraction layer in the operating system kernel that provides a unified interface for different file systems; the VFS allows applications to access different types of file systems in a consistent manner without concern for the specific implementation of the underlying file system.
Further, the determining whether to open a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the current second file processing data of the virtual file system, the preset scoring function corresponding to each second file processing data, and the weight corresponding to each second file processing data includes:
generating a target score corresponding to each second file processing data based on the current second file processing data of the virtual file system and a preset scoring function corresponding to each second file processing data;
Summing the target scores corresponding to the second file processing data and the weights corresponding to the second file processing data to generate target total scores;
And determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target total score and a preset threshold value.
Further, before the handling of the file processing request in the above-mentioned second decision mode is explained, it should be noted that in the second decision mode, that is, the self-learning mode, the key factors influencing the decision include: the file path (FilePath), i.e., the path of the system file; the file type (FileType), i.e., the type of the file, e.g., text, picture, video, etc.; the file extension (FileExt), i.e., the extension of the file, e.g., txt, jpg, mp4, etc.; the specific file descriptor (FileDescriptor), i.e., the identifier of a specific file; the file size threshold (FileSizeThreshold), i.e., the threshold of the file size; the file update time (FileUpdateTime), i.e., the time of the last update of the file; and the file owner (FileOwner), i.e., the owner of the file. It can therefore be confirmed that the second file processing data to be acquired includes the file path, the file type, the file extension, the file descriptor, the file capacity (file size), the file update time, and the file source information.
Thus, first, a weight is defined for each of the above-described second file processing data, and these weights can be adjusted according to actual conditions.
Specifically, the weights may include W_FP (file path weight), W_FT (file type weight), W_FE (file extension weight), W_FD (specific file descriptor weight), W_FS (file size weight), W_FU (file update time weight), and W_FO (file owner weight).
Further, a scoring function needs to be defined for each factor; for example, the scoring range may be set from 0 to 1. The final score is obtained by multiplying the score of each factor by its corresponding weight and summing the products, and whether to read the file through the page cache mechanism is determined according to whether the total score is greater than a threshold.
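Written out explicitly, the weighted sum described above can be expressed as follows (an editorial restatement using the weight symbols defined above, with T denoting the preset threshold): the page cache is read if S_{total} > T, where
S_{total} = W_{FP}·S_{FP} + W_{FT}·S_{FT} + W_{FE}·S_{FE} + W_{FD}·S_{FD} + W_{FS}·S_{FS} + W_{FU}·S_{FU} + W_{FO}·S_{FO}.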
Further, determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target total score and a preset threshold value includes performing page cache processing for the file data to be processed corresponding to the file processing request when the target total score is detected to be greater than the preset threshold value.
Further, when the second file processing data is a file path, the preset scoring function is a file path scoring function, and the file path scoring function is set based on the file path depth and the matching degree of the target directory.
Further, when the second file processing data is a file type, the preset scoring function is a file type scoring function, and the file type scoring function is set based on preset priorities and file access frequencies corresponding to the file types.
Further, when the second file processing data is a file extension, the preset scoring function is a file extension scoring function, and the file extension scoring function is set based on popularity of the file extension and relevance of the file extension to the current context.
Further, when the second document processing data is document source information, the preset scoring function is a document source information scoring function, and the document source information scoring function is set based on a user role score and a user authority score.
Further, when the second file processing data is a file size, the preset scoring function is a file size function, and the file size function is set based on a file size and a preset file size maximum value.
Further, when the second file processing data is a file update time, the preset scoring function is a file update time scoring function, and the file update time scoring function is set based on the current time, the file download time and a preset decay rate.
In particular, with reference to the above description, it can be seen that the scoring function may include the following:
first, file path scoring function (S_FP)
Considering that a file path may contain multiple subdirectories, we can use the path depth and the degree of matching of a particular directory to design a scoring function:
S_{FP} = matchScore(FilePath, ImportantDirs) / (1 + depth(FilePath))  (Equation 1)
where depth(FilePath) in Equation 1 represents the depth of the path, and matchScore is a function used to calculate the degree of matching between the file path FilePath and an important directory ImportantDirs.
Second, file type scoring function (S_FT)
The file types have different priorities, and the scoring function can be designed according to the purposes and the access frequency of the files:
S_{FT} = importance(FileType_i) × freq(FileType_i)  (Equation 2)
In the above Equation 2, freq(FileType_i) represents the access frequency of file type i, and importance(FileType_i) represents the importance of file type i; specifically, the priority corresponding to each preset file type may be set.
Third, file extension scoring function (S_FE)
Scoring of file extensions may be based on popularity and relevance of the extensions:
S_{FE} = popularity(FileExt) × relevance(FileExt, CurrentContext)  (Equation 3)
where popularity(FileExt) denotes the popularity of the file extension FileExt, and relevance(FileExt, CurrentContext) denotes the correlation between the file extension FileExt and the current context CurrentContext.
Fourth, a specific file descriptor scoring function (S_FD)
A score for a particular file descriptor may be set.
Fifth, file Capacity scoring function (S_FS)
The file capacity is the file size. The score of the file capacity can be set through a nonlinear scoring function, so as to distinguish the influence of files of different sizes on the page cache; specifically, reference may be made to the following Equation 4, where FileSize is the file capacity and FileSizeMax is the maximum value of the file capacity.
S_{FS} = 1 − (FileSize / FileSizeMax)^2  (Equation 4)
Sixth, the file updates the time scoring function (S_FU)
The update time of a file may employ an exponential decay model to reflect the freshness of the file.
S_{FU} = e^{−λ·(t_current − t_download)}  (Equation 5)
where λ is a preset decay rate, which can be adjusted according to the requirements of the system; t_current is the current time and t_download is the file download time.
Seventh, file source information scoring function
The file source information is the file owner; thus, its score may be constructed based on the role and authority of the user, specifically with reference to the following Equation 6.
S_{FO} = roleScore(FileOwner) × permissionScore(FileOwner)  (Equation 6)
Wherein roleScore and permissionScore are scoring functions for user roles and permissions, respectively.
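As a further non-authoritative illustration of how such scores could be combined in the kernel state (the weights, scale factor, and structure names below are assumptions; eBPF programs cannot use floating point, so the sketch uses scaled integers), the weighted total score and threshold comparison might be written as:

```c
// Hypothetical sketch of combining the per-factor scores into a total score in
// an eBPF program. eBPF cannot use floating point, so scores and weights are
// scaled integers in 0..SCALE; the concrete weights and threshold are assumed.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

#define SCALE 1000

struct factor_scores {
    u32 s_fp, s_ft, s_fe, s_fd, s_fs, s_fu, s_fo; /* each in 0..SCALE */
};

static const u32 w_fp = 200, w_ft = 150, w_fe = 100, w_fd = 100,
                 w_fs = 150, w_fu = 200, w_fo = 100;   /* weights, sum to SCALE */
static const u32 threshold = 600;                      /* preset threshold in 0..SCALE */

static __always_inline int should_use_page_cache(const struct factor_scores *s)
{
    u64 total = (u64)w_fp * s->s_fp + (u64)w_ft * s->s_ft +
                (u64)w_fe * s->s_fe + (u64)w_fd * s->s_fd +
                (u64)w_fs * s->s_fs + (u64)w_fu * s->s_fu +
                (u64)w_fo * s->s_fo;
    total /= SCALE;            /* back to the 0..SCALE range */
    return total > threshold;  /* start the page cache processing mechanism if above threshold */
}
```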
Optionally, after the step of determining, based on the target decision mode and the file processing data, whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request, the method includes:
If so, reading the file data to be processed in a page cache (memory) based on the file data to be processed corresponding to the file processing request.
Optionally, after the step of determining, based on the target decision mode and the file processing data, whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request, the method includes:
If not, reading the file data to be processed in a target disk based on the file data to be processed corresponding to the file processing request.
Optionally, the reading the file data to be processed in the target disk based on the file data to be processed corresponding to the file processing request includes:
and reading the file data to be processed from the target disk in a preset reading mode.
It should be noted that, in the embodiment of the present application, after determining whether to process the file processing request through the page cache mechanism, the kernel processes the file read request.
Specifically, based on the decision of the dynamic cache eBPF function, the read scheduling module triggers the actual read operation. When the decision on the page cache is yes, the page cache data are read preferentially according to the system's original read flow; when the decision on the page cache is no, an O_DIRECT flag is added to the flags parameter of open(), and the data are read directly from the disk in a direct I/O manner, bypassing the page cache.
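By way of example only (the file path, block size, and helper name below are placeholders and not part of the application), a user-state read that bypasses the page cache with O_DIRECT could look like the following; note that O_DIRECT generally requires the buffer, offset, and length to be aligned to the device block size:

```c
// Minimal sketch of a direct I/O read bypassing the page cache (placeholder path/sizes).
#define _GNU_SOURCE          /* needed for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int read_bypassing_page_cache(const char *path)
{
    int fd = open(path, O_RDONLY | O_DIRECT);   /* O_DIRECT added to the flags parameter */
    if (fd < 0)
        return -1;

    void *buf = NULL;
    size_t block = 4096;                        /* assumed logical block size */
    if (posix_memalign(&buf, block, block)) {   /* O_DIRECT needs aligned buffers */
        close(fd);
        return -1;
    }

    ssize_t n = read(fd, buf, block);           /* data comes from the disk, not the page cache */
    free(buf);
    close(fd);
    return n < 0 ? -1 : 0;
}
```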
According to the file processing method provided by the embodiments of the present application, when it is detected that a file processing request initiated by a user process is received, a target eBPF function mounted on a kernel function is triggered based on the file processing request, a target decision mode is determined based on the type of the target eBPF function, and current file processing data of the virtual file system is acquired; whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request is then determined based on the target decision mode and the file processing data, wherein the page cache processing mechanism is used for processing the file data to be processed from a memory. In the embodiments of the present application, the allocation strategy of the page cache mechanism is dynamically adjusted according to the real-time state and the application requirements of the virtual file system. The current file processing data of the virtual file system can be monitored and acquired in real time through eBPF programs loaded into the kernel; an eBPF program can determine a target decision mode based on this information, determine how to process a file processing request initiated by a user process according to the different target decision modes, and dynamically calculate and adjust parameters such as the page cache threshold and the replacement strategy, so as to achieve optimal allocation of cache resources. The method can use eBPF functions in the kernel state to dynamically decide, before a file is cached, whether the read should go through the cache, supports customizing the cache behavior according to the configuration of the actual scenario, and avoids a large number of rarely accessed (low-heat) files occupying system resources.
In another embodiment, referring to fig. 5, a file processing system may include an eBPF management module, a configuration parsing module, and a policy management module in the user state, a memory monitoring module and a file monitoring module based on filter eBPF functions in the kernel state, and a policy matching and read scheduling module in the kernel state, where each module may be described as follows.
The configuration parsing module is used in the preset policy mode and is responsible for loading the user-mode configuration; the configuration content supports configuring whether to read the page cache based on a system file path, a file type, a file extension, a specific file descriptor, a file size threshold, a file update time, and a file owner.
The policy management module is responsible for processing updates to the user-state configuration and notifying the kernel-state eBPF functions in real time when the configuration is refreshed.
The filter program (eBPF) management module is used for managing the eBPF mount points according to the different modes (the preset policy mode or the self-learning mode) and mounting different filter program eBPF functions in different modes: in the preset policy mode, eBPF functions are mounted on the file operation functions; in the self-learning mode, eBPF functions are mounted on the kernel core functions for allocating and releasing memory pages.
The memory monitoring module is used for acquiring the memory occupancy of the system in real time by monitoring the kernel functions that allocate and release memory pages, and for obtaining the running state of the system by using the memory occupancy as an important reference for the current performance state of the system. Based on this information, the resource requirements of the system can be assessed more accurately.
The file monitoring module is used for counting the read frequency of specific files and recording the hotness and access patterns of the files. By monitoring and analyzing these data in real time, a reasonable decision can be made dynamically in combination with the current performance state of the system.
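Purely as an illustrative sketch of this monitoring idea (the hook point and map layout are assumptions, not the application's file monitoring module), a per-file read counter keyed by inode number could be maintained roughly as follows:

```c
// Hypothetical sketch of the file monitoring idea: counting per-file read
// frequency in an eBPF map keyed by inode number (hook and layout assumed).
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>

struct {
    __uint(type, BPF_MAP_TYPE_LRU_HASH);
    __uint(max_entries, 8192);
    __type(key, u64);    // inode number
    __type(value, u64);  // read count
} read_freq SEC(".maps");

SEC("kprobe/vfs_read")
int BPF_KPROBE(count_reads, struct file *file)
{
    u64 ino = BPF_CORE_READ(file, f_inode, i_ino);
    u64 one = 1, *cnt = bpf_map_lookup_elem(&read_freq, &ino);
    if (cnt)
        __sync_fetch_and_add(cnt, 1);
    else
        bpf_map_update_elem(&read_freq, &ino, &one, BPF_ANY);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```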
Policy matching: based on preloaded configuration files or on monitoring data collected in the self-learning mode, the system is able to intelligently identify the current environment and needs. By analyzing and comparing these data, the system can be matched with the most suitable decision result, achieving optimal configuration and efficient utilization of resources.
Read scheduling: responsible for intelligently selecting whether to read data from the cache or from the I/O device according to the decision result. If the page cache decision is yes, the page cache data are read preferentially according to the system's original read flow; if the page cache decision is no, an O_DIRECT flag is added to the flags parameter of open(), and the data are read directly from the disk in a direct I/O manner, bypassing the page cache.
Referring to fig. 2, fig. 2 is a block diagram of a file processing apparatus according to an embodiment of the present application, where the apparatus includes:
The determining module 201 is configured to trigger, based on a file processing request initiated by a user process, a target eBPF function mounted on a kernel function, determine a target decision mode based on a type of the target eBPF function, and obtain current file processing data of a virtual file system when detecting that the file processing request is received;
the processing module 202 is configured to determine, based on the target decision mode and the file processing data, whether to open a page cache processing mechanism for file data to be processed corresponding to the file processing request, where the page cache processing mechanism is configured to process the file data to be processed from a memory.
The embodiment of the present application also provides a communication device, as shown in fig. 3, including a processor 701, a communication interface 702, a memory 703 and a communication bus 704, where the processor 701, the communication interface 702, and the memory 703 perform communication with each other through the communication bus 704,
A memory 703 for storing a computer program;
the processor 701, when executing the program stored in the memory 703, may implement the following steps:
Triggering a target eBPF function mounted on a kernel function based on the file processing request under the condition that the file processing request initiated by a user process is detected to be received, determining a target decision mode based on the type of the target eBPF function, and acquiring current file processing data of a virtual file system;
And determining whether to start a page cache processing mechanism for the file data to be processed corresponding to the file processing request based on the target decision mode and the file processing data, wherein the page cache processing mechanism is used for processing the file data to be processed from a memory.
Where the memory and the processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting the various circuits of the one or more processors and the memory together. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or may be a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor may be transmitted over a wired medium or through an antenna on a wireless medium, and the antenna further receives and transmits data to the processor. The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory may be used to store data used by the processor in performing operations.
The communication bus mentioned for the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other devices.
The memory may include a Random Access Memory (RAM), or may include a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present application, a computer readable storage medium is provided, where instructions are stored, which, when run on a computer, cause the computer to execute the file processing method of any one of the foregoing embodiments.
In yet another embodiment of the present application, a computer program product containing instructions is also provided, which, when run on a computer, causes the computer to perform the file processing method of any one of the foregoing embodiments.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, tape), an optical medium (e.g., DVD), a semiconductor medium (e.g., Solid State Disk (SSD)), or the like.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application are included in the protection scope of the present application.