
CN114296933B - Implementation method of lightweight container under end-edge cloud architecture and data processing system - Google Patents


Info

Publication number
CN114296933B
CN114296933B (application CN202111649189.3A)
Authority
CN
China
Prior art keywords
container
image
node
instruction
master node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111649189.3A
Other languages
Chinese (zh)
Other versions
CN114296933A (en)
Inventor
牛思杰
庞涛
崔思静
潘碧莹
陈梓荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd
Priority to CN202111649189.3A
Publication of CN114296933A
Application granted
Publication of CN114296933B
Legal status: Active


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Stored Programmes (AREA)

Abstract

The disclosure relates to the field of computer technology, and in particular to a method for implementing a lightweight container under an end-edge cloud architecture, a data processing system, and a storage medium. The method comprises the following steps: the cloud management platform sends a create-container instruction to the edge server; the edge server determines an image pull policy according to the create-container instruction and sends a control instruction to each node according to the image pull policy, where the control instruction comprises an image pull instruction and a container creation instruction; and each node pulls image layers according to the control instruction and union-mounts them according to the hierarchical relationship among the layers to generate a container layer, so as to start the container. Under an end-edge cloud architecture, this scheme effectively avoids the problem that resource-constrained terminals cannot run containers, makes reasonable use of limited resources, and provides a feasible approach to end-edge-cloud container coordination.

Description

Implementation method of lightweight container under end-edge cloud architecture and data processing system
Technical Field
The disclosure relates to the field of computer technology, and in particular to a method for implementing a lightweight container under an end-edge cloud architecture, a data processing system, and a storage medium.
Background
The end-edge cloud architecture is an integrated architecture in which terminal devices, edge servers, and cloud servers cooperate. Current mainstream container schemes, such as Docker, need to pull a container image before creating a container, and then create the container based on that image, which is equivalent to instantiating the image. Container images can be large; the common nginx image, for example, is 127 MB. Many IoT (Internet of Things) terminal devices have limited disk space (a camera, for example, typically has only 64 MB of flash or less) and cannot hold most images, so container schemes at the present stage are not feasible on them.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure aims to provide a method for implementing a lightweight container under an end-edge cloud architecture, a data processing system, and a storage medium, thereby overcoming, at least to some extent, the drawbacks caused by the limitations of the related art.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a method for implementing a lightweight container under an end-edge cloud architecture, the method including:
the cloud management platform sends a create-container instruction to the edge server;
the edge server determines an image pull policy according to the create-container instruction, and sends a control instruction to each node according to the image pull policy; the control instruction comprises an image pull instruction and a container creation instruction;
and each node pulls image layers according to the control instruction, and union-mounts them according to the hierarchical relationship among the layers to generate a container layer, so as to start the container.
In one exemplary embodiment of the present disclosure, the edge server includes a master node and a container deployment policy module;
the edge server determining an image pull policy according to the create-container instruction comprises the following steps:
the master node receives the create-container instruction, parses it to obtain container creation parameters, and sends the parameters to the container deployment policy module;
the container deployment policy module formulates an image pull policy from the container creation parameters according to preset rules, and returns the image pull policy to the master node; the image pull policy comprises a master-node image pull policy and a node image pull policy.
In one exemplary embodiment of the present disclosure, the container creation parameters include: container image configuration information, node resource status, the number of containers to be created, and the image name of the container to be created.
In an exemplary embodiment of the present disclosure, the container deployment policy module formulating an image pull policy from the container creation parameters according to preset rules includes:
ranking the nodes by priority according to each node's resource status;
assigning containers to the corresponding nodes in priority order according to the number of containers to be created, to obtain a container assignment result;
evaluating each node based on the container assignment result and the node resource status, and determining the container image pull information corresponding to each node when the evaluation passes;
and generating an image pull policy from the container assignment result and the container image pull information, and sending the image pull policy to the master node.
In an exemplary embodiment of the present disclosure, sending a control instruction to each node according to the image pull policy includes:
the master node sending a control instruction to each node according to the image pull policy.
In an exemplary embodiment of the disclosure, a node pulling image layers according to the control instruction and union-mounting them according to the hierarchical relationship among the layers to generate a container layer includes:
remotely mounting the master node's image layers on the node, and union-mounting them with the node's local image layers according to the inheritance relationship among the layers to generate a container layer.
In an exemplary embodiment of the present disclosure, the method further comprises:
the cloud management platform creating a container creation task in response to a container service request, executing the task, and sending a create-container instruction to the edge server.
According to a second aspect of the present disclosure, there is provided a data processing system, the system comprising:
a cloud management platform for sending a create-container instruction to the edge server;
an edge server for determining an image pull policy according to the create-container instruction and sending a control instruction to each node according to the image pull policy; the control instruction comprises an image pull instruction and a container creation instruction;
and nodes for pulling image layers according to the control instructions and union-mounting them according to the hierarchical relationship among the layers to generate a container layer, so as to start the container.
In one exemplary embodiment of the present disclosure, the edge server includes a master node and a container deployment policy module;
the master node is configured to receive the create-container instruction, parse it to obtain container creation parameters, and send the parameters to the container deployment policy module;
the container deployment policy module is configured to formulate an image pull policy from the container creation parameters according to preset rules and return it to the master node; the image pull policy comprises a master-node image pull policy and a node image pull policy.
According to a third aspect of the present disclosure, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the above method for implementing a lightweight container under an end-edge cloud architecture.
In the method for implementing a lightweight container under an end-edge cloud architecture provided by the embodiments of the present disclosure, the cloud management platform sends a create-container instruction to the edge server; the edge server determines an image pull policy according to the create-container instruction and sends a control instruction to each node according to the policy; each node pulls image layers according to the control instruction and union-mounts them according to the hierarchical relationship among the layers to generate a container layer. Layered pulling of the container image is thus realized: the container image, which a traditional container scheme would pull entirely to the local device, is instead stored in a distributed fashion across the edge server and the local terminal, solving the problem of running containers on resource-constrained terminals.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 schematically illustrates a method for implementing a lightweight container under an end-edge cloud architecture in an exemplary embodiment of the present disclosure;
Fig. 2 schematically illustrates an end-edge cloud system architecture in an exemplary embodiment of the present disclosure;
Fig. 3 schematically illustrates a method of determining an image pull policy in an exemplary embodiment of the present disclosure;
Fig. 4 schematically illustrates the flow of formulating an image pull policy in an exemplary embodiment of the present disclosure;
Fig. 5 schematically illustrates a data processing system in an exemplary embodiment of the present disclosure;
Fig. 6 schematically illustrates a storage medium in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
In the related art, mainstream container schemes (taking Docker as an example) need to pull a container image before creating a container, and create the container based on that image, which is equivalent to instantiating the image. A container image is the set of binary files and dependency packages required for the container to run; it is stored on the system in layers, and the files and configuration information of each layer are overlaid together to form the image. The bottom layer of the image is the base image, usually the file system of a Linux operating system; because it provides the dependency packages and common instruction sets for the executables of the other layers, the base image is usually quite large. The other layers of the image are typically modifications on top of the previous layers and are usually relatively small. Taking the common nginx image as an example, its size is 127 MB. Many IoT terminal devices have limited disk space (a camera's flash is typically only 64 MB or less), so they cannot hold most images, and container schemes at the present stage are not feasible on them. Containerizing the terminal is an important way to incorporate it into an end-edge-cloud collaborative integrated architecture. With the development of 5G networks, more IoT devices beyond mobile phones, such as cameras and home routers, can be connected, and the hardware resources of these terminal devices are relatively limited, so they cannot meet the running requirements of a general container scheme.
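To make the size arithmetic concrete, here is a small Python sketch. The split into base, libs, nginx, and config layers and their individual sizes are assumptions chosen for illustration; only the 127 MB total and the 64 MB flash limit come from the text above.

```python
# Hypothetical layer sizes (MB) for an nginx-like image; the
# individual figures are illustrative assumptions.
layers_mb = {"base": 80, "libs": 30, "nginx": 15, "config": 2}
flash_mb = 64  # typical camera flash capacity from the text above

full_image = sum(layers_mb.values())          # what a normal docker pull stores
on_terminal = full_image - layers_mb["base"]  # base layer kept on the edge server

assert full_image > flash_mb   # the whole image cannot fit on the terminal
assert on_terminal < flash_mb  # without the base layer it can
print(full_image, on_terminal)  # prints "127 47"
```

This is the core of the scheme: by leaving only the large base layer on the edge server, the terminal's share of the image drops below its flash capacity.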
To overcome the above drawbacks of the prior art, the present exemplary embodiment provides a method for implementing a lightweight container under an end-edge cloud architecture. Referring to Fig. 1, the method may include the following steps:
step S11, the cloud management platform sends a create-container instruction to an edge server;
step S12, the edge server determines an image pull policy according to the create-container instruction, and sends a control instruction to each node according to the image pull policy; the control instruction comprises an image pull instruction and a container creation instruction;
step S13, each node pulls image layers according to the control instruction and union-mounts them according to the hierarchical relationship among the layers to generate a container layer, so as to start the container.
In the method provided by this embodiment, based on the layered storage, layered pulling, and read-only properties of container image layers, the container image that a traditional container scheme would pull entirely to the local device is instead stored in a distributed fashion across the edge server and the local terminal, effectively solving the problem of running containers on resource-constrained terminals.
Hereinafter, each step of the method for implementing a lightweight container under an end-edge cloud architecture in this exemplary embodiment will be described in more detail with reference to the accompanying drawings and examples.
In step S11, the cloud management platform sends a create container instruction to the edge server.
In this example embodiment, referring to the end-edge cloud system architecture shown in Fig. 2, a management platform 211 and an image repository 212 may be deployed in the cloud 21. The cloud management platform can interact with the edge 22 devices and the terminals 23. The cloud 21 may be a cloud server. The edge 22 may be a deployed edge server; the edge server may include a master node 221 and a container deployment policy module 222. The terminal side 23 may include a plurality of terminal devices, each of which may be a node 231 (a slave node). For example, a terminal may be a smart device on the user side, such as a mobile phone or a tablet computer.
In this example embodiment, specifically, the cloud management platform creates a container creation task in response to a container service request, executes the task, and sends a create-container instruction to an edge server.
For example, when an application program in a user terminal needs to create a container, it may generate a container service request and send it to the cloud management platform. The container service request may include the number of containers the application needs to create, the application name and configuration information, and so on. After receiving the request, the cloud management platform may create a corresponding container creation task on the cloud server and send the task data to the edge server. The create-container instruction may include information such as the image name of the container to be created and the number of containers needed. Specifically, the create-container instruction may be sent to the master node in the edge server.
In step S12, the edge server determines an image pull policy according to the create-container instruction, and sends a control instruction to each node according to the image pull policy; the control instruction comprises an image pull instruction and a container creation instruction.
In this example embodiment, the edge server includes a master node and a container deployment policy module. Referring to Fig. 3, the edge server determining an image pull policy according to the create-container instruction may include:
step S121, the master node receives the create-container instruction, parses it to obtain container creation parameters, and sends the parameters to the container deployment policy module; and
step S122, the container deployment policy module formulates an image pull policy from the container creation parameters according to preset rules, and returns the image pull policy to the master node; the image pull policy comprises a master-node image pull policy and a node image pull policy.
Specifically, the container deployment policy module may interact with the master node via an HTTP RESTful API and record the interaction information as JSON. The recorded information may mainly include: 1) the configuration information of the container image to be pulled; 2) the number of containers that need to be started; and 3) the resource status information of each node. The container deployment policy module may return to the master node the image layers and other configuration information needed by the master node and each node. For example, the data the master node sends to the container deployment policy module may include node resource status information such as a node's IP address, memory occupancy, and disk occupancy. The information the container deployment policy module returns to the master node may include the IP address of the node or master node and the identifiers of the image layers.
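As a sketch, the JSON exchanged over this RESTful interface might look like the following. Every field name, IP address, and the layer-index encoding here is an assumption for illustration, not the actual schema used by the patent:

```python
import json

# Hypothetical master -> policy-module request.
request = {
    "image": "nginx:latest",
    "containers_needed": 2,
    "nodes": [  # per-node resource status reported by the master
        {"ip": "192.168.1.11", "mem_used_pct": 35, "disk_used_pct": 40},
        {"ip": "192.168.1.12", "mem_used_pct": 80, "disk_used_pct": 70},
    ],
}

# Hypothetical policy-module -> master response: which image layers
# each party pulls, identified here by layer index.
response = {
    "master_pull": {"layers": [0]},  # the large base layer stays on the edge
    "node_pulls": [{"ip": "192.168.1.11", "layers": [1, 2, 3]}],
}

# Round-trip through JSON, as the module would record the interaction.
decoded = json.loads(json.dumps(response))
assert decoded["master_pull"]["layers"] == [0]
```

The essential contract is only that the request carries per-node resource status and the response says, per party, which layers to pull; the exact shape would be chosen by the implementer.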
In this example embodiment, the container creation parameters may include: container image configuration information, node resource status, the number of containers to be created, the image name of the container to be created, and so on. The node resource status may include, for example: CPU utilization, memory usage, memory configuration, disk usage, and disk configuration.
In this example embodiment, referring to Fig. 4, the container deployment policy module formulating an image pull policy from the container creation parameters according to preset rules may include:
step S21, ranking the nodes by priority according to each node's resource status;
step S22, assigning containers to the corresponding nodes in priority order according to the number of containers to be created, to obtain a container assignment result;
step S23, evaluating each node based on the container assignment result and the node resource status, and determining the container image pull information corresponding to each node when the evaluation passes;
and step S24, generating an image pull policy from the container assignment result and the container image pull information, and sending the image pull policy to the master node.
Specifically, the method of formulating the image pull policy may include:
3.1, ranking the nodes by priority according to each node's resource status, with nodes that have more abundant resources receiving higher priority;
3.2, assigning the containers to nodes in priority order according to the number of containers to be created; for example, if 5 containers are to be created and there are 8 nodes in total, one container is assigned to each of the nodes with priority 1 to 5;
3.3, if the number of containers is larger than the number of nodes, allocating [number of containers / number of nodes] ([] denotes rounding) containers to each node and repeating step 3.2 for the remainder; for example, if 9 containers are to be created and there are 4 nodes in total, 3 containers are assigned to the node with priority 1 and 2 containers to each of the remaining nodes;
3.4, evaluating each node according to the assignment result and the node's resource condition; if the evaluation passes, indicating the node can host the deployment, generating the node's container image pull information (which records which image layers the node needs to pull and which image layers the master node needs to pull); if the node's evaluation does not pass, reducing the node's container assignment by 1 and re-evaluating, until the assignment drops to 0 or the evaluation passes;
3.5, reassigning the containers removed in step 3.4 from nodes that failed evaluation to the nodes that passed without any reduction, following steps 3.2 and 3.3;
3.6, repeating steps 3.4 and 3.5 until every node's evaluation passes; if the requirements still cannot be met after 10 repetitions, the container deployment policy module returns to the master node a result indicating that no policy can be formulated;
and 3.7, collating each node's container assignment result and container image pull information to generate the image pull policy, and returning it to the master node.
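The assignment loop in steps 3.1 to 3.7 can be sketched as follows. This is a simplified model under stated assumptions: the per-node evaluation of step 3.4 is reduced to a single capacity `cap`, and resource status to a single `free_mb` figure, whereas the real policy module evaluates much richer state.

```python
def make_pull_policy(nodes, n_containers, max_rounds=10):
    """Toy version of the policy loop in steps 3.1-3.7.

    nodes: list of dicts with 'name', 'free_mb' (stands in for the
    resource status of 3.1) and 'cap' (the most containers the node
    can host, standing in for the evaluation of 3.4). Returns a
    {name: container count} assignment, or None when no policy can
    be formulated (the failure return of 3.6).
    """
    # 3.1: rank nodes, most free resources first.
    ranked = sorted(nodes, key=lambda n: n["free_mb"], reverse=True)
    # 3.2 / 3.3: deal containers over the nodes in priority order.
    assign = {n["name"]: 0 for n in ranked}
    for i in range(n_containers):
        assign[ranked[i % len(ranked)]["name"]] += 1
    # 3.4 - 3.6: shed containers from nodes that fail evaluation and
    # hand the surplus back to nodes that still have headroom.
    for _ in range(max_rounds):
        surplus = 0
        for n in ranked:
            over = assign[n["name"]] - n["cap"]
            if over > 0:
                assign[n["name"]] = n["cap"]
                surplus += over
        if surplus == 0:
            return assign  # 3.7: every node passed evaluation
        for n in ranked:
            take = min(n["cap"] - assign[n["name"]], surplus)
            assign[n["name"]] += take
            surplus -= take
        if surplus > 0:
            return None  # total capacity exceeded: no policy possible
    return None  # max_rounds exhausted without converging (3.6)

nodes = [{"name": "n1", "free_mb": 500, "cap": 3},
         {"name": "n2", "free_mb": 200, "cap": 2}]
assert make_pull_policy(nodes, 4) == {"n1": 2, "n2": 2}
assert make_pull_policy(nodes, 9) is None  # over total capacity
```

Note the dealing loop reproduces the examples of steps 3.2 and 3.3: 5 containers over 8 nodes gives one each to the top five, and 9 containers over 4 nodes gives 3 to the priority-1 node and 2 to the rest.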
In this example embodiment, sending a control instruction to each node according to the image pull policy includes: the master node sending a control instruction to each node according to the image pull policy.
In step S13, each node pulls image layers according to the control instruction and union-mounts them according to the hierarchical relationship among the layers to generate a container layer, so as to start the container.
In this example embodiment, specifically, the control instruction may include the image layers the node needs to pull and the configuration parameters; each node pulls the specified image layers from the cloud image repository. The node remotely mounts the master node's image layers via RPC and union-mounts them with its local image layers according to the inheritance relationship among the layers to generate a container layer.
For example, the base image may be deployed on the master node, and nodes may be selected based on node resource status. Suppose there are 3 nodes: if Node 2's memory occupancy is too high, the containers may be created by Node 1 and Node 3 instead; if Node 1's disk occupancy is relatively high, it may download three image layers locally and mount two layers from the master node; if Node 3's disk occupancy is relatively low, it may download four image layers locally and mount only one layer (the base image) from the master node. Since Node 1 mounts one more remote image layer than Node 3, its memory consumption will be higher.
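The union mount described above maps naturally onto Linux overlayfs. The sketch below builds the mount command a node might issue; all paths, and the choice of overlayfs specifically, are assumptions for illustration (the patent only specifies RPC remote mounting plus joint mounting):

```python
def overlay_mount_cmd(local_layers, remote_base, upper, work, merged):
    """Build an overlayfs mount that joins locally pulled image layers
    with the base layer remotely mounted from the master node.

    overlayfs lists lowerdir entries top-to-bottom, so the base layer
    (the lowest image layer) goes last; 'upper' becomes the writable
    container layer and 'merged' becomes the container's rootfs.
    """
    lower = ":".join(local_layers + [remote_base])
    return (f"mount -t overlay overlay "
            f"-o lowerdir={lower},upperdir={upper},workdir={work} {merged}")

cmd = overlay_mount_cmd(
    local_layers=["/var/lib/layers/config", "/var/lib/layers/nginx"],
    remote_base="/mnt/master/base",    # RPC-mounted base layer from the master
    upper="/var/lib/container/upper",  # writable container layer
    work="/var/lib/container/work",
    merged="/var/lib/container/rootfs",
)
assert "lowerdir=/var/lib/layers/config:/var/lib/layers/nginx:/mnt/master/base" in cmd
```

Because overlayfs lower layers are read-only, this fits the read-only image-layer property the scheme relies on: the remotely mounted base layer is never written, so many nodes can share it.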
In the method for implementing a lightweight container under an end-edge cloud architecture described above, based on the layered storage, layered pulling, and read-only properties of container image layers, the container image that a traditional container scheme would pull entirely to the local device is instead stored in a distributed fashion across the edge server and the local terminal. A container deployment policy module is deployed on the edge server node; when the edge receives an instruction from the cloud management platform to start a terminal container, the module determines the image pull and storage policy according to the resource condition of the terminal node and the configuration information of the target container image: for example, the larger base layer is pulled to the edge server, and the smaller image layers are pulled to the terminal. When the terminal starts the container, the base layer on the edge server is designated as part of the container's rootfs via RPC remote mounting, and a writable container layer is generated after it is union-mounted with the other image layers on the terminal, completing container startup. If there are multiple nodes in the cluster, the container deployment policy module decides on which node or nodes to launch the container, and the number of image layers each node pulls may differ. The container deployment policy module thus enables intelligent and flexible deployment of cluster containers; moreover, multiple nodes in the cluster can reuse the image layers on the master, saving space; and the module's configuration can be modified according to the cluster's actual situation and requirements.
Mainstream container schemes in the industry mainly target cloud and edge operation and have many limitations on resource-constrained devices such as mobile and smart terminals; by contrast, the technical scheme described here effectively avoids the problem that resource-constrained terminals cannot smoothly run containers, makes reasonable use of limited resources, and provides a feasible foundation for end-edge-cloud container coordination.
It is noted that the above-described figures are only schematic illustrations of processes involved in a method according to an exemplary embodiment of the invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Further, referring to Fig. 5, this example embodiment also provides a data processing system 50, which may include: a cloud server 501, an edge server 502, and terminals 503; wherein
the cloud server 501 is configured to host a cloud management platform 5011 and send a create-container instruction to the edge server 502;
the edge server 502 is configured to determine an image pull policy according to the create-container instruction and send a control instruction to each node 5031 according to the image pull policy; the control instruction comprises an image pull instruction and a container creation instruction;
and the terminals 503 are configured to provide the nodes 5031, which pull image layers according to the control instruction and union-mount them according to the hierarchical relationship among the layers to generate a container layer, so as to start the container.
In some example embodiments, the edge server 502 may include a master node 5022 and a container deployment policy module 5021.
The master node 5022 may be configured to receive the create-container instruction, parse it to obtain container creation parameters, and send the parameters to the container deployment policy module.
The container deployment policy module 5021 may be configured to formulate an image pull policy from the container creation parameters according to preset rules, and return the image pull policy to the master node; the image pull policy comprises a master-node image pull policy and a node image pull policy.
In some exemplary embodiments, the cloud server 501 includes an image repository 5012 from which the nodes 5031 pull image layers.
The implementation method of the lightweight container under the end-edge cloud architecture described above is applied to the data processing system 50; the specific details of each module of the data processing system 50 have already been described in detail in that method, and are therefore not repeated here.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
Referring to fig. 6, a program product 600 for implementing the above-described method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (7)

1. A method for implementing a lightweight container under an end-edge cloud architecture, comprising the following steps:
a cloud management platform sends a container creation instruction to an edge server, wherein the edge server comprises a master node and a container deployment policy module;
the master node receives the container creation instruction, parses the container creation instruction to obtain container creation parameters, and sends the container creation parameters to the container deployment policy module;
the container deployment policy module formulates an image pull policy according to the container creation parameters and preset rules, and returns the image pull policy to the master node; the image pull policy comprises a master-node image pull policy and a node image pull policy;
the master node sends control instructions to each node according to the image pull policy; the control instructions comprise an image pull instruction and a container creation instruction;
the node remotely mounts image layers of the master node and union-mounts them with local image layers according to the inheritance relationship between the image layers to generate a container layer, so as to start the container.
2. The method for implementing a lightweight container under an end-edge cloud architecture according to claim 1, wherein the container creation parameters include: container image configuration information, node resource states, the number of containers to be created, and the image name of the container to be created.
3. The method for implementing a lightweight container under an end-edge cloud architecture according to claim 2, wherein the container deployment policy module formulating an image pull policy according to the container creation parameters and preset rules comprises the following steps:
ranking the nodes by priority according to the node resource states;
sequentially allocating corresponding nodes according to the number of containers to be created and the priority order to obtain a container division result;
evaluating each node based on the container division result and the node resource states, and determining, when the evaluation passes, the container image pull information corresponding to each node;
generating an image pull policy according to the container division result and the container image pull information, and sending the image pull policy to the master node.
4. The method for implementing a lightweight container under an end-edge cloud architecture according to claim 1, wherein the sending of control instructions to each node according to the image pull policy comprises:
the master node sending a control instruction to each node according to the image pull policy.
5. The method for implementing a lightweight container under an end-edge cloud architecture according to claim 1, further comprising:
the cloud management platform, in response to a container service request, creating a container creation task, executing the container creation task, and sending a container creation instruction to the edge server.
6. A data processing system, the system comprising:
a cloud server, configured to host a cloud management platform and send a container creation instruction to an edge server;
the edge server, comprising a master node and a container deployment policy module, wherein the master node is configured to receive the container creation instruction, parse the container creation instruction to obtain container creation parameters, and send the container creation parameters to the container deployment policy module; the container deployment policy module is configured to formulate an image pull policy according to the container creation parameters and preset rules, and return the image pull policy to the master node; the image pull policy comprises a master-node image pull policy and a node image pull policy; and the master node is configured to send control instructions to each node according to the image pull policy; the control instructions comprise an image pull instruction and a container creation instruction;
a terminal, configured to provide a node that remotely mounts image layers of the master node and union-mounts them with local image layers according to the inheritance relationship between the image layers to generate a container layer, so as to start the container.
7. A storage medium having stored thereon a computer program which, when executed by a processor, implements the method for implementing a lightweight container under an end-edge cloud architecture according to any one of claims 1 to 5.
CN202111649189.3A 2021-12-30 2021-12-30 Implementation method of lightweight container under end-edge cloud architecture and data processing system Active CN114296933B (en)

Publications (2)

Publication Number Publication Date
CN114296933A CN114296933A (en) 2022-04-08
CN114296933B true CN114296933B (en) 2024-10-11





