
CN110583004B - Method, processing system and storage medium for server to provide data values - Google Patents


Info

Publication number
CN110583004B
CN110583004B (application CN201880029063.6A)
Authority
CN
China
Prior art keywords
server
value
asynchronous
computation
response
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN201880029063.6A
Other languages
Chinese (zh)
Other versions
CN110583004A (en)
Inventor
A. K. Iyengar
Current Assignee (listing may be inaccurate)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Priority claimed from US15/584,345 external-priority patent/US10540282B2/en
Priority claimed from US15/584,381 external-priority patent/US10437724B2/en
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of CN110583004A publication Critical patent/CN110583004A/en
Application granted granted Critical
Publication of CN110583004B publication Critical patent/CN110583004B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 67/00 Network arrangements or protocols for supporting network services or applications
            • H04L 67/01 Protocols
              • H04L 67/10 Protocols in which an application is distributed across nodes in the network
                • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
            • H04L 67/50 Network services
              • H04L 67/56 Provisioning of proxy services
                • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A processing system server and a method for providing data values with the server. The server maintains a cache of objects and performs an asynchronous computation to determine the value of an object. Before the asynchronous computation has determined the value of the object, a request for the object is answered with the value of the object from the cache. After the asynchronous computation has determined the value of the object, a request for the object is answered with the value determined by the asynchronous computation.

Description

Method, processing system and storage medium for server to provide data values
Background
The present invention relates generally to data caching in computer systems. Accessing data storage may incur high latency. This is particularly true for cloud storage, where the latency of retrieving and storing data may be high because the storage server may be remote from the client. There is a need for a method of reducing the latency of data storage operations.
Disclosure of Invention
According to various embodiments, a processing system including a server, and a method of providing data values with the server, are disclosed. The method comprises: the server maintaining a cache of objects; the server performing an asynchronous computation to determine a value of an object; returning the value of the object from the cache in response to a request for the object before the asynchronous computation has determined the value of the object; and returning the value of the object determined by the asynchronous computation in response to a request for the object after the asynchronous computation has determined the value of the object.
According to various embodiments of the present invention, there is provided a processing system comprising: a server; persistent storage; a network interface device for communicating with one or more networks; and at least one processor communicatively coupled to the server, the persistent storage, and the network interface device, the at least one processor, in response to executing computer instructions, operable to perform operations comprising: the server maintaining a cache of objects in the persistent storage; the server performing an asynchronous computation to determine a value of an object; returning the value of the object from the cache in response to a request for the object before the asynchronous computation has determined the value of the object; and returning the value of the object determined by the asynchronous computation in response to a request for the object after the asynchronous computation has determined the value of the object.
Drawings
The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, and which together with the detailed description below are incorporated in and form part of the specification, serve to further illustrate various embodiments and to explain various principles and advantages, all in accordance with the present invention, wherein:
FIG. 1 shows a block diagram of an example of a client-server system according to an embodiment of the invention;
FIG. 2 illustrates a block diagram of an example of a method for asynchronous data storage operations, according to an embodiment of the invention;
FIG. 3 illustrates a block diagram of an example processing system server node, according to an embodiment of the present invention;
FIG. 4 depicts a cloud computing environment suitable for use with embodiments of the present invention; and
FIG. 5 depicts abstraction model layers according to the cloud computing embodiment of FIG. 4.
Detailed Description
As required, detailed embodiments are disclosed herein; however, it is to be understood that the disclosed embodiments are merely examples and that the systems and methods described below may be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present subject matter in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting but rather to provide an understandable description of the concept.
The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Various embodiments of the present invention are applicable to caching in a variety of environments, including cloud environments and non-cloud environments. FIG. 1 shows a server processing system according to the invention.
The first server 101 accesses data from at least one other server 103. The at least one other server 103 may comprise one server or a plurality of servers. It may take considerable time for the first server 101 to query the at least one other server 103. Thus, the first server 101 includes a local cache 102 for storing data retrieved from the at least one other server 103. Information can be retrieved from the local cache 102 substantially faster than from the at least one other server 103, which is remote. The local cache 102 can therefore improve performance.
One problem is that the data fetched from the local cache 102 may not be current. The present invention provides a method and system for alleviating this problem.
FIG. 2 illustrates a method for obtaining data from the at least one other server 103, according to one embodiment of the invention. In this example, the local cache 102 is continuously maintained in step 201. The local cache 102 contains data from the at least one other server 103. Assume that object o1, needed by the first server 101, is an important object stored on the at least one other server 103. In step 202, the first server 101 creates an asynchronous computation f1 to obtain object o1 from the at least one other server 103. An asynchronous computation may be a thread or process that can run in parallel with the existing computation without blocking it. One example of an asynchronous computation is a future. A future represents the result of an asynchronous computation that may not yet have completed execution. Futures can be implemented in a variety of programming languages, such as Java.
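As an illustration (not the patent's own code), the asynchronous fetch of step 202 can be sketched with Java's standard CompletableFuture; here fetchFromRemote and the cache map are hypothetical stand-ins for the remote lookup and the local cache 102:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class AsyncFetch {
    // Hypothetical stand-in for the local cache 102.
    static final Map<String, Integer> cache = new ConcurrentHashMap<>();

    // Hypothetical stand-in for a (slow) lookup on the at least one other server 103.
    static Integer fetchFromRemote(String key) {
        return 42; // placeholder value
    }

    // Step 202: create asynchronous computation f1 to obtain object o1.
    // The caller continues executing; the fetch runs on another thread.
    static CompletableFuture<Integer> createAsyncFetch(String key) {
        return CompletableFuture.supplyAsync(() -> fetchFromRemote(key));
    }

    public static void main(String[] args) {
        CompletableFuture<Integer> f1 = createAsyncFetch("o1");
        // ... the main computation continues here without blocking ...
        cache.put("o1", f1.join()); // later: retrieve the result and refresh the cache
    }
}
```

The main computation is free to run between creating f1 and calling join(), which is the non-blocking behavior the embodiment relies on.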
For this example, ListenableFuture (an extension of the basic Java Future, discussed further below) provides additional functionality.
There are several types of events that may trigger method step 202 to create asynchronous computation f1 to obtain object o1. For example, a request for object o1 might trigger method step 202. Method step 202 may also be invoked periodically, for example after a period of time (i.e., a time interval) has elapsed. If data on how frequently object o1 changes is available, it can be used to determine when to invoke method step 202. If object o1 changes frequently, it may be desirable to invoke method step 202 frequently; if object o1 changes less frequently, method step 202 may be invoked less frequently to reduce overhead.
The importance of having the current value of object o1 can also be used to decide how frequently step 202 is invoked. If having the current value of object o1 is important, method step 202 may be invoked more frequently; if it is less important, method step 202 may be invoked less frequently.
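One plausible refresh heuristic (an assumption for illustration, not specified in the text) scales the interval between invocations of step 202 inversely with both the object's change frequency and the importance of freshness:

```java
public class RefreshPolicy {
    // Hypothetical heuristic: objects that change more often, or whose
    // freshness matters more, are refreshed more frequently. Both
    // parameters and the formula are assumptions for illustration only.
    static long refreshIntervalMillis(double changesPerHour, double importance) {
        double base = 3_600_000.0 / Math.max(changesPerHour, 0.01); // ~time between changes
        return (long) Math.max(base / Math.max(importance, 0.01), 1_000L); // floor of 1 second
    }
}
```

For example, an object changing 60 times per hour would be refreshed far more often than one changing once per hour, and raising the importance weight shortens the interval further.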
According to the present embodiment, now assume that a request for object o1 is received before asynchronous computation f1 has obtained object o1 from the at least one other server 103 (step 203). In this case, the value of object o1 from the cache 102, if present, is returned in step 205.
At step 204, according to an embodiment, a request for object o1 is received after asynchronous computation f1 has obtained object o1 from the at least one other server 103. In this case, in step 206, the value of object o1 obtained from the at least one other server 103 by asynchronous computation f1 is returned.
Once asynchronous computation f1 has fetched object o1 from the at least one other server 103, the first server 101 may optionally update the cache 102 with the updated value of object o1 obtained from the at least one other server 103 through asynchronous computation f1.
Asynchronous computations, such as threads, processes, futures, etc., may also be used to store data at the at least one other server 103. In this way, the computation is not blocked waiting for the remote storage operation to complete. For example, the first server 101 may invoke future f2 to store object o2 on the at least one other server 103. The existing computation may continue executing before future f2 completes the storage operation.
Once future f2 has stored object o2 on the at least one other server 103, the first server 101 may optionally update the cache 102 with the value of object o2 stored on the at least one other server 103 via future f2.
Asynchronous computations for determining the value of an object may require complex calculation. For example, computing the value of an object may be computationally expensive, or determining the value may involve accessing several databases, which can introduce substantial latency.
In some cases, the at least one other server 103 may include two or more servers. If the first server 101 requests object o3 from the at least one other server 103, the multiple servers comprising the at least one other server 103 may return different values for object o3. In this case, according to an embodiment, asynchronous computation f3 determines which value to return for object o3. There are several ways to do this:
the asynchronous calculation f3 can see which objects o3 have the most frequent values. For example, assume that the asynchronous calculation f3 receives the values of 3 objects o 3: 300. 200 and 200. Since the value 200 occurs most frequently, 200 is the value returned.
According to various embodiments, a timestamp may be associated with each value returned from a server. In this case, asynchronous computation f3 returns the value with the most recent timestamp.
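The first selection rule can be sketched as a simple majority vote (the method name mostFrequent is an assumption for illustration):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ValueResolver {
    // Asynchronous computation f3: among the values returned by the
    // multiple servers, pick the one that occurs most frequently.
    static int mostFrequent(List<Integer> replies) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int v : replies) {
            counts.merge(v, 1, Integer::sum); // tally each distinct value
        }
        // Return the key of the entry with the highest count.
        return Collections.max(counts.entrySet(), Map.Entry.comparingByValue()).getKey();
    }
}
```

For the example above, mostFrequent(List.of(300, 200, 200)) yields 200; the timestamp-based rule would instead compare per-value timestamps and keep the newest.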
Further details of how various embodiments of the present invention may be implemented in the Java programming language are discussed below by way of example. Other programming languages that provide futures may also be used. In Java, a Future represents the result of an asynchronous computation. Methods are provided to check whether the computation is complete, to wait for its completion, and to retrieve its result.
Assume that a key references an object. A request to obtain the value of the object corresponding to "key1" is issued through a method call:
MultiValue mv1 = lookup("key1");
the MultiValue class includes the following fields:
cachedVal: the value (if any) retrieved from the cache 102. The lookup method performs the cache lookup before it returns. In some cases, "key1" may not correspond to any value in the cache, in which case cachedVal is set to a value indicating that no cached value exists.
storeValueFuture: a future for requesting the value of "key1" from the at least one other server 103. A separate thread is used for the future. The lookup method does not wait for this thread to complete before returning, allowing the main computation to continue execution without blocking.
mv1 has a getFast() method that returns a value quickly, implemented as follows:
if (!storeValueFuture.isDone() && cachedVal.exists())
    return cachedVal.value();
else
    return storeValueFuture.get();
If storeValueFuture has not completed execution and a cached value exists (stored in cachedVal), getFast() immediately returns the cached value without blocking. After the future completes, getFast() returns the value obtained from the at least one other server 103.
Using this approach, the program may use mv1.cachedVal as the value corresponding to "key1" before storeValueFuture completes execution. After storeValueFuture completes execution, the program may use the value returned by storeValueFuture as the value corresponding to "key1". In addition, an error-handling method may be provided, which is invoked if the lookup operation on the at least one other server 103 fails. If ListenableFutures are used, a callback may be provided in the application that executes as soon as storeValueFuture completes execution.
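Putting the pieces together, here is a minimal MultiValue sketch using the standard CompletableFuture in place of Future/ListenableFuture; the field and method names follow the description above, and Optional models the possibly-missing cached value (both substitutions are assumptions for illustration):

```java
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

public class MultiValue {
    final Optional<Integer> cachedVal;                 // value from cache 102, if any
    final CompletableFuture<Integer> storeValueFuture; // async lookup at server 103

    MultiValue(Optional<Integer> cachedVal, CompletableFuture<Integer> storeValueFuture) {
        this.cachedVal = cachedVal;
        this.storeValueFuture = storeValueFuture;
    }

    // Return quickly: prefer the cached value while the future is still running.
    int getFast() {
        if (!storeValueFuture.isDone() && cachedVal.isPresent()) {
            return cachedVal.get();
        }
        return storeValueFuture.join();
    }

    // Always wait for the value from the at least one other server 103.
    int getFromServer() {
        return storeValueFuture.join();
    }
}
```

While the future is pending, getFast() serves the stale-but-fast cached value; once it completes, both methods return the fresh remote value.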
In some cases, it is desirable to get the value from the at least one other server 103, even if this means waiting for the operation to complete. This can be achieved by calling mv1.getFromServer(), which simply executes:
return storeValueFuture.get();
Java also has ListenableFuture, which extends the Java Future interface and allows a callback computation to be performed when the future completes execution. Using ListenableFuture, the lookup method may be implemented such that the cache is automatically updated once the asynchronous computation obtaining the value of "key1" from the at least one other server 103 has completed execution.
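With the standard library, the same automatic cache refresh can be sketched via CompletableFuture's completion callbacks instead of ListenableFuture; the cache map here is a hypothetical stand-in for cache 102:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class CacheRefresh {
    // Hypothetical stand-in for the local cache 102.
    static final Map<String, Integer> cache = new ConcurrentHashMap<>();

    // Attach a callback that writes the fetched value into the cache
    // as soon as the asynchronous lookup completes.
    static CompletableFuture<Integer> lookupAndRefresh(String key,
            CompletableFuture<Integer> storeValueFuture) {
        return storeValueFuture.thenApply(value -> {
            cache.put(key, value); // automatic cache update on completion
            return value;
        });
    }
}
```

The callback runs whenever the lookup finishes, so callers never have to refresh the cache explicitly; Guava's Futures.addCallback provides the analogous hook for ListenableFuture.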
Now consider a request to store object7 with the key "key1" in the at least one other server 103:
future1 = dataStore.putAsync("key1", object7);
This computation does not block waiting for the putAsync operation to complete; thus, the computation can continue without blocking. We may determine whether the write operation to the at least one other server 103 has completed via:
future1.isDone();
which returns true if the write operation to the at least one other server 103 has completed. If future1 is a ListenableFuture, a callback may be used to update the cache 102 after the write operation to the at least one other server 103 has completed.
Alternatively, putAsync may cache object7 immediately, without waiting for the write operation to the at least one other server 103 to complete. This caching is done synchronously, so execution does not continue until object7 has been cached. putAsync also creates a future to store object7 on the at least one other server 103. This future is then returned by putAsync as future1, and the application continues execution without waiting for the future to finish storing object7 at the at least one other server 103. If the object corresponding to "key1" is requested before object7 has been stored in the at least one other server 103, object7 may be obtained from the cache 102.
An error-handling method may be provided, which is invoked if storing object7 in the at least one other server 103 fails. If object7 is not successfully stored on the at least one other server 103, the error-handling method may be used to retry the storage operation.
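A sketch of this putAsync behavior, again with CompletableFuture; storeRemote and the single-retry error policy are assumptions for illustration:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class AsyncPut {
    // Hypothetical stand-in for the local cache 102.
    static final Map<String, Object> cache = new ConcurrentHashMap<>();

    // Hypothetical remote write to the at least one other server 103;
    // may throw on failure.
    static void storeRemote(String key, Object value) {
        // network write would go here
    }

    // Cache the object synchronously, then store it remotely without
    // blocking the caller; on failure, retry the remote write once.
    static CompletableFuture<Void> putAsync(String key, Object value) {
        cache.put(key, value); // immediate, synchronous cache update
        return CompletableFuture
                .runAsync(() -> storeRemote(key, value))
                .exceptionally(ex -> {
                    storeRemote(key, value); // simplified error handling: retry once
                    return null;
                });
    }
}
```

Because the cache is updated before the remote write finishes, a request for "key1" arriving during the write is served object7 from the cache, matching the behavior described above.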
Example of a processing System Server node operating in a network
FIG. 3 illustrates an example of a processing system server node 300 (also referred to as a computer system/server or as a server node) suitable for use with the client-server system illustratively depicted in FIG. 1. According to this embodiment, server node 300 is communicatively coupled to cloud infrastructure 332, which may include one or more communication networks. The cloud infrastructure 332 is communicatively coupled with a storage cloud 334 (which may include one or more storage servers) and a computing cloud 336 (which may include one or more computing servers). This simplified example is not intended to suggest any limitation as to the scope of use or functionality of various exemplary embodiments of the invention described herein.
Computer system/server 300 may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Referring now more particularly to FIG. 3, the following discussion describes a more detailed view of an exemplary cloud infrastructure server node embodying at least a portion of the client-server system of FIG. 1. According to the present embodiment, at least one processor 302 is communicatively coupled to a main system memory 304 and a persistent storage memory 306.
A bus architecture 308 facilitates communicative coupling between the at least one processor 302 and the various components of the server node 300. The bus 308 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Main system memory 304, in one embodiment, may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) and/or cache memory. By way of example only, persistent memory storage system 306 may be provided for reading from and writing to non-removable, non-volatile magnetic media (not shown, commonly called a "hard drive"). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM, or other optical media can be provided. In such instances, each can be connected to bus 308 by one or more data media interfaces. As will be further depicted and described below, persistent storage 306 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of various embodiments of the present invention.
A program/utility, having a set (at least one) of program modules, may be stored in the persistent storage 306, as well as, by way of example and not limitation, an operating system 324, one or more application programs 326, other program modules, and program data. Each of the operating system 324, one or more application programs 326, other program modules, and program data, or some combination thereof, may include an implementation of a networking environment. The program modules generally may perform the functions and/or methodologies of the various embodiments of the invention described herein.
The at least one processor 302 is communicatively coupled to one or more network interface devices 316 via the bus architecture 308. According to various embodiments, network interface device 316 is operatively coupled in network communication with one or more cloud infrastructures 332. The cloud infrastructure 332 includes: a storage cloud 334 comprising one or more storage servers (also referred to as storage server nodes); and a computing cloud 336 comprising one or more computing servers (also referred to as computing server nodes). The network interface device 316 may communicate with one or more networks, such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet). The network interface device 316 facilitates communication between the server node 300 and other server nodes in the cloud infrastructure 332.
The user interface 310 is communicatively coupled with the at least one processor 302, for example, via the bus architecture 308. According to the present embodiment, the user interface 310 includes a user output interface 312 and a user input interface 314. Examples of elements of the user output interface 312 may include a display, a speaker, one or more indicator lights, one or more transducers that produce audible indicators, and a tactile signal generator. Examples of elements of the user input interface 314 may include a keyboard, a keypad, a mouse, a touch pad, and a microphone that receives audio signals. The received audio signals may be converted to an electronic digital representation and stored in memory, for example, and optionally may be used with speech recognition software executed by the processor 302 to receive user input data and commands.
A computer-readable medium reader/writer device 318 is communicatively coupled with the at least one processor 302. The reader/writer device 318 is communicatively coupled with a computer-readable medium 320. According to various embodiments, server node 300 may generally include any type of computer-readable media 320. Such media may be any available media that is accessible by the computer system/server 300 and may include any one or more of volatile media, nonvolatile media, removable media, and non-removable media.
Computer instructions 307 may be stored, at least in part, at various locations in the server node 300. For example, at least some of the instructions 307 may be stored in any one or more of: the internal cache memory of the one or more processors 302, the main memory 304, the persistent storage 306, and the computer-readable media 320.
According to this embodiment, the instructions 307 include computer instructions, data, configuration parameters, and other information that may be used by the at least one processor 302 to perform the features and functions of the server node 300. According to this example, the instructions 307 include an operating system 324, one or more applications 326, a set of ListenableFuture methods 328, and a set of MultiValue methods 330, as discussed above with reference to FIGS. 1-2. Additionally, the instructions 307 include server node configuration data.
According to this embodiment, the at least one processor 302 is communicatively coupled with a server cache memory 322 (also referred to as a local cache), the server cache memory 322 may store server node data, infrastructure messages and data of at least a portion of the network system and cloud in communication with the server node 300, and other data for operation of services and applications coupled with the server node 300. As described above, various functions and features of the present invention may be provided through the use of the server node 300.
Example cloud computing Environment
It is to be understood in advance that although this disclosure includes a detailed description of cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed.
Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
The characteristics are as follows:
On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically, without requiring human interaction with the service provider.
Broad network access: capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and personal digital assistants (PDAs)).
Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center).
Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
The service models are as follows:
Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface, such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
The deployment model is as follows:
Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.
Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.
Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
A cloud computing environment is service oriented, with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes.
Referring now to FIG. 4, an exemplary cloud computing environment 450 is shown. As shown, the cloud computing environment 450 includes one or more cloud computing nodes 410 with which local computing devices used by cloud consumers, such as a personal digital assistant (PDA) or cellular telephone 454A, a desktop computer 454B, a laptop computer 454C, and/or an automobile computer system 454N, may communicate. The cloud computing nodes 410 may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, including, but not limited to, private, community, public, or hybrid clouds as described above, or a combination thereof. This allows the cloud computing environment 450 to offer Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and/or Software as a Service (SaaS), for which a cloud consumer does not need to maintain resources on a local computing device. It should be understood that the types of computing devices 454A-N shown in FIG. 4 are intended to be illustrative only, and that the cloud computing nodes 410 and the cloud computing environment 450 can communicate with any type of computerized device over any type of network and/or network-addressable connection (e.g., using a web browser).
Referring now to FIG. 5, a set of functional abstraction layers provided by the cloud computing environment 450 is shown. It should be understood in advance that the components, layers, and functions shown in FIG. 5 are intended to be illustrative only, and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided:
The hardware and software layer 560 includes hardware and software components. Examples of hardware components include: mainframes 561; RISC (Reduced Instruction Set Computer) architecture-based servers 562; servers 563; blade servers 564; storage devices 565; and networks and networking components 566. Examples of software components include network application server software 567 and database software 568.
The virtualization layer 570 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 571, virtual storage 572, virtual networks 573 (including virtual private networks), virtual applications and operating systems 574, and virtual clients 575.
In one example, the management layer 580 may provide the functions described below. Resource provisioning 581 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and pricing 582 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources; in one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. A user portal 583 provides access to the cloud computing environment for consumers and system administrators. Service level management 584 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 585 provides pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
The workload layer 590 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions that may be provided from this layer include: mapping and navigation 591; software development and lifecycle management 592; virtual classroom education delivery 593; data analytics processing 594; transaction processing 595; and other data communication and delivery services 596. As discussed above, various functions and features of the present invention may be provided through the use of a server node 300 communicatively coupled with a cloud infrastructure 332, which may include a storage cloud 334 and/or a compute cloud 336.
Non-limiting examples
The present invention may be a system, a method, and/or a computer program product at any possible level of technical detail of integration. The computer program product may include a computer-readable storage medium (or media) having computer-readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer-readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer-readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Although the present specification may describe components and functions implemented in the embodiments with reference to particular standards and protocols, the invention is not limited to such standards and protocols. Each standard represents an example of the state of the art; such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions.
The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of various embodiments, and they are not intended to serve as a complete description of all the elements and features of apparatus and systems that might make use of the structures described herein. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The figures are also merely representational and may not be drawn to scale. Some proportions may be exaggerated, while other proportions may be minimized. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. The examples herein are intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments and other embodiments not specifically described herein are contemplated herein.
The Abstract is provided with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features are grouped together in a single example embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as separately claimed subject matter.
Although only one processor is shown for the information handling system, an information handling system with multiple CPUs or processors may be used equally effectively. Various embodiments of the present invention may further comprise interfaces that each include a separate, fully programmed microprocessor for offloading processing from the processor. The operating system included in the main memory of the processing system may be a suitable multi-tasking and/or multi-processing operating system, such as, but not limited to, any operating system based on Linux, UNIX, Windows, and Windows Server. Various embodiments of the present invention are capable of using any other suitable operating system. Various embodiments of the present invention utilize an architecture, such as an object oriented framework mechanism, that allows instructions of the components of the operating system to execute on any processor located within the information processing system. Various embodiments of the present invention are capable of being adapted to work with any data communications connections including present day analog and/or digital techniques or through future networking mechanisms.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term "another", as used herein, is defined as at least a second or more. The terms "including" and "having", as used herein, are defined as comprising (i.e., open language). The term "coupled", as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically. "Communicatively coupled" refers to a coupling of components such that the components are able to communicate with one another through, for example, wired, wireless, or other communications media. The terms "communicatively coupled" or "communicatively coupling" include, but are not limited to, communicating electronic control signals by which one element may direct or control another. The term "configured to" describes hardware, software, or a combination of hardware and software that is adapted to, set up, arranged, built, composed, constructed, designed, or that has any combination of these characteristics to carry out a given function. The term "adapted to" describes hardware, software, or a combination of hardware and software that is capable of, able to accommodate, able to make, or that is suited to carry out a given function.
The terms "controller", "computer", "processor", "server", "client", "computer system", "computing system", "personal computing system", "processing system", or "information processing system" describe examples of a suitably configured processing system adapted to implement one or more embodiments herein. Any suitably configured processing system is similarly able to be used by embodiments herein, for example and not for limitation, a personal computer, a laptop personal computer (laptop PC), a tablet computer, a smart phone, a mobile phone, a wireless communication device, a personal digital assistant, a workstation, and the like. A processing system may include one or more processing systems or processors. A processing system can be realized in a centralized fashion in one processing system or in a distributed fashion where different elements are spread across several interconnected processing systems.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed.
The description of the present application has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the invention. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (26)

1. In a processing system comprising a server, a method for the server to provide data values, comprising:
the server maintains a cache of objects;
the server performing an asynchronous calculation to determine a value of an object;
in response to a request for the object before the asynchronous computation has determined the value of the object, returning the value of the object from the cache of objects;
in response to a request for the object after the asynchronous computation has determined the value of the object, returning the value of the object determined by the asynchronous computation;
providing a data class including a field for a value and a field for a future; and
looking up a value corresponding to a key, wherein the field for the value comprises a cached value corresponding to the key and the field for the future comprises a future for an updated value corresponding to the key.
2. The method of claim 1, wherein the asynchronous computation comprises at least one future.
3. The method of claim 1, wherein the asynchronous computation comprises at least one process or thread.
4. The method of claim 1, further comprising:
in response to the asynchronous calculation determining the value of the object, updating the cache with the value of the object determined by the asynchronous calculation.
5. The method of claim 1, wherein the step of the server performing the asynchronous computation occurs in response to a request for the object.
6. The method of claim 1, wherein the step of the server performing the asynchronous computation occurs periodically.
7. The method of claim 1, wherein the step of the server performing the asynchronous computation occurs after a time interval has elapsed.
8. The method of claim 1, wherein the step of the server performing the asynchronous computation occurs at a frequency related to how frequently the object changes.
9. The method of claim 1, wherein the step of the server performing the asynchronous computation occurs at a frequency related to how important it is to have a current value of the object.
10. The method of claim 1, further comprising at least one additional server, wherein the asynchronous computation determines the value of the object by querying the at least one additional server.
11. The method of claim 10, further comprising:
the server storing an object o2 in the cache of objects;
the server performing an asynchronous computation c2 to store the object o2 on the at least one additional server; and
in response to receiving a request for the object o2 before the asynchronous computation c2 completes execution, satisfying the request from the cache of objects.
12. The method of claim 10, wherein the at least one additional server comprises a plurality of servers and the asynchronous computation receives different values from at least two of the plurality of servers.
13. The method of claim 12, wherein the asynchronous computation determines the value of the object from a value most frequently returned by the plurality of servers.
14. The method of claim 12, wherein the different values have timestamps associated therewith, and wherein the asynchronous computation determines the value of the object based on the timestamps.
15. The method of claim 10, further comprising:
the server performing an asynchronous computation c2 to store an object o2 on the at least one additional server.
16. The method of claim 15, further comprising:
in response to the asynchronous computation c2 storing the object o2 on the at least one additional server, updating the cache of objects with the value of the object o2.
17. The method of claim 15, wherein the asynchronous computation c2 includes an error handling method, the method further comprising:
in response to the asynchronous computation c2 failing to store the object o2, using the error handling method to retry storing the object o2 on the at least one additional server.
18. A processing system, comprising:
a server;
a persistent memory;
a network interface device for communicating with one or more networks; and
at least one processor communicatively coupled with the server, the persistent storage, and the network interface device, the at least one processor, in response to executing computer instructions, performing operations comprising:
the server maintains a cache of objects;
the server performing an asynchronous calculation to determine a value of an object;
in response to a request for the object before the asynchronous computation has determined the value of the object, returning the value of the object from the cache of objects;
in response to a request for the object after the asynchronous computation has determined the value of the object, returning the value of the object determined by the asynchronous computation;
providing a data class including a field for a value and a field for a future; and
looking up a value corresponding to a key, wherein the field for the value comprises a cached value corresponding to the key and the field for the future comprises a future for an updated value corresponding to the key.
19. The processing system of claim 18, wherein the asynchronous computation comprises at least one future.
20. The processing system of claim 18, wherein the asynchronous computation comprises at least one process or thread.
21. The processing system of claim 18, wherein the at least one processor, in response to executing the computer instructions, is configured to perform operations comprising:
in response to the asynchronous calculation determining the value of the object, updating the cache with the value of the object determined by the asynchronous calculation.
22. The processing system of claim 18, wherein the server performing the asynchronous computation occurs in response to a request for the object.
23. The processing system of claim 18, wherein the step of the server performing asynchronous computations occurs periodically.
24. The processing system of claim 18, wherein the step of the server performing asynchronous calculations occurs after a time interval has elapsed.
25. The processing system of claim 18, wherein the step of the server performing the asynchronous computation occurs at a frequency related to how frequently the object changes.
26. A computer readable storage medium having program instructions embodied thereon, the program instructions being executable by a processor to cause the processor to perform the method of any of claims 1-17.
CN201880029063.6A 2017-05-02 2018-04-24 Method, processing system and storage medium for server to provide data values Active CN110583004B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US15/584,345 US10540282B2 (en) 2017-05-02 2017-05-02 Asynchronous data store operations including selectively returning a value from cache or a value determined by an asynchronous computation
US15/584,381 2017-05-02
US15/584,381 US10437724B2 (en) 2017-05-02 2017-05-02 Providing data values in a timely fashion using asynchronous data store operations including selectively returning a value from a cache or a value determined by an asynchronous computation
US15/584,345 2017-05-02
PCT/IB2018/052858 WO2018203185A1 (en) 2017-05-02 2018-04-24 Asynchronous data store operations

Publications (2)

Publication Number Publication Date
CN110583004A CN110583004A (en) 2019-12-17
CN110583004B true CN110583004B (en) 2022-08-30

Family

ID=64015993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880029063.6A Active CN110583004B (en) 2017-05-02 2018-04-24 Method, processing system and storage medium for server to provide data values

Country Status (2)

Country Link
CN (1) CN110583004B (en)
WO (1) WO2018203185A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110955390B (en) * 2019-11-22 2023-08-08 北京达佳互联信息技术有限公司 Data processing method, device, electronic equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN104160679A (en) * 2012-03-13 2014-11-19 国际商业机器公司 Object caching for mobile data communication with mobility management

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US7698411B2 (en) * 2007-08-22 2010-04-13 International Business Machines Corporation Selectively delivering cached content or processed content to clients based upon a result completed percentage
US8131698B2 (en) * 2008-05-28 2012-03-06 International Business Machines Corporation Method for coordinating updates to database and in-memory cache
CN102110121B (en) * 2009-12-24 2015-09-23 阿里巴巴集团控股有限公司 A kind of data processing method and system thereof
CN105373369A (en) * 2014-08-25 2016-03-02 北京皮尔布莱尼软件有限公司 Asynchronous caching method, server and system
US10542110B2 (en) * 2015-02-19 2020-01-21 International Business Machines Corporation Data communication in a clustered data processing environment


Also Published As

Publication number Publication date
WO2018203185A1 (en) 2018-11-08
CN110583004A (en) 2019-12-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant