U.S. provisional application No. 63/398,134, filed 8/15 at 2022,
U.S. provisional application No. 63/381,262, filed on 10/27 at 2022, and
U.S. non-provisional application No. 18/360,680 filed on 7.27 at 2023.
Each of the applications listed above is incorporated by reference in its entirety.
Detailed Description
In the following description, for purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments. It may be evident, however, that various embodiments may be practiced without these specific details. The drawings and description are not intended to be limiting. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
Example architecture of cloud infrastructure
The term "cloud service" is generally used to refer to services provided by a Cloud Service Provider (CSP) to users or customers on demand (e.g., via a subscription model) using systems and infrastructure (cloud infrastructure) provided by the CSP. Typically, the servers and systems that make up the CSP infrastructure are separate from the clients' own on-premise servers and systems. Thus, the customer can utilize the cloud services provided by the CSP himself without purchasing separate hardware and software resources for the services. Cloud services are designed to provide subscribing customers with simple, extensible access to applications and computing resources without requiring the customers to invest in purchasing infrastructure for providing the services.
There are several cloud service providers that provide various types of cloud services. There are various different types or models of cloud services, including software as a service (SaaS), platform as a service (PaaS), infrastructure as a service (IaaS), and the like.
A customer may subscribe to one or more cloud services provided by the CSP. The customer may be any entity, such as an individual, organization, business, etc. When a customer subscribes to or registers for a service provided by the CSP, a lease or account will be created for the customer. The customer may then access one or more cloud resources of the subscription associated with the account via this account.
As described above, infrastructure as a service (IaaS) is a specific type of cloud computing service. In the IaaS model, CSPs provide an infrastructure (referred to as a cloud service provider infrastructure or CSPI) that customers can use to build their own customizable networks and deploy customer resources. Thus, the customer's resources and network are hosted in a distributed environment by the CSP's provided infrastructure. This is in contrast to traditional computing, where the customer's resources and network are hosted by the customer's provided infrastructure.
The CSPI may include interconnected high-performance computing resources, including various host machines, memory resources, and network resources that form a physical network, also referred to as a base network or an underlying network. Resources in the CSPI may be spread across one or more data centers, which may be geographically spread across one or more geographic regions. Virtualization software may be executed by these physical resources to provide a virtualized distributed environment. Virtualization creates an overlay network (also referred to as a software-based network, a software-defined network, or a virtual network) on a physical network. The CSPI physical network provides an underlying foundation for creating one or more overlay or virtual networks over the physical network. The virtual or overlay network may include one or more Virtual Cloud Networks (VCNs). Virtual networks are implemented using software virtualization techniques (e.g., a hypervisor, functions performed by a Network Virtualization Device (NVD) (e.g., smartNIC), a top-of-rack (TOR) switch, a smart TOR that implements one or more functions performed by the NVD, and other mechanisms) to create a layer of network abstraction that can run over a physical network. Virtual networks may take many forms, including peer-to-peer networks, IP networks, and the like. The virtual network is typically either a layer 3IP network or a layer 2VLAN. This method of virtual or overlay networking is often referred to as virtual or overlay 3 networking. Examples of protocols developed for virtual networks include IP-in-IP (or Generic Routing Encapsulation (GRE)), virtual extensible LAN (VXLAN-IETF RFC 7348), virtual Private Networks (VPNs) (e.g., MPLS layer 3 virtual private networks (RFC 4364)), NSX of VMware, GENEVE, and the like.
For IaaS, the infrastructure provided by CSP (CSPI) may be configured to provide virtualized computing resources over a public network (e.g., the internet). In the IaaS model, cloud computing service providers may host infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., hypervisor layer), etc.). In some cases, the IaaS provider may also offer various services to accompany those infrastructure components (e.g., billing, monitoring, logging, security, load balancing, and clustering, etc.). Thus, as these services may be policy driven, iaaS users may be able to implement policies to drive load balancing to maintain application availability and performance. CSPI provides an infrastructure and a set of complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available hosted distributed environment. CSPI provides high performance computing resources and capabilities as well as storage capacity in flexible virtual networks that are securely accessible from a variety of networked locations, such as from a customer's in-house deployment network. When a customer subscribes to or registers for an IaaS service provided by the CSP, the lease created for that customer is a secure and sequestered partition within the CSPI in which the customer can create, organize and manage their cloud resources.
Customers may build their own virtual network using the computing, memory, and networking resources provided by the CSPI. One or more customer resources or workloads, such as computing instances, may be deployed on these virtual networks. For example, a customer may use resources provided by the CSPI to build customizable and private virtual network(s), referred to as a Virtual Cloud Network (VCN). A customer may deploy one or more customer resources, such as computing instances, on a customer VCN. The computing instances may take the form of virtual machines, bare metal instances, and the like. Thus, CSPI provides an infrastructure and a set of complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available virtual hosted environment. Customers do not manage or control the underlying physical resources provided by the CSPI, but have control over the operating system, storage, and deployed applications, and may have limited control over selected networking components (e.g., firewalls).
CSP may provide a console that enables clients and network administrators to use CSPI resources to configure, access, and manage resources deployed in the cloud. In some embodiments, the console provides a web-based user interface that may be used to access and manage the CSPI. In some implementations, the console is a web-based application provided by the CSP.
The CSPI may support single-lease or multi-lease architectures. In a single lease architecture, software (e.g., applications, databases) or hardware components (e.g., host machines or servers) provide services for a single customer or tenant. In a multi-tenancy architecture, software or hardware components provide services for multiple customers or tenants. Thus, in a multi-tenancy architecture, the CSPI resources are shared among multiple customers or tenants. In the multi-tenancy case, precautions are taken and safeguards are implemented in the CSPI to ensure that each tenant's data is isolated from and remains invisible to other tenants.
In a physical network, an endpoint ("endpoint") refers to a computing device or system that connects to and communicates back and forth with the network to which it is connected. Network endpoints in a physical network may be connected to a Local Area Network (LAN), wide Area Network (WAN), or other type of physical network. Examples of traditional endpoints in a physical network include modems, hubs, bridges, switches, routers and other network devices, physical computers (or host machines), and the like. Each physical device in the physical network has a fixed network address that can be used to communicate with the device. This fixed network address may be a layer 2 address (e.g., a MAC address), a fixed layer 3 address (e.g., an IP address), etc. In a virtualized environment or virtual network, endpoints may include various virtual endpoints, such as virtual machines hosted by components of a physical network (e.g., by physical host machines). These endpoints in the virtual network are addressed by overlay addresses, such as overlay 2 addresses (e.g., overlay MAC addresses) and overlay 3 addresses (e.g., overlay IP addresses). Network coverage enables flexibility by allowing a network administrator to move around an overlay address associated with a network endpoint using software management (e.g., via software implementing a control plane for a virtual network). Thus, unlike in a physical network, in a virtual network, an overlay address (e.g., an overlay IP address) may be moved from one endpoint to another endpoint using network management software. Because the virtual network builds on top of the physical network, communication between components in the virtual network involves both the virtual network and the underlying physical network. To facilitate such communications, components of the CSPI are configured to learn and store mappings that map overlay addresses in the virtual network to actual physical addresses in the base network, and vice versa. These mappings are then used to facilitate communications. Customer traffic is encapsulated to facilitate routing in the virtual network.
Thus, a physical address (e.g., a physical IP address) is associated with a component in the physical network, and an overlay address (e.g., an overlay IP address) is associated with an entity in the virtual network. Both the physical IP address and the overlay IP address are types of real IP addresses. They are separate from the virtual IP addresses, which map to multiple real IP addresses. The virtual IP address provides a one-to-many mapping between the virtual IP address and the plurality of real IP addresses.
The cloud infrastructure or CSPI is physically hosted in one or more data centers in one or more regions of the world. The CSPI may include components in a physical or base network and virtualized components located in a virtual network built upon the physical network components (e.g., virtual networks, computing instances, virtual machines, etc.). In certain embodiments, the CSPI is organized and hosted in the domain, region, and availability domains. A region is typically a localized geographic area containing one or more data centers. Regions are generally independent of each other and can be far apart, e.g., across countries or even continents. For example, a first region may be in australia, another in japan, yet another in india, etc. The CSPI resources are divided between regions such that each region has its own independent subset of CSPI resources. Each region may provide a set of core infrastructure services and resources such as computing resources (e.g., bare machine servers, virtual machines, containers, and related infrastructure, etc.), storage resources (e.g., block volume storage, file storage, object storage, archive storage), networking resources (e.g., virtual Cloud Network (VCN), load balancing resources, connections to an on-premise network), database resources, edge networking resources (e.g., DNS), and access management and monitoring resources, etc. Each region typically has multiple paths connecting it to other regions in the field.
In general, an application is deployed in an area where it is most frequently used (i.e., on the infrastructure associated with the area) because resources in the vicinity are used faster than resources in the distance. Applications may also be deployed in different areas for various reasons, such as redundancy to mitigate risk of regional-wide events (such as large weather systems or earthquakes) to meet different requirements of legal jurisdictions, tax domains, and other business or social standards, and so forth.
Data centers within a region may be further organized and subdivided into Availability Domains (ADs). The availability domain may correspond to one or more data centers located within the region. A region may be comprised of one or more availability domains. In such a distributed environment, the CSPI resources are either region-specific (such as Virtual Cloud Networks (VCNs)) or availability domain-specific (such as computing instances).
ADs within a region are isolated from each other, have fault tolerance capability, and are configured such that they are highly unlikely to fail simultaneously. This is achieved by the ADs not sharing critical infrastructure resources (such as networking, physical cables, cable paths, cable entry points, etc.) so that a failure at one AD within a region is less likely to affect the availability of other ADs within the same region. ADs within the same region may be connected to each other through low latency, high bandwidth networks, which makes it possible to provide high availability connectivity for other networks (e.g., the internet, customer's on-premise network, etc.) and build replication systems in multiple ADs to achieve both high availability and disaster recovery. Cloud services use multiple ADs to ensure high availability and prevent resource failures. As the infrastructure provided by IaaS providers grows, more regions and ADs and additional capacity can be added. Traffic between availability domains is typically encrypted.
In some embodiments, regions are grouped into domains. A domain is a logical collection of regions. The domains are isolated from each other and do not share any data. Regions in the same domain may communicate with each other, but regions in different domains may not. Customers reside in a single domain at a CSP's lease or account and may be spread across one or more regions belonging to that domain. Typically, when a customer subscribes to an IaaS service, a lease or account is created for the customer in a region designated by the customer in the domain (referred to as the "home" region). The customer may extend the customer's lease across one or more other regions within the domain. The customer cannot access areas that are not in the area of the customer's rental agency.
The IaaS provider may offer a plurality of domains, each domain satisfying a particular set of customers or users. For example, business fields may be provided for business clients. As another example, a domain may be provided for a particular country for clients within that country. As yet another example, government fields may be provided for governments and the like. For example, a government domain may satisfy a particular government and may have a higher level of security than a business domain. For example, oracle cloud infrastructure (Oracle Cloud Infrastructure, OCI) currently provides a domain for business regions, and two domains (e.g., fedRAMP-authorized and IL 5-authorized) for government cloud regions.
In some embodiments, an AD may be subdivided into one or more fault domains. A fault domain is a grouping of infrastructure resources within an AD to provide counteraffinity. The failure domain allows for the distribution of computing instances such that they are not located on the same physical hardware within a single AD. This is called counteraffinity. A failure domain refers to a group of hardware components (computers, switches, etc.) that share a single point of failure. The computing pool is logically divided into fault domains. Thus, a hardware failure or computing hardware maintenance event affecting one failure domain does not affect instances in other failure domains. The number of fault domains for each AD may vary depending on the embodiment. For example, in some embodiments, each AD contains three fault domains. The failure domain acts as a logical data center within the AD.
When a customer subscribes to the IaaS service, resources from the CSPI are provisioned to the customer and associated with the customer's lease. Clients can use these provisioned resources to build private networks and deploy resources on these networks. Customer networks hosted in the cloud by CSPI are referred to as Virtual Cloud Networks (VCNs). A customer may establish one or more Virtual Cloud Networks (VCNs) using CSPI resources allocated for the customer. VCNs are virtual or software defined private networks. Customer resources deployed in a customer's VCN may include computing instances (e.g., virtual machines, bare metal instances) and other resources. These computing instances may represent various customer workloads, such as applications, load balancers, databases, and the like. Computing instances deployed on a VCN may communicate with publicly accessible endpoints ("public endpoints"), with other instances in the same VCN or other VCNs (e.g., other VCNs of customers or VCNs not belonging to customers), with customer's in-house deployment data centers or networks, and with service endpoints and other types of endpoints through a public network such as the internet.
CSP may use CSPI to provide various services. In some cases, the customers of the CSPI themselves may act like service providers and provide services using CSPI resources. The service provider may expose a service endpoint that is characterized by identifying information (e.g., IP address, DNS name, and port). The customer's resources (e.g., computing instances) may consume the particular service by accessing the service endpoints exposed by the service for the particular service. These service endpoints are typically endpoints that are publicly accessible to users via a public communication network, such as the internet, using a public IP address associated with the endpoint. Publicly accessible network endpoints are sometimes referred to as public endpoints.
In some embodiments, a service provider may expose a service via an endpoint for the service (sometimes referred to as a service endpoint). The customer of the service may then use this service endpoint to access the service. In some embodiments, a service endpoint that provides a service may be accessed by multiple clients that intend to consume the service. In other embodiments, a dedicated service endpoint may be provided for a customer such that only the customer may use the dedicated service endpoint to access a service.
In some embodiments, when the VCN is created, it is associated with a private overlay classless inter-domain routing (CIDR) address space, which is a private overlay IP address (e.g., 10.0/16) assigned to the scope of the VCN. The VCN includes associated subnets, routing tables, and gateways. The VCNs reside within a single region, but may span one or more or all of the availability domains of that region. A gateway is a virtual interface configured for a VCN and enables communication of traffic to and from the VCN to one or more endpoints external to the VCN. One or more different types of gateways may be configured for the VCN to enable communications to and from different types of endpoints.
The VCN may be subdivided into one or more subnetworks, such as one or more subnetworks. Thus, a subnet is a configured unit or subdivision that can be created within a VCN. The VCN may have one or more subnets. Each subnet within a VCN is associated with a contiguous range of overlay IP addresses (e.g., 10.0.0.0/24 and 10.0.1.0/24) that do not overlap with other subnets in the VCN and represent a subset of the address space within the address space of the VCN.
Each computing instance is associated with a Virtual Network Interface Card (VNIC) that enables the computing instance to participate in a subnet of the VCN. VNICs are logical representations of physical Network Interface Cards (NICs). Generally, a VNIC is an interface between an entity (e.g., a computing instance, a service) and a virtual network. The VNICs exist in a subnet with one or more associated IP addresses and associated security rules or policies. The VNICs correspond to layer 2 ports on the switch. The VNICs are attached to the computing instance and to a subnet within the VCN. The VNICs associated with the computing instance enable the computing instance to be part of a subnet of the VCN and enable the computing instance to communicate (e.g., send and receive packets) with endpoints that are on the same subnet as the computing instance, with endpoints in different subnets in the VCN, or with endpoints external to the VCN. Thus, the VNICs associated with the computing instance determine how the computing instance connects with endpoints internal and external to the VCN. When a computing instance is created and added to a subnet within the VCN, a VNIC for the computing instance is created and associated with the computing instance. For a subnet that includes a set of computing instances, the subnet contains VNICs corresponding to the set of computing instances, each VNIC attached to a computing instance within the set of computing instances.
Each computing instance is assigned a private overlay IP address via the VNIC associated with the computing instance. This private overlay network IP address is assigned to the VNIC associated with the computing instance when the computing instance is created and is used to route traffic to and from the computing instance. All VNICs in a given subnetwork use the same routing table, security list, and DHCP options. As described above, each subnet within a VCN is associated with a contiguous range of overlay IP addresses (e.g., 10.0.0.0/24 and 10.0.1.0/24) that do not overlap with other subnets in the VCN and represent a subset of the address space within the address space of the VCN. For a VNIC on a particular subnet of a VCN, the private overlay IP address assigned to that VNIC is an address from a contiguous range of overlay IP addresses allocated for the subnet.
In some embodiments, in addition to private overlay IP addresses, the computing instance may optionally be assigned additional overlay IP addresses, such as, for example, one or more public IP addresses if in a public subnet. These multiple addresses are assigned either on the same VNIC or on multiple VNICs associated with the computing instance. But each instance has a master VNIC that is created during instance startup and is associated with an overlay private IP address assigned to the instance—this master VNIC cannot be removed. Additional VNICs, referred to as secondary VNICs, may be added to existing instances in the same availability domain as the primary VNIC. All VNICs are in the same availability domain as this example. The auxiliary VNICs may be located in a subnet in the same VCN as the main VNIC or in a different subnet in the same VCN or a different VCN.
If the computing instance is in a public subnet, it may optionally be assigned a public IP address. When creating a subnet, the subnet may be designated as either a public subnet or a private subnet. A private subnet means that resources (e.g., compute instances) and associated VNICs in the subnet cannot have a public overlay IP address. A public subnet means that resources in a subnet and associated VNICs may have a public IP address. A customer may specify that a subnet exists in a single availability domain or multiple availability domains in a cross-region or domain.
As described above, the VCN may be subdivided into one or more subnets. In some embodiments, a Virtual Router (VR) configured for a VCN (referred to as a VCN VR or simply VR) enables communication between subnets of the VCN. For a subnet within a VCN, VR represents a logical gateway for that subnet that enables that subnet (i.e., the computing instance on that subnet) to communicate with endpoints on other subnets within the VCN as well as other endpoints outside the VCN. The VCN VR is configured as a logical entity that routes traffic between the VNICs in the VCN and a virtual gateway ("gateway") associated with the VCN. The gateway is further described below with respect to fig. 1. VCN VR is a layer 3/IP layer concept. In one embodiment, there is one VCN VR for the VCN, where the VCN VR has a potentially unlimited number of ports addressed by the IP address, one for each subnet of the VCN. In this way, the VCN VR has a different IP address for each subnet in the VCN to which the VCN VR is attached. The VR is also connected to various gateways configured for the VCN. In some embodiments, a particular overlay IP address in the overlay IP address range for a subnet is reserved for a port of a VCN VR for that subnet. For example, consider that a VCN has two subnets with associated address ranges of 10.0/16 and 10.1/16, respectively. For the first subnet in the VCN with an address range of 10.0/16, addresses within this range are reserved for ports of the VCN VR for that subnet. In some cases, the first IP address within range may be reserved for VCN VR. For example, for a subnet covering an IP address range of 10.0/16, an IP address of 10.0.0.1 may be reserved for ports of the VCN VR for that subnet. For a second subnet in the same VCN with an address range of 10.1/16, the VCN VR may have a port for the second subnet with an IP address of 10.1.0.1. The VCN VR has a different IP address for each subnet in the VCN.
In some other embodiments, each subnet within the VCN may have its own associated VR that is addressable by the subnet using a reserved or default IP address associated with the VR. The reserved or default IP address may be, for example, the first IP address in the range of IP addresses associated with the subnet. The VNICs in the subnet may use this default or reserved IP address to communicate (e.g., send and receive packets) with the VR associated with the subnet. In such an embodiment, the VR is the entry/exit point of the subnet. The VR associated with a subnet within the VCN may communicate with other VRs associated with other subnets within the VCN. The VR may also communicate with a gateway associated with the VCN. The VR functions of the subnetwork are run on or performed by one or more NVDs that perform VNIC functions for VNICs in the subnetwork.
The VCN may be configured with routing tables, security rules, and DHCP options. The routing table is a virtual routing table for the VCN and includes rules for routing traffic from a subnet within the VCN to a destination outside the VCN by way of a gateway or specially configured instance. The routing tables of the VCNs may be customized to control how packets are forwarded/routed to and from the VCNs. DHCP options refer to configuration information that is automatically provided to an instance at instance start-up.
The security rules configured for the VCN represent overlay firewall rules for the VCN. Security rules may include ingress and egress rules and specify the type of traffic (e.g., protocol and port based) that is allowed to enter and exit the VCN instance. The client may choose whether a given rule is stateful or stateless. For example, a client may allow incoming SSH traffic from anywhere to a set of instances by setting state entry rules with source CIDR 0.0.0.0/0 and destination TCP ports 22. The security rules may be implemented using a network security group or security list. A network security group consists of a set of security rules that apply only to the resources in the group. In another aspect, the security list includes rules applicable to all resources in any subnet that uses the security list. The VCN may be provided with a default security list with default security rules. The DHCP options configured for the VCN provide configuration information that is automatically provided to the instances in the VCN at instance start-up.
In some embodiments, configuration information for the VCN is determined and stored by the VCN control plane. For example, configuration information for a VCN may include information regarding address ranges associated with the VCN, subnets and associated information within the VCN, one or more VRs associated with the VCN, computing instances in the VCN and associated VNICs, NVDs (e.g., VNICs, VRs, gateways) that perform various virtualized network functions associated with the VCN, status information for the VCN, and other VCN-related information. In certain embodiments, the VCN distribution service publishes configuration information stored by the VCN control plane or portion thereof to the NVD. The distributed information may be used to update information (e.g., forwarding tables, routing tables, etc.) stored and used by the NVD to forward packets to and from computing instances in the VCN.
In some embodiments, the creation of VCNs and subnets is handled by the VCN Control Plane (CP) and the launching of compute instances is handled by the compute control plane. The compute control plane is responsible for allocating physical resources for the compute instance and then invoking the VCN control plane to create and attach the VNICs to the compute instance. The VCN CP also sends the VCN data map to a VCN data plane configured to perform packet forwarding and routing functions. In some embodiments, the VCN CP provides a distribution service responsible for providing updates to the VCN data plane. Examples of VCN control planes are also depicted in fig. 18, 19, 20, and 21 (see reference numerals 1816, 1916, 2016, and 2116) and described below.
A customer may create one or more VCNs using resources hosted by the CSPI. Computing instances deployed on a client VCN may communicate with different endpoints. These endpoints may include endpoints hosted by the CSPI and endpoints external to the CSPI.
Various different architectures for implementing cloud-based services using CSPI are depicted in fig. 1, 2,3, 4, 5, 18, 19, 20, and 21 and described below. Fig. 1 is a high-level diagram illustrating a distributed environment 100 of an overlay or customer VCN hosted by a CSPI, in accordance with some embodiments. The distributed environment depicted in fig. 1 includes a plurality of components in an overlay network. The distributed environment 100 depicted in FIG. 1 is merely an example and is not intended to unduly limit the scope of the claimed embodiments. Many variations, alternatives, and modifications are possible. For example, in some embodiments, the distributed environment depicted in fig. 1 may have more or fewer systems or components than those shown in fig. 1, may combine two or more systems, or may have a different configuration or arrangement of systems.
As shown in the example depicted in fig. 1, distributed environment 100 includes CSPI 101 that provides services and resources that customers can subscribe to and use to build their Virtual Cloud Network (VCN). In some embodiments, CSPI 101 provides IaaS services to subscribing clients. Data centers within CSPI 101 may be organized into one or more regions. An example zone "zone US"102 is shown in fig. 1. The customer has configured a customer VCN 104 for the region 102. A customer may deploy various computing instances on the VCN 104, where the computing instances may include virtual machine or bare machine instances. Examples of instances include applications, databases, load balancers, and the like.
In the embodiment depicted in fig. 1, customer VCN 104 includes two subnets, namely, "subnet-1" and "subnet-2," each having its own CIDR IP address range. In FIG. 1, the overlay IP address range for subnet-1 is 10.0/16 and the address range for subnet-2 is 10.1/16.VCN virtual router 105 represents a logical gateway for the VCN that enables communication between the subnetworks of VCN 104 and with other endpoints external to the VCN. The VCN VR 105 is configured to route traffic between the VNICs in the VCN 104 and gateways associated with the VCN 104. The VCN VR 105 provides a port for each subnet of the VCN 104. For example, VR 105 may provide a port for subnet-1 with IP address 10.0.0.1 and a port for subnet-2 with IP address 10.1.0.1.
Multiple computing instances may be deployed on each subnet, where the computing instances may be virtual machine instances and/or bare machine instances. Computing instances in a subnet may be hosted by one or more host machines within CSPI 101. The computing instance participates in the subnet via the VNIC associated with the computing instance. For example, as shown in fig. 1, computing instance C1 becomes part of subnet-1 via the VNIC associated with the computing instance. Likewise, computing instance C2 becomes part of subnet-1 via the VNIC associated with C2. In a similar manner, multiple computing instances (which may be virtual machine instances or bare machine instances) may be part of subnet-1. Each computing instance is assigned a private overlay IP address and a MAC address via its associated VNIC. For example, in fig. 1, the overlay IP address of the computing instance C1 is 10.0.0.2 and the MAC address is M1, while the private overlay IP address of the computing instance C2 is 10.0.0.3 and the MAC address is M2. Each compute instance in subnet-1 (including compute instance C1 and C2) has a default route to VCN VR 105 using IP address 10.0.0.1, which is the IP address of the port of VCN VR 105 for subnet-1.
Subnet-2 may have multiple computing instances deployed thereon, including virtual machine instances and/or bare machine instances. For example, as shown in fig. 1, computing instances D1 and D2 become part of subnet-2 via VNICs associated with the respective computing instances. In the embodiment depicted in fig. 1, the overlay IP address of compute instance D1 is 10.1.0.2 and the MAC address is MM1, while the private overlay IP address of compute instance D2 is 10.1.0.3 and the MAC address is MM2. Each computing instance in subnet-2 (including computing instances D1 and D2) has a default route to VCN VR 105 using IP address 10.1.0.1, which is the IP address of the port of VCN VR 105 for subnet-2.
The VCN a 104 may also include one or more load balancers. For example, a load balancer may be provided for a subnet and may be configured to load balance traffic across multiple compute instances on the subnet. A load balancer may also be provided to load balance traffic across subnets in the VCN.
A particular computing instance deployed on VCN 104 may communicate with a variety of different endpoints. These endpoints may include endpoints hosted by CSPI 200 and endpoints external to CSPI 200. Endpoints hosted by CSPI 101 may include endpoints on the same subnet as a particular computing instance (e.g., communication between two computing instances in subnet-1), endpoints on a different subnet but within the same VCN (e.g., communication between a computing instance in subnet-1 and a computing instance in subnet-2), endpoints in a different VCN in the same region (e.g., communication between a computing instance in subnet-1 and an endpoint in a VCN in the same region 106 or 110, communication between a computing instance in subnet-1 and an endpoint in service network 110 in the same region), or endpoints in a VCN in a different region (e.g., communication between a computing instance in subnet-1 and an endpoint in a VCN in a different region 108). Computing instances in a subnet hosted by CSPI 101 may also communicate with endpoints that are not hosted by CSPI 101 (i.e., external to CSPI 101). These external endpoints include endpoints in customer's on-premise network 116, endpoints within other remote cloud-hosted networks 118, public endpoints 114 accessible via a public network (such as the internet), and other endpoints.
Communication between computing instances on the same subnet is facilitated using VNICs associated with the source computing instance and the destination computing instance. For example, compute instance C1 in subnet-1 may want to send a packet to compute instance C2 in subnet-1. For a packet that originates from a source computing instance and whose destination is another computing instance in the same subnet, the packet is first processed by the VNIC associated with the source computing instance. The processing performed by the VNICs associated with the source computing instance may include determining destination information for the packet from a packet header, identifying any policies (e.g., security lists) configured for the VNICs associated with the source computing instance, determining a next hop for the packet, performing any packet encapsulation/decapsulation functions as needed, and then forwarding/routing the packet to the next hop for the purpose of facilitating communication of the packet to its intended destination. When the destination computing instance and the source computing instance are located in the same subnet, the VNIC associated with the source computing instance is configured to identify the VNIC associated with the destination computing instance and forward the packet to the VNIC for processing. The VNIC associated with the destination computing instance is then executed and the packet is forwarded to the destination computing instance.
For packets to be transmitted from computing instances in a subnet to endpoints in different subnets in the same VCN, communication is facilitated by VNICs associated with source and destination computing instances and VCN VR. For example, if computing instance C1 in subnet-1 in FIG. 1 wants to send a packet to computing instance D1 in subnet-2, then the packet is first processed by the VNIC associated with computing instance C1. The VNIC associated with computing instance C1 is configured to route packets to VCN VR 105 using a default route or port 10.0.0.1 of the VCN VR. The VCN VR 105 is configured to route packets to subnet-2 using port 10.1.0.1. The VNIC associated with D1 then receives and processes the packet and the VNIC forwards the packet to computing instance D1.
For packets to be communicated from a computing instance in VCN 104 to an endpoint external to VCN 104, communication is facilitated by a VNIC associated with the source computing instance, VCN VR 105, and a gateway associated with VCN 104. One or more types of gateways may be associated with VCN 104. A gateway is an interface between a VCN and another endpoint that is external to the VCN. The gateway is a layer 3/IP layer concept and enables the VCN to communicate with endpoints external to the VCN. Thus, the gateway facilitates traffic flow between the VCN and other VCNs or networks. Various different types of gateways may be configured for the VCN to facilitate different types of communications with different types of endpoints. Depending on the gateway, the communication may be through a public network (e.g., the internet) or through a private network. Various communication protocols may be used for these communications.
For example, computing instance C1 may want to communicate with endpoints external to VCN 104. The packet may be first processed by the VNIC associated with the source computing instance C1. The VNIC processing determines that the destination of the packet is outside of subnet-1 of C1. The VNIC associated with C1 may forward the packet to the VCN VR 105 for VCN 104. The VCN VR 105 then processes the packet and, as part of the processing, determines a particular gateway associated with the VCN 104 as the next hop for the packet based on the destination of the packet. The VCN VR 105 may then forward the packet to a particular identified gateway. For example, if the destination is an endpoint within a customer's in-premise network, the packet may be forwarded by the VCN VR 105 to a Dynamic Routing Gateway (DRG) gateway 122 configured for the VCN 104. The packet may then be forwarded from the gateway to the next hop to facilitate delivery of the packet to its final intended destination.
Various different types of gateways may be configured for the VCN. An example of a gateway that may be configured for a VCN is depicted in fig. 1 and described below. Examples of gateways associated with VCNs are also depicted in fig. 18, 19, 20, and 21 (e.g., gateways referenced by reference numerals 1834, 1836, 1838, 1934, 1936, 1938, 2034, 2036, 2038, 2134, 2136, and 2138) and described below. As shown in the embodiment depicted in fig. 1, a Dynamic Routing Gateway (DRG) 122 may be added to or associated with customer VCN 104 and provide a path for private network traffic communications between customer VCN 104 and another endpoint, which may be customer's on-premise network 116, VCN 108 in a different region of CSPI 101, or other remote cloud network 118 not hosted by CSPI 101. The customer in-house deployment network 116 may be a customer network or customer data center built using the customer's resources. Access to the customer in-house deployment network 116 is typically very limited. For customers having both customer in-premise network 116 and one or more VCNs 104 deployed or hosted by CSPI 101 in the cloud, customers may want their in-premise network 116 and their cloud-based VCNs 104 to be able to communicate with each other. This enables customers to build an extended hybrid environment, including customers' VCNs 104 hosted by CSPI 101 and their on-premise network 116.DRG 122 enables such communication. To enable such communications, a communication channel 124 is provided, wherein one endpoint of the channel is located in customer on-premise network 116 and the other endpoint is located in CSPI 101 and connected to customer VCN 104. The communication channel 124 may be over a public communication network (such as the internet) or a private communication network. Various different communication protocols may be used, such as IPsec VPN technology on a public communication network (such as the internet), fastConnect technology of Oracle using a private network instead of a public network, etc. The equipment or equipment in the customer-premises deployment network 116 that forms one endpoint of the communication channel 124 is referred to as Customer Premise Equipment (CPE), such as CPE 126 depicted in fig. 1. On the CSPI 101 side, the endpoint may be a host machine executing DRG 122.
In some embodiments, a remote peer-to-peer connection (RPC) may be added to the DRG that allows a customer to peer one VCN with another VCN in a different locale. Using such RPCs, customer VCN 104 may connect with VCN 108 in another region using DRG 122. DRG 122 may also be used to communicate with other remote cloud networks 118 (such as MicrosoftAzure clouds, amazon AWS clouds, etc.) that are not hosted by CSPI 101.
As shown in fig. 1, the customer VCN 104 may be configured with an Internet Gateway (IGW) 120 that enables computing instances on the VCN 104 to communicate with a public endpoint 114 that is accessible over a public network, such as the internet. IGW 1120 is a gateway that connects the VCN to a public network, such as the internet. IGW 120 enables public subnets within a VCN, such as VCN 104, where resources in the public subnets have public overlay IP addresses, to directly access public endpoints 112 on public network 114, such as the internet. Using IGW 120, a connection may be initiated from a subnet within VCN 104 or from the internet.
A Network Address Translation (NAT) gateway 128 may be configured for the customer's VCN 104 and enables cloud resources in the customer's VCN that do not have a private public overlay IP address to access the internet and do so without exposing those resources to direct incoming internet connections (e.g., L4-L7 connections). This enables private subnets within the VCN (such as private subnet-1 in VCN 104) to privately access public endpoints on the internet. In NAT gateways, connections to the public internet can only be initiated from the private subnetwork, and not from the internet.
In some embodiments, a Serving Gateway (SGW) 126 may be configured for the customer VCN 104 and provides a path for private network traffic between the VCN 104 and service endpoints supported in the services network 110. In some embodiments, the services network 110 may be provided by a CSP and may provide various services. An example of such a service network is the Oracle service network, which provides various services available to customers. For example, computing instances (e.g., database systems) in a private subnet of the client VCN 104 may backup data to a service endpoint (e.g., object store) without requiring a public IP address or access to the internet. In some embodiments, the VCN may have only one SGW and the connection may be initiated only from a subnet within the VCN and not from the serving network 110. If the VCN is peer to peer with another, resources in the other VCN typically cannot access the SGW. Resources in an on-premise network that are connected to a VCN with FastConnect or VPN Connect may also use a service gateway configured for that VCN.
In some embodiments, SGW 126 uses the concept of a service-generic-free inter-domain routing (CIDR) tag, which is a string that represents all regional public IP address ranges for a service or group of services of interest. Customers use the service CIDR tag when they configure the SGW and associated routing rules to control traffic to the service. If the public IP address of the service changes in the future, the client can optionally use it in configuring security rules without having to adjust them.
A local peer-to-peer gateway (LPG) 132 is a gateway that may be added to a customer VCN 104 and enable the VCN 104 to peer with another VCN in the same region. Peer-to-peer refers to VCNs communicating using private IP addresses without the need for traffic to traverse a public network (such as the internet) or to route traffic through customer's on-premise network 116. In the preferred embodiment, the VCN has a separate LPG for each peer it establishes. Local peer-to-peer or VCN peer-to-peer is a common practice for establishing network connectivity between different applications or infrastructure management functions.
A service provider, such as a provider of services in the services network 110, may provide access to services using different access models. According to the public access model, services may be exposed as public endpoints that are publicly accessible by computing instances in the client VCN via a public network (such as the internet), and/or may be privately accessible via SGW 126. Depending on the particular private access model, the service may be accessed as a private IP endpoint in a private subnet in the customer's VCN. This is known as Private Endpoint (PE) access and enables service providers to expose their services as instances in the customer's private network. The private endpoint resources represent services within the customer's VCN. Each PE appears as a VNIC (referred to as a PE-VNIC, having one or more private IPs) in a subnet selected by the customer in the customer's VCN. Thus, the PE provides a way to use the VNIC to present services within a private customer VCN subnet. Since the endpoints are exposed as VNICs, all features associated with the VNICs (such as routing rules, security lists, etc.) may now be used for the PE VNICs.
Service providers may register their services to enable access through the PE. The provider may associate policies with the service that limit the visibility of the service to customer leases. A provider may register multiple services under a single virtual IP address (VIP), especially for multi-tenant services. There may be multiple such private endpoints (in multiple VCNs) representing the same service.
The computing instance in the private subnet may then access the service using the private IP address or service DNS name of the PE VNIC. The computing instance in the client VCN may access the service by sending traffic to the private IP address of the PE in the client VCN. The Private Access Gateway (PAGW) 130, which is a gateway resource that can be attached to a service provider VCN (e.g., a VCN in the services network 110), acts as an ingress/egress point for all traffic from/to the customer subnet private endpoint. PAGW 130 enables a provider to extend the number of PE connections without utilizing its internal IP address resources. The provider need only configure one PAGW for any number of services registered in a single VCN. A provider may represent a service as a private endpoint in multiple VCNs for one or more customers. From the customer's perspective, the PE VNICs are not attached to the customer's instance, but rather appear to be attached to the service with which the customer wishes to interact. Traffic destined for the private endpoint is routed to the service via PAGW. These are called customer-to-service private connections (C2S connections).
The PE concept can also be used to extend private access for services to customer in-premise networks and data centers by allowing traffic to flow through FastConnect/IPsec links and private endpoints in the customer's VCN. Private access to services can also be extended to the customer's peer VCN by allowing traffic to flow between LPG 132 and PEs in the customer's VCN.
The customer may control routing in the VCN at the subnet level, so the customer may specify which subnets in the customer's VCN (such as VCN 104) use each gateway. The routing table of the VCN is used to decide whether to allow traffic to leave the VCN through a particular gateway. For example, in a particular example, a routing table for a common subnet within customer VCN 104 may send non-local traffic through IGW 120. Routing tables for private subnets within the same customer VCN 104 may send traffic destined for CSP services through SGW 126. All remaining traffic may be sent via NAT gateway 128. The routing table only controls traffic out of the VCN.
The security list associated with the VCN is used to control traffic entering the VCN via the gateway via the inbound connection. All resources in the subnetwork use the same routing table and security list. The security list may be used to control the particular type of traffic allowed to enter and exit instances in the sub-network of the VCN. Security list rules may include ingress (inbound) and egress (outbound) rules. For example, an ingress rule may specify an allowed source address range, while an egress rule may specify an allowed destination address range. The security rules may specify a particular protocol (e.g., TCP, ICMP), a particular port (e.g., 22 for SSH, 3389 for Windows RDP), etc. In some implementations, the operating system of the instance can enforce its own firewall rules that conform to the security list rules. Rules may be stateful (e.g., track connections and automatically allow responses without explicit security list rules for response traffic) or stateless.
Accesses from a customer's VCN (i.e., through resources or computing instances deployed on the VCN 104) may be categorized as public, private, or private. Public access refers to an access model that uses public IP addresses or NATs to access public endpoints. Private access enables customer workloads in the VCN 104 (e.g., resources in a private subnet) with private IP addresses to access services without traversing a public network such as the internet. In some embodiments, CSPI 101 enables a customer VCN workload with a private IP address to access (the public service endpoint of) a service using a service gateway. Thus, the service gateway provides a private access model by establishing a virtual link between the customer's VCN and the public endpoint of a service residing outside the customer's private network.
In addition, the CSPI may provide private public access using techniques such as FastConnect public peering, where the customer in-house deployment instance may access one or more services in the customer's VCN using FastConnect connections without traversing a public network such as the internet. The CSPI may also provide private access using FastConnect private peering, where a customer in-house deployment instance with a private IP address may access the customer's VCN workload using FastConnect connection. FastConnect is a network connectivity alternative to connecting customers' on-premise networks to the CSPI and its services using the public internet. FastConnect provide a simple, resilient, and economical way to create private and private connections with higher bandwidth options and a more reliable and consistent networking experience than internet-based connections.
FIG. 1 and the accompanying description above describe various virtualized components in an example virtual network. As described above, the virtual network is established on an underlying physical or substrate network. Fig. 2 depicts a simplified architectural diagram of physical components in a physical network within a CSPI 200 that provides an underlying layer for a virtual network, in accordance with some embodiments. As shown, CSPI 200 provides a distributed environment that includes components and resources (e.g., computing, memory, and networking resources) provided by a Cloud Service Provider (CSP). These components and resources are used to provide cloud services (e.g., iaaS services) to subscribing clients (i.e., clients that have subscribed to one or more services provided by CSPs). Clients are provisioned with a subset of the resources (e.g., computing, memory, and networking resources) of CSPI 200 based on the services subscribed to by the clients. Customers may then build their own cloud-based (i.e., CSPI-hosted) customizable and private virtual networks using the physical computing, memory, and networking resources provided by CSPI 200. As indicated previously, these customer networks are referred to as Virtual Cloud Networks (VCNs). Clients may deploy one or more client resources, such as computing instances, on these client VCNs. The computing instance may be in the form of a virtual machine, a bare metal instance, or the like. CSPI 200 provides an infrastructure and a set of complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available hosted environment.
In the example embodiment depicted in fig. 2, the physical components of CSPI 200 include one or more physical host machines or physical servers (e.g., 202, 206, 208), network Virtualization Devices (NVDs) (e.g., 210, 212), top of rack (TOR) switches (e.g., 214, 216), and physical networks (e.g., 218), as well as switches in physical network 218. The physical host machine or server may host and execute various computing instances that participate in one or more subnets of the VCN. The computing instances may include virtual machine instances and bare machine instances. For example, the various computing instances depicted in fig. 1 may be hosted by the physical host machine depicted in fig. 2. The virtual machine computing instances in the VCN may be executed by one host machine or by a plurality of different host machines. The physical host machine may also host a virtual host machine, a container-based host or function, or the like. The VNICs and VCN VRs depicted in fig. 1 may be performed by the NVD depicted in fig. 2. The gateway depicted in fig. 1 may be performed by the host machine and/or NVD depicted in fig. 2.
The host machine or server may execute a hypervisor (also referred to as a virtual machine monitor or VMM) that creates and enables virtualized environments on the host machine. Virtualized or virtualized environments facilitate cloud-based computing. One or more computing instances may be created, executed, and managed on a host machine by a hypervisor on the host machine. The hypervisor on the host machine enables the physical computing resources (e.g., computing, memory, and networking resources) of the host machine to be shared among the various computing instances executed by the host machine.
For example, as depicted in FIG. 2, host machines 202 and 208 execute hypervisors 260 and 266, respectively. These hypervisors may be implemented using software, firmware, or hardware, or a combination thereof. Typically, a hypervisor is a process or software layer that sits on top of the host machine's Operating System (OS), which in turn executes on the host machine's hardware processor. The hypervisor provides a virtualized environment by enabling physical computing resources of the host machine (e.g., processing resources such as processors/cores, memory resources, networking resources) to be shared among the various virtual machine computing instances executed by the host machine. For example, in fig. 2, the hypervisor 260 may be located above the OS of the host machine 202 and enable computing resources (e.g., processing, memory, and networking resources) of the host machine 202 to be shared among computing instances (e.g., virtual machines) executed by the host machine 202. The virtual machine may have its own operating system (referred to as a guest operating system), which may be the same as or different from the OS of the host machine. The operating system of a virtual machine executed by a host machine may be the same as or different from the operating system of another virtual machine executed by the same host machine. Thus, the hypervisor enables multiple operating systems to be executed simultaneously while sharing the same computing resources of the host machine. The host machines depicted in fig. 2 may have the same or different types of hypervisors.
A computing instance may be a virtual machine instance or a bare metal instance. In FIG. 2, computing instance 268 on host machine 202 and computing instance 274 on host machine 208 are examples of virtual machine instances. Host machine 206 is an example of a bare metal instance that is provided to a customer.
In some cases, an entire host machine may be provisioned to a single customer, and the one or more computing instances (either virtual machine or bare metal instances) hosted by that host machine all belong to the same customer. In other cases, a host machine may be shared among multiple customers (i.e., multiple tenants). In such a multi-tenancy scenario, a host machine may host virtual machine computing instances belonging to different customers. These computing instances may be members of different VCNs of different customers. In some embodiments, a bare metal computing instance is hosted by a bare metal server without a hypervisor. When a bare metal computing instance is provisioned, a single customer or tenant maintains control of the physical CPU, memory, and network interfaces of the host machine hosting the bare metal instance, and the host machine is not shared with other customers or tenants.
As previously described, each computing instance that is part of a VCN is associated with a VNIC that enables the computing instance to be a member of a subnet of the VCN. The VNIC associated with a computing instance facilitates the communication of packets or frames to and from the computing instance. A VNIC is associated with a computing instance when the computing instance is created. In some embodiments, for a computing instance executed by a host machine, the VNIC associated with that computing instance is executed by an NVD connected to the host machine. For example, in fig. 2, host machine 202 executes virtual machine computing instance 268, which is associated with VNIC 276, and VNIC 276 is executed by NVD 210 connected to host machine 202. As another example, bare metal instance 272 hosted by host machine 206 is associated with VNIC 280, which is executed by NVD 212 connected to host machine 206. As yet another example, VNIC 284 is associated with computing instance 274 executed by host machine 208, and VNIC 284 is executed by NVD 212 connected to host machine 208.
For a computing instance hosted by a host machine, an NVD connected to that host machine also executes a VCN VR corresponding to the VCN of which the computing instance is a member. For example, in the embodiment depicted in fig. 2, NVD 210 executes VCN VR 277 corresponding to the VCN of which computing instance 268 is a member. NVD 212 may also execute one or more VCN VRs 283 corresponding to the VCNs of which the computing instances hosted by host machines 206 and 208 are members.
The host machine may include one or more Network Interface Cards (NICs) that enable the host machine to connect to other devices. The NIC on the host machine may provide one or more ports (or interfaces) that enable the host machine to be communicatively connected to another device. For example, the host machine may connect to the NVD using one or more ports (or interfaces) provided on the host machine and on the NVD. The host machine may also be connected to other devices (such as another host machine).
For example, in fig. 2, host machine 202 is connected to NVD 210 using link 220, which link 220 extends between ports 234 provided by NIC 232 of host machine 202 and ports 236 of NVD 210. The host machine 206 is connected to the NVD 212 using a link 224, which link 224 extends between ports 246 provided by the NIC 244 of the host machine 206 and ports 248 of the NVD 212. The host machine 208 is connected to the NVD 212 using a link 226, which link 226 extends between ports 252 provided by the NIC 250 of the host machine 208 and ports 254 of the NVD 212.
The NVDs in turn are connected via communication links to top of rack (TOR) switches, which are connected to a physical network 218 (also referred to as a switching fabric). In certain embodiments, the links between a host machine and an NVD, and between an NVD and a TOR switch, are Ethernet links. For example, in fig. 2, NVDs 210 and 212 are connected to TOR switches 214 and 216 using links 228 and 230, respectively. In some embodiments, links 220, 224, 226, 228, and 230 are Ethernet links. The collection of host machines and NVDs connected to a TOR is sometimes referred to as a rack.
The physical network 218 provides a communication structure that enables TOR switches to communicate with each other. The physical network 218 may be a multi-level network. In some embodiments, physical network 218 is a multi-level Clos network of switches, where TOR switches 214 and 216 represent leaf level nodes of multi-level and multi-node physical switching network 218. Different Clos network configurations are possible, including but not limited to 2-tier networks, 3-tier networks, 4-tier networks, 5-tier networks, and generally "n" tier networks. An example of a Clos network is depicted in fig. 5 and described below.
There may be a variety of different connection configurations between the host machine and the NVD, such as a one-to-one configuration, a many-to-one configuration, a one-to-many configuration, and the like. In one-to-one configuration implementations, each host machine is connected to its own separate NVD. For example, in fig. 2, host machine 202 is connected to NVD 210 via NIC 232 of host machine 202. In a many-to-one configuration, multiple host machines are connected to one NVD. For example, in fig. 2, host machines 206 and 208 are connected to the same NVD 212 via NICs 244 and 250, respectively.
In a one-to-many configuration, one host machine is connected to multiple NVDs. FIG. 3 shows an example within CSPI 300 where a host machine is connected to multiple NVDs. As shown in fig. 3, host machine 302 includes a Network Interface Card (NIC) 304 that includes a plurality of ports 306 and 308. Host machine 302 is connected to a first NVD 310 via port 306 and link 320, and to a second NVD 312 via port 308 and link 322. Ports 306 and 308 may be Ethernet ports, and links 320 and 322 between host machine 302 and NVDs 310 and 312 may be Ethernet links. NVD 310 is in turn connected to a first TOR switch 314, and NVD 312 is connected to a second TOR switch 316. The links between NVDs 310 and 312 and TOR switches 314 and 316 may be Ethernet links. TOR switches 314 and 316 represent level 0 switching devices in a multi-level physical network 318.
The arrangement depicted in fig. 3 provides two separate physical network paths between the physical switch network 318 and host machine 302: a first path through TOR switch 314 to NVD 310 to host machine 302, and a second path through TOR switch 316 to NVD 312 to host machine 302. The separate paths provide enhanced availability (referred to as high availability) for host machine 302. If there is a problem with one of the paths (e.g., a link in one of the paths goes down) or with one of the devices (e.g., a particular NVD is not functioning), then the other path may be used for communication to/from host machine 302.
In the configuration depicted in fig. 3, the host machine connects to two different NVDs using two different ports provided by the NIC of the host machine. In other embodiments, the host machine may include multiple NICs that enable connectivity of the host machine to multiple NVDs.
Referring back to fig. 2, an NVD is a physical device or component that performs one or more network and/or storage virtualization functions. An NVD may be any device having one or more processing units (e.g., a CPU, a Network Processing Unit (NPU), an FPGA, a packet processing pipeline, etc.), memory (including cache), and ports. The various virtualization functions may be performed by software/firmware executed by the one or more processing units of the NVD.
An NVD may be implemented in a variety of different forms. For example, in certain embodiments, an NVD is implemented as an interface card referred to as a smartNIC, i.e., a smart NIC with an on-board embedded processor. A smartNIC is a device separate from the NICs on the host machines. In FIG. 2, NVDs 210 and 212 may be implemented as smartNICs connected to host machine 202 and to host machines 206 and 208, respectively.
A smartNIC is, however, only one example of an NVD implementation. Various other implementations are possible. For example, in some other implementations, an NVD, or one or more functions performed by an NVD, may be incorporated into or performed by one or more host machines, one or more TOR switches, and other components of CSPI 200. For example, an NVD may be implemented in a host machine, where the functions performed by the NVD are performed by the host machine. As another example, an NVD may be part of a TOR switch, or a TOR switch may be configured to perform the functions performed by an NVD, which enables the TOR switch to perform various complex packet transformations used in a public cloud. A TOR that performs the functions of an NVD is sometimes referred to as a smart TOR. In other embodiments, where Virtual Machine (VM) instances rather than Bare Metal (BM) instances are provided to customers, the functions performed by the NVD may be implemented within the hypervisor of the host machine. In some other implementations, some of the functionality of the NVD may be offloaded to a centralized service running on a set of host machines.
In certain embodiments, such as when implemented as a smartNIC as shown in fig. 2, an NVD may include a plurality of physical ports that enable the NVD to be connected to one or more host machines and to one or more TOR switches. A port on an NVD may be classified as a host-facing port (also referred to as a "south port") or as a network-facing or TOR-facing port (also referred to as a "north port"). A host-facing port of an NVD is a port used to connect the NVD to a host machine. Examples of host-facing ports in fig. 2 include port 236 on NVD 210 and ports 248 and 254 on NVD 212. A network-facing port of an NVD is a port used to connect the NVD to a TOR switch. Examples of network-facing ports in fig. 2 include port 256 on NVD 210 and port 258 on NVD 212. As shown in fig. 2, NVD 210 is connected to TOR switch 214 using link 228 extending from port 256 of NVD 210 to TOR switch 214. Similarly, NVD 212 is connected to TOR switch 216 using link 230 extending from port 258 of NVD 212 to TOR switch 216.
An NVD receives packets and frames from a host machine (e.g., packets and frames generated by computing instances hosted by the host machine) via a host-facing port and, after performing the necessary packet processing, may forward the packets and frames to a TOR switch via a network-facing port of the NVD. An NVD may receive packets and frames from a TOR switch via a network-facing port of the NVD and, after performing the necessary packet processing, may forward the packets and frames to a host machine via a host-facing port of the NVD.
In some embodiments, there may be multiple ports and associated links between an NVD and a TOR switch. These ports and links may be aggregated to form a link aggregation group of multiple ports or links (referred to as a LAG). Link aggregation allows multiple physical links between two endpoints (e.g., between an NVD and a TOR switch) to be treated as a single logical link. All physical links in a given LAG may operate in full-duplex mode at the same speed. LAGs help to increase the bandwidth and reliability of the connection between two endpoints. If one of the physical links in the LAG goes down, traffic is dynamically and transparently reassigned to one of the other physical links in the LAG. The aggregated physical links deliver higher bandwidth than each individual link. The multiple ports associated with a LAG are treated as a single logical port. Traffic may be load-balanced across the multiple physical links of a LAG. One or more LAGs may be configured between two endpoints. The two endpoints may be an NVD and a TOR switch, a host machine and an NVD, and so on.
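The failover behavior described above can be illustrated with a short sketch. The link names and the hashing scheme below are illustrative assumptions only, not the behavior of any particular NVD or TOR switch:

```python
import hashlib

# A minimal sketch, under assumed link names, of a LAG between an NVD and a TOR
# switch: the member links act as one logical link, flows are hashed across
# healthy members, and traffic is transparently re-assigned when a member fails.
class LinkAggregationGroup:
    def __init__(self, member_links):
        self.members = list(member_links)
        self.up = {link: True for link in self.members}

    def mark_down(self, link):
        # A failed member is simply excluded from the hash pool; flows move
        # to the remaining healthy links without reconfiguration.
        self.up[link] = False

    def select_link(self, flow_key: str) -> str:
        healthy = [l for l in self.members if self.up[l]]
        if not healthy:
            raise RuntimeError("all LAG members are down")
        digest = hashlib.sha256(flow_key.encode()).digest()
        return healthy[digest[0] % len(healthy)]

lag = LinkAggregationGroup(["nvd-tor-link-0", "nvd-tor-link-1"])
print(lag.select_link("10.0.1.5->10.0.2.9:tcp:443"))
lag.mark_down("nvd-tor-link-0")
print(lag.select_link("10.0.1.5->10.0.2.9:tcp:443"))  # now served by the surviving link
```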
An NVD implements or performs network virtualization functions. These functions are performed by software/firmware executed by the NVD. Examples of network virtualization functions include, but are not limited to, packet encapsulation and decapsulation functions, functions for creating a VCN network, functions for implementing network policies such as VCN security list (firewall) functionality, functions that facilitate the routing and forwarding of packets to and from computing instances in a VCN, and the like. In some embodiments, upon receiving a packet, an NVD is configured to execute a packet processing pipeline to process the packet and determine how the packet is to be forwarded or routed. As part of this packet processing pipeline, the NVD may perform one or more virtual functions associated with the overlay network, such as executing VNICs associated with computing instances in the VCN, executing a Virtual Router (VR) associated with the VCN, the encapsulation and decapsulation of packets to facilitate forwarding or routing in the virtual network, execution of certain gateways (e.g., local peering gateways), the implementation of security lists, network security groups, Network Address Translation (NAT) functionality (e.g., the translation of a public IP to a private IP on a host-by-host basis), throttling functions, and other functions.
In some embodiments, the packet processing data path in an NVD may comprise multiple packet pipelines, each composed of a series of packet transformation stages. In some embodiments, after a packet is received, it is parsed and classified to a single pipeline. The packet is then processed in a linear fashion, stage by stage, until it is either dropped or sent out over an interface of the NVD. These stages provide basic functional packet processing building blocks (e.g., validating headers, enforcing throttles, inserting new layer 2 headers, enforcing L4 firewalls, VCN encapsulation/decapsulation, etc.), so that new pipelines can be constructed by composing existing stages, and new functionality can be added by creating new stages and inserting them into existing pipelines.
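As an illustration of the stage-by-stage pipeline model described above, the following sketch composes a pipeline from a few reusable stages. The stage names, packet structure, and addresses are assumptions made for illustration and do not reflect an actual NVD implementation:

```python
# Minimal sketch of a packet pipeline built from reusable stages.
class Drop(Exception):
    """Raised by a stage to discard the packet."""

def validate_headers(packet):
    if "src" not in packet or "dst" not in packet:
        raise Drop("malformed header")
    return packet

def enforce_throttle(packet, max_bytes=9000):
    if packet.get("length", 0) > max_bytes:
        raise Drop("over throttle limit")
    return packet

def vcn_encapsulate(packet, substrate_dst):
    # Wrap the overlay packet in an outer header addressed to the substrate.
    return {"outer_dst": substrate_dst, "inner": packet}

def run_pipeline(packet, stages):
    # Packets are processed linearly, stage by stage, until dropped or emitted.
    try:
        for stage in stages:
            packet = stage(packet)
        return packet
    except Drop:
        return None

pipeline = [validate_headers, enforce_throttle,
            lambda p: vcn_encapsulate(p, substrate_dst="10.0.2.7")]
print(run_pipeline({"src": "B2", "dst": "B100", "length": 1500}, pipeline))
```

New functionality would be added by writing another small stage function and inserting it into the list that defines the pipeline, mirroring the composability the paragraph describes.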
An NVD may perform both control plane and data plane functions corresponding to the control plane and data plane of a VCN. Examples of VCN control planes are depicted in fig. 18, 19, 20, and 21 (see reference numerals 1816, 1916, 2016, and 2116) and described below. Examples of VCN data planes are depicted in fig. 18, 19, 20, and 21 (see reference numerals 1818, 1918, 2018, and 2118) and described below. The control plane functions include functions used to configure the network (e.g., setting up routes and route tables, configuring VNICs, etc.) that control how data is to be forwarded. In some embodiments, a VCN control plane is provided that centrally computes all overlay-to-substrate mappings and publishes them to the NVDs and to virtual network edge devices such as various gateways (e.g., DRG, SGW, IGW, etc.). Firewall rules may also be published using the same mechanism. In certain embodiments, an NVD only obtains the mappings that are relevant to that NVD. The data plane functions include functions to actually route/forward packets based on the configuration set up by the control plane. A VCN data plane is implemented by encapsulating the customer's network packets before they traverse the substrate network. The encapsulation/decapsulation functionality is implemented on the NVDs. In certain embodiments, an NVD is configured to intercept all network packets in and out of the host machines and perform the network virtualization functions.
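The mapping-publication idea above can be sketched as follows. The VCN names, addresses, and function names are invented for illustration; only the pattern (centrally computed overlay-to-substrate mappings, with each NVD receiving only the entries relevant to it) follows the description:

```python
# Hypothetical illustration of overlay-to-substrate mapping distribution.
central_mappings = {
    # (VCN, overlay IP) -> substrate IP of the NVD hosting that VNIC
    ("vcn-a", "10.0.1.5"): "192.168.10.2",
    ("vcn-a", "10.0.2.9"): "192.168.10.7",
    ("vcn-b", "10.0.1.5"): "192.168.11.3",
}

def publish_to_nvd(nvd_vcns):
    """Return only the mappings relevant to the VCNs this NVD serves."""
    return {key: val for key, val in central_mappings.items()
            if key[0] in nvd_vcns}

nvd_table = publish_to_nvd({"vcn-a"})
# Data plane: look up the substrate address and encapsulate before the packet
# crosses the physical fabric.
outer_dst = nvd_table[("vcn-a", "10.0.2.9")]
print(outer_dst)  # 192.168.10.7
```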
As indicated above, an NVD executes various virtualization functions, including VNICs and VCN VRs. An NVD may execute VNICs associated with the computing instances hosted by the one or more host machines connected to the NVD. For example, as depicted in fig. 2, NVD 210 executes the functionality of VNIC 276 associated with computing instance 268 hosted by host machine 202, which is connected to NVD 210. As another example, NVD 212 executes VNIC 280 associated with bare metal computing instance 272 hosted by host machine 206, and executes VNIC 284 associated with computing instance 274 hosted by host machine 208. A host machine may host computing instances belonging to different VCNs (which belong to different customers), and the NVD connected to the host machine may execute the VNICs corresponding to those computing instances (i.e., perform the VNIC-related functionality).
The NVD also executes a VCN virtual router corresponding to the VCN of the computing instance. For example, in the embodiment depicted in fig. 2, NVD 210 executes VCN VR 277 corresponding to the VCN to which computing instance 268 belongs. NVD 212 executes one or more VCN VRs 283 corresponding to one or more VCNs to which computing instances hosted by host machines 206 and 208 belong. In some embodiments, the VCN VR corresponding to the VCN is executed by all NVDs connected to a host machine hosting at least one computing instance belonging to the VCN. If a host machine hosts computing instances belonging to different VCNs, then an NVD connected to the host machine may execute VCN VR corresponding to those different VCNs.
In addition to the VNICs and VCN VRs, the NVD may execute various software (e.g., daemons) and include one or more hardware components that facilitate various network virtualization functions performed by the NVD. For simplicity, these various components are grouped together as a "packet processing component" shown in fig. 2. For example, NVD 210 includes a packet processing component 286 and NVD 212 includes a packet processing component 288. For example, a packet processing component for an NVD may include a packet processor configured to interact with ports and hardware interfaces of the NVD to monitor all packets received by and transmitted using the NVD and store network information. The network information may include, for example, network flow information and per-flow information (e.g., per-flow statistics) identifying different network flows handled by the NVD. In some embodiments, network flow information may be stored on a per VNIC basis. The packet processor may perform packet-by-packet manipulation and implement stateful NAT and L4 Firewalls (FWs). As another example, the packet processing component may include a replication agent configured to replicate information stored by the NVD to one or more different replication target stores. As yet another example, the packet processing component may include a logging agent configured to perform a logging function of the NVD. The packet processing component may also include software for monitoring the performance and health of the NVD and possibly also the status and health of other components connected to the NVD.
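A minimal sketch of the kind of per-VNIC, per-flow statistics table the packet processor described above might maintain is shown below; the field names and keying scheme are assumptions for illustration only:

```python
from collections import defaultdict

# Illustrative per-VNIC, per-flow statistics store (not an actual NVD structure).
flow_stats = defaultdict(lambda: {"packets": 0, "bytes": 0})

def record(vnic_id, five_tuple, length):
    # Key the counters by (VNIC, flow) so statistics can be reported per VNIC.
    key = (vnic_id, five_tuple)
    flow_stats[key]["packets"] += 1
    flow_stats[key]["bytes"] += length

flow = ("10.0.1.5", "10.0.2.9", 6, 54321, 443)  # src, dst, proto, sport, dport
record("vnic-276", flow, 1500)
record("vnic-276", flow, 400)
print(flow_stats[("vnic-276", flow)])  # {'packets': 2, 'bytes': 1900}
```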
FIG. 1 illustrates components of an example virtual or overlay network, including a VCN, a subnet within the VCN, a computing instance deployed on the subnet, a VNIC associated with the computing instance, a VR for the VCN, and a set of gateways configured for the VCN. The overlay component depicted in fig. 1 may be executed or hosted by one or more of the physical components depicted in fig. 2. For example, computing instances in a VCN may be executed or hosted by one or more host machines depicted in fig. 2. For a computing instance hosted by a host machine, a VNIC associated with the computing instance is typically executed by an NVD connected to the host machine (i.e., VNIC functionality is provided by an NVD connected to the host machine). The VCN VR functions for a VCN are performed by all NVDs connected to a host machine that hosts or executes computing instances that are part of the VCN. The gateway associated with the VCN may be implemented by one or more different types of NVDs. For example, some gateways may be performed by smartNIC, while other gateways may be performed by one or more host machines or other implementations of NVDs.
As described above, a computing instance in a customer VCN may communicate with various different endpoints, where the endpoints may be within the same subnet as the source computing instance, in a different subnet but within the same VCN as the source computing instance, or external to the VCN of the source computing instance. These communications are facilitated using the VNICs associated with the computing instances, the VCN VRs, and the gateways associated with the VCNs.
For communication between two computing instances on the same subnet in a VCN, the VNICs associated with the source and destination computing instances are used to facilitate the communication. The source and destination computing instances may be hosted by the same host machine or by different host machines. Packets originating from a source computing instance may be forwarded from a host machine hosting the source computing instance to an NVD connected to the host machine. On the NVD, packets are processed using a packet processing pipeline, which may include execution of VNICs associated with the source computing instance. Because the destination endpoints for the packets are located within the same subnet, execution of the VNICs associated with the source computing instance causes the packets to be forwarded to the NVD executing the VNICs associated with the destination computing instance, which then processes the packets and forwards them to the destination computing instance. VNICs associated with source and destination computing instances may execute on the same NVD (e.g., when both source and destination computing instances are hosted by the same host machine) or on different NVDs (e.g., when source and destination computing instances are hosted by different host machines connected to different NVDs). The VNIC may use the routing/forwarding table stored by the NVD to determine the next hop for the packet.
For packets to be transferred from a computing instance in a subnet to an endpoint in a different subnet in the same VCN, packets originating from a source computing instance are transferred from a host machine hosting the source computing instance to an NVD connected to the host machine. On the NVD, packets are processed using a packet processing pipeline, which may include execution of one or more VNICs and VR associated with the VCN. For example, as part of a packet processing pipeline, the NVD executes or invokes functionality corresponding to a VNIC associated with the source computing instance (also referred to as executing the VNIC). The functionality performed by the VNIC may include looking at VLAN tags on the packet. The VCN VR functionality is next invoked and executed by the NVD because the destination of the packet is outside the subnet. The VCN VR then routes the packet to an NVD that executes the VNIC associated with the destination computing instance. The VNIC associated with the destination computing instance then processes the packet and forwards the packet to the destination computing instance. VNICs associated with source and destination computing instances may execute on the same NVD (e.g., when the source and destination computing instances are hosted by the same host machine) or on different NVDs (e.g., when the source and destination computing instances are hosted by different host machines connected to the different NVDs).
If the destination for the packet is outside of the VCN of the source computing instance, the packet originating from the source computing instance is transmitted from the host machine hosting the source computing instance to an NVD connected to the host machine. The NVD executes the VNIC associated with the source computing instance. Since the destination endpoint of the packet is outside the VCN, the packet is then processed by the VCN VR for that VCN. The NVD invokes VCN VR functionality, which causes the packet to be forwarded to the NVD executing the appropriate gateway associated with the VCN. For example, if the destination is an endpoint within a customer's in-premise network, the packet may be forwarded by the VCN VR to the NVD executing a DRG gateway configured for the VCN. The VCN VR may be executed on the same NVD as the NVD executing the VNIC associated with the source computing instance, or by a different NVD. The gateway may be implemented by an NVD, which may be smartNIC, a host machine, or other NVD implementation. The packet is then processed by the gateway and forwarded to the next hop, which facilitates delivery of the packet to its intended destination endpoint. For example, in the embodiment depicted in fig. 2, packets originating from computing instance 268 may be transmitted from host machine 202 to NVD 210 over link 220 (using NIC 232). On NVD 210, VNIC 276 is invoked because it is the VNIC associated with source computing instance 268. The VNIC 276 is configured to examine the encapsulated information in the packet and determine a next hop for forwarding the packet in order to facilitate the transfer of the packet to its intended destination endpoint and then forward the packet to the determined next hop.
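The three forwarding cases described above (same subnet, different subnet within the same VCN, and destination outside the VCN) can be summarized with a short decision sketch. The CIDR blocks and return strings are illustrative assumptions, not actual routing state:

```python
import ipaddress

def next_hop(dst_ip: str, src_subnet: str, src_vcn: str) -> str:
    """Illustrative decision logic only; CIDR values are invented examples."""
    addr = ipaddress.ip_address(dst_ip)
    if addr in ipaddress.ip_network(src_subnet):
        return "VNIC of destination instance (same subnet)"
    if addr in ipaddress.ip_network(src_vcn):
        return "VCN VR, then VNIC of destination instance (different subnet, same VCN)"
    return "VCN VR, then gateway (e.g., DRG for on-premise destinations)"

print(next_hop("10.0.1.9", "10.0.1.0/24", "10.0.0.0/16"))
print(next_hop("10.0.5.9", "10.0.1.0/24", "10.0.0.0/16"))
print(next_hop("172.16.0.4", "10.0.1.0/24", "10.0.0.0/16"))
```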
Computing instances deployed on a VCN may communicate with a variety of different endpoints. These endpoints may include endpoints hosted by CSPI 200 and endpoints external to CSPI 200. Endpoints hosted by CSPI 200 may include instances in the same VCN or other VCNs, which may be customer VCNs or VCNs that do not belong to customers. Communication between endpoints hosted by CSPI 200 may be performed through physical network 218. The computing instance may also communicate with endpoints that are not hosted by CSPI 200 or external to CSPI 200. Examples of such endpoints include endpoints within a customer's on-premise network or data center, or public endpoints accessible through a public network such as the internet. Communication with endpoints external to CSPI 200 may be performed over a public network (e.g., the internet) (not shown in fig. 2) or a private network (not shown in fig. 2) using various communication protocols.
The architecture of CSPI 200 depicted in fig. 2 is merely an example and is not intended to be limiting. In alternative embodiments, variations, alternatives, and modifications are possible. For example, in some embodiments, CSPI 200 may have more or fewer systems or components than those shown in fig. 2, may combine two or more systems, or may have a different configuration or arrangement of systems. The systems, subsystems, and other components depicted in fig. 2 may be implemented in software (e.g., code, instructions, programs) executed by one or more processing units (e.g., processors, cores) of the respective system, using hardware, or a combination thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device).
FIG. 4 depicts connectivity between a host machine and an NVD for providing I/O virtualization to support multi-tenancy in accordance with certain embodiments. As depicted in fig. 4, host machine 402 executes a hypervisor 404 that provides a virtualized environment. Host machine 402 executes two virtual machine instances: VM1 406 belonging to customer/tenant #1 and VM2 408 belonging to customer/tenant #2. Host machine 402 includes a physical NIC 410 that is connected to NVD 412 via link 414. Each computing instance is attached to a VNIC executed by NVD 412. In the embodiment in FIG. 4, VM1 406 is attached to VNIC-VM1 420 and VM2 408 is attached to VNIC-VM2 422.
As shown in fig. 4, NIC 410 includes two logical NICs: logical NIC A 416 and logical NIC B 418. Each virtual machine is attached to, and configured to work with, its own logical NIC. For example, VM1 406 is attached to logical NIC A 416 and VM2 408 is attached to logical NIC B 418. Although host machine 402 includes only one physical NIC 410 that is shared by multiple tenants, because of the logical NICs each tenant's virtual machine believes that it has its own host machine and NIC.
In some embodiments, each logical NIC is assigned its own VLAN ID. Thus, a specific VLAN ID is assigned to logical NIC A 416 for tenant #1, and a separate VLAN ID is assigned to logical NIC B 418 for tenant #2. When a packet is transmitted from VM1 406, a tag assigned to tenant #1 is attached to the packet by the hypervisor, and the packet is then transmitted from host machine 402 to NVD 412 over link 414. In a similar manner, when a packet is transmitted from VM2 408, a tag assigned to tenant #2 is attached to the packet by the hypervisor, and the packet is then transmitted from host machine 402 to NVD 412 over link 414. Thus, a packet 424 transmitted from host machine 402 to NVD 412 has an associated tag 426 that identifies the particular tenant and associated VM. On the NVD, for a packet 424 received from host machine 402, the tag 426 associated with the packet is used to determine whether the packet is to be processed by VNIC-VM1 420 or by VNIC-VM2 422. The packet is then processed by the corresponding VNIC. The configuration depicted in fig. 4 enables each tenant's computing instances to believe that they own their own host machine and NIC. The setup depicted in FIG. 4 provides I/O virtualization to support multi-tenancy.
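The tag-based demultiplexing described above can be sketched as a simple lookup. The VLAN ID values are assumptions for illustration; only the VNIC names mirror fig. 4:

```python
# Hypothetical VLAN-ID-to-VNIC demultiplexing on the NVD.
VLAN_TO_VNIC = {
    100: "VNIC-VM1 420",   # VLAN ID assumed for logical NIC A 416 / tenant #1
    200: "VNIC-VM2 422",   # VLAN ID assumed for logical NIC B 418 / tenant #2
}

def demux(packet: dict) -> str:
    """Pick the VNIC that should process a packet arriving from the host NIC."""
    vnic = VLAN_TO_VNIC.get(packet["vlan_id"])
    if vnic is None:
        raise ValueError("unknown tenant tag; drop the packet")
    return vnic

print(demux({"vlan_id": 100, "payload": b"..."}))  # VNIC-VM1 420
```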
Fig. 5 depicts a simplified block diagram of a physical network 500 according to some embodiments. The embodiment depicted in fig. 5 is structured as a Clos network. A Clos network is a particular type of network topology designed to provide connection redundancy while maintaining high bisection bandwidth and maximum resource utilization. A Clos network is a type of non-blocking, multistage or multi-tiered switching network, where the number of stages or tiers may be two, three, four, five, and so on. The embodiment depicted in fig. 5 is a 3-tier network comprising tiers 1, 2, and 3. The TOR switches 504 represent tier-0 switches in the Clos network. One or more NVDs are connected to the TOR switches. Tier-0 switches are also referred to as edge devices of the physical network. The tier-0 switches are connected to tier-1 switches, also referred to as leaf switches. In the embodiment depicted in fig. 5, a set of "n" tier-0 TOR switches is connected to a set of "n" tier-1 switches to form a pod. Each tier-0 switch in a pod is interconnected with all the tier-1 switches in the pod, but there is no connectivity between switches in different pods. In some embodiments, two pods are referred to as a block. Each block is served by or connected to a set of "n" tier-2 switches (sometimes referred to as spine switches). There may be several blocks in the physical network topology. The tier-2 switches are in turn connected to "n" tier-3 switches (sometimes referred to as super-spine switches). Communication of packets over physical network 500 is typically performed using one or more layer 3 communication protocols. Typically, all the layers of the physical network, except for the TOR layer, are n-way redundant, thus allowing for high availability. Policies may be specified for pods and blocks to control the visibility of switches to each other in the physical network, thereby enabling scaling of the physical network.
A Clos network is characterized by a fixed maximum number of hops from one tier-0 switch to another tier-0 switch (or from an NVD connected to a tier-0 switch to another NVD connected to a tier-0 switch). For example, in a 3-tier Clos network, a maximum of seven hops are needed for a packet to reach from one NVD to another, where the source and target NVDs are connected to the leaf tier of the Clos network. Likewise, in a 4-tier Clos network, a maximum of nine hops are needed for a packet to reach from one NVD to another, where the source and target NVDs are connected to the leaf tier of the Clos network. Thus, the Clos network architecture maintains consistent latency throughout the network, which is important for communication within and between data centers. A Clos topology scales horizontally and is cost-effective. The bandwidth/throughput capacity of the network can easily be increased by adding more switches at each tier (e.g., more leaf and spine switches) and by increasing the number of links between switches at adjacent tiers.
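One way to reconcile the hop counts given above (seven hops for a 3-tier network, nine for a 4-tier network) is to count the switches a packet traverses between the source NVD and the destination NVD when it climbs from the source TOR to the topmost tier and descends again. The following check reflects that interpretation, which is an assumption rather than a statement from the original:

```python
def max_switch_hops(tiers: int) -> int:
    # Source tier-0 TOR (1) + climb through tiers 1..n (n switches)
    # + descend through tiers n-1..1 (n-1 switches) + destination tier-0 TOR (1)
    # = 2 * tiers + 1 switches traversed in the worst case.
    return 2 * tiers + 1

for n in (3, 4):
    print(f"{n}-tier Clos: at most {max_switch_hops(n)} hops between NVDs")
# 3-tier Clos: at most 7 hops between NVDs
# 4-tier Clos: at most 9 hops between NVDs
```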
In some embodiments, each resource within the CSPI is assigned a unique identifier called a Cloud Identifier (CID). This identifier is included as part of the information of the resource and may be used to manage the resource, e.g., via a console or through an API. An example syntax for CID is:
ocid1.<resource type>.<realm>.[region][.future use].<unique ID>
where:
ocid1: a literal string indicating the version of the CID;
resource type: the type of the resource (e.g., instance, volume, VCN, subnet, user, group, etc.);
realm: the realm in which the resource is located. Exemplary values are "c1" for the commercial realm, "c2" for the government cloud realm, "c3" for the federal government cloud realm, etc. Each realm may have its own domain name;
region: the region in which the resource is located. This portion may be empty if a region is not applicable to the resource;
future use: reserved for future use; and
unique ID: the unique portion of the ID, whose format may vary depending on the type of resource or service.
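A hypothetical parser for the CID syntax above is sketched below; the sample identifier is fabricated for illustration and is not a real CID:

```python
# Illustrative parser for the CID format described above.
def parse_cid(cid: str) -> dict:
    parts = cid.split(".")
    if len(parts) < 5 or parts[0] != "ocid1":
        raise ValueError("not a recognized CID")
    return {
        "version": parts[0],
        "resource_type": parts[1],
        "realm": parts[2],
        "region": parts[3] or None,   # region may be blank for region-agnostic resources
        "unique_id": parts[-1],       # any middle parts are reserved for future use
    }

# Fabricated example identifier, for illustration only.
print(parse_cid("ocid1.instance.c1.us-example-1.aaaaexampleuniqueid"))
```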
Dedicated Region Cloud at Customer (DRCC)
A Dedicated Region Cloud at Customer (DRCC) corresponds to infrastructure of a Cloud Service Provider (CSP) that is deployed in the customer's own data center. With DRCC, an enterprise can easily integrate mission-critical database systems and applications previously deployed on expensive hardware onto the CSP's highly available and secure infrastructure, creating operational efficiencies and modernization opportunities. Enterprises often find moving to a cloud infrastructure to be both costly and difficult because of the inherent mismatch between legacy application architectures and cloud architectures. These challenges are even more acute for workloads that cannot move to the public cloud. Without DRCC, enterprises can access only a small subset of cloud services deployed on-premise, and those services offer only a limited set of features and capabilities compared to what is available in the public cloud.
The DRCC framework brings the full capabilities of the public cloud to an on-premise deployment so that enterprises can reduce infrastructure and operating costs, upgrade legacy applications on modern cloud services, and meet the most stringent regulatory, data residency, and latency requirements, all through the CSP's infrastructure, which provides enhanced performance and the highest levels of security. Customers get the choice and flexibility of running all of the CSP's cloud services in their own data centers. A customer may choose from all public cloud services provided by the CSP, including, for example, VMware cloud, autonomous database, a container engine for Kubernetes, bare metal servers, and Exadata cloud services, and pay only for the services they consume. The DRCC framework is designed to keep data and customer operations completely isolated from the internet, with control plane and data plane operations remaining on-premise, to help customers meet their most stringent compliance and latency requirements. Through a fully managed experience and access to new capabilities as soon as they become available in the public cloud, the DRCC framework provides cloud-scale security, resiliency, and scale, and supports mission-critical workloads with tools for gradually modernizing legacy workloads.
DRCC provides the following advantages:
● Bringing all public cloud services and autonomous databases to an on-premise deployment to reduce the risk and cost of innovation
● Providing a framework in which customers pay only for the services they consume
● Building a truly consistent development experience for all IaaS and PaaS applications by using exactly the same tools, APIs, and SLAs available in the public cloud infrastructure
● Preserving complete control over all data to meet the most stringent data privacy and latency requirements
● Deploying seamlessly between on-premise and public cloud without any impact on functionality or the development experience
● Consolidating workloads onto a single cloud platform so that customers can focus on business priorities
● Reducing the cost of running on-premise workloads with CSP infrastructure that provides the highest level of security
The CSP infrastructure is typically organized into regions and availability domains. A region is a localized geographic area, and an availability domain is one or more data centers located within that region. Thus, a region is made up of one or more availability domains. Most infrastructure resources are either region-specific (such as virtual cloud networks) or availability-domain-specific (such as computing instances). Traffic between availability domains and between regions is encrypted. The availability domains are isolated from each other, are fault tolerant, and are unlikely to fail simultaneously. Because availability domains do not share infrastructure (such as power or cooling) or the internal availability domain network, a failure in one availability domain within a region is unlikely to affect the availability of the other availability domains within the same region.
The availability domains within the same region are connected to each other by a low-latency, high-bandwidth network, which makes it possible to provide high-availability connectivity to the internet and to on-premise networks, and to build replicated systems in multiple availability domains to achieve both high availability and disaster recovery. Regions are independent of other regions and may be separated by significant distances, across countries or even continents. In general, an application is deployed in the region where it is most frequently used, because using nearby resources is faster than using distant resources. However, applications may also be deployed in different regions to mitigate the risk of region-wide events such as earthquakes.
A failure domain is a grouping of hardware and infrastructure within an availability domain. Each availability domain contains multiple failure domains (e.g., three failure domains). Failure domains provide anti-affinity: they allow instances to be distributed so that the instances are not on the same physical hardware within a single availability domain. A hardware failure or a computing hardware maintenance event that affects one failure domain does not affect instances in other failure domains. In addition, the physical hardware in each failure domain has independent and redundant power supplies, which prevents a failure in the power supply hardware of one failure domain from affecting the other failure domains.
In the context of DRCC, according to some embodiments, an availability domain is provided via a rack that includes a plurality of top of rack (TOR) switches and a plurality of host machines/servers. A TOR switch is a network switch in a data center that connects the servers and other network devices in a rack. The purpose of a TOR switch is to provide high-speed connectivity and efficient data transfer between the devices within the rack and the larger network infrastructure. TOR switches typically have a high port density to accommodate multiple servers and devices within a single rack. They provide Ethernet connectivity for these devices, allowing them to communicate with each other and with the rest of the network. TOR switches often support high-speed Ethernet standards such as 10 Gigabit Ethernet, 25 Gigabit Ethernet, 40 Gigabit Ethernet, or 100 Gigabit Ethernet. These fast data transfer rates ensure efficient communication between the servers and the network. TOR switches provide low-latency switching, which is critical in data centers that require fast response times for the applications and services running on the servers. Next, with reference to fig. 6 and 7, different TOR configurations that may be employed within racks in a data center are described.
Fig. 6 depicts a configuration of a plurality of TORs included in a rack in accordance with at least one embodiment. The configuration 600 of multiple TORs depicted in fig. 6 corresponds to a 3-TOR configuration. The rack includes three TORs (601, 602, and 603) and a plurality of host machines/servers. In one embodiment, a failure domain is created within an availability domain (i.e., a rack) by selecting a subset of host machines from the plurality of host machines. Each host machine in the selected set of host machines (i.e., the subset of host machines) is communicatively coupled to one of the TORs included in the plurality of TORs. For example, as shown in fig. 6, a first subset of host machines from the plurality of host machines is depicted as 605A. Each host machine/server included in 605A is communicatively coupled to a first TOR (i.e., TOR 1 601). Thus, the combination of the first subset of host machines (605A) and the first TOR (601) forms a first failure domain.
A second failure domain is created within the availability domain by selecting a second subset of host machines from the plurality of host machines. Each host machine of the second subset is communicatively coupled to another TOR included in the plurality of TORs (i.e., a TOR different from the TOR associated with the first failure domain). Note that the first subset of host machines does not intersect the second subset of host machines. For example, as shown in fig. 6, a second subset of host machines from the plurality of host machines is depicted as 605B. Each host machine/server included in 605B is communicatively coupled to a second TOR (i.e., TOR 2 602), so that the combination of the second subset (605B) and the second TOR (602) forms a second failure domain.
In a similar manner, a third failure domain may be created within the availability domain by selecting a third subset of host machines from the plurality of host machines. Each host machine of the third subset is communicatively coupled to yet another TOR (i.e., a TOR different from the TORs associated with the first and second failure domains). Note that the third subset of host machines does not intersect the first subset of host machines or the second subset of host machines. For example, as shown in fig. 6, a third subset of host machines from the plurality of host machines is depicted as 605C. Each host machine/server included in 605C is communicatively coupled to a third TOR (i.e., TOR 3 603) to form a third failure domain.
In fig. 6, each of the first, second, and third subsets of servers (605A, 605B, and 605C) is depicted as including K servers. It should be appreciated that this in no way limits the scope of the present disclosure; a particular subset of servers may include a different number of servers than another subset of servers. In addition, the rack includes a plurality of Network Virtualization Devices (NVDs). The first subset of servers/host machines is connected to the first TOR via a first subset of NVDs from the plurality of NVDs. In a similar manner, the second subset of servers/host machines is connected to the second TOR via a second subset of NVDs from the plurality of NVDs, and the third subset of servers/host machines is connected to the third TOR via a third subset of NVDs.
In addition, for each failure domain (i.e., the combination of a subset of servers and the TOR switch associated with that subset), a set of addresses corresponding to the hosts/servers included in that subset of host machines/servers is associated with the TOR switch for that subset. For example, the addresses of the hosts in the first subset are associated with the first TOR. In this way, the control plane is configured to forward packets destined for a particular server included in the first subset of servers to the first TOR associated with the first subset of servers.
Thus, the configuration of the rack as depicted in fig. 6 provides three different failure domains, which yields a smaller blast radius (i.e., the percentage of capacity lost when a TOR switch fails) than if all the servers within the rack were communicatively coupled to a single TOR switch. Specifically, the configuration of the rack as depicted in fig. 6 results in a 33% blast radius, i.e., a 33% loss of capacity when a single TOR fails. It is noted that the probability of multiple TORs failing simultaneously within a rack is very low, i.e., negligible. In one embodiment, the rack configuration of FIG. 6 provides one or more failure domains that may be presented to a customer. Upon obtaining a request from a customer to allocate one or more host machines in the rack, the control plane may assign one or more host machines included in the one or more failure domains based on certain criteria. For example, if a customer requests high availability, the control plane may allocate host machines located in different failure domains.
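A control-plane placement policy of the kind described above can be sketched as follows. The fault-domain names, server names, and capacities are invented; the point is only that a high-availability request spreads the allocation across failure domains, limiting the blast radius of a single TOR failure:

```python
import itertools

# Hypothetical inventory: three failure domains, one per TOR, as in fig. 6.
FAULT_DOMAINS = {
    "FD-1": ["server-1", "server-2", "server-3"],   # subset 605A behind TOR 1
    "FD-2": ["server-4", "server-5", "server-6"],   # subset 605B behind TOR 2
    "FD-3": ["server-7", "server-8", "server-9"],   # subset 605C behind TOR 3
}

def allocate(num_hosts: int, high_availability: bool):
    if high_availability:
        # Round-robin across failure domains so a single TOR failure
        # takes out at most roughly one third of the allocation.
        cycle = itertools.cycle(FAULT_DOMAINS.items())
        picks, used = [], {fd: 0 for fd in FAULT_DOMAINS}
        while len(picks) < num_hosts:
            fd, hosts = next(cycle)
            if used[fd] < len(hosts):
                picks.append((fd, hosts[used[fd]]))
                used[fd] += 1
            elif all(used[d] >= len(h) for d, h in FAULT_DOMAINS.items()):
                raise RuntimeError("rack capacity exhausted")
        return picks
    # Otherwise simply fill the first failure domain that has capacity.
    fd, hosts = next(iter(FAULT_DOMAINS.items()))
    return [(fd, h) for h in hosts[:num_hosts]]

print(allocate(4, high_availability=True))
```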
Turning to fig. 7, another configuration of multiple TORs included in a rack is depicted in accordance with some embodiments. Specifically, the configuration 700 depicted in fig. 7 is referred to herein as a dual-TOR configuration. As shown in fig. 7, the rack includes two TORs, TOR 1 701 and TOR 2 703. In addition, the rack includes a plurality of servers/host machines. According to some embodiments, the plurality of servers is grouped into disjoint subsets of servers. For example, the plurality of servers may be grouped into a first subset of servers 705A and a second subset of servers 705B.
As shown in fig. 7, in a dual-TOR configuration, each subset of servers is communicatively coupled to each TOR included in the rack. Specifically, the first subset of servers 705A and the second subset of servers 705B are communicatively coupled to both TOR 1 701 and TOR 2 703. Thus, in this configuration, no capacity is lost when a single TOR switch in the rack fails; a server simply selects the other, properly functioning TOR to transfer data. Note that the probability of two TORs failing simultaneously within a rack is very low, i.e., almost negligible. In contrast to the TOR configuration of fig. 6, in which the design of the NVDs coupling the servers to the TORs is unchanged (i.e., each NVD couples a single server to a single TOR), in the configuration of fig. 7 the NVDs are configured to connect to two TORs. Thus, in this configuration, each NVD is configured with multiple IP addresses. It will be appreciated that in the configuration of fig. 7, the entire rack, including the multiple TORs (i.e., providing redundancy), is considered a failure domain. In some implementations, a failure domain may span multiple racks. Consider, for example, a switch that serves multiple racks (e.g., four racks). In this case, the four racks may be considered a failure domain, where each rack may include multiple TORs to provide redundancy.
FIG. 8 depicts a flowchart illustrating steps performed in providing an availability domain to a customer, in accordance with at least one embodiment. The processes depicted in fig. 8 may be implemented in software (e.g., code, instructions, programs), hardware, or a combination thereof, executed by one or more processing units (e.g., processors, cores) of the respective systems. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in fig. 8 and described below is intended to be illustrative and non-limiting. Although fig. 8 depicts various process steps occurring in a particular order or sequence, this is not intended to be limiting. In some alternative embodiments, the steps may be performed in a different order, or some steps may be performed in parallel.
The process begins at step 801, where the control plane provides an availability domain that includes a rack. The rack includes a plurality of TOR switches and a plurality of host machines or servers. In step 803, a first failure domain is created within the availability domain. The first failure domain includes a first TOR switch from the plurality of TOR switches and a first subset of host machines from the plurality of host machines. The first subset of host machines is communicatively coupled to the first TOR. Thereafter, the process moves to step 805, where a second failure domain is created within the availability domain. The second failure domain includes a second TOR switch from the plurality of TOR switches and a second subset of host machines from the plurality of host machines. Note that the second subset of host machines does not intersect the first subset of host machines. The second subset of host machines is also communicatively coupled to the second TOR via NVDs. In this way, one or more failure domains may be presented to a customer. Upon obtaining a request from a customer requesting the allocation of one or more host machines in the rack, the control plane may assign one or more host machines included in the one or more failure domains based on certain criteria associated with the customer's requirements.
Turning now to fig. 9, an exemplary architecture 900 of the DRCC framework is depicted that provides customers with the full capabilities of the public cloud. Thus, customers can reduce infrastructure and operating costs, upgrade legacy applications on modern cloud services, and meet the most stringent regulatory, data residency, and latency requirements.
According to some embodiments, fig. 9 depicts a data center 905 that includes a pair of TORs (i.e., TOR #1 922 and TOR #2 924), a network virtualization platform (e.g., NVD 926), and a computing host 928 (also referred to herein as a local computing host). It is appreciated that the computing host 928 includes multiple virtual or bare metal instances. NVD 926 is referred to herein as a local NVD. Computing host 928 includes a host network interface card (i.e., host NIC). For ease of illustration, FIG. 9 depicts computing host 928 as including two virtual machines, VM1 and VM2, respectively. Note that each VM is communicatively coupled to the host NIC via one of the logical interfaces (e.g., the logical interfaces depicted as PF1 and PF2, respectively). Further, note that local NVD 926 may be located on the same chassis as a host NIC included in computing host 928.
A computing host 928 included in the data center may be coupled to another host machine 911, referred to herein as a remote host machine. It will be appreciated that the remote host machine may be "any" host machine, such as (i) another host within the DRCC and behind another NVD, (ii) another host in another DRCC (e.g., one of a set of DRCCs deployed for the same customer/organization) and behind another NVD, or (iii) a host machine included in the customer's on-premise network. Note that in the case where the host machine is included in the customer's on-premise network, the host machine may connect to the DRCC via a FastConnect or IPSec VPN tunnel and connect to host machines in the DRCC using a Dynamic Routing Gateway (DRG). For ease of illustration, in the following description it is assumed that the remote host machine (e.g., host machine 911) is a host machine included in the DRCC, located behind another NVD (e.g., NVD 913) and served by a remote TOR 915. It is noted that the features described below are equally applicable to the other remote host machine scenarios outlined above. Also, it will be appreciated that in this case (and as shown in FIG. 9), the two host machines (i.e., local host machine 928 and remote host machine 911) may be coupled via the network fabric 920. In addition, for convenience, NVD 913 is referred to herein as a remote NVD.
According to some embodiments, local NVD 926 has multiple physical ports. For example, in one implementation as shown in fig. 9, local NVD 926 has two physical ports—a first physical port 927A (referred to herein as TOR-oriented port) connected to TORs 922 and 924, respectively, and a second physical port 927B (referred to herein as host-oriented port) connected to computing host 928. Each physical port of local NVD 926 may be divided into multiple logical ports. For example, as shown in fig. 9, the physical port 927B is divided into two logical ports on the host-facing side, and the physical port 927A is divided into two logical ports on the TOR-facing side.
Dividing each physical port of local NVD 926 provides flexibility represented by two logical ports, two MAC addresses, and two IP addresses for each physical port of NVD 926. For example, in fig. 9, the overlay IP address and overlay MAC address are represented by underlined symbols (e.g., B1, M1), while the base IP and MAC addresses are represented by non-underlined symbols (e.g., A0, M0). As shown in fig. 9, the first physical port 927A of the local NVD 926 is associated with a first IP address (A1), a second IP address (A3), a first MAC address (M1), and a second MAC address (M3). The second physical port 927B of the local NVD 926 is associated with a first overlay IP address (B1), a second overlay IP address (C1), a first overlay MAC address (M4), and a second overlay MAC address (M6).
It can be appreciated that the limit on the number of logical ports that can be obtained by dividing a physical port (e.g., port 927A) of NVD 926 depends on the width of the serializer/deserializer (SerDes) components included in the NVD chipset. In one example, each physical port of NVD 926 may be split into four logical ports. It can be appreciated that a greater number of logical ports can be obtained for each physical port of NVD 926 by utilizing a gearbox component in the NVD. The data center 905 (i.e., the DRCC) brings the full capabilities of the public cloud to the customer. In particular, the DRCC hosts applications and data that require strict data retention, control, and security, and provides a means for retaining data in specific locations to achieve low-latency connectivity and data-intensive processing. Thus, customers may utilize all cloud services running directly in their own data center rather than in cloud regions that are hundreds or thousands of miles away. Accordingly, a smaller-footprint DRCC (described below with reference to fig. 14) provides an organization with an opportunity to run workloads outside of the public cloud.
Turning to fig. 10, a flow diagram illustrating a process of providing DRCC is depicted in accordance with some embodiments. The processes depicted in fig. 10 may be implemented in software (e.g., code, instructions, programs), hardware, or a combination thereof, executed by one or more processing units (e.g., processors, cores) of the respective systems. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in fig. 10 and described below is intended to be illustrative and non-limiting. Although fig. 10 depicts various process steps occurring in a particular order or sequence, this is not intended to be limiting. In some alternative embodiments, the steps may be performed in a different order, or some steps may be performed in parallel. In some implementations, the method illustrated in fig. 10 can be performed by a cloud service provider to provide DRCC to a customer.
The method begins at step 1001, where a first physical port of a Network Virtualization Device (NVD) included in a data center is communicatively coupled to a first top of rack (TOR) switch and a second TOR switch. Note that the first and second TOR switches may be included in a chassis (as shown in fig. 6 and 7). In step 1003, a second physical port of the NVD is communicatively coupled with a Network Interface Card (NIC) associated with a host machine included in the data center. The second physical port provides a first logical port and a second logical port for communication between the NVD and the NIC.
Processing then moves to step 1005, where the NVD receives the packet from the host machine via the first logical port or the second logical port. In step 1007, the NVD determines a particular TOR for transmitting the packet from the group consisting of the first TOR and the second TOR. According to some embodiments, the NVD may perform a packet forwarding mechanism, such as equal cost multi-path routing (ECMP) flow hashing, to select one of the two TORs. In addition, in step 1009, the NVD transmits the packet to the particular TOR in order to facilitate transmission of the packet to the destination host machine (e.g., a host machine behind another NVD in the same rack, a host machine behind another NVD in another rack, or a host machine outside of the data center, such as one included in the customer's in-house deployment network).
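As an illustration of the TOR selection in step 1007, the sketch below hashes a flow's 5-tuple and maps the result onto the two TORs. This is a minimal, hypothetical example of ECMP-style flow hashing; the actual hash function and inputs used by an NVD may differ.

```python
# Minimal ECMP-style flow hashing sketch (illustrative, not the NVD's implementation).
import hashlib

TORS = ["TOR#1", "TOR#2"]  # the first and second TOR switches

def select_tor(src_ip: str, dst_ip: str, proto: int, src_port: int, dst_port: int) -> str:
    """Map a flow's 5-tuple to one of the available TORs."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(TORS)
    return TORS[index]

if __name__ == "__main__":
    # All packets of this flow consistently map to the same TOR; other flows may
    # map to the other TOR, statistically balancing traffic across both uplinks.
    print(select_tor("10.0.0.2", "10.0.1.100", 6, 49152, 443))
```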
A detailed description of (i) transmitting packets from a computing host (e.g., computing host 928) included in data center 905 to remote host 911 with reference to fig. 11A, and (ii) transmitting packets from a remote host to a computing host in a data center, i.e., a return path, with reference to fig. 11B, is provided below. As previously described, the overlay IP address and overlay MAC address are represented by underlined symbols (e.g., B1, M1), while the base IP and MAC addresses are represented by non-underlined symbols (e.g., A0, M0). For ease of illustration, we consider the case where a virtual machine (e.g., VM 1) included in computing host 928 sends a packet to remote host 911. The steps included in the transmission of the packet are described below with reference to fig. 11A.
FIG. 11A depicts a flowchart illustrating steps performed in transmitting packets from a computing host included in a data center to a remote host. For ease of illustration, the description provided with reference to fig. 11A and 11B relates to the host in DRCC transmitting/receiving packets from another host in DRCC. It will be appreciated that the features described herein are equally applicable to other situations of remote hosts, such as remote host machines included in a customer's in-house deployment network. The processes depicted in fig. 11A may be implemented in software (e.g., code, instructions, programs), hardware, or a combination thereof, executed by one or more processing units (e.g., processors, cores) of the respective systems. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in fig. 11A and described below is intended to be illustrative and non-limiting. While FIG. 11A depicts various process steps occurring in a particular order or sequence, this is not intended to be limiting. In some alternative embodiments, the steps may be performed in a different order, or some steps may be performed in parallel.
In step 1101, VM1, having overlay IP address B2, transmits a packet intended for remote host 911, which has overlay IP address B100. The packet is received by NVD 926 at addresses B1, M4 (i.e., the overlay IP and MAC address of one of the NVD's logical ports). Note that VM1 transmits the packet to the host NIC via logical interface PF1. Further, the host NIC utilizes one of the two logical ports of NVD 926 to transmit the packet. According to some embodiments, note that each virtual machine of the computing host is associated with a logical interface on the host NIC. These interfaces are referred to herein as virtual functions, i.e., "VFs," such that each VF is associated with a VLAN-ID. Note that packets transmitted by the VM may use either or both of the logical ports of the NVD. Whether packets use one port, the other port, or both depends on the configuration of the NVD (e.g., whether the NVD is configured for active/active or active/backup operation) and the state of the NVD's interfaces (e.g., whether both interfaces are up and running, or whether one interface is down or in backup mode).
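The dependence of port usage on NVD configuration and interface state can be sketched as follows. This is a hypothetical illustration, under assumptions not stated in the text; the mode names and port labels are placeholders rather than an actual NVD API.

```python
# Hypothetical sketch of which NVD logical ports a VM's traffic may use,
# depending on the NVD's configured mode and the state of its interfaces.
from enum import Enum

class Mode(Enum):
    ACTIVE_ACTIVE = "active/active"
    ACTIVE_BACKUP = "active/backup"

def usable_ports(mode: Mode, port_up: dict) -> list:
    """Return the logical ports a VM's traffic may use."""
    up = [p for p, ok in port_up.items() if ok]
    if not up:
        return []                      # no interface is up and running
    if mode is Mode.ACTIVE_ACTIVE:
        return up                      # flows may be spread over all running ports
    # active/backup: the designated active port carries traffic while it is up
    active, backup = "logical-0", "logical-1"
    return [active] if port_up.get(active) else [backup]

if __name__ == "__main__":
    state = {"logical-0": True, "logical-1": True}
    print(usable_ports(Mode.ACTIVE_ACTIVE, state))   # both ports usable
    state["logical-0"] = False
    print(usable_ports(Mode.ACTIVE_BACKUP, state))   # falls back to the backup port
```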
In step 1103, NVD 926 performs a lookup operation in the VCN forwarding table after receiving the packet. Specifically, NVD 926 obtains information of the base IP address of the NVD that serves the remote host, i.e., NVD 926 obtains information of base IP address A100 of remote NVD 913 that serves the remote host.
In step 1105, the NVD modifies the header of the packet. Specifically, NVD 926 encapsulates the packet (e.g., with a VCN header) that identifies the base IP address (A100) of the remote NVD 913 as the intended destination of the packet and its own software interface IP address A254 as the source of the packet. Thereafter, NVD 926 attempts to transmit the packet to the remote host.
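Steps 1103 and 1105 can be illustrated together: a forwarding-table lookup maps the destination overlay IP to the base IP of the serving NVD, and the packet is then wrapped in an outer header. The sketch below is a minimal, assumed representation; the forwarding-table layout and header fields are illustrative only and do not reflect the actual VCN wire format.

```python
# Illustrative sketch of steps 1103 (lookup) and 1105 (encapsulation); hypothetical data layout.
VCN_FORWARDING_TABLE = {
    "B100": "A100",   # overlay IP of remote host 911 -> base IP of remote NVD 913
}

NVD_SOFTWARE_INTERFACE_IP = "A254"  # software interface / loopback IP of NVD 926

def encapsulate(inner_packet: dict) -> dict:
    """Wrap the overlay packet in an outer header addressed to the serving NVD."""
    remote_nvd_ip = VCN_FORWARDING_TABLE[inner_packet["dst_overlay_ip"]]  # step 1103
    return {                                                              # step 1105
        "outer_src_ip": NVD_SOFTWARE_INTERFACE_IP,
        "outer_dst_ip": remote_nvd_ip,
        "payload": inner_packet,
    }

if __name__ == "__main__":
    pkt = {"src_overlay_ip": "B2", "dst_overlay_ip": "B100", "data": b"hello"}
    print(encapsulate(pkt))
```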
In step 1107, NVD 926 determines how to forward the packet to remote NVD 913. According to one embodiment, NVD 926 recognizes that it has two routes to send packets to the remote host, i.e., one route through TOR#1 922 and another route through TOR#2 924. Specifically, NVD 926 learns both routes via BGP communications with the two TORs. NVD 926 performs equal cost multi-path (ECMP) flow hashing to select one of the two TORs to forward the packet to remote NVD 913. In step 1109, the packet passes through the selected TOR (e.g., TOR #1) and traverses the network fabric to ultimately reach the remote TOR serving the remote host machine. In addition, in steps 1111 and 1113, the packet passes through the remote TOR 915 to reach the remote NVD 913. The remote NVD 913 decapsulates the packet to retrieve the destination address included in the packet (i.e., B100) and ultimately provides the packet to the remote host 911.
Turning now to fig. 11B, a flowchart illustrating steps performed when transmitting packets from a remote host (e.g., remote host 911 in fig. 9) to a computing host (e.g., host 928 in fig. 9) is depicted. The processes depicted in fig. 11B may be implemented in software (e.g., code, instructions, programs), hardware, or a combination thereof, executed by one or more processing units (e.g., processors, cores) of the respective systems. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in fig. 11B and described below is intended to be illustrative and non-limiting. While FIG. 11B depicts various process steps occurring in a particular order or sequence, this is not intended to be limiting. In some alternative embodiments, the steps may be performed in a different order, or some steps may be performed in parallel.
The process begins at step 1151, where a VM in remote host 911 having an overlay IP address B100 sends a packet destined for VM1 (having an overlay IP address B2) to NVD 913. It will be appreciated that in the description of FIG. 11B, it is assumed that both the remote host machine and the local host machine are included in DRCC. Note, however, that if the remote host machine is located outside DRCC, packets transmitted by the remote host machine "enter" DRCC using means such as FastConnect (or IPSec VPN) and then reach the NVD of the host machine included in DRCC using the DRG. In step 1153, NVD 913 performs a lookup operation in the VCN forwarding table after receiving the packet. Specifically, NVD 913 determines that VM1 having overlay address B2 is serviced by NVD having loopback IP address A254.
In step 1155, NVD 913 encapsulates the packet (e.g., with a VCN header) that identifies the loopback IP address (A254) of NVD 926 as the intended destination of the packet and its own base IP address (A100) as the source of the packet. Thereafter, in step 1157, NVD 913 forwards the packet to a remote TOR, which further passes the packet to network fabric 920.
In step 1159, network fabric 920 performs a hashing operation (e.g., a modulo 2 operation) after receiving the packet to select one of the routes (via TOR #1 or TOR # 2) to forward the packet to NVD 926. It is appreciated that the hashing operation performed by the switch (e.g., in the network fabric) may be different from the hashing operation performed by the NVD.
In step 1161, NVD 926 receives the packet. In step 1163, NVD 926 may perform another hash operation to select one of its two logical ports (e.g., the ports with overlay IP addresses B1 and C1) to forward the packet to VM1, VM1 being the intended destination of the packet. It will be appreciated that, as previously described, packets may arrive at any port of the NVD. For example, if the NVD is configured in an active/active mode of operation, in one embodiment, approximately half of the flows will arrive at each port. But if the NVD is configured in an active/backup mode of operation, then all flows will arrive at the active port (until the active port fails). In step 1165, the NVD forwards the packet to the VM on the selected logical interface of the NVD.
According to some embodiments, traffic emanating from the local host machine and destined for the remote host machine (and vice versa) is statistically load balanced (e.g., via utilization of ECMP routing) across dual TORs 922 and 924. An advantage of using dual TORs in the DRCC architecture is failover, i.e., traffic from a failed TOR can be switched to the other (normally running) TOR. Furthermore, the architecture of fig. 9 also allows for a reduction in hardware usage, i.e., using a single local NVD (e.g., smartNIC) instead of multiple smartNICs. Further, note that the above-described mechanism utilizing dual TORs is in no way limited to use only in an on-premise data center as described above. Rather, the concept of utilizing dual TORs as described herein is equally applicable to any deployment, i.e., a commercial cloud region, a DRCC, or any other type of network deployment.
Turning now to fig. 12, another exemplary architecture 1200 of a DRCC framework is depicted that brings the full capabilities of a public cloud to a customer. In this way, customers can reduce infrastructure and operating costs, upgrade legacy applications on modern cloud services, and meet the most stringent regulatory, data residence, and latency requirements. Fig. 12 depicts a data center 1205 including a host machine 1228 coupled to a remote host machine 1211 via a network fabric 1220. It will be appreciated that the remote host machine may be "any" host machine, such as (i) another host within the DRCC and behind another NVD, or (ii) another host in another DRCC and behind another NVD, or (iii) a host included in the customer's in-house deployment network. Note that a host included in the in-house deployment network may connect to the DRCC via FastConnect (or IPSec VPN) and reach host 1228 using a Dynamic Routing Gateway (DRG). For ease of illustration, in the following description, it is assumed that a remote host (e.g., host 1211) is one that is included in the DRCC, that is located behind another NVD (e.g., NVD 1213), and that is serviced by remote TOR 1215. It is noted that the features described below are equally applicable to the other cases of the remote host outlined above.
Data center 1205 includes a pair of TORs (i.e., TOR #1 1222 and TOR #2 1224), a network virtualization platform (e.g., NVD 1226), and a computing host machine 1228 (i.e., a host machine referred to herein as a local host machine). NVD 1226 is referred to herein as a local NVD. The computing host machine 1228 includes a host machine network interface card (i.e., a host NIC) and a plurality of Virtual Machines (VMs). For ease of illustration, FIG. 12 depicts computing host 1228 as including two virtual machines, VM1 and VM2, respectively. Note that each of the VMs is communicatively coupled to the host NIC via one of the logical interfaces (e.g., the logical interfaces depicted as PF1 and PF2, respectively). Further, note that local NVD 1226 may be located on the same chassis as the host NIC included in computing instance 1228.
According to some embodiments, local NVD 1226 has multiple physical ports. For example, in one implementation as shown in fig. 12, local NVD 1226 has two physical ports: a first physical port 1227A (referred to herein as the TOR-facing port) connected to both TOR 1222 and TOR 1224, and a second physical port 1227B (referred to herein as the host-facing port) connected to computing host 1228. Physical port 1227A of local NVD 1226 may be divided into multiple logical ports. For example, as shown in fig. 12, physical port 1227A is divided into two logical ports on the TOR-facing side, i.e., each logical port is connected to a respective TOR included in the data center. Physical port 1227B connects to the host NIC. Thus, in contrast to the DRCC embodiment of fig. 9, the DRCC embodiment of fig. 12 includes a single connection from NVD 1226 to the host NIC included in computing instance 1228. Thus, the host NIC is associated with a single overlay IP address and MAC address pair (i.e., B1 and M4). This overlay IP address of the host NIC may be reached via two base paths (i.e., via different TORs, TOR #1 and TOR #2), thereby providing TOR redundancy.
Fig. 13 depicts a flowchart that illustrates another process of providing DRCC, according to some embodiments. The processes depicted in fig. 13 may be implemented in software (e.g., code, instructions, programs), hardware, or a combination thereof, executed by one or more processing units (e.g., processors, cores) of the respective systems. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in fig. 13 and described below is intended to be illustrative and non-limiting. Although fig. 13 depicts various process steps occurring in a particular order or sequence, this is not intended to be limiting. In some alternative embodiments, the steps may be performed in a different order, or some steps may be performed in parallel. In some implementations, the method shown in fig. 13 can be performed by a cloud service provider to provide DRCC to a customer.
The process begins at step 1301, where a first physical port of a Network Virtualization Device (NVD) included in a data center is communicatively coupled to a first top of rack (TOR) switch and a second TOR switch. Note that the first and second TOR switches may be included in a chassis (as shown in fig. 6 and 7). In step 1303, a second physical port of the NVD is communicatively coupled with a Network Interface Card (NIC) associated with a host machine included in the data center.
The process then moves to step 1305 where the NVD receives a packet from the host machine via the second physical port. In step 1307, the NVD determines a particular TOR for transmitting the packet from the group consisting of the first TOR and the second TOR. According to some embodiments, the NVD may perform equal cost multi-path routing (ECMP) stream hashing to select one of the two TORs. In addition, in step 1309, the NVD transmits the packet to the particular TOR to facilitate the transfer of the packet to a remote host machine, e.g., remote host 1211 of FIG. 12.
The customer-premises private regional cloud (DRCC) corresponds to infrastructure that the cloud service provider provides in the customer's own data center. The DRCC framework brings the full capabilities of the public cloud to an on-premise deployment so that enterprises can reduce infrastructure and operating costs, upgrade legacy applications on modern cloud services, and meet the most stringent regulatory, data residence, and latency requirements, all of which are implemented by the CSP's infrastructure. Accordingly, it is desirable to design a network architecture that occupies little space and enables cloud services to be provided at customer-selected on-premise locations. A naive solution for enabling the DRCC framework is to reuse the network design used in a commercial cloud region. However, a disadvantage of this approach is that the customer typically cannot meet the power and space requirements of such a network, and the customer may not be able to utilize such a network architecture at full scale. Thus, a novel network architecture that occupies a reduced space (e.g., fewer devices, racks, etc.) is required to provide cloud services to customers at their selected locations.
Fig. 14 depicts an exemplary network architecture of DRCC, according to some embodiments. The network architecture 1400 for DRCC includes a combination of computing fabric blocks (referred to herein as CFAB, 1420A-1420B) and a network fabric block (referred to herein as NFAB, 1415). NFAB 1415 is communicatively coupled to each of the CFAB blocks (1420A-1420B) via a plurality of switch blocks (1405, 1410). The plurality of switch blocks are also referred to herein as tier 3 (T3) switches. Each of the plurality of switch blocks includes a predetermined number of switches, for example, four switches. For example, switch block 1405 includes four switches labeled 1405A-1405D, and switch block 1410 includes four switches labeled 1410A-1410D.
According to some embodiments, a computing fabric block (e.g., CFAB block 1420A) is communicatively coupled to the plurality of switch blocks 1405, 1410. The computing fabric block 1420A includes a set of one or more racks (e.g., Exadata racks 1424 or compute racks 1423). Each rack in the set of one or more racks includes one or more servers configured to execute one or more workloads of the customer. It is appreciated that each Exadata rack 1424 can be associated with a clustered network of virtual machines 1425. CFAB block 1420A also includes a first plurality of switches 1422 organized into a first plurality of levels (e.g., levels labeled CFAB T1 and CFAB T2). The first plurality of switches 1422 communicatively couples the set of one or more racks (e.g., racks 1423, 1424) to the plurality of switch blocks (1405, 1410). Specifically, the first plurality of levels in computing fabric block 1420A associated with the first plurality of switches 1422 includes (i) a first tier 1 level switch (i.e., CFAB T1), and (ii) a first tier 2 level switch (i.e., CFAB T2).
The first tier 1 level switch is communicatively coupled to the set of one or more racks at a first end and is communicatively coupled to the first tier 2 level switch at a second end. Further, the first tier 2 level switch (i.e., CFAB T2) connects the first tier 1 level switch (i.e., CFAB T1) to the plurality of switch blocks 1405, 1410. In one embodiment, the first hierarchical 1-level switch (CFAB T1) in the computing fabric block comprises eight switches, and the first hierarchical 2-level switch (CFAB T2) in the computing fabric block comprises four switches. Each switch in the first hierarchical level 1 switch in the computing fabric block is connected to each switch in the first hierarchical level 2 switch in the computing fabric block. Further, each switch in the first hierarchical level 2 switch (CFAB T2) in the computing fabric block is connected to at least one switch in each of the plurality of switch blocks 1405, 1410.
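The CFAB connectivity just described (a full mesh between the tier 1 and tier 2 switches, plus links from each tier 2 switch to every T3 switch block) can be sketched as follows. This is an illustrative example only; the switch labels and the one-link-per-block choice are assumptions, not requirements of the architecture.

```python
# Illustrative construction of the CFAB T1/T2/T3 connectivity described above.
from itertools import product

cfab_t1 = [f"cfab-t1-r{i}" for i in range(1, 9)]   # eight tier 1 switches (CFAB T1)
cfab_t2 = [f"cfab-t2-r{i}" for i in range(1, 5)]   # four tier 2 switches (CFAB T2)
t3_blocks = {"block-1405": ["1405A", "1405B", "1405C", "1405D"],
             "block-1410": ["1410A", "1410B", "1410C", "1410D"]}

# Full mesh between CFAB T1 and CFAB T2: every T1 switch links to every T2 switch.
t1_t2_links = list(product(cfab_t1, cfab_t2))

# Each CFAB T2 switch connects to at least one switch in every T3 block; here one
# link per block is chosen round-robin (an assumed policy for illustration).
t2_t3_links = []
for i, t2 in enumerate(cfab_t2):
    for block, switches in t3_blocks.items():
        t2_t3_links.append((t2, switches[i % len(switches)]))

if __name__ == "__main__":
    print(len(t1_t2_links), "T1<->T2 links")   # 8 * 4 = 32
    print(len(t2_t3_links), "T2<->T3 links")   # 4 * 2 = 8
```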
According to some embodiments, NFAB block 1415 is communicatively coupled to a plurality of switch blocks 1405, 1410. The network fabric block 1415 includes (i) one or more edge devices 1420, and (ii) a second plurality of switches 1418 organized into a second plurality of levels. The one or more edge devices 1420 include a first edge device that provides connectivity to a first external resource. For example, the first external resource may be a public communication network (e.g., the internet), and the first edge device may be a gateway providing connectivity to the public communication network. The one or more edge devices 1420 may include a gateway, a backbone edge device, a metro edge device, and a routing reflector. Thus, the first edge device (e.g., gateway) enables a workload performed by a server included in a rack in the set of one or more racks included in CFAB block 1420A to access a first external resource (e.g., the internet).
A second plurality of switches 1418 organized into a second plurality of levels (labeled NFAB T1 and NFAB T2) communicatively couples the one or more edge devices 1420 to the plurality of switch blocks 1405, 1410. The connections between the plurality of switch blocks and the second plurality of switches 1418 are depicted in fig. 14 as logical construct 1416 (NFAB T1 stripes). The detailed connections of the logical construct 1416 will be described later with reference to fig. 15. According to some embodiments, the second plurality of levels associated with the second plurality of switches 1418 in the network fabric block 1415 includes (i) a second tier 1 level switch (i.e., NFAB T1), and (ii) a second tier 2 level switch (i.e., NFAB T2). Each of the second tier 2 level switches is communicatively coupled to each of the switches included in the second tier 1 level switch (i.e., NFAB T1). In one embodiment, the second hierarchical level 1 switch in the network fabric block comprises eight switches and the second hierarchical level 2 switch in the network fabric block comprises four switches.
According to some embodiments, initial deployment of DRCC network architecture includes deploying NFAB blocks (1415), one CFAB block (1420A), and interconnecting NFAB to the T3 switch layer of the CFAB (i.e., the plurality of switch blocks 1405, 1410). It will be appreciated that additional CFAB blocks (and additional T3 layer switch blocks) may be deployed on-the-fly (i.e., in real-time) based on customer demand. It will be appreciated that the number of switches (e.g., eight switches) described above with reference to the first hierarchical level 1 switch or the second hierarchical level 1 switch in the network structure is for illustrative purposes only. The number of switches included in the first hierarchical level 1 or the second hierarchical level 1 may be any other number of switches, such as four or sixteen or a variable number of switches. In a similar manner, the second hierarchical level 2 switch in the network fabric block as described above includes four switches. It will be appreciated that this is for illustration purposes only and that the actual number of switches in this level may be a variable number of switches, for example, half the number of switches in level-1.
Fig. 15 illustrates connections between the NFAB block and the plurality of switch blocks, as well as connections within the NFAB block (i.e., between the second plurality of switches organized into the second plurality of levels in the NFAB). The second plurality of levels associated with the second plurality of switches in the NFAB includes (i) a second tier 1 level switch (i.e., NFAB tier 1, 1505) and (ii) a second tier 2 level switch (i.e., NFAB tier 2, 1510). In one example, the second hierarchical level 1 switch in the NFAB includes eight switches (labeled t1-r1 through t1-r8 in fig. 15), while the second hierarchical level 2 switch in the network fabric block includes four switches (labeled t2-r1 through t2-r4 in fig. 15).
As shown in fig. 15, a first subset of the switches included in the second hierarchical level 1 switch (i.e., NFAB tier 1) is communicatively coupled to the one or more edge devices at a first end. For example, as shown in fig. 15, switches t1-r1, t1-r2, t1-r3, and t1-r4 are coupled to edge devices such as routing reflector 1520A and VPN gateway 1520. In addition, a second subset of the switches included in the second hierarchical level 1 switch (e.g., switches t1-r5, t1-r6, t1-r7, and t1-r8) is communicatively coupled at its first end to the plurality of switch blocks (labeled CFAB tier 3 in fig. 15). Note that the second subset of switches may also be coupled with WDM metropolitan switches (i.e., switches for interconnecting racks located in different buildings). The first subset and the second subset of switches included in the second hierarchical level 1 switch are coupled at a second end to the second hierarchical level 2 switch included in the network fabric block.
Specifically, to provide connectivity between the switches of the T1 layer 1505, T2 layer switches (e.g., four switches) 1510 are employed in the NFAB. As shown, each of the four switches included in the T2 layer of the NFAB is connected to each T1 layer switch. In this way, the service enclaves included in the different CFAB blocks are communicatively coupled to the edge devices via the NFAB fabric. Thus, a workload performed by a server included in a rack of the computing fabric block accesses a first external resource (e.g., the internet) by establishing a connection with a first switch of the plurality of switch blocks. Referring to fig. 15, this connection is further routed (i) from the first switch to a second switch included in the second subset of switches included in the second hierarchical level 1 switch (e.g., switches t1-r5, t1-r6, t1-r7, and t1-r8), (ii) from the second switch to a third switch included in the second hierarchical level 2 switch (e.g., one of switches t2-r1, t2-r2, t2-r3, and t2-r4), (iii) from the third switch to a fourth switch included in the first subset of switches in the second hierarchical level 1 switch (e.g., switches t1-r1, t1-r2, t1-r3, and t1-r4), and (iv) from the fourth switch to the gateway. It will be appreciated that the CFAB and NFAB blocks described above with respect to fig. 14 and 15 support 400G connections and operate at a power budget of 100 kW. The architecture includes a total of three network chassis and an optional fourth chassis to support backbone and metropolitan area connections.
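The four routed hops enumerated above can be made concrete with a short sketch. The hop labels below are hypothetical placeholders chosen to mirror fig. 15; the function simply lists one possible path from a T3 switch block to an edge gateway through the NFAB tiers.

```python
# Illustrative path from a T3 switch block to an edge gateway via the NFAB tiers.
def path_to_gateway(t3_switch: str) -> list:
    """Return the sequence of hops from a T3 switch to the gateway."""
    second_subset_t1 = "t1-r5"   # NFAB tier 1 switch facing the CFAB T3 layer
    tier2 = "t2-r1"              # NFAB tier 2 switch providing T1-to-T1 connectivity
    first_subset_t1 = "t1-r1"    # NFAB tier 1 switch facing the edge devices
    gateway = "vpn-gateway-1520"
    return [t3_switch, second_subset_t1, tier2, first_subset_t1, gateway]

if __name__ == "__main__":
    print(" -> ".join(path_to_gateway("cfab-tier3-switch")))
```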
Fig. 16 illustrates an exemplary private backbone network for a customer region, according to some embodiments. Fig. 16 depicts several geographic locations (e.g., countries) in which a customer's DRCCs may be deployed. For example, fig. 16 depicts three geographic locations, namely geographic location A 1602, geographic location B 1604, and geographic location C 1606, in which the customer's DRCCs are deployed. Within each geographic location, it is assumed that the customer has deployed a DRCC in two regions. For example, geographic location A has two regions, region I 1602A and region II 1602B, geographic location B has two regions, region I 1604A and region II 1604B, and geographic location C has two regions, region I 1606A and region II 1606B, each of which has the customer's DRCC deployed.
Each region includes a pair of dedicated backbone routers, i.e., router 1 and router 2. The routers 1 of each region are connected in the order shown in fig. 16 to form a private backbone ring network. In a similar manner, the routers 2 of each region are connected in the order shown in fig. 16 to form another private backbone ring network. It will be appreciated that the pair of dedicated backbone ring networks is customer-specific, i.e., no other traffic is allowed on the backbone network. The backbone network may support 10G or 100G encrypted connections and tolerate a single link failure, i.e., interruption of a single backbone link included in the ring formed by the routers 1 or the ring formed by the routers 2 does not interfere with traffic on the backbone network. It will be appreciated that the ring topology depicted in fig. 16 is for illustration purposes only. The backbone network topology may be mesh, ring, or any other topology based on certain criteria, such as the number of DRCC regions desired by the customer and/or the latency and bandwidth requirements of the customer.
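The single-link-failure property noted above can be checked with a small connectivity test. The sketch below models one of the two rings (e.g., the ring formed by the routers 1 of the six regions) and verifies, by breadth-first search, that removing any one link still leaves every region reachable; the region labels are illustrative only.

```python
# Illustrative check that a ring of regional backbone routers survives any single link failure.
from collections import deque

def connected(nodes, links):
    """Return True if all nodes are reachable from the first node over the given links."""
    adj = {n: set() for n in nodes}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == len(nodes)

regions = ["A-I", "A-II", "B-I", "B-II", "C-I", "C-II"]
ring = [(regions[i], regions[(i + 1) % len(regions)]) for i in range(len(regions))]

if __name__ == "__main__":
    # Removing any single link degrades the ring to a line, which is still connected.
    print(all(connected(regions, [l for l in ring if l != failed]) for failed in ring))
```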
Turning to fig. 17, a flow diagram illustrating a process of constructing a network structure is depicted in accordance with some embodiments. The processes depicted in fig. 17 may be implemented in software (e.g., code, instructions, programs), hardware, or a combination thereof, executed by one or more processing units (e.g., processors, cores) of the respective systems. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in fig. 17 and described below is intended to be illustrative and non-limiting. Although fig. 17 depicts various process steps occurring in a particular order or sequence, this is not intended to be limiting. In some alternative embodiments, the steps may be performed in a different order, or some steps may be performed in parallel. In some embodiments, the method shown in fig. 17 may be performed by a cloud service provider to provide DRCC to a customer.
The process begins at step 1701, where a plurality of switch blocks are provided. These correspond to switch blocks 1405 and 1410 as shown in fig. 14. The process then moves to step 1703 where a first computation fabric block is provided that is communicatively coupled to the plurality of switch blocks. The first computing fabric block includes (i) a set of one or more racks, and (ii) a first plurality of switches organized into a first plurality of levels. Each rack in the set of one or more racks includes one or more servers configured to execute one or more workloads of the clients. The first plurality of switches communicatively couples the set of one or more racks to the plurality of switch blocks.
Thereafter, the process moves to step 1705, where a network fabric block is provided that is communicatively coupled to the plurality of switch blocks. The network fabric block includes (i) one or more edge devices, including a first edge device, and (ii) a second plurality of switches organized into a second plurality of levels. The first edge device provides connectivity to a first external resource. For example, the first external resource may be a public communication network (e.g., the internet), and the first edge device is a gateway providing connectivity to the public communication network. The first edge device enables a workload executed by a server included in a rack of the set of one or more racks to access the first external resource. The second plurality of switches communicatively couples the one or more edge devices to the plurality of switch blocks.
Example cloud infrastructure embodiment
As noted above, infrastructure as a service (IaaS) is a particular type of cloud computing. IaaS may be configured to provide virtualized computing resources over a public network (e.g., the internet). In the IaaS model, cloud computing providers may host infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), etc.). In some cases, the IaaS provider may also provide various services to accompany these infrastructure components (e.g., billing, monitoring, documentation, security, load balancing, clustering, etc.). Thus, as these services may be policy driven, IaaS users may be able to implement policies to drive load balancing to maintain availability and performance of applications.
In some cases, the IaaS client may access resources and services through a Wide Area Network (WAN), such as the internet, and may use the cloud provider's services to install the remaining elements of the application stack. For example, a user may log onto the IaaS platform to create Virtual Machines (VMs), install an Operating System (OS) on each VM, deploy middleware such as databases, create buckets for workloads and backups, and even install enterprise software into the VM. The customer may then use the provider's services to perform various functions including balancing network traffic, solving application problems, monitoring performance, managing disaster recovery, and the like.
In most cases, the cloud computing model will require participation of the cloud provider. The cloud provider may, but need not, be a third party service that specifically provides (e.g., provisions, rents, sells) IaaS. An entity may also choose to deploy a private cloud, thereby becoming a provider of its own infrastructure services.
In some examples, IaaS deployment is the process of placing a new application or a new version of an application onto a prepared application server or the like. It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is typically managed by the cloud provider, below the hypervisor layer (e.g., servers, storage, network hardware, and virtualization). Thus, the customer may be responsible for handling the operating system (OS), middleware, and/or application deployment (e.g., on self-service virtual machines that may be started on demand, etc.).
In some examples, IaaS provisioning may refer to obtaining computers or virtual hosts for use, and even installing the required libraries or services on them. In most cases, deployment does not include provisioning, and provisioning may need to be performed first.
In some cases, there are two different problems with IaaS provisioning. First, there is the initial challenge of provisioning the initial set of infrastructure before anything is running. Second, once everything has been provisioned, there is the challenge of evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.). In some cases, both of these challenges may be addressed by enabling the configuration of the infrastructure to be defined in a declarative manner. In other words, the infrastructure (e.g., which components are needed and how they interact) may be defined by one or more configuration files. Thus, the overall topology of the infrastructure (e.g., which resources depend on which resources, and how they work in concert) can be described in a declarative manner. In some cases, once the topology is defined, workflows may be generated that create and/or manage the different components described in the configuration file.
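As an illustration of this declarative approach, the sketch below describes a small set of infrastructure resources and their dependencies in a configuration structure, and derives a provisioning workflow by topological sorting. It is a minimal example with hypothetical resource names and is not tied to any particular provisioning tool (it requires Python 3.9+ for graphlib).

```python
# Minimal declarative-infrastructure sketch: a config maps each resource to its
# dependencies, and a provisioning workflow is derived from that topology.
from graphlib import TopologicalSorter

# Declarative description: resource -> resources it depends on (hypothetical names).
infrastructure = {
    "vcn": [],
    "subnet-app": ["vcn"],
    "subnet-db": ["vcn"],
    "load-balancer": ["subnet-app"],
    "database": ["subnet-db"],
    "app-server": ["subnet-app", "database"],
}

def provisioning_workflow(config: dict) -> list:
    """Order resources so that each is created only after its dependencies."""
    return list(TopologicalSorter(config).static_order())

if __name__ == "__main__":
    print(provisioning_workflow(infrastructure))
    # e.g. ['vcn', 'subnet-app', 'subnet-db', 'load-balancer', 'database', 'app-server']
```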
In some examples, an infrastructure may have many interconnected elements. For example, there may be one or more Virtual Private Clouds (VPCs) (e.g., potential on-demand pools of configurable and/or shared computing resources), also referred to as core networks. In some examples, one or more security group rules may also be supplied to define how to set security of the network and one or more Virtual Machines (VMs). Other infrastructure elements, such as load balancers, databases, etc., may also be supplied. As more and more infrastructure elements are desired and/or added, the infrastructure may evolve gradually.
In some cases, continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Furthermore, the described techniques may enable infrastructure management within these environments. In some examples, a service team may write code that is desired to be deployed to one or more, but typically many, different production environments (e.g., across various different geographic locations, sometimes across the entire world). In some examples, however, the infrastructure on which the code is to be deployed must first be set up. In some cases, provisioning may be done manually, resources may be provisioned with a provisioning tool, and/or code may be deployed with a deployment tool once the infrastructure is provisioned.
Fig. 18 is a block diagram 1800 illustrating an example mode of the IaaS architecture in accordance with at least one embodiment. Service operator 1802 may be communicatively coupled to a secure host lease 1804 that may include a Virtual Cloud Network (VCN) 1806 and a secure host subnet 1808. In some examples, the service operator 1802 may use one or more client computing devices, which may be portable handheld devices (e.g., a cellular telephone, a computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a head mounted display), running software such as Microsoft Windows and/or various mobile operating systems (such as iOS, Windows Phone, Android, BlackBerry, Palm OS, etc.) and supporting the internet, email, short message service (SMS), or other communication protocols. Alternatively, the client computing device may be a general purpose personal computer, such as a personal computer and/or laptop computer running various versions of the Microsoft Windows, Apple, and/or Linux operating systems. The client computing device may be a workstation computer running any of a variety of commercially available UNIX or UNIX-like operating systems, including without limitation the variety of GNU/Linux operating systems such as, for example, Google Chrome OS. Alternatively or additionally, the client computing device may be any other electronic device, such as a thin client computer, an internet-enabled gaming system (e.g., a Microsoft Xbox game console, with or without a gesture input device), and/or a personal messaging device capable of communicating over a network that can access the VCN 1806 and/or the internet.
The VCN 1806 may include a local peer-to-peer gateway (LPG) 1810 that may be communicatively coupled to a Secure Shell (SSH) VCN 1812 via the LPG 1810 contained in the SSH VCN 1812. SSH VCN 1812 may include SSH subnetwork 1814, and SSH VCN 1812 may be communicatively coupled to control plane VCN 1816 via LPG 1810 contained in control plane VCN 1816. Also, SSH VCN 1812 may be communicatively coupled to data plane VCN 1818 via LPG 1810. Control plane VCN 1816 and data plane VCN 1818 may be included in a service lease 1819 that may be owned and/or operated by the IaaS provider.
The control plane VCN 1816 may include a control plane demilitarized zone (DMZ) layer 1820 that acts as a peripheral network (e.g., part of a corporate network between a corporate intranet and an external network). DMZ-based servers can assume limited responsibility and help control security vulnerabilities. Further, the DMZ layer 1820 can include Load Balancer (LB) subnet(s) 1822, and the control plane VCN 1816 can further include a control plane application layer 1824 that can include application subnet(s) 1826 and a control plane data layer 1828 that can include Database (DB) subnet(s) 1830 (e.g., front end DB subnet(s) and/or back end DB subnet(s)). The LB subnet(s) 1822 included in the control plane DMZ layer 1820 may be communicatively coupled to the application subnet(s) 1826 included in the control plane application layer 1824 and the internet gateway 1834 that may be included in the control plane VCN 1816, and the application subnet(s) 1826 may be communicatively coupled to the DB subnet(s) 1830 included in the control plane data layer 1828 and to the serving gateway 1836 and the Network Address Translation (NAT) gateway 1838. Control plane VCN 1816 may include the serving gateway 1836 and the NAT gateway 1838.
The control plane VCN 1816 may include a data plane mirror application layer 1840, which may include application subnet(s) 1826. The application subnet(s) 1826 included in the data plane mirror application layer 1840 can include Virtual Network Interface Controllers (VNICs) 1842 that can execute the computing instance 1844. The compute instance 1844 may communicatively couple the application subnet(s) 1826 of the data plane mirror application layer 1840 to the application subnet(s) 1826 that may be included in the data plane application layer 1846.
The data plane VCN 1818 may include a data plane application layer 1846, a data plane DMZ layer 1848, and a data plane data layer 1850. The data plane DMZ layer 1848 may include LB subnet(s) 1822 that may be communicatively coupled to the application subnet(s) 1826 of the data plane application layer 1846 and the internet gateway 1834 of the data plane VCN 1818. Application subnet(s) 1826 can be communicatively coupled to serving gateway 1836 of data plane VCN 1818 and NAT gateway 1838 of data plane VCN 1818. Data plane data layer 1850 may also include DB subnet(s) 1830 that may be communicatively coupled to application subnet(s) 1826 of data plane application layer 1846.
The internet gateway 1834 of the control plane VCN 1816 and the data plane VCN 1818 may be communicatively coupled to a metadata management service 1852, which may be communicatively coupled to the public internet 1854. Public internet 1854 may be communicatively coupled to NAT gateway 1838 of control plane VCN 1816 and data plane VCN 1818. The service gateway 1836 of the control plane VCN 1816 and the data plane VCN 1818 may be communicatively coupled to the cloud service 1856.
In some examples, the service gateway 1836 of the control plane VCN 1816 or the data plane VCN 1818 may make Application Programming Interface (API) calls to the cloud service 1856 without going through the public internet 1854. The API call from the service gateway 1836 to the cloud service 1856 may be unidirectional in that the service gateway 1836 may make an API call to the cloud service 1856 and the cloud service 1856 may send the requested data to the service gateway 1836. Cloud service 1856 may not initiate an API call to service gateway 1836.
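The one-way calling pattern just described can be illustrated with a short sketch: the gateway initiates calls and receives the responses on the same calls, while the cloud service never initiates a call toward the gateway. The classes below are hypothetical stand-ins, not an actual SDK.

```python
# Illustrative one-way (gateway-initiated) API call pattern; hypothetical classes.
class CloudService:
    def handle(self, request: str) -> str:
        # Responds only to requests it receives; it never calls the gateway.
        return f"response-to:{request}"

class ServiceGateway:
    def __init__(self, service: CloudService):
        self._service = service

    def call(self, request: str) -> str:
        # Outbound call from the gateway; the reply rides back on the same call,
        # so no inbound path from the service toward the gateway is opened.
        return self._service.handle(request)

if __name__ == "__main__":
    gateway = ServiceGateway(CloudService())
    print(gateway.call("list-objects"))
```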
In some examples, secure host lease 1804 may be directly connected to service lease 1819, which may otherwise be quarantined. The secure host subnetwork 1808 may communicate with the SSH subnetwork 1814 through the LPG 1810, which may enable bi-directional communication over otherwise isolated systems. Connecting the secure host subnet 1808 to the SSH subnet 1814 may enable the secure host subnet 1808 to access other entities within the service lease 1819.
Control plane VCN 1816 may allow a user of service lease 1819 to set or otherwise provision desired resources. The desired resources provisioned in the control plane VCN 1816 may be deployed or otherwise used in the data plane VCN 1818. In some examples, control plane VCN 1816 may be isolated from data plane VCN 1818, and data plane mirror application layer 1840 of control plane VCN 1816 may communicate with data plane application layer 1846 of data plane VCN 1818 via VNIC 1842, which may be contained in data plane mirror application layer 1840 and data plane application layer 1846.
In some examples, a user or customer of the system may make a request, such as a create, read, update, or delete (CRUD) operation, through the public internet 1854, which may communicate the request to the metadata management service 1852. The metadata management service 1852 may communicate the request to the control plane VCN 1816 through the internet gateway 1834. The request may be received by the LB subnet(s) 1822 contained in the control plane DMZ layer 1820. The LB subnet(s) 1822 may determine that the request is valid and, in response to the determination, the LB subnet(s) 1822 may transmit the request to the application subnet(s) 1826 contained in the control plane application layer 1824. If the request is validated and a call to the public internet 1854 is required, the call to the public internet 1854 may be transmitted to the NAT gateway 1838, which may make the call to the public internet 1854. Data that the request desires to have stored may be stored in the DB subnet(s) 1830.
In some examples, the data plane mirror application layer 1840 may facilitate direct communication between the control plane VCN 1816 and the data plane VCN 1818. For example, it may be desirable to apply changes, updates, or other suitable modifications of configuration to the resources contained in the data plane VCN 1818. Via VNIC 1842, the control plane VCN 1816 may communicate directly with resources contained in the data plane VCN 1818, and thus may perform changes, updates, or other appropriate modifications to the configuration.
In some embodiments, control plane VCN 1816 and data plane VCN 1818 may be included in service lease 1819. In this case, a user or customer of the system may not own or operate the control plane VCN 1816 or the data plane VCN 1818. Alternatively, the IaaS provider may own or operate the control plane VCN 1816 and the data plane VCN 1818, both of which may be contained in the service lease 1819. This embodiment may enable isolation of networks that may prevent a user or customer from interacting with other users or other customers' resources. Moreover, this embodiment may allow a user or customer of the system to store the database privately without relying on the public internet 1854 for storage that may not have the desired threat prevention level.
In other embodiments, LB subnet(s) 1822 included in control plane VCN 1816 may be configured to receive signals from serving gateway 1836. In this embodiment, the control plane VCN 1816 and the data plane VCN 1818 may be configured to be invoked by customers of the IaaS provider without invoking the public internet 1854. This embodiment may be desirable to customers of the IaaS provider because the database(s) used by the customers may be controlled by the IaaS provider and may be stored on a service lease 1819, which may be isolated from the public internet 1854.
Fig. 19 is a block diagram 1900 illustrating another example mode of an IaaS architecture in accordance with at least one embodiment. Service operator 1902 (e.g., service operator 1802 of fig. 18) may be communicatively coupled to secure host lease 1904 (e.g., secure host lease 1804 of fig. 18), which may include Virtual Cloud Network (VCN) 1906 (e.g., VCN 1806 of fig. 18) and secure host subnet 1908 (e.g., secure host subnet 1808 of fig. 18). The VCN 1906 may include a local peer-to-peer gateway (LPG) 1910 (e.g., LPG 1810 of fig. 18) that may be communicatively coupled to a Secure Shell (SSH) VCN 1912 (e.g., SSH VCN 1812 of fig. 18) via the LPG 1910 contained in the SSH VCN 1912. SSH VCN 1912 may include SSH subnetwork 1914 (e.g., SSH subnetwork 1814 of fig. 18), and SSH VCN 1912 may be communicatively coupled to control plane VCN 1916 (e.g., control plane VCN 1816 of fig. 18) via LPG 1910 contained in control plane VCN 1916. The control plane VCN 1916 may be included in a service lease 1919 (e.g., service lease 1819 of fig. 18), and the data plane VCN 1918 (e.g., data plane VCN 1818 of fig. 18) may be included in a customer lease 1921 that may be owned or operated by a user or customer of the system.
The control plane VCN 1916 may include a control plane DMZ layer 1920 (e.g., control plane DMZ layer 1820 of fig. 18) that may include LB subnet(s) 1922 (e.g., LB subnet(s) 1822 of fig. 18), a control plane application layer 1924 (e.g., control plane application layer 1824 of fig. 18) that may include application subnet(s) 1926 (e.g., application subnet(s) 1826 of fig. 18), and a control plane data layer 1928 (e.g., control plane data layer 1828 of fig. 18) that may include Database (DB) subnet(s) 1930 (e.g., similar to DB subnet(s) 1830 of fig. 18). The LB subnet(s) 1922 included in the control plane DMZ layer 1920 may be communicatively coupled to the application subnet(s) 1926 included in the control plane application layer 1924 and the internet gateway 1934 (e.g., the internet gateway 1834 of fig. 18) that may be included in the control plane VCN 1916, and the application subnet(s) 1926 may be communicatively coupled to the DB subnet(s) 1930 included in the control plane data layer 1928 and to the service gateway 1936 (e.g., the service gateway 1836 of fig. 18) and the Network Address Translation (NAT) gateway 1938 (e.g., the NAT gateway 1838 of fig. 18). The control plane VCN 1916 may include a serving gateway 1936 and a NAT gateway 1938.
The control plane VCN 1916 may include a data plane mirror application layer 1940 (e.g., data plane mirror application layer 1840 of fig. 18) that may include application subnet(s) 1926. The application subnet(s) 1926 included in the data plane mirror application layer 1940 may include Virtual Network Interface Controllers (VNICs) 1942 (e.g., the VNICs 1842 of fig. 18) that may execute computing instances 1944 (e.g., similar to computing instance 1844 of fig. 18). The computing instance 1944 may facilitate communication between the application subnet(s) 1926 of the data plane mirror application layer 1940 and the application subnet(s) 1926 that may be included in the data plane application layer 1946 (e.g., the data plane application layer 1846 of fig. 18) via the VNICs 1942 included in the data plane mirror application layer 1940 and the VNICs 1942 included in the data plane application layer 1946.
The internet gateway 1934 contained in the control plane VCN 1916 may be communicatively coupled to a metadata management service 1952 (e.g., metadata management service 1852 of fig. 18) that may be communicatively coupled to the public internet 1954 (e.g., public internet 1854 of fig. 18). Public internet 1954 may be communicatively coupled to the NAT gateway 1938 included in the control plane VCN 1916. The service gateway 1936 included in the control plane VCN 1916 may be communicatively coupled to cloud services 1956 (e.g., cloud services 1856 of fig. 18).
In some examples, data plane VCN 1918 may be included in customer lease 1921. In this case, the IaaS provider may provide a control plane VCN 1916 for each customer, and the IaaS provider may set a unique computing instance 1944 contained in the service lease 1919 for each customer. Each computing instance 1944 may allow communication between a control plane VCN 1916 included in service lease 1919 and a data plane VCN 1918 included in customer lease 1921. The computing instance 1944 may allow resources provisioned in the control plane VCN 1916 contained in the service lease 1919 to be deployed or otherwise used in the data plane VCN 1918 contained in the customer lease 1921.
In other examples, a customer of the IaaS provider may have a database that exists in customer lease 1921. In this example, control plane VCN 1916 may include a data plane mirror application layer 1940, which may include application subnet(s) 1926. The data plane mirror application layer 1940 may reside in the data plane VCN 1918, but the data plane mirror application layer 1940 may not reside in the data plane VCN 1918. That is, data plane mirror application layer 1940 may access customer lease 1921, but data plane mirror application layer 1940 may not exist in data plane VCN 1918 or be owned or operated by a customer of the IaaS provider. The data plane mirror application layer 1940 may be configured to make calls to the data plane VCN 1918, but may not be configured to make calls to any entity contained in the control plane VCN 1916. The customer may desire to deploy or otherwise use resources provisioned in the control plane VCN 1916 in the data plane VCN 1918, and the data plane mirror application layer 1940 may facilitate the customer's desired deployment or other use of resources.
In some embodiments, the customer of the IaaS provider may apply the filter to the data plane VCN 1918. In this embodiment, the customer may determine what the data plane VCN 1918 may access, and the customer may restrict access to the public internet 1954 from the data plane VCN 1918. The IaaS provider may not be able to apply filters or otherwise control access to any external networks or databases by the data plane VCN 1918. Application of filters and controls by customers to the data plane VCN 1918 contained in customer lease 1921 may help isolate the data plane VCN 1918 from other customers and public internet 1954.
In some embodiments, cloud services 1956 may be invoked by service gateway 1936 to access services that may not exist on the public internet 1954, the control plane VCN 1916, or the data plane VCN 1918. The connection between the cloud services 1956 and the control plane VCN 1916 or the data plane VCN 1918 may not be real-time or continuous. Cloud services 1956 may exist on a different network owned or operated by the IaaS provider. The cloud services 1956 may be configured to receive calls from the service gateway 1936 and may be configured not to receive calls from the public internet 1954. Some cloud services 1956 may be isolated from other cloud services 1956, and the control plane VCN 1916 may be isolated from cloud services 1956 that may not be in the same region as the control plane VCN 1916. For example, the control plane VCN 1916 may be located in "region 1," and a cloud service "deployment 13" may be located in region 1 and in "region 2." If a service gateway 1936 contained in the control plane VCN 1916 located in region 1 makes a call to deployment 13, the call can be transferred to deployment 13 in region 1. In this example, the control plane VCN 1916 or deployment 13 in region 1 may not be communicatively coupled or otherwise in communication with deployment 13 in region 2.
Fig. 20 is a block diagram 2000 illustrating another example mode of the IaaS architecture in accordance with at least one embodiment. The service operator 2002 (e.g., the service operator 1802 of fig. 18) can be communicatively coupled to a secure host lease 2004 (e.g., the secure host lease 1804 of fig. 18) that can include a Virtual Cloud Network (VCN) 2006 (e.g., the VCN 1806 of fig. 18) and a secure host subnet 2008 (e.g., the secure host subnet 1808 of fig. 18). The VCN 2006 may include an LPG 2010 (e.g., the LPG 1810 of fig. 18) that may be communicatively coupled to the SSH VCN 2012 (e.g., the SSH VCN 1812 of fig. 18) via the LPG 2010 contained in the SSH VCN 2012. SSH VCN 2012 may include SSH subnetwork 2014 (e.g., SSH subnetwork 1814 of fig. 18), and SSH VCN 2012 may be communicatively coupled to control plane VCN 2016 (e.g., control plane VCN 1816 of fig. 18) via LPG 2010 contained in control plane VCN 2016 and to data plane VCN 2018 (e.g., data plane VCN 1818 of fig. 18) via LPG 2010 contained in data plane VCN 2018. The control plane VCN 2016 and the data plane VCN 2018 may be included in a service lease 2019 (e.g., service lease 1819 of fig. 18).
The control plane VCN 2016 may include a control plane DMZ layer 2020 (e.g., the control plane DMZ layer 1820 of fig. 18) that may include Load Balancer (LB) subnet(s) 2022 (e.g., LB subnet(s) 1822 of fig. 18), a control plane application layer 2024 (e.g., the control plane application layer 1824 of fig. 18) that may include application subnet(s) 2026 (e.g., similar to application subnet(s) 1826 of fig. 18), and a control plane data layer 2028 (e.g., the control plane data layer 1828 of fig. 18) that may include DB subnet(s) 2030. The LB subnet(s) 2022 contained in the control plane DMZ layer 2020 may be communicatively coupled to the application subnet(s) 2026 contained in the control plane application layer 2024 and the internet gateway 2034 (e.g., the internet gateway 1834 of fig. 18) that may be contained in the control plane VCN 2016, and the application subnet(s) 2026 may be communicatively coupled to the DB subnet(s) 2030 included in the control plane data layer 2028 and to the service gateway 2036 (e.g., the service gateway 1836 of fig. 18) and the Network Address Translation (NAT) gateway 2038 (e.g., the NAT gateway 1838 of fig. 18). The control plane VCN 2016 may include the service gateway 2036 and the NAT gateway 2038.
The data plane VCN 2018 may include a data plane application layer 2046 (e.g., the data plane application layer 1846 of fig. 18), a data plane DMZ layer 2048 (e.g., the data plane DMZ layer 1848 of fig. 18), and a data plane data layer 2050 (e.g., the data plane data layer 1850 of fig. 18). The data plane DMZ layer 2048 may include LB subnet(s) 2022 that may be communicatively coupled to the trusted application subnet(s) 2060 and the untrusted application subnet(s) 2062 of the data plane application layer 2046 and to the internet gateway 2034 contained in the data plane VCN 2018. Trusted application subnet(s) 2060 may be communicatively coupled to the service gateway 2036 contained in the data plane VCN 2018, the NAT gateway 2038 contained in the data plane VCN 2018, and DB subnet(s) 2030 contained in the data plane data layer 2050. Untrusted application subnet(s) 2062 may be communicatively coupled to the service gateway 2036 contained in the data plane VCN 2018 and DB subnet(s) 2030 contained in the data plane data layer 2050. The data plane data layer 2050 may include DB subnet(s) 2030 that may be communicatively coupled to the service gateway 2036 included in the data plane VCN 2018.
The untrusted application subnet(s) 2062 may include one or more primary VNICs 2064 (1) - (N) that may be communicatively coupled to tenant Virtual Machines (VMs) 2066 (1) - (N). Each tenant VM 2066 (1) - (N) may be communicatively coupled to a respective application subnet 2067 (1) - (N) that may be contained in a respective container egress VCN 2068 (1) - (N), which may be contained in a respective customer lease 2070 (1) - (N). The respective auxiliary VNICs 2072 (1) - (N) may facilitate communication between the untrusted application subnet(s) 2062 contained in the data plane VCN 2018 and the application subnets contained in the container egress VCNs 2068 (1) - (N). Each container egress VCN 2068 (1) - (N) may include a NAT gateway 2038 that may be communicatively coupled to the public internet 2054 (e.g., public internet 1854 of fig. 18).
The internet gateway 2034 contained in the control plane VCN 2016 and in the data plane VCN 2018 may be communicatively coupled to a metadata management service 2052 (e.g., the metadata management system 1852 of fig. 18) that may be communicatively coupled to the public internet 2054. Public internet 2054 may be communicatively coupled to NAT gateway 2038 included in control plane VCN 2016 and in data plane VCN 2018. The service gateway 2036 contained in the control plane VCN 2016 and in the data plane VCN 2018 may be communicatively coupled to the cloud service 2056.
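As a hedged illustration of the per-tenant wiring just described (one tenant VM per customer, bridged by an auxiliary VNIC to that customer's own container egress VCN and customer lease), the following Python sketch may be helpful; the helper name and labels are hypothetical and do not represent an actual provisioning workflow.

```python
# Conceptual sketch only; not an actual provisioning workflow.
from dataclasses import dataclass
from typing import List

@dataclass
class TenantVm:
    index: int
    primary_vnic: str         # attaches the VM to the untrusted app subnet 2062
    auxiliary_vnic: str       # bridges to the app subnet in the container egress VCN
    container_egress_vcn: str
    customer_lease: str

def provision_tenant_vm(i: int) -> TenantVm:
    # Each tenant VM gets its own container egress VCN and customer lease, so
    # outbound traffic exits through that customer's NAT gateway 2038.
    return TenantVm(
        index=i,
        primary_vnic=f"primary VNIC 2064({i})",
        auxiliary_vnic=f"auxiliary VNIC 2072({i})",
        container_egress_vcn=f"container egress VCN 2068({i})",
        customer_lease=f"customer lease 2070({i})",
    )

fleet: List[TenantVm] = [provision_tenant_vm(i) for i in range(1, 4)]
```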
In some embodiments, the data plane VCN 2018 may be integrated with the customer lease 2070. Such integration may be useful or desirable for customers of the IaaS provider in some cases, such as where support is desired while the customer's code is executing. The customer may provide code to run that may be destructive, that may communicate with other customer resources, or that may otherwise cause undesirable effects. In response, the IaaS provider may determine whether to run the code given to it by the customer.
In some examples, a customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane application layer 2046. Code to run the function may be executed in the VMs 2066 (1) - (N), and the code may not be configured to run anywhere else on the data plane VCN 2018. Each VM 2066 (1) - (N) may be connected to a customer lease 2070. Respective containers 2071 (1) - (N) contained in the VMs 2066 (1) - (N) may be configured to run the code. In this case, there may be dual isolation (e.g., the containers 2071 (1) - (N) run the code, and the containers 2071 (1) - (N) are at least contained in the VMs 2066 (1) - (N) contained in the untrusted application subnet(s) 2062), which may help prevent incorrect or otherwise undesirable code from damaging the IaaS provider's network or the network of a different customer. The containers 2071 (1) - (N) may be communicatively coupled to the customer lease 2070 and may be configured to transmit data to or receive data from the customer lease 2070. The containers 2071 (1) - (N) may not be configured to transmit data to or receive data from any other entity in the data plane VCN 2018. After the code has finished running, the IaaS provider may terminate or otherwise dispose of the containers 2071 (1) - (N).
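The following Python sketch illustrates, under stated assumptions, the "dual isolation" lifecycle described above: customer code runs only inside a container, the container may exchange data only with its own customer lease, and the container is disposed of once the code finishes. The class and exception names are hypothetical and are not the provider's actual orchestration code.

```python
# Conceptual model of container isolation and disposal; names are hypothetical.
class IsolationError(RuntimeError):
    pass

class Container:
    def __init__(self, vm_id: str, customer_lease: str):
        self.vm_id = vm_id                  # e.g., "VM 2066(1)" in untrusted app subnet 2062
        self.customer_lease = customer_lease
        self.alive = True

    def send(self, destination: str, payload: bytes) -> None:
        # Containers 2071(1)-(N) may exchange data with their own customer lease only.
        if destination != self.customer_lease:
            raise IsolationError(f"container may not reach {destination}")
        # ... transmit payload to the customer lease ...

    def run(self, code) -> object:
        try:
            return code()          # customer-supplied code executes only here
        finally:
            self.alive = False     # terminate/dispose of the container afterwards

# One container per tenant VM in the untrusted application subnet 2062.
container = Container(vm_id="VM 2066(1)", customer_lease="customer lease 2070(1)")
result = container.run(lambda: 2 + 2)   # returns 4; the container is then disposed of
```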
In some embodiments, trusted application subnet(s) 2060 may run code that may be owned or operated by the IaaS provider. In this embodiment, the trusted application subnet(s) 2060 may be communicatively coupled to the DB subnet(s) 2030 and configured to perform CRUD operations in the DB subnet(s) 2030. The untrusted application subnet(s) 2062 may be communicatively coupled to the DB subnet(s) 2030, but in this embodiment, the untrusted application subnet(s) 2062 may be configured to perform only read operations in the DB subnet(s) 2030. The containers 2071 (1) - (N), which may be contained in the VMs 2066 (1) - (N) of each customer and may run code from the customer, may not be communicatively coupled with the DB subnet(s) 2030.
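A minimal sketch of the database access policy described above follows; the subnet labels and operation names are illustrative only, not an actual authorization system.

```python
# Illustrative access policy: trusted subnets get full CRUD, untrusted subnets
# get read-only access, and customer containers get no DB connectivity at all.
ALLOWED_DB_OPS = {
    "trusted app subnet 2060":   {"create", "read", "update", "delete"},  # full CRUD
    "untrusted app subnet 2062": {"read"},                                # read-only
    # containers 2071(1)-(N) are intentionally absent: no DB subnet coupling
}

def authorize(source: str, operation: str) -> bool:
    return operation in ALLOWED_DB_OPS.get(source, set())

assert authorize("trusted app subnet 2060", "update")
assert not authorize("untrusted app subnet 2062", "delete")
assert not authorize("container 2071(1)", "read")
```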
In other embodiments, the control plane VCN 2016 and the data plane VCN 2018 may not be directly communicatively coupled. In these embodiments, there may be no direct communication between the control plane VCN 2016 and the data plane VCN 2018, and communication may occur indirectly through at least one method. For example, an LPG 2010 established by the IaaS provider may facilitate communication between the control plane VCN 2016 and the data plane VCN 2018. In another example, the control plane VCN 2016 or the data plane VCN 2018 may invoke the cloud service 2056 via the service gateway 2036. For example, a call from the control plane VCN 2016 to the cloud service 2056 may include a request for a service that can communicate with the data plane VCN 2018.
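The two indirect communication paths described above can be sketched as follows; the routing function and hop labels are hypothetical and shown only to make the indirection concrete.

```python
# Illustrative only: returns the hop sequence a message from the control plane
# VCN 2016 would take to reach the data plane VCN 2018 without a direct link.
def route_indirectly(message: dict, lpg_available: bool) -> list:
    if lpg_available:
        # Path 1: an LPG 2010 established by the IaaS provider.
        return ["control plane VCN 2016", "LPG 2010", "data plane VCN 2018"]
    # Path 2: relay through a cloud service 2056 reached via the service gateway 2036.
    return ["control plane VCN 2016", "service gateway 2036",
            "cloud service 2056", "data plane VCN 2018"]

print(route_indirectly({"op": "sync"}, lpg_available=False))
```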
Fig. 21 is a block diagram 2100 illustrating another example mode of the IaaS architecture in accordance with at least one embodiment. Service operator 2102 (e.g., the service operator 1802 of fig. 18) can be communicatively coupled to a secure host lease 2104 (e.g., the secure host lease 1804 of fig. 18) that can include a Virtual Cloud Network (VCN) 2106 (e.g., the VCN 1806 of fig. 18) and a secure host subnet 2108 (e.g., the secure host subnet 1808 of fig. 18). The VCN 2106 may include an LPG 2110 (e.g., the LPG 1810 of fig. 18) that may be communicatively coupled to the SSH VCN 2112 (e.g., the SSH VCN 1812 of fig. 18) via the LPG 2110 contained in the SSH VCN 2112. The SSH VCN 2112 may include an SSH subnetwork 2114 (e.g., the SSH subnetwork 1814 of fig. 18), and the SSH VCN 2112 may be communicatively coupled to the control plane VCN 2116 (e.g., the control plane VCN 1816 of fig. 18) via the LPG 2110 contained in the control plane VCN 2116 and to the data plane VCN 2118 (e.g., the data plane VCN 1818 of fig. 18) via the LPG 2110 contained in the data plane VCN 2118. The control plane VCN 2116 and the data plane VCN 2118 may be included in a service lease 2119 (e.g., the service lease 1819 of fig. 18).
Control plane VCN 2116 may include a control plane DMZ layer 2120 (e.g., the control plane DMZ layer 1820 of fig. 18) that may include LB subnet(s) 2122 (e.g., the LB subnet(s) 1822 of fig. 18), a control plane application layer 2124 (e.g., the control plane application layer 1824 of fig. 18) that may include application subnet(s) 2126 (e.g., the application subnet(s) 1826 of fig. 18), and a control plane data layer 2128 (e.g., the control plane data layer 1828 of fig. 18) that may include DB subnet(s) 2130 (e.g., the DB subnet(s) 2030 of fig. 20). The LB subnet(s) 2122 contained in the control plane DMZ layer 2120 may be communicatively coupled to the application subnet(s) 2126 contained in the control plane application layer 2124 and to the internet gateway 2134 (e.g., the internet gateway 1834 of fig. 18) that may be contained in the control plane VCN 2116, and the application subnet(s) 2126 may be communicatively coupled to the DB subnet(s) 2130 contained in the control plane data layer 2128 and to the service gateway 2136 (e.g., the service gateway 1836 of fig. 18) and the Network Address Translation (NAT) gateway 2138 (e.g., the NAT gateway 1838 of fig. 18). The control plane VCN 2116 may include the service gateway 2136 and the NAT gateway 2138.
The data plane VCN 2118 may include a data plane application layer 2146 (e.g., the data plane application layer 1846 of fig. 18), a data plane DMZ layer 2148 (e.g., the data plane DMZ layer 1848 of fig. 18), and a data plane data layer 2150 (e.g., the data plane data layer 1850 of fig. 18). The data plane DMZ layer 2148 may include LB subnet(s) 2122 that may be communicatively coupled to the trusted application subnet(s) 2160 (e.g., the trusted application subnet(s) 2060 of fig. 20) and untrusted application subnet(s) 2162 (e.g., the untrusted application subnet(s) 2062 of fig. 20) of the data plane application layer 2146 and to the internet gateway 2134 contained in the data plane VCN 2118. The trusted application subnet(s) 2160 may be communicatively coupled to the service gateway 2136 contained in the data plane VCN 2118, the NAT gateway 2138 contained in the data plane VCN 2118, and the DB subnet(s) 2130 contained in the data plane data layer 2150. The untrusted application subnet(s) 2162 may be communicatively coupled to the service gateway 2136 contained in the data plane VCN 2118 and the DB subnet(s) 2130 contained in the data plane data layer 2150. The data plane data layer 2150 may include DB subnet(s) 2130 that may be communicatively coupled to the service gateway 2136 contained in the data plane VCN 2118.
The untrusted application subnet(s) 2162 may include primary VNICs 2164 (1) - (N) that may be communicatively coupled to tenant Virtual Machines (VMs) 2166 (1) - (N) residing within the untrusted application subnet(s) 2162. Each tenant VM 2166 (1) - (N) may run code in a respective container 2167 (1) - (N) and be communicatively coupled to an application subnet 2126 that may be contained in a data plane application layer 2146 contained in the container egress VCN 2168. Respective auxiliary VNICs 2172 (1) - (N) may facilitate communication between the untrusted application subnet(s) 2162 contained in the data plane VCN 2118 and the application subnet contained in the container egress VCN 2168. The container egress VCN 2168 may include a NAT gateway 2138 that may be communicatively coupled to the public internet 2154 (e.g., the public internet 1854 of fig. 18).
The internet gateway 2134 included in the control plane VCN 2116 and in the data plane VCN 2118 may be communicatively coupled to a metadata management service 2152 (e.g., the metadata management system 1852 of fig. 18) that may be communicatively coupled to the public internet 2154. Public internet 2154 may be communicatively coupled to NAT gateway 2138 contained in control plane VCN 2116 and in data plane VCN 2118. The service gateway 2136 included in the control plane VCN 2116 and in the data plane VCN 2118 may be communicatively coupled to the cloud service 2156.
In some examples, the pattern illustrated by the architecture of block diagram 2100 of fig. 21 may be considered an exception to the pattern illustrated by the architecture of block diagram 2000 of fig. 20, and such a pattern may be desirable for a customer of the IaaS provider if the IaaS provider cannot communicate directly with the customer (e.g., a disconnected region). The customer may access the respective containers 2167 (1) - (N) contained in the VMs 2166 (1) - (N) of each customer in real time. The containers 2167 (1) - (N) may be configured to invoke respective auxiliary VNICs 2172 (1) - (N) contained in the application subnet(s) 2126 of the data plane application layer 2146, which may be contained in the container egress VCN 2168. The auxiliary VNICs 2172 (1) - (N) may transmit the calls to the NAT gateway 2138, which may transmit the calls to the public internet 2154. In this example, the containers 2167 (1) - (N), which may be accessed by the customer in real time, may be isolated from the control plane VCN 2116 and may be isolated from other entities contained in the data plane VCN 2118. The containers 2167 (1) - (N) may also be isolated from the resources of other customers.
In other examples, the customer may use the containers 2167 (1) - (N) to invoke the cloud service 2156. In this example, the customer may run code in the containers 2167 (1) - (N) that requests a service from the cloud service 2156. The containers 2167 (1) - (N) may transmit the request to the auxiliary VNICs 2172 (1) - (N), which may transmit the request to the NAT gateway 2138, which may transmit the request to the public internet 2154. The public internet 2154 may transmit the request to the LB subnet(s) 2122 contained in the control plane VCN 2116 via the internet gateway 2134. In response to determining that the request is valid, the LB subnet(s) 2122 may transmit the request to the application subnet(s) 2126, which may transmit the request to the cloud service 2156 via the service gateway 2136.
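To make the call path just described easier to follow, the following Python sketch traces the hops a request would traverse in the pattern of fig. 21. It is an illustration only; the hop names mirror the reference numerals in the text and do not constitute an executable network configuration.

```python
# Hedged, illustrative trace of the call path from a customer container to the
# cloud service 2156 in the fig. 21 pattern.
CLOUD_SERVICE_CALL_PATH = [
    "container 2167(i)",
    "auxiliary VNIC 2172(i)",
    "NAT gateway 2138",
    "public internet 2154",
    "internet gateway 2134 (control plane VCN 2116)",
    "LB subnet 2122",            # validates the request
    "app subnet 2126",
    "service gateway 2136",
    "cloud service 2156",
]

def forward(request: dict) -> dict:
    # Each real hop would perform its own processing (NAT, validation, routing);
    # here the request is only annotated with the hops it traverses.
    request["hops"] = list(CLOUD_SERVICE_CALL_PATH)
    return request

print(forward({"op": "list-objects"})["hops"])
```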
It should be appreciated that the IaaS architecture 1800, 1900, 2000, 2100 depicted in the figures may have other components than those depicted. Additionally, the embodiments shown in the figures are merely some examples of cloud infrastructure systems that may incorporate embodiments of the present disclosure. In some other embodiments, the IaaS system may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components.
In certain embodiments, the IaaS system described herein may include application suites, middleware, and database service products that are delivered to customers in a self-service, subscription-based, elastically extensible, reliable, highly available, and secure manner. An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) offered by the present assignee.
FIG. 22 illustrates an example computer system 2200 in which various embodiments of the disclosure may be implemented. System 2200 can be used to implement any of the computer systems described above. As shown, computer system 2200 includes a processing unit 2204 that communicates with a number of peripheral subsystems via a bus subsystem 2202. These peripheral subsystems may include a processing acceleration unit 2206, an I/O subsystem 2208, a storage subsystem 2218, and a communication subsystem 2224. Storage subsystem 2218 includes tangible computer-readable storage media 2222 and system memory 2210.
Bus subsystem 2202 provides a mechanism for letting the various components and subsystems of computer system 2200 communicate with each other as intended. Although bus subsystem 2202 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. The bus subsystem 2202 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. Such architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus, which can be implemented, for example, as a Mezzanine bus manufactured to the IEEE P1386.1 standard.
The processing unit 2204, which may be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of the computer system 2200. One or more processors may be included in the processing unit 2204. These processors may include single-core or multi-core processors. In some embodiments, processing unit 2204 may be implemented as one or more separate processing units 2232 and/or 2234, where each processing unit includes a single or multi-core processor therein. In other embodiments, processing unit 2204 may also be implemented as a four-core processing unit formed by integrating two dual-core processors into a single chip.
In various embodiments, the processing unit 2204 may execute various programs in response to program code and may maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed may reside in the processor(s) 2204 and/or in the storage subsystem 2218. The processor(s) 2204 may provide the various functions described above by appropriate programming. The computer system 2200 may additionally include a processing acceleration unit 2206, which may include a Digital Signal Processor (DSP), special-purpose processor, or the like.
The I/O subsystem 2208 may include user interface input devices and user interface output devices. The user interface input devices may include a keyboard, a pointing device such as a mouse or trackball, a touch pad or touch screen incorporated into a display, a scroll wheel, a click wheel, dials, buttons, switches, a keypad, an audio input device with a voice command recognition system, a microphone, and other types of input devices. The user interface input devices may also include, for example, motion sensing and/or gesture recognition devices, such as motion sensors that enable users to control an input device, such as a game controller, and to interact with it through a natural user interface using gestures and spoken commands. The user interface input devices may also include eye gesture recognition devices, such as blink detectors that detect eye activity from a user (e.g., "blinking" while taking a photograph and/or making a menu selection) and convert the eye gesture into input to an input device (e.g., a head-mounted display). In addition, the user interface input devices may include voice recognition sensing devices that enable a user to interact with a voice recognition system through voice commands.
User interface input devices may also include, but are not limited to, three-dimensional (3D) mice, joysticks or pointing sticks, game pads and drawing tablets, as well as audio/video devices such as speakers, digital cameras, digital video cameras, portable media players, webcams, image scanners, fingerprint scanners, bar code readers, 3D scanners, 3D printers, laser rangefinders, and gaze tracking devices. Further, the user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasound devices. The user interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments, and the like.
The user interface output device may include a display subsystem, an indicator light, or a non-visual display such as an audio output device, or the like. The display subsystem may be a Cathode Ray Tube (CRT), a flat panel device such as one using a Liquid Crystal Display (LCD) or a plasma display, a projection device, a touch screen, or the like. In general, use of the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from computer system 2200 to a user or other computer. For example, user interface output devices may include, but are not limited to, various display devices that visually convey text, graphics, and audio/video information, such as monitors, printers, speakers, headphones, car navigation systems, plotters, voice output devices, and modems.
Computer system 2200 can include a storage subsystem 2218 comprising software elements, shown as being currently located in system memory 2210. The system memory 2210 may store program instructions that are loadable and executable on the processing unit 2204, as well as data generated during the execution of these programs.
Depending on the configuration and type of computer system 2200, system memory 2210 may be volatile (such as Random Access Memory (RAM)) and/or nonvolatile (such as Read Only Memory (ROM), flash memory, etc.). RAM typically contains data and/or program modules that are immediately accessible to and/or presently being operated on and executed by processing unit 2204. In some implementations, the system memory 2210 may include multiple different types of memory, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). In some implementations, a basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer system 2200, such as during start-up, may be stored in ROM. By way of example, and not limitation, the system memory 2210 is also shown as including application programs 2212, which may include client applications, web browsers, middle-tier applications, relational database management systems (RDBMS), and the like, program data 2214, and an operating system 2216. By way of example, the operating system 2216 may include various versions of the Microsoft Windows, Apple Macintosh, and/or Linux operating systems, various commercially available UNIX or UNIX-like operating systems (including but not limited to the various GNU/Linux operating systems, the Google Chrome OS, and the like), and/or mobile operating systems such as the iOS, Windows Phone, Android OS, BlackBerry OS, and Palm OS operating systems.
Storage subsystem 2218 may also provide a tangible computer-readable storage medium for storing basic programming and data structures that provide the functionality of some embodiments. Software (programs, code modules, instructions) that when executed by a processor provide the functionality described above may be stored in storage subsystem 2218. These software modules or instructions may be executed by the processing unit 2204. Storage subsystem 2218 may also provide a repository for storing data used in accordance with the disclosure.
The storage subsystem 2218 may also include a computer-readable storage media reader 2220 that may be further connected to computer-readable storage media 2222. Together with and, optionally, in combination with system memory 2210, computer-readable storage media 2222 may comprehensively represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information.
The computer-readable storage media 2222 containing the code or portions of code may also include any suitable media known or used in the art, including storage media and communication media, such as, but not limited to, volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage and/or transmission of information. This may include tangible computer-readable storage media such as RAM, ROM, Electrically Erasable Programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer-readable media. This may also include non-tangible computer-readable media, such as data signals, data transmissions, or any other medium that can be used to transmit the desired information and that can be accessed by computer system 2200.
For example, computer-readable storage media 2222 can include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk (such as a CD-ROM, DVD, Blu-ray disk, or other optical media). The computer-readable storage media 2222 can include, but are not limited to, Zip drives, flash memory cards, Universal Serial Bus (USB) flash drives, Secure Digital (SD) cards, DVD disks, digital audio tape, and the like. The computer-readable storage media 2222 may also include non-volatile memory-based Solid State Drives (SSDs) (such as flash-memory-based SSDs, enterprise flash drives, solid state ROM, and the like), volatile memory-based SSDs (such as solid state RAM, dynamic RAM, and static RAM), DRAM-based SSDs, Magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM- and flash-memory-based SSDs. The disk drives and their associated computer-readable media can provide nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for the computer system 2200.
Communication subsystem 2224 provides an interface to other computer systems and networks. The communication subsystem 2224 serves as an interface for receiving data from other systems and transmitting data from computer system 2200 to other systems. For example, communication subsystem 2224 may enable computer system 2200 to be connected to one or more devices via the internet. In some embodiments, the communication subsystem 2224 may include a Radio Frequency (RF) transceiver component for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, advanced data network technology such as 3G, 4G, or EDGE (enhanced data rates for global evolution), Wi-Fi (IEEE 802.11 family standards), or other mobile communication technology, or any combination thereof), a Global Positioning System (GPS) receiver component, and/or other components. In some embodiments, communication subsystem 2224 may provide a wired network connection (e.g., Ethernet) in addition to or in lieu of a wireless interface.
In some embodiments, communication subsystem 2224 may also receive input communications in the form of structured and/or unstructured data feeds 2226, event streams 2228, event updates 2230, and the like, on behalf of one or more users who may use computer system 2200.
For example, the communication subsystem 2224 may be configured to receive data feeds 2226 in real time from users of social networks and/or other communication services, such as Twitter feeds, Facebook updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third-party information sources.
In addition, the communication subsystem 2224 may also be configured to receive data in the form of continuous data streams, which may include event streams 2228 of real-time events and/or event updates 2230 that may be continuous or unbounded in nature with no explicit termination. Examples of applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measurement tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and so forth.
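As a hedged illustration of what "continuous or unbounded with no explicit termination" means in practice, the following Python sketch consumes an unbounded event stream; the event source, field names, and consumer are hypothetical.

```python
# Illustrative sketch of consuming a continuous (unbounded) event stream of the
# kind described above (event stream 2228 / event updates 2230).
import random
import time
from typing import Iterator

def sensor_events() -> Iterator[dict]:
    # An unbounded generator: there is no explicit termination.
    while True:
        yield {"metric": "latency_ms", "value": round(random.uniform(1.0, 20.0), 2)}
        time.sleep(0.1)

def consume(stream: Iterator[dict], limit: int = 5) -> None:
    # A real consumer would run indefinitely; 'limit' exists only so this
    # sketch terminates when executed.
    for i, event in enumerate(stream):
        print(event)
        if i + 1 >= limit:
            break

consume(sensor_events())
```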
The communication subsystem 2224 may also be configured to output structured and/or unstructured data feeds 2226, event streams 2228, event updates 2230, and so on, to one or more databases, which may be in communication with one or more streaming data source computers coupled to the computer system 2200.
The computer system 2200 may be one of various types, including a handheld portable device (e.g., a cellular telephone, a computing tablet, a PDA), a wearable device (e.g., a Google Glass head-mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system.
Due to the ever-changing nature of computers and networks, the description of computer system 2200 depicted in the drawings is intended only as a specific example. Many other configurations are possible with more or fewer components than the system depicted in the figures. For example, custom hardware may also be used and/or particular elements may be implemented in hardware, firmware, software (including applets), or a combination thereof. In addition, connections to other computing devices, such as network input/output devices, may also be employed. Based on the disclosure and teachings provided herein, one of ordinary skill in the art will recognize other ways and/or methods to implement various embodiments.
While specific embodiments of the present disclosure have been described, various modifications, alterations, alternative constructions, and equivalents are also included within the scope of the disclosure. Embodiments of the present disclosure are not limited to operation within certain particular data processing environments, but may be free to operate within multiple data processing environments. Furthermore, while embodiments of the present disclosure have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present disclosure is not limited to the described series of transactions and steps. The various features and aspects of the embodiments described above may be used alone or in combination.
In addition, while embodiments of the present disclosure have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present disclosure. Embodiments of the present disclosure may be implemented in hardware alone, or in software alone, or in a combination thereof. The various processes described herein may be implemented in any combination on the same processor or on different processors. Thus, where a component or module is described as being configured to perform certain operations, such configuration may be accomplished by, for example, designing the electronic circuitry to perform the operations, performing the operations by programming programmable electronic circuitry (such as a microprocessor), or any combination thereof. The processes may communicate using a variety of techniques, including but not limited to conventional techniques for inter-process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will be evident that various additions, subtractions, deletions and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, while specific disclosed embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are intended to be within the scope of the following claims.
The use of the terms "a" and "an" and "the" and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Unless otherwise indicated, the terms "comprising," "having," "including," and "containing" are to be construed as open-ended terms (i.e., meaning "including, but not limited to"). The term "connected" is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate embodiments of the disclosure and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure.
Disjunctive language, such as the phrase "at least one of X, Y, or Z," unless specifically stated otherwise, is intended to be understood within the context as it is generally used to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is generally not intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Preferred embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for the disclosure to be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, unless indicated otherwise or otherwise clearly contradicted by context, this disclosure includes any combination of the above elements in all possible variations thereof.
All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein. In the foregoing specification, aspects of the present disclosure have been described with reference to specific embodiments thereof, but those skilled in the art will recognize that the present disclosure is not limited thereto. The various features and aspects of the disclosure described above may be used alone or in combination. In addition, embodiments may be utilized in any number of environments and applications other than those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.