A VMScluster, originally known as a VAXcluster, is a computer cluster consisting of a group of computers running the OpenVMS operating system. Whereas tightly coupled multiprocessor systems run a single copy of the operating system, a VMScluster is loosely coupled: each machine runs its own copy of OpenVMS, but the disk storage, lock manager, and security domain are all cluster-wide, providing a single-system-image abstraction. Machines can join or leave a VMScluster without affecting the rest of the cluster. For enhanced availability, VMSclusters support the use of dual-ported disks connected to two machines or storage controllers simultaneously.

Initial release

Digital Equipment Corporation (DEC) first announced VAXclusters in May 1983. At that stage, clustering required specialised communications hardware, as well as some major changes to low-level subsystems in the VMS operating system. The software and hardware were designed jointly. VAXcluster support was first added in VAX/VMS V4.0, which was released in 1984. This version only supported clustering over DEC's proprietary Computer Interconnect (CI).

At the center of each cluster was a star coupler, to which every node (computer) and data storage device in the cluster was connected by one or two pairs of CI cables. Each pair of cables had a transmission rate of 70 megabits per second, a high speed for that era. Using two pairs gave an aggregate transmission rate of 140 megabits per second, with redundancy in case one cable failed; the star couplers also had redundant wiring for better availability.

Each CI cable connected to its computer via a CI Port, which could send and receive packets without any CPU involvement. To send a packet, a CPU had only to create a small data structure in memory and append it to a "send" queue; similarly, the CI Port would append each incoming message to a "receive" queue. Tests showed that a VAX-11/780 could send and receive 3000 messages per second, even though it was nominally a 1-MIPS machine. The closely related Mass Storage Control Protocol (MSCP) allowed similarly high performance from the mass storage subsystem. In addition, MSCP packets were easily transported over the CI, allowing remote access to storage devices.
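
As a rough illustration of the queue-based interface described above, the C sketch below models a "send" queue of packet descriptors that the host fills and the port drains. It is a schematic, single-threaded sketch only: the structure names and fields are invented for illustration and are not the actual CI port driver data structures.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative packet descriptor: to transmit, the host CPU only has to
     * build one of these and link it onto a queue in memory. */
    struct packet {
        struct packet *next;       /* link to the next queue entry   */
        unsigned short length;     /* payload length in bytes        */
        unsigned char  data[64];   /* message payload (illustrative) */
    };

    struct queue {
        struct packet *head;
        struct packet *tail;
    };

    /* Append a packet to the tail of a queue: what the host does to the
     * "send" queue, and what the port does to the "receive" queue. */
    static void enqueue(struct queue *q, struct packet *p)
    {
        p->next = NULL;
        if (q->tail != NULL)
            q->tail->next = p;
        else
            q->head = p;
        q->tail = p;
    }

    /* Remove the packet at the head of a queue: what the port does to the
     * "send" queue, and what the host does to the "receive" queue. */
    static struct packet *dequeue(struct queue *q)
    {
        struct packet *p = q->head;
        if (p != NULL) {
            q->head = p->next;
            if (q->head == NULL)
                q->tail = NULL;
        }
        return p;
    }

    int main(void)
    {
        struct queue send_q = { NULL, NULL };
        struct packet *p, *out;

        /* Host side: build a descriptor and link it onto the send queue. */
        p = calloc(1, sizeof *p);
        p->length = (unsigned short) strlen("hello, cluster");
        memcpy(p->data, "hello, cluster", p->length + 1);
        enqueue(&send_q, p);

        /* Port side: pick up the descriptor and transmit it. */
        out = dequeue(&send_q);
        printf("transmitting %u bytes: %s\n", (unsigned) out->length, out->data);
        free(out);
        return 0;
    }

The key property this models is that queuing a message costs the CPU only a few memory operations, which is why a nominally 1-MIPS machine could sustain thousands of messages per second.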

VAXclustering was among the early clustering systems to achieve commercial success (alongside offerings from AT&T, Tandem Computers,[1][2] and Stratus Computer[3]), and was a major selling point for VAX systems.

Later developments

In 1986, DEC added VAXclustering support to their MicroVAX minicomputers, running over Ethernet instead of special-purpose hardware. While not offering the high-availability advantages of the CI hardware, these Local Area VAXclusters (LAVc) provided an attractive expansion path for buyers of low-end minicomputers. LAVc also allowed diskless satellite nodes to bootstrap over the network using the system disk of a boot node.

Later versions of OpenVMS (V5.0 and later) supported "mixed interconnect" VAXclusters (using both CI and Ethernet), as well as VAXclustering over DSSI (Digital Storage Systems Interconnect), SCSI and FDDI, among other transports. Eventually, as high-bandwidth wide area networking became available, clustering was extended to allow satellite data links and long-distance terrestrial links. This allowed the creation of disaster-tolerant clusters: by spreading a single VAXcluster across several geographically separate sites, the cluster could survive infrastructure failures and natural disasters.

VAXclustering was greatly aided by the introduction of terminal servers using the LAT protocol. By allowing ordinary serial terminals to access the host nodes via Ethernet, it became possible for any terminal to connect rapidly and easily to any host node. This made it much simpler to fail over user terminals from one node of the cluster to another.

Support for clustering over TCP/IP was added in OpenVMS version 8.4, which was released in 2010. With Gigabit Ethernet now common and 10 Gigabit Ethernet being introduced, standard networking cables and cards are quite sufficient to support VMSclustering.

Features

OpenVMS supports up to 96 nodes in a single cluster and allows mixed-architecture clusters, in which VAX and Alpha systems, or Alpha and Itanium systems, can coexist. (Various organizations have demonstrated triple-architecture clusters and cluster configurations with up to 150 nodes, but these configurations are not officially supported.)
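
From a program, cluster-wide information such as membership and node count can be queried through the $GETSYI system service. The following minimal C sketch, assuming the documented SYS$GETSYIW call and the SYI$_CLUSTER_MEMBER and SYI$_CLUSTER_NODES item codes, reports whether the local node is a cluster member and how many nodes are currently in the cluster; error handling is reduced to the bare minimum.

    #include <stdio.h>
    #include <starlet.h>   /* sys$getsyiw                             */
    #include <syidef.h>    /* SYI$_CLUSTER_MEMBER, SYI$_CLUSTER_NODES */
    #include <ssdef.h>     /* SS$_NORMAL                              */

    /* Standard OpenVMS item-list entry: buffer length, item code, buffer
     * address, and address of a word that receives the returned length. */
    struct item {
        unsigned short buflen;
        unsigned short itmcod;
        void *bufadr;
        unsigned short *retlen;
    };

    int main(void)
    {
        static unsigned char  is_member  = 0;  /* 1 if a cluster member      */
        static unsigned int   node_count = 0;  /* nodes currently in cluster */
        static unsigned short len1 = 0, len2 = 0;
        struct item itmlst[] = {
            { sizeof is_member,  SYI$_CLUSTER_MEMBER, &is_member,  &len1 },
            { sizeof node_count, SYI$_CLUSTER_NODES,  &node_count, &len2 },
            { 0, 0, NULL, NULL }               /* list terminator */
        };
        int status;

        /* Synchronous $GETSYI call against the local node. */
        status = sys$getsyiw(0, NULL, NULL, itmlst, NULL, NULL, 0);
        if (status != SS$_NORMAL) {
            fprintf(stderr, "sys$getsyiw failed, status %d\n", status);
            return status;
        }

        printf("cluster member: %s, nodes in cluster: %u\n",
               is_member ? "yes" : "no", node_count);
        return 0;
    }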

Unlike many other clustering solutions, a VMScluster offers transparent, fully distributed read-write access with record-level locking, which means that the same disk and even the same file can be accessed by several cluster nodes at once; locking occurs only at the level of a single record of a file, which would usually be one line of text or a single record in a database. This allows the construction of high-availability, multiply redundant database servers.
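
A minimal sketch of what this shared access looks like from a C program is shown below, assuming the OpenVMS C run-time library's RMS keyword arguments to fopen(); the file specification and the exact sharing options are illustrative. RMS obtains and releases the record locks through the lock manager as the file is read and updated, so the program itself contains no explicit locking calls.

    #include <stdio.h>

    int main(void)
    {
        FILE *fp;
        char record[256];

        /* Open for update while allowing other processes -- on any node of
         * the cluster -- to read and write the same file at the same time.
         * The extra "shr=..." argument is passed through to RMS. */
        fp = fopen("CLUSTER_DISK:[DATA]ORDERS.DAT", "r+", "shr=get,put,upd");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }

        /* Read the file record by record; record locking is handled by RMS
         * (through the distributed lock manager) as each record is read, so
         * writers on other nodes contend only for individual records. */
        while (fgets(record, sizeof record, fp) != NULL)
            fputs(record, stdout);

        fclose(fp);
        return 0;
    }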

Cluster connections can span upwards of 500 miles (800 km), allowing member nodes to be located in different buildings on an office campus, or in different cities.

Host-based volume shadowing allows volumes (of the same or of different sizes) to be shadowed (mirrored) across multiple controllers and multiple hosts, allowing the construction of disaster-tolerant environments.

Full access to the distributed lock manager (DLM) is available to application programmers, which allows applications to coordinate arbitrary resources and activities across all cluster nodes. This includes file-level coordination, but the resources, activities, and operations that can be coordinated with the DLM are entirely application-defined.
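
The following minimal C sketch shows direct use of the DLM, assuming the documented SYS$ENQW and SYS$DEQ system services; the resource name MYAPP_MASTER is hypothetical, and error handling is reduced to the bare minimum.

    #include <stdio.h>
    #include <string.h>
    #include <starlet.h>   /* sys$enqw, sys$deq */
    #include <lckdef.h>    /* LCK$K_EXMODE      */
    #include <ssdef.h>     /* SS$_NORMAL        */
    #include <descrip.h>   /* $DESCRIPTOR       */

    /* Lock status block: completion status, lock ID, 16-byte value block. */
    struct lock_status_block {
        unsigned short status;
        unsigned short reserved;
        unsigned int   lkid;
        unsigned char  valblk[16];
    };

    int main(void)
    {
        struct lock_status_block lksb;
        int status;
        $DESCRIPTOR(resnam, "MYAPP_MASTER");   /* hypothetical resource name */

        memset(&lksb, 0, sizeof lksb);

        /* Queue an exclusive-mode lock request and wait until it is granted.
         * Because the lock manager is cluster-wide, this serializes against
         * every process on every member node using the same resource name. */
        status = sys$enqw(0, LCK$K_EXMODE, (void *) &lksb, 0, &resnam,
                          0, 0, 0, 0, 0, 0);
        if (status != SS$_NORMAL || lksb.status != SS$_NORMAL) {
            fprintf(stderr, "lock request failed\n");
            return 1;
        }

        printf("exclusive lock granted, lock id %u\n", lksb.lkid);
        /* ... critical section: at most one holder anywhere in the cluster ... */

        /* Release the lock so other processes and nodes can acquire it. */
        sys$deq(lksb.lkid, 0, 0, 0);
        return 0;
    }

Real applications typically use the full range of lock modes (null through exclusive) and the 16-byte lock value block to pass small amounts of state between nodes, but the request, convert, and release pattern is the same.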

Because rolling upgrades and multiple system disks are supported, cluster configurations can be maintained online and upgraded incrementally. This allows a cluster to continue providing application and data access while a subset of the member nodes is upgraded to a newer software version.[4][5] Cluster uptimes are frequently measured in years, with the current longest uptime being at least sixteen years.[6]

References

  1. ^ "Tandem History: An Introduction". Center magazine, vol 6 number 1, Winter 1986, a magazine for Tandem employees.
  2. ^ "History of TANDEM COMPUTERS, INC. – FundingUniverse". www.fundinguniverse.com. Retrieved 2024-10-12.
  3. ^ Covi, K. R. (July 1992). "Three-loop feedback control of fault-tolerant power supplies in IBM Enterprise System/9000 processors". IBM Journal of Research and Development. 36 (4): 781–789. doi:10.1147/rd.364.0781. ISSN 0018-8646.
  4. ^ "VSI OpenVMS Cluster Systems" (PDF). August 2019.
  5. ^ "VSI Products – Clusters".
  6. ^ Uptimes Project breakdown for VMSclusters.

Further reading
