Welcome to SuperBrain, made with Rhomb.io


This project was funded by CDTI.

The following document explains the software and hardware functionality of this product, called

“SuperBrain”

We will start with a short description of the history of the term “cluster”.

In the Initial Concept we show how Rhomb.io fits into this product, allowing you to build a high-availability cluster with flexibility and, of course, a reduction in power consumption. All of this means we are introducing a new concept of multicomputer based on Rhomb.io: a cluster in one single device.

The product has different elements to describe, and they will be explained in further detail over the next few pages.

The basic system has 1 SuperBrain Backplane with up to 10 x SuperBrain Edge boards.

From Wikipedia:

A computer cluster consists of a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software.

The components of a cluster are usually connected to each other through fast local area networks (“LAN”), with each node (computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware[1] and the same operating system, although in some setups (i.e. using Open Source Cluster Application Resources (OSCAR)), different operating systems can be used on each computer, and/or different hardware.[2]

They are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.[3]

Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low-cost microprocessors, high speed networks, and software for high-performance distributed computing.[citation needed] They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world such as IBM’s Sequoia.[4]

Basic Concepts

The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations.

The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast local area network.[5] The activities of the computing nodes are orchestrated by “clustering middleware”, a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single system image concept.[5]

Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer to peer or grid computing which also use many nodes, but with a far more distributed nature.[5]

A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster which may be built with a few personal computers to produce a cost-effective alternative to traditional high performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer.[6] The developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a relatively low cost.[7]
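
For readers who have not seen message passing in practice, here is a minimal sketch of the style of program that libraries such as MPI enable, written with the mpi4py Python bindings (the choice of Python bindings is purely illustrative; any MPI implementation and language would do). Each process computes a small partial result and the process with rank 0 gathers them, which is the basic pattern behind cluster workloads like the one described above.

    # Minimal MPI sketch using mpi4py (assumes an MPI runtime and mpi4py are installed).
    # Launch with, for example: mpirun -np 4 python hello_cluster.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD      # communicator spanning every launched process
    rank = comm.Get_rank()     # this process's id within the job
    size = comm.Get_size()     # total number of processes in the job

    # Each node does a trivial piece of work...
    partial = rank * rank

    # ...and rank 0 collects every partial result.
    results = comm.gather(partial, root=0)
    if rank == 0:
        print(f"Gathered from {size} processes: {results}")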

Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. The TOP500 organization’s semiannual list of the 500 fastest supercomputers often includes many clusters, e.g. the world’s fastest machine in 2011 was the K computer which has a distributed memory, cluster architecture.[8][9]

History


[Image: VAX 11/780, c. 1977]

Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup.[10] Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl’s Law.

The history of early computer clusters is more or less directly tied into the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster.

The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s. This allowed up to four computers, each with either one or two processors, to be tightly coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation.

The first commercial loosely coupled clustering product was Datapoint Corporation’s “Attached Resource Computer” (ARC) system, developed in 1977, and using ARCnet as the cluster interface. Clustering per se did not really take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VAX/VMS operating system (now named as OpenVMS). The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem Himalayan (a circa 1994 high-availability product) and the IBM S/390 Parallel Sysplex (also circa 1994, primarily for business use).

Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use them within the same computer. Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976, and introduced internal parallelism via vector processing.[11] While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) relied on cluster architectures.

Attributes of Clusters

[Image: A load-balancing cluster with two servers and N user stations]

Computer clusters may be configured for different purposes ranging from general purpose business needs such as web-service support, to computation-intensive scientific calculations. In either case, the cluster may use a high-availability approach. Note that the attributes described below are not exclusive and a “computer cluster” may also use a high-availability approach, etc.

“Load-balancing” clusters are configurations in which cluster-nodes share computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized.[12] However, approaches to load-balancing may significantly differ among applications, e.g. a high-performance cluster used for scientific computations would balance load with different algorithms from a web-server cluster which may just use a simple round-robin method by assigning each new request to a different node.[12]
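
To make the round-robin idea concrete, the sketch below simply cycles incoming requests over a fixed pool of nodes; the node names and requests are made up for illustration, and a real web-server balancer would of course track node health and load as well.

    # Illustrative round-robin dispatcher (node names and requests are hypothetical).
    from itertools import cycle

    nodes = ["edge-01", "edge-02", "edge-03"]   # hypothetical cluster nodes
    rotation = cycle(nodes)                     # endless round-robin iterator

    def dispatch(request):
        """Assign each new request to the next node in the rotation."""
        node = next(rotation)
        print(f"request {request!r} -> {node}")
        return node

    for req in ["GET /", "GET /status", "POST /data", "GET /index"]:
        dispatch(req)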

Computer clusters are used for computation-intensive purposes, rather than handling IO-oriented operations such as web service or databases.[13] For instance, a computer cluster might support computational simulations of vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach “supercomputing“.

“High-availability clusters” (also known as failover clusters, or HA clusters) improve the availability of the cluster approach. They operate by having redundant nodes, which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure. There are commercial implementations of High-Availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.
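
The core idea behind failover is simple: a standby node watches a heartbeat from the active node and takes over when it goes silent. The toy monitor below illustrates only that idea; the timeout value and node roles are assumptions, and real HA stacks such as Linux-HA add fencing, quorum and resource management on top.

    # Toy failover monitor (illustrative only, not how Linux-HA is implemented).
    import time

    HEARTBEAT_TIMEOUT = 3.0   # seconds of silence before failover (assumed value)

    class FailoverMonitor:
        def __init__(self):
            self.last_heartbeat = time.monotonic()
            self.active = "primary"

        def heartbeat(self):
            """Called whenever the primary node reports it is alive."""
            self.last_heartbeat = time.monotonic()

        def check(self):
            """Promote the standby if the primary has gone silent."""
            silent_for = time.monotonic() - self.last_heartbeat
            if self.active == "primary" and silent_for > HEARTBEAT_TIMEOUT:
                self.active = "standby"
                print("primary silent -> failing over to standby")
            return self.active

    monitor = FailoverMonitor()
    monitor.heartbeat()        # primary reports in
    print(monitor.check())     # still "primary"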

Continue reading for more information on Wikipedia: https://en.wikipedia.org/wiki/Computer_cluster

Description of the “SuperBrain” Rhomb.io Cluster

The system has the following parts:

The EDGE Board

  • Core
  • Module x 2
  • Monitoring

The Backplane

  • Up to 10 x Edge interfaces for connectivity
  • Ethernet connectivity for stacking

The Hardware

Because the whole system is housed in a single unit, SuperBrain is a desktop cluster.

With the possibility to connect up to 10 x Edge boards, the system appears to the customer to be one single computer. But with up to 10 x physical processors, and depending on which core you choose, you can enjoy differing levels of performance.

The biggest differences from a traditional cluster are size and power consumption. It may initially seem that with SuperBrain you get fewer GFLOPS in one unit, but in reality you can stack units to reach the desired cluster size, giving you more than 5 times the power you currently enjoy. More importantly, you will spend less money to obtain that increased power.

The system is designed to allow you to hot-swap Edge boards, leading to quicker and more streamlined system repairs.

The Software

The system can run any Linux distribution, depending on the core you choose. With x86/x64 you can use any distribution, and with the ARM architecture you can use any distribution that supports armhf.
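
As a quick, purely illustrative check, the snippet below uses Python's standard library to report which architecture a given Edge board's core exposes, which tells you whether to look for a standard x86/x64 image or an armhf one. The wording of the messages is ours, not part of the product.

    # Report the CPU architecture of an Edge board core.
    import platform

    arch = platform.machine()   # e.g. "x86_64" on x86/x64 cores, "armv7l" or "aarch64" on ARM cores

    if arch in ("x86_64", "i686"):
        print(f"{arch}: any standard x86/x64 Linux distribution will do")
    elif arch.startswith("arm") or arch == "aarch64":
        print(f"{arch}: pick a distribution with armhf (or arm64) support")
    else:
        print(f"{arch}: unrecognised core, check your distribution's supported ports")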

We will provide support for a Beowulf cluster setup in our GitHub repository, ensuring it is configured and ready to use.
