EVOC University HPC Solution
As computing becomes central to more and more university projects, especially in technical research fields, a single computing device can no longer meet the performance requirements of many workloads, and demand for high-performance computing clusters in universities has grown increasingly strong.
At present, universities generally build a high-performance computing cluster at the college or school level, with multiple project teams sharing its resources. Because these teams work in different research directions and fields, their requirements for cluster resources differ, so university high-performance computing clusters generally have the following characteristics:
Diverse application types
Universities have many departments, and those with the greatest need for high-performance computing resources work in fields such as physics and chemistry, bioinformatics, 3D design, and artificial intelligence. The resources required for research in these fields vary widely.
High network bandwidth
At present, most computing software supports MPI parallelism and can compute across nodes. Programs built on different algorithms exhibit different parallel scalability, and the latency and bandwidth of the computing network have a significant impact on a program's parallel speedup and scalability. Both matter: workloads that exchange many small packets are especially sensitive to network latency, while large-packet exchanges demand high bandwidth.
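To make the latency point concrete, here is a minimal MPI ping-pong sketch in C (illustrative only; the solution does not prescribe a specific benchmark). Two ranks bounce a small message back and forth, so the measured time is dominated by network latency rather than bandwidth:

```c
/* Minimal MPI ping-pong latency sketch (illustrative, not a product benchmark).
 * Rank 0 and rank 1 exchange a small message repeatedly; with an 8-byte
 * payload, the round-trip time is dominated by network latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {                       /* this sketch needs exactly two active ranks */
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    const int iters = 1000;
    char buf[8] = {0};                    /* small 8-byte payload */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0)                        /* average one-way latency over all round trips */
        printf("avg one-way latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);
    MPI_Finalize();
    return 0;
}
```

Built with mpicc and launched across two nodes, this reports the average one-way small-message latency, the figure that low-latency interconnects such as HDR InfiniBand are designed to minimize.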
High-concurrency read/write storage
High-performance computing clusters require a globally shared storage system. As the cluster grows, many computing nodes access the I/O nodes concurrently over the network, competing for the I/O nodes' egress bandwidth; an I/O node that receives too many requests becomes heavily loaded, I/O blocks, and a bottleneck forms. In the physical chemistry field in particular, some computing software places heavy pressure on storage I/O.
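As a hedged sketch of the access pattern just described, the following C program has every MPI rank write its own disjoint block of one shared file using collective MPI-IO; the file name and block size are illustrative assumptions, not part of the solution. Collective writes let the MPI-IO layer aggregate requests before they reach the I/O nodes, which is exactly the concurrent load a parallel file system must absorb:

```c
/* Hedged sketch: many ranks writing disjoint blocks of one shared file via
 * collective MPI-IO. File name and block size are illustrative assumptions. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const MPI_Offset block = 1 << 20;      /* 1 MiB per rank (illustrative) */
    char *buf = malloc(block);
    for (MPI_Offset i = 0; i < block; i++) buf[i] = (char)rank;

    MPI_File fh;
    /* All ranks open the same file on the shared parallel file system. */
    MPI_File_open(MPI_COMM_WORLD, "shared_output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    /* Collective write: each rank targets its own non-overlapping offset,
     * letting MPI-IO aggregate requests before they hit the I/O nodes
     * instead of issuing many small independent writes. */
    MPI_File_write_at_all(fh, rank * block, buf, (int)block, MPI_CHAR,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```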
At present, EVOC provides the complete hardware architecture and resources that university users need to build high-performance computing platforms.
The hardware of a high-performance computing cluster consists of four parts: computing resources, storage resources, the network, and login and management.
Computing resources
CPU computing nodes: Adopting the EVOC 2U Whitley-platform server, a single node supports up to 80 cores, and eight memory channels per CPU comfortably meet the needs of memory-intensive applications (see the bandwidth sketch after this list), efficiently unleashing computing performance.
Heterogeneous nodes: Adopting EVOC GPU servers, a single unit supports up to 10 full-height, full-length GPU cards, providing powerful GPU computing resources to accelerate artificial intelligence training and other GPU-enabled applications.
Fat nodes: Adopting the EVOC 4U server with support for up to 4 CPUs, providing large-capacity memory for applications that are not suited to cross-node computation and demand high single-machine computing performance.
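As referenced in the CPU node description above, here is a minimal STREAM-triad-style sketch in C (an illustration, not the official STREAM benchmark) of why memory channels matter: the loop streams three large arrays through memory, so its speed is set by sustained memory bandwidth rather than by arithmetic.

```c
/* Minimal STREAM-triad-style memory-bandwidth sketch (illustrative only; not
 * the official STREAM benchmark). Memory-intensive codes spend their time in
 * loops like this, so sustained bandwidth -- and hence the number of populated
 * memory channels -- limits performance. Compile with -O2 -fopenmp. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 25)   /* 32M doubles per array (~256 MiB each), illustrative size */

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));

    #pragma omp parallel for   /* parallel first-touch spreads pages across NUMA nodes */
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    #pragma omp parallel for   /* all cores together are needed to saturate the channels */
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];          /* triad: three arrays streamed per iteration */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("triad bandwidth: %.1f GB/s\n", 3.0 * N * sizeof(double) / sec / 1e9);
    free(a); free(b); free(c);
    return 0;
}
```

On a bandwidth-bound loop like this, the reported GB/s tracks the node's populated memory channels far more closely than its core clock.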
Storage resources
Storage uses EVOC's new-generation distributed cluster storage system, whose scale-out architecture meets the requirements of an efficient file storage system and data sharing. Built on a cluster of storage servers, it provides a unified file namespace, high-performance file storage, and stable, efficient read/write bandwidth. It is elastically expandable, highly reliable, and easy to deploy, manage, and use, and it protects data security through replication.
Network resources
Computing and storage network: Mainstream HDR InfiniBand networking is used, with all nodes interconnected at full HDR line rate; the low latency and high bandwidth enable high-speed data transfer and keep the network from becoming a system performance bottleneck;
Management network: Gigabit Ethernet serves as the login and management network, meeting the needs of users accessing system resources;
Monitoring network: Gigabit Ethernet provides unified monitoring of every node, allowing faults to be located promptly and node status to be tracked.
Login management
Login nodes: EVOC 2U rack-mounted servers serve as cluster login nodes, providing users with access to the high-performance computing system and cluster resources;
Management nodes: EVOC 2U rack-mounted servers serve as cluster management nodes, running the cluster management software, the scheduling system, and the other services the cluster requires.
Powerful performance
Based on Intel's latest Whitley architecture and optimally configured, the system maximizes the performance of every device and provides the resources that diverse applications need, meeting the computing requirements of university users.
High scalability
Computing resources, storage resources, login and management nodes, and the network can all continue to scale out as customer needs and cluster size grow, sparing university users the worry of difficult resource expansion later on.
Efficient management
Cluster management and job scheduling software integrate all the resources in the system and manage every node uniformly; faults can be located promptly through the management platform, simplifying operations and maintenance.
Safe and reliable
The storage system uses a parallel distributed architecture in which multiple nodes process data concurrently, and data is protected in the form of replicas to ensure its safety and availability.