Sharing the wealth of HPC applications with flexible as-a-service models

A growing number of IT shops are offering high-performance computing systems and the applications they support as on-demand services accessible through cloud connections.

In the past, high-performance computing shops functioned as dedicated resources for scientists and engineers whose work required the computing power of a supercomputer. That is no longer the case. In modern HPC shops, IT administrators act as service providers, serving many mainstream HPC users who need more processing power than they can get from a desktop or laptop system.

Take the case of the University of Michigan, where supercomputing specialists run a university-based supercomputing center like a business. They make the resources of their Great Lakes supercomputer available to approximately 2,500 users who run hundreds of different applications. Better still, the reach of these HPC resources, built on systems from Dell Technologies, extends into the broader community. For example, users of the system include Mcity, a public-private initiative that brings together industry, government and academia to advance transportation safety, sustainability and accessibility.

IT managers at the University of Florida are doing much the same with UF Innovate, the university's technology business incubator. Among other support functions, UF Innovate provides startups with access to a Dell Technologies supercomputer for high-performance computing, visualization and data analysis. This HPC system, known as HiPerGator, accelerates a wide range of research workloads with the power of more than 46,000 processor cores.

Bringing HPC to as many people as possible – via cloud connections

So how do today's HPC shops extend the benefits of HPC to thousands of users? Increasingly, the answer is a multi-cloud environment that gives users web-based access to a wide variety of HPC systems, both in on-premises private clouds and in public clouds.

The University of Michigan, for example, provides its university community with easy access to on-premises HPC clusters through its Open OnDemand program and to public cloud computing platforms through its ITS Cloud Services program.

The University of Florida does much the same through its version of the Open OnDemand service, which connects the academic community to the HiPerGator cluster to accelerate compute- and data-intensive science workloads. To work on HiPerGator, users log into the system from a local computer through an SSH terminal session or through web application interfaces provided by the UF Research Computing team.
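To make the access model concrete, a session like this can even be scripted. The minimal Python sketch below uses the paramiko SSH library to log into a cluster and submit a batch job; the hostname, username, job script and the assumption of a Slurm-style sbatch command are all illustrative placeholders, not UF-published details.

```python
import paramiko

# Connect to a cluster login node over SSH. The hostname and username
# here are illustrative placeholders, not real endpoints.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("login.example-hpc.edu", username="your_user")

# Submit a batch job. This assumes a Slurm-style scheduler with an
# `sbatch` command, a common setup on academic clusters.
stdin, stdout, stderr = client.exec_command("sbatch my_job.sh")
print(stdout.read().decode().strip())  # e.g. "Submitted batch job 123456"

client.close()
```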

Meanwhile, on the West Coast, the San Diego Supercomputer Center (SDSC) at the University of California San Diego is offering its cloud-based OnDemand system to users who need immediate access to a supercomputer for event-driven science. Urgent applications likely to use the OnDemand system range from making movies of earthquakes to providing near-real-time warnings based on predicted trajectories of tornadoes, hurricanes and toxic plumes.

Automate processes and educate users

This brings us to another question: with such a variety of jobs coming in from all over, how do HPC shops steer each one toward the right infrastructure? In short: automation.

SDSC, for example, automates the process with software that automatically determines where a job will run, matching the right job with the right IT resources. To enable this process with its Dell Technologies Expanse supercomputer, SDSC is pioneering composable HPC systems that dynamically allocate resources to suit individual workloads.
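To illustrate the idea, here is a minimal, purely illustrative sketch of the kind of matching logic such software performs. The resource classes, job attributes and first-fit selection rule are invented for the example and do not describe SDSC's actual scheduler.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    needs_gpu: bool
    mem_gb: int   # memory required by the job
    cores: int    # cores required by the job

@dataclass
class Partition:
    name: str
    has_gpu: bool
    mem_gb: int   # memory available per node
    cores: int    # cores available per node

# Hypothetical partitions standing in for a composable HPC system.
PARTITIONS = [
    Partition("cpu-standard", has_gpu=False, mem_gb=256, cores=128),
    Partition("large-memory", has_gpu=False, mem_gb=2048, cores=128),
    Partition("gpu", has_gpu=True, mem_gb=512, cores=64),
]

def route(job: Job) -> Partition:
    """Pick the first partition that satisfies the job's requirements."""
    for p in PARTITIONS:
        if job.needs_gpu and not p.has_gpu:
            continue
        if job.mem_gb <= p.mem_gb and job.cores <= p.cores:
            return p
    raise RuntimeError(f"no partition fits job {job.name!r}")

# A memory-hungry simulation lands on the large-memory partition.
print(route(Job("md-simulation", needs_gpu=False, mem_gb=1024, cores=64)).name)
```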

In addition to automating access to the right supercomputing resources, HPC shops place a strong focus on education and training to help their users get the most out of HPC clusters. This includes step-by-step online instructions, expert guidance, and advice from people who have been there before. They all have teams to help users optimize their code, their infrastructure and their results.

This is the case at the University of Michigan, where the Advanced Research Computing – Technology Services team offers workshops on topics such as GPU programming, writing machine learning code, choosing machine learning approaches, and creating and training deep learning models.

Everything is an HPC workload

Research computing is all about working together to make the next big scientific discovery or technological innovation, and that research depends on high-performance computing systems. Whether it is unraveling the secrets of a deadly virus like SARS-CoV-2 or simulating the aftermath of an earthquake, HPC is now an essential tool for scientific research.

HPC is also an essential tool for newer applications in data analytics, artificial intelligence, machine learning and deep learning. For these and other applications, HPC technologies serve as the engine under the hood, helping turn raw data into valuable information. To this end, HPC systems provide large amounts of memory for applications such as image recognition, visualization and molecular dynamics simulations. They offer GPUs for training deep learning workloads and CPUs for machine learning and inference jobs that run well on HPC systems.
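As a small illustration of this division of labor, the PyTorch sketch below trains a toy model on a GPU when one is available and then runs inference on CPU cores. The model and data are synthetic placeholders chosen only to keep the example self-contained.

```python
import torch

# Train on a GPU when one is available; fall back to CPU otherwise.
train_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(train_device)  # toy stand-in for a real model
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(64, 128, device=train_device)       # synthetic training batch
y = torch.randint(0, 10, (64,), device=train_device)

# One training step on the accelerator.
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()

# Inference often runs perfectly well on CPU cores.
model_cpu = model.to("cpu").eval()
with torch.no_grad():
    preds = model_cpu(torch.randn(1, 128)).argmax(dim=1)
print(preds)
```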

And increasingly, all of this is available through as-a-service approaches that integrate elements such as data pipeline tools for ingesting and processing data from a variety of sources, systems tailored to computations that require large amounts of physical memory, and storage clusters holding petabytes of data for solving scientific problems. You name it, and you can probably get it as a service, through a cloud interface.

Gain practical experience

To reduce the risk associated with new technology investments and improve the speed of implementation, Dell Technologies invites customers to experience HPC solutions firsthand in a global network of dedicated facilities. These Customer Solution Centers are trusted environments where world-class IT experts collaborate to share best practices, facilitate discussions of effective business strategies, and use briefings, workshops, and proofs of concept to accelerate IT initiatives.

Other Dell Technologies resources available to support modern IT initiatives include the HPC and AI Innovation Lab, HPC and AI Centers of Excellence, and the Dell Technologies HPC community.

For a closer look at the HPC environments of the institutions discussed here, check out these assets:

