Saudi HPC/AI Conference 2022:
Using HPC & AI to accelerate and improve medical research
(September 27-29, 2022)
HPC EMEA Director, Intel
HPC & AI with Intel in the New Era of Supercomputing
HPC, AI, and analytics users ask more of their HPC-AI systems than ever before. High Performance Computing is the foundation of research and discovery, and Artificial Intelligence is adding to it. Intel’s deep investments in developer ecosystems, tools, technology and an open platform are clearing the path forward to scale artificial intelligence everywhere. Intel has made AI more accessible and scalable for developers through extensive optimizations of popular libraries and frameworks on Intel® Xeon® Scalable processors. Intel’s investment in multiple AI architectures to meet diverse customer requirements, using an open standards-based programming model, makes it easier for developers to run more AI workloads in more use cases. Let’s look at Intel’s HPC-AI strategy and new innovations, including the latest Intel® Xeon® Scalable processors, data center GPUs and powerful software tools. Together, let’s accelerate the next era of innovation in HPC-AI.
Associate Research Scientist, King Abdullah International Medical Research Center
Application of Artificial Intelligence in Cytogenetics
An artificial intelligence approach to semi-automate detection of structural chromosomal abnormalities
Director, The Cambridge Centre for Data-Driven Discovery, UK
Mohammed S. Alarawi
Research Specialist, KAUST
The current status of biomedical/biological research in terms of HPC usage and presence
The volume of data generated from biological sources has increased massively. Since the introduction of high-throughput sequencing, imaging, and screening platforms, the rate of digitizing biology has pushed computational resources to new limits in compute, storage, and data transfer. Secondary use of biological data increases the return on research funding, and the number of algorithms and tools developed to analyze biological data is growing rapidly. With major databases doubling every 12-18 months, raw data alone, not counting intermediate analysis results, is projected to reach the zettabyte scale in the near future. This makes pooling resources and developing a strategy for best-practice use of data and resources essential. Biological/biomedical research within Saudi Arabia needs to focus on fair use of data and fair access to HPC resources to further the goal of improving human life by answering fundamental research questions.
AI Business Development Manager, Hewlett Packard Enterprise
Cray AI Development Software Environment for HPE SUPERCOMPUTING
Cray AI Development Environment is a machine learning training platform that makes building machine learning models fast and easy. The software platform enables Machine Learning Engineers and researchers to:
- Train models faster using state-of-the-art distributed training: the platform handles machine provisioning, network setup, communication optimization, efficient distributed data loading, and fault tolerance.
- Automatically find high-quality models with advanced hyperparameter tuning: including state-of-the-art algorithms developed by the creators of Hyperband and ASHA.
- Efficiently utilize different accelerators (e.g. GPUs): with intelligent and configurable resource management.
- Track, reproduce, and collaborate on experiments: with automatic experiment tracking that works out-of-the-box, covering code versions, metrics, checkpoints, and hyperparameters.
As an end-to-end training platform, the system integrates these features into an easy-to-use, high-performance Machine Learning and Deep Learning environment that can be deployed on bare metal, Kubernetes, or the cloud, supporting the largest providers such as AWS, Azure, and GCP.
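The hyperparameter-tuning bullet above references Hyperband and ASHA; their shared core idea, successive halving, can be sketched in a few lines. This is an illustrative toy (the objective function, budget schedule, and names are hypothetical), not the Cray AI Development Environment API:

```python
import random

def successive_halving(configs, train_eval, min_budget=1, eta=3):
    """Evaluate all surviving configs at a growing training budget,
    keeping only the best 1/eta fraction each round (Hyperband's core step)."""
    budget = min_budget
    while len(configs) > 1:
        # Evaluate every surviving config at the current budget.
        scores = [(train_eval(cfg, budget), cfg) for cfg in configs]
        scores.sort(key=lambda s: s[0])  # lower loss is better
        # Promote only the best 1/eta fraction to the next round.
        configs = [cfg for _, cfg in scores[:max(1, len(scores) // eta)]]
        budget *= eta
    return configs[0]

# Toy objective: loss shrinks with budget, offset by distance of lr from 0.1.
def toy_train_eval(cfg, budget):
    return abs(cfg["lr"] - 0.1) + 1.0 / budget

random.seed(0)
candidates = [{"lr": random.uniform(0.001, 1.0)} for _ in range(27)]
best = successive_halving(candidates, toy_train_eval)
print(best)
```

ASHA extends this scheme by promoting configurations asynchronously, so fast workers never wait for stragglers; the synchronous loop above is the simplest form of the idea.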
Chief Technology Officer Unstructured Data Solutions @ EMC, Dell
AI & HPC in Healthcare
Field CTO, VAST Data
Addressing the Exascale storage challenge
VAST Data’s managed storage software unlocks the value of data and modernizes datacentres in preparation for the era of AI computing. VAST delivers real-time performance for all data and overcomes the historic cost barriers to building all-flash datacentres. Since its launch in February 2019, VAST has become the fastest-selling infrastructure startup in history. Join Sven Breuner during this session to learn more.
Muneera M. Almuhaidib
Computer Operating System Specialist, Saudi Aramco
HPC Cybersecurity benchmark
This presentation shares the outcomes of a recent Saudi Aramco research project on HPC cybersecurity posture. The main objective was to benchmark the security practices of other major HPC centers. The presentation covers the research problem, objectives, survey, benchmarking, and feasible ways to enhance the ECC HPC security.
Senior Director, Strategic Relationships, Enterprise Computing, Altair Engineering, France
Multi-dimensional HPC with Altair: A deep-dive into the Convergence of HPC and AI
High performance computing (HPC) and artificial intelligence (AI) are converging, which requires administrators to manage both workloads together in an unsiloed environment. This presentation will illustrate how Altair® PBS Professional®, the industry’s leading job scheduling and workload management solution, together with Altair HPC tools, can be used as a single scheduler for both HPC and Kubernetes. We will also explore integration with the most important AI tools.
Cloud Architect, Saudi Aramco
Cloud-native HPC use case
This presentation will cover the available cloud-native HPC services and how they can be utilized, showcasing a smart, modern way to run HPC applications securely and cost-effectively.
We will introduce the scope and extent of the current state of HPC services in the cloud and how they provide the building blocks needed to construct the infrastructure and services for HPC workloads.
The result: innovating without infrastructure constraints, improving security and operational posture, and enabling advanced workflows.
Parallel Programming Software Engineer, Intel
Leveraging DAOS Storage System for Seismic Data Storage and Manipulation
The DAOS seismic graph is introduced to the seismic community, utilizing the evolving DAOS technology to solve some of the seismic I/O bottlenecks caused by the SEG-Y data format. It leverages graph theory, in addition to DAOS object-based storage, to design and implement a new seismic data format natively on top of the DAOS storage model, in order to accelerate data access, provide in-storage compute capabilities to process data in place, and remove the serial SEG-Y file constraints. The DAOS seismic graph API is built on top of the DAOS file system (dfs); seismic data is accessed and manipulated through this API after accessing the root seismic dfs object. The mapping layer uses graph theory and object storage to separate the acquisition geometry, represented by the trace headers, from the time-series data samples.
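The separation of trace headers from time-series samples described above can be illustrated with a small in-memory sketch. Everything here (the class, method names, and header fields) is hypothetical and stands in for the actual DAOS object model:

```python
# Hypothetical in-memory sketch of the idea (not the DAOS seismic graph API):
# trace headers are indexed in a graph-like structure keyed by acquisition
# geometry, while time-series samples are stored separately by trace id.
from collections import defaultdict

class SeismicStore:
    def __init__(self):
        self.samples = {}                 # trace_id -> list of samples
        self.gathers = defaultdict(list)  # (header field, value) -> [trace_id]

    def put_trace(self, trace_id, header, samples):
        self.samples[trace_id] = samples
        # Index the trace under each header field so gathers can be
        # traversed directly, without scanning a serial SEG-Y file.
        for field, value in header.items():
            self.gathers[(field, value)].append(trace_id)

    def gather(self, field, value):
        """Return all traces sharing a header value (e.g. one shot gather)."""
        return [self.samples[t] for t in self.gathers[(field, value)]]

store = SeismicStore()
store.put_trace(0, {"shot": 1, "cdp": 10}, [0.1, 0.2])
store.put_trace(1, {"shot": 1, "cdp": 11}, [0.3, 0.4])
store.put_trace(2, {"shot": 2, "cdp": 10}, [0.5, 0.6])
print(len(store.gather("shot", 1)))  # number of traces in shot gather 1
```

Because the header index is decoupled from the sample objects, the same data can be regrouped by shot, CDP, or any other header field without rewriting the samples, which is the constraint a serial SEG-Y file imposes.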
EMEA Director, Data Centric Workloads Specialists, Dell
Data Analytics & AI in HPC
Data is growing, AI is everywhere, and HPC is converging with every emerging and disruptive technology you hear about. You know this, so this session will focus on why this is happening and how you can accelerate your journey into the next generation of HPC. The session will spotlight how the complex data-management process integrates into a modern HPC environment. We look ahead to the next generation of HPC environments, where data gathered at the edge is processed using AI and flows through a distributed HPC. AI and data analytics in HPC span hybrid clouds and innovative on-premises, cloud-enabled HPC services. The future for HPC is amazing and the potential is huge. Join this session to get a closer look at the how, as well as the why, of Data Analytics & AI in HPC.
Petroleum Engineering Systems Analyst IV, Saudi Aramco
Simulation Runtime Optimization via Auto-Tuning of Numerical Tolerances
The presentation will give an overview of Saudi Aramco efforts to optimize the runtime of numerical reservoir simulators. These efforts focused on optimizing the reservoir simulation model solver tolerances, global source code optimizations (e.g. complex well modeling, domain decomposition, MPI communication reduction), and HPC environment tuning. The presentation will shed light on a new, innovative approach to determining the optimal numerical solver tolerances by analyzing various parameters (e.g. pressure and saturation changes, material balance errors). This approach has the potential to speed up simulation runtime by up to 60%, improving turnaround and allowing more simulation runs to be accommodated to address business requirements.
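The tolerance auto-tuning idea can be sketched as a simple search loop: relax the solver tolerance step by step and keep the loosest setting whose material-balance error remains acceptable. The simulator below is a stub with made-up cost and error models, not Saudi Aramco's actual workflow:

```python
# Hypothetical sketch of solver-tolerance auto-tuning. A looser tolerance
# means fewer solver iterations (faster runs) but a larger material-balance
# error; we want the fastest run that still satisfies the error budget.
def run_simulation(tolerance):
    # Stand-in for a reservoir-simulation run: runtime falls and the
    # material-balance error grows as the tolerance is relaxed.
    runtime = 100.0 * (1e-6 / tolerance) ** 0.5
    mb_error = tolerance * 1e3
    return runtime, mb_error

def tune_tolerance(tolerances, max_mb_error):
    best = None
    for tol in sorted(tolerances):              # tightest first
        runtime, mb_error = run_simulation(tol)
        if mb_error <= max_mb_error:
            best = (tol, runtime)               # loosest acceptable so far
    return best

tol, runtime = tune_tolerance([1e-6, 1e-5, 1e-4, 1e-3], max_mb_error=0.1)
print(tol, round(runtime, 1))
```

In practice the acceptance check would inspect the quantities the abstract names (pressure and saturation changes, material-balance errors) from real simulation output rather than a closed-form stub.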
Petroleum Engineer System Analyst, Saudi Aramco
Leveraging Artificial Intelligence to Optimize Reservoir Simulation HPC Environment
This presentation will give an overview of several AI algorithms that have been developed in-house to optimize the utilization of the reservoir simulation HPC compute resources. This development capitalizes on Deep Learning and Big Data Mining to accurately predict GigaPOWERS jobs’ resource requirements (e.g. cores, memory, and runtime). This is accomplished by predicting the optimal number of cores and memory requirements while maintaining an optimized runtime and ensuring maximum scalability. This effort helped to optimize the utilization of compute resources and significantly improve reservoir simulation KPIs (e.g. Job Wait Time, HPC effectiveness, etc.).
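As a much-simplified stand-in for the deep-learning models described, a one-feature least-squares fit shows the shape of the resource-prediction task; the feature (model cell count), the data, and the resulting coefficients are all hypothetical:

```python
# Illustrative sketch only: a one-feature least-squares predictor mapping a
# job descriptor (here, model cell count) to a resource estimate (memory).
# The real in-house models use deep learning over many job features.
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# (million cells, GB of memory) observed on hypothetical past jobs
history = [(10, 52), (20, 101), (40, 205), (80, 398)]
slope, intercept = fit_linear([c for c, _ in history], [m for _, m in history])

def predict_memory_gb(million_cells):
    return slope * million_cells + intercept

print(round(predict_memory_gb(60)))  # estimated GB for a 60M-cell model
```

A scheduler can feed such predictions into job submission so cores and memory are requested to fit, which is how prediction translates into the improved wait-time and effectiveness KPIs the abstract mentions.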
Geophysicist IV, Saudi Aramco
Leveraging High Performance Computing for Big Data Processing
Datasets such as 3D seismic datasets are typically enormous and are therefore computationally expensive to generate seismic attributes on. They may also contain noise, which can degrade the results of interpretation algorithms and computed seismic attributes. As a result, powerful filtering algorithms such as Non-Local Means (NLM), are required to produce noise-reduced and structurally-preserved results. Such powerful algorithms are computationally intensive for large seismic datasets and would therefore benefit significantly from hardware acceleration.
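A minimal 1-D non-local means filter conveys the core of the NLM idea: each sample is replaced by a patch-similarity-weighted average of its neighbors. Production seismic NLM operates on 2-D/3-D volumes, which is what makes it a prime candidate for the hardware acceleration mentioned above; the parameter names and values here are illustrative:

```python
import math

def nlm_1d(signal, patch=1, search=3, h=0.5):
    """Minimal 1-D non-local means: each output sample is a weighted
    average of nearby samples, weighted by how similar their surrounding
    patches are. Illustrative only; real seismic NLM is 2-D/3-D."""
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # Squared patch distance between neighborhoods of i and j
            # (indices clamped at the signal edges).
            d = 0.0
            for k in range(-patch, patch + 1):
                ii = min(max(i + k, 0), n - 1)
                jj = min(max(j + k, 0), n - 1)
                d += (signal[ii] - signal[jj]) ** 2
            w = math.exp(-d / (h * h))  # similar patches get higher weight
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

noisy = [0.0, 0.1, -0.1, 1.0, 0.9, 1.1, 0.0, -0.1, 0.1]
print([round(v, 2) for v in nlm_1d(noisy)])
```

The triple loop (every sample, every search position, every patch offset) is why NLM is so computationally intensive on large 3-D volumes, and why it parallelizes well on accelerators: each output sample is independent.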
Dhabaleswar K. (DK) Panda
Professor and University Distinguished Scholar, The Ohio State University
High-Performance Deep Learning, Machine Learning, and Data Science on Modern HPC Systems
This talk will start with an overview of challenges being faced by the AI community to achieve high-performance Deep Learning (DL), Machine Learning (ML), and Data Science on modern HPC systems with both scale-up and scale-out strategies. Next, we will focus on a range of solutions to address these challenges: 1) MPI-driven Deep Learning on CPU and GPU-based systems, 2) Out-of-core DNN training and exploiting Hybrid (Data and Model) parallelism for training large models and data, 3) High-performance MPI Runtime for cuML to support GPU-accelerated ML applications, and 4) High-Performance Dask for supporting data science applications. Case studies to accelerate DL, ML, and data science applications on modern HPC systems will be presented.
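The MPI-driven data-parallel pattern in item 1 can be sketched as: each rank computes gradients on its data shard, an allreduce averages them, and every rank applies the identical update. The "ranks" below are simulated in-process; a real system would use an MPI library:

```python
# Minimal sketch of data-parallel training with an averaging allreduce.
# Two simulated ranks fit y = w*x; a real deployment would replace
# allreduce_mean with an MPI allreduce across processes.
def local_gradients(shard, weight):
    # Gradient of mean squared error for y = w*x on this rank's shard.
    return sum(2 * x * (weight * x - y) for x, y in shard) / len(shard)

def allreduce_mean(values):
    # Stand-in for a sum-allreduce followed by division by the rank count.
    return sum(values) / len(values)

shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]  # data: y = 2x
weight = 0.0
for step in range(200):
    grads = [local_gradients(s, weight) for s in shards]  # per-rank work
    weight -= 0.01 * allreduce_mean(grads)                # synchronized update
print(round(weight, 3))
```

Because every rank applies the same averaged gradient, all replicas stay in lockstep; the talk's focus is on making that allreduce fast at scale on CPU and GPU clusters.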
Dr. Nofe Ateq Alganmi
Assistant Professor, King Abdulaziz University
Increasing Diagnostic rate in Clinical Genomics Variant Interpretation using Aziz Supercomputer
With the current knowledge of NGS (Next Generation Sequencing), its medical uses, and the relevant progress in information technology (such as high-performance computing), it is possible to imagine the near-future vision of ubiquitous medical software systems that will not only continuously support the “bench-to-bedside” transition but will also be available in custom toolboxes for all phases of diagnosis and treatment.
In this talk, promising results, and best practice in using King Abdulaziz university supercomputer (AZIZ) to apply genetics medicine in clinics will be presented.
Muataz Al Barwani
Senior Director, Center for Research Computing, New York University Abu Dhabi, Abu Dhabi, UAE
Research Computing @ NYUAD
Research computing has historically been the purview of a few fields within engineering and the applied sciences, with a focus on access to and use of High-Performance Computing (HPC) systems. More recently, however, other disciplines such as the social sciences and humanities have ventured into data-intensive research, which requires additional resources and support.
To cater for this expansion and growth, universities should not only grow their computing and data-storage resources but also introduce new services such as consulting and professional services, application development, and data science services, including analytics, visualization, big data, data management, and the use of artificial intelligence (AI) techniques such as machine learning, natural language processing, and computer vision.
This talk will provide insight into the Center for Research Computing at New York University Abu Dhabi (NYUAD): the infrastructure, applications, tools, governance, staff, and skills needed to manage and support all computational and data-intensive research activities carried out at NYUAD.
Dr. Ben Bennett
Director, HPC & AI Strategic Programs
Hewlett Packard Enterprise
SUPERCOMPUTING FOR THE EXASCALE COMPUTING ERA
Exascale computing may seem a long way off for the majority of high-performance computing users, but the resources HPE has invested to stand up these flagship problem-solving supercomputers have benefits for industrial and commercial deployments. See how the work that creates tomorrow’s supercomputers is relevant to all users of high performance computing, today.
DIRECTOR SUPERCOMPUTING CORE LAB, KAUST
HPC/AI Service at KAUST
An overview of the HPC/AI service at KAUST will be given, covering infrastructure, applications, and collaboration.