Ahmad Alabduljabbar,
Intel Corporation
Saudi Arabia Country Manager
Ahmad Alabduljabbar has over 20 years of experience in IT, banking and technology management. His background spans sales and marketing, business development, product development, customer management, process optimization and customer relationship management, with experience in IT, HPC, Big Data, Islamic and conventional banking products, and enterprise business segments.
Ahmad has held leadership and management positions with many world-class organizations and affiliates, such as the Olayan Group, City Bank, HSBC and Credit Agricole. Prior to joining Intel in 2008 he was Regional e-Banking Sales Manager at Banque Saudi Fransi, and he currently serves as Saudi Arabia Country Manager at Intel Corporation.
Ahmad has undertaken business development for many government and private sector entities as well as universities and enterprises. He holds an MBA focused on Big Data Marketing from the University of Northampton in the United Kingdom.
Philip A. Murphy Jr.
Co-Founder, Chief Executive Officer
As CEO of Cornelis Networks, Phil is responsible for the overall management and strategic direction of the company. Prior to co-founding Cornelis Networks, Phil served as a director at Intel Corporation, responsible for fabric platform planning and architecture, product positioning, and business development support. Prior to that role, Phil served as vice president of engineering and vice president of HPC technology within QLogic’s Network Solutions Group, responsible for the design, development, and evangelizing of all high-performance computing products, as well as all storage area network switching products. Before joining QLogic, Phil was vice president of engineering at SilverStorm Technologies, which he co-founded in 2000 and which was acquired by QLogic in 2006. Prior to co-founding SilverStorm, Phil served as director of engineering at Unisys Corporation and was responsible for all I/O development across the company’s diverse product lines. Phil holds a BS in Mathematics from St. Joseph’s University and an MS in Computer and Information Science from the University of Pennsylvania.
Title: Purpose Built High-Performance Fabrics for HPC/HPDA/AI
Cornelis Networks, a recent spinout of Intel, is excited to participate in the 10th SaudiHPC symposium. The session by our CEO, Phil Murphy, will highlight the strong foundation the company is built on, joint successes with OEM partners and end users alike, and the open standards innovation, including OFI and next-generation fabric solutions, that Cornelis will bring to the HPC and AI communities.
Jysoo Lee is the Facilities Director of the Research Computing Core Labs at KAUST, responsible for KAUST’s supercomputing, cluster computing, and visualization services. Prior to this role, Lee was director of the Supercomputing Center at KISTI (Korea Institute of Science and Technology Information) from 2004 to 2006 and from 2009 to 2012, and Founding Director General of NISN (National Institute of Supercomputing and Networking) in Korea from 2013 to 2014.
Lee led numerous national initiatives, such as the Korean National Grid Project (K*Grid) and the Korean National e-Science Project, and has been involved in international organizations such as the OGF (Open Grid Forum), PRAGMA (Pacific Rim Application and Grid Middleware Assembly), and GLORIAD (Global Ring Network for Advanced Applications Development). He was the Chief Professor for the Grid and Supercomputing Program of Korea’s University of Science and Technology.
He received a B.S. from Seoul National University in Korea and a Ph.D. from Boston University, both in physics. He was a visiting scholar at the Jülich Supercomputing Centre in Germany and a visiting professor at the University of California, San Diego.
Faisal Nazir is a Machine Learning Specialist Solutions Architect at Amazon Web Services. Faisal has over 20 years of experience in the IT industry, working on projects in Saudi Arabia, the UAE, Kuwait and Oman, as well as in Europe and Asia. He holds a Master’s in Physics from Imperial College London, where his master’s dissertation was on the Consistent Histories interpretation of Quantum Mechanics.
Valerio’s Bio: AI Lead & Solution Architect for Lenovo, Valerio is a key member of an expert team of Artificial Intelligence, Machine Learning and Deep Learning specialists operating within the EMEA field sales organization and its business development team. He is a recognized expert in the fields of neuroscience and neurophysiology, with a 10-year track record in brain research conducted in Italy and the USA.
An engineer working in research and development of Numerical Weather Prediction (NWP) applications at the National Center for Meteorology and a member of the NWP team. The NWP team at the NCM is responsible for simulating the atmosphere and providing accurate and timely products to the forecasters at the Central Forecasting Department and other beneficiaries, work that sits at the crossroads of atmospheric science, numerical methods and software development.
His background is in Chemical Engineering: he received his BSc from King Abdulaziz University and his MRes from Bolton, UK.
Khalid Al-Garni is a software developer in the Exploration Application Services Department at Saudi Aramco, supporting multiple seismic processing applications. Khalid has over twenty years of application development and support experience at Saudi Aramco. He received his BS degree in Computer Science from King Abdulaziz University, and his MS degree in High Performance Computing from Bradford University (UK). Khalid has presented research at multiple international conferences, e.g. SPE and GEO.
Silvio Giancola is a Research Scientist at King Abdullah University of Science and Technology (KAUST), working under the supervision of Prof. Bernard Ghanem in the Image and Video Understanding Laboratory (IVUL), part of the Visual Computing Center (VCC). He obtained his MSc from INSA Strasbourg, France and his PhD from Politecnico di Milano, Italy.
Muataz Al-Barwani is the Senior Director of the Center for Research Computing at New York University Abu Dhabi (NYUAD). He joined NYUAD in February 2012 to establish the center for High Performance Computing (HPC).
Before coming to NYU Abu Dhabi, Muataz established the HPC facility at Sultan Qaboos University in Muscat, Oman, becoming its inaugural manager in 2008, in addition to being a full-time faculty member and researcher in the area of computational physics.
He received his bachelor’s degree in Physics from Sultan Qaboos University (Oman) in 1992, a master’s degree in Physics from Brown University (USA) in 1995 and a Ph.D. in Theoretical Physics from the University of Bristol (UK) in 2000.
Mansoor Hanif is the Executive Director of Engineering in the Technology & Digital sector at NEOM, where he oversees NEOM’s initiatives on emerging technologies such as space, satellites, advanced robotics and human-machine interfaces. He also leads the development of the NEOM Digital Masterplan and research collaborations with leading global universities and think tanks.
Previously, Mansoor led the design and implementation of NEOM’s fixed, mobile, satellite and subsea networks.
An industry leader, Mansoor has over 25 years of experience in planning, building, optimising and operating mobile networks around the world. He is patron of the Institute of Telecommunications Professionals (ITP), a member of the Steering Board of the UK5G Innovation Network, and on the Advisory Boards of the Satellite Applications Catapult and University College London (UCL) Electrical and Electronic Engineering Dept.
Prior to joining NEOM, Mansoor was Chief Technology Officer of Ofcom, the UK telecoms and media regulator, where he oversaw the security and resilience of the nation’s networks.
As Director of the Converged Networks Research Lab at BT, he led research into fixed and mobile networks to drive convergence across research initiatives.
Mansoor held several roles at EE, a UK-based telecommunications company, and was responsible for the technical launch of 4G and integration of the Orange and T-Mobile networks as Director of Radio Networks and board member of MBNL. In addition, he held positions at both Orange Moldova and Vodafone Italy, overseeing network optimization, capacity expansion and the planning and implementation of new technologies.
Mansoor holds a Bachelor of Engineering in Electronic and Electrical Engineering from University College London (UCL) and a Diplôme d’Ingénieur from the École Nationale Supérieure de Télécom de Bretagne.
Noha Ahmed Al-Harthi is the Technology Lead in the Technology & Digital sector of NEOM. In her role, Dr. Al-Harthi is leading NEOM’s initiatives on emerging technologies such as advanced robotics and human-machine interfaces. She holds a Ph.D. and a Master’s degree in Computer Science and Electrical Engineering from King Abdullah University of Science and Technology (KAUST). Dr. Al-Harthi is the first researcher from the Middle East to win the prestigious Gauss award (2020) for original research that best advances high-performance computing, and has published many research papers in the fields of HPC and supercomputers in reputed international conferences and journals.
Sanzio Bassini is the Director of the Supercomputing Application and Innovation Department of Cineca, the Italian inter-university consortium. He has been an independent reviewer of many international digital infrastructure projects, most recently as a member of the expert committee of the Canadian Major Science Initiatives Fund 2017–2022. He served as Vice Chairman for the Research Area of the European Technology Platform for HPC (ETP4HPC AISBL) from 2012 to 2014, and as Chairman of the Partnership for Advanced Computing in Europe (PRACE) Council from 2014 to 2016. Currently he is a member of the EuroHPC Infrastructure Advisory Board and leader of the EuroHPC Italian pre-exascale infrastructure project, Leonardo.
Rooh Khurram is working as a Staff Scientist at the KAUST Supercomputer Laboratory at King Abdullah University of Science and Technology in Saudi Arabia.
He provides advanced support for CFD projects and runs training and consulting services for the Kingdom’s engineering community. Rooh has conducted research in finite element methods, high performance computing, deep learning, multiscale methods, fluid structure interaction, detached eddy simulations, in-flight icing and computational wind engineering. He has over 20 years of industrial and academic experience in CFD. His industrial collaborators include Boeing, Bombardier, Bell Helicopter, Newmerical Technologies, ANSYS and Saudi Aramco. Before joining KAUST in 2012, Rooh worked at the CFD Lab at McGill University and the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign.
Rooh received his Ph.D. from the University of Illinois at Chicago in 2005. In addition to a Ph.D. in Civil Engineering, Rooh has degrees in Mechanical Engineering, Nuclear Engineering and Aerospace Engineering.
As EMEA Business Director, Data Centric Workloads & Solutions for Dell Technologies, Christopher Huggins leads a team of regional development executives focused on enabling Dell’s HPC & AI business growth in Europe, Middle East & Africa.
Dell Technologies is ideally positioned to empower customers through the breadth of its product portfolio: from servers and storage in the datacenter to desktops and client devices in the field, from software to the cloud, with data & compute at the heart of every solution.
From this standpoint, our DCWS Specialist Sales Team is engaged in supporting local Dell teams across EMEA in aligning Dell’s HPC & AI solutions portfolio (hardware, software & services) with evolving customer challenges and requirements. As a natural extension, this team offers customers future insights & guidance within the context of the fourth industrial revolution driven by artificial intelligence and big data. Further responsibilities include business strategy on incubation technologies such as Machine Learning, coordination of key partner relations and marketing.
Before joining Dell, Christopher acted as Commercial Director for ClusterVision, a European HPC Specialist. Christopher holds a degree in Computer Science and Philosophy from the University of Durham.
Glendon Holst is a Visualization Scientist in the Visualization Core Lab at KAUST (King Abdullah University of Science and Technology) specializing in HPC workflow solutions that improve simulation efficiency, enable large-scale image processing, and support deep learning.
David R. Pugh is a Staff Scientist in the Visualization Core Lab at KAUST (King Abdullah University of Science and Technology) specializing in Data Science and Machine Learning. David is also a certified Software and Data Carpentry Instructor and Instructor Trainer and is the lead instructor of the Introduction to Data Science Workshop series at KAUST.
Herbert Huber obtained a PhD in physics from the Ludwig-Maximilians-University of Munich (LMU) in 1998. He joined BADW-LRZ in 1997 and currently leads the “High-Performance Systems” department of BADW-LRZ. The focus of his work and his research interests is energy-efficient high-performance computer infrastructures (processor and system architecture, interconnection networks, file systems) and supercomputing centres.
James Maltby is a Solution Architect for Cray, Inc. and specializes in mapping scientific and business applications to new computer architectures. He has an academic background in physics and engineering, specializing in radiation transport. He has worked for Cray since 2000, developing software for the massively multithreaded Cray XMT as well as other Cray systems. He also led the Bioinformatics practice at Cray for several years, using HPC to solve Life Science problems. In addition, he wrote a highly parallel in-memory Semantic Graph Database for the XMT architecture. His most recent project involved developing a Scalable Deep Learning and Analytics package for the Cray XC series of supercomputers, now available as Urika-XC.
Martin Hilgeman (born 1973, The Netherlands) holds an MSc in Physical and Organic Chemistry from the VU University of Amsterdam. He worked at SGI and IBM for 14 years as a consultant, architect and member of the technical staff in the SGI applications engineering group, where his main involvement was in porting, optimization and parallelization of HPC applications.
Martin joined Dell Technologies in 2011, where he acts as a Technical Director for HPC in Europe, the Middle East and Africa. His main interests are application optimization, modernization of parallel workloads and platform efficiency. In 2019, he joined AMD as a senior manager and worked on porting and optimizing the major HPC applications for the “Rome” microarchitecture. Martin returned to Dell Technologies in May 2020 as a Technical Lead for HPC applications in the Data Centric Workloads Engineering team in Austin, TX.
Title: “Technology trends in HPC and opportunities for a system builder”
With multiple architectural choices available for every component of an HPC system, the technology is evolving more rapidly than ever. Incorporating these technologies into a complete solution with respect to cost, thermals and performance is a continuing challenge for an HPC vendor like Dell Technologies. This presentation shows some technology trends and their potential for the future.
Mike is the Chief Technology Officer for the High-Performance Computing/AI, Mission Critical Solutions, and Converged Edge Business Unit (HPC/MCS) at HPE.
Mike is responsible for driving the long-term product roadmap and architecture, tracking market trends and differentiation, evaluating new technologies and business partnerships, and representing HPE’s perspective externally for the HPC/MCS BU. He leads a team which covers HPC System Architecture, High Performance Interconnects, High Performance Data/Storage Technology, AI Technology, High Performance Programming, Memory-Driven Computing, and Security. This work involves collaboration across many other teams at HPE including Hewlett Packard Labs, HPE-IT, and Pointnext services.
Mike has a B.Sc. in Computer Systems Engineering from the University of Kent, Canterbury, UK. He has been granted multiple US patents in the field of computer system architecture.
Eng Lim Goh is senior vice president and chief technology officer for artificial intelligence at Hewlett Packard Enterprise. Prior to this, he was CTO for the majority of his 27 years at Silicon Graphics, now an HPE company. His research interests include humanity’s differentiations as we progress from analytics to inductive machine learning, deductive reasoning, and specific to general artificial intelligence. He continues his studies in human perception for virtual and augmented reality.
As principal investigator of the experiment aboard the International Space Station to operate autonomous supercomputers on long-duration space travel, Dr. Goh was awarded NASA’s Exceptional Technology Achievement Medal. In addition to co-inventing blockchain-based swarm learning applications, he oversees deployment of artificial intelligence to Formula 1 racing, works on industrial application of technologies behind a champion poker bot, co-designed the systems architecture for simulating a biologically detailed mammalian brain, and led the machine learning of gene expression data from a vaccine clinical trial. He has been granted nine U.S. patents, with four others pending.
A Singapore Visionary Award recipient, Dr. Goh is a Scientific Advisory Board member of the National Research Foundation, Prime Minister’s Office. In 2005, InfoWorld named him one of the World’s 25 Most Influential CTOs. He was included twice in the HPCwire list of “People to Watch” and received the HPC Community Recognition Award. His work for Stephen Hawking included a symposium invitation to introduce the discoveries of Professor Saul Perlmutter, winner of the 2011 Nobel Prize in Physics.
A Shell Cambridge University Scholar, Dr. Goh completed his PhD research and dissertation on parallel architectures and computer graphics, and holds a first-class honors degree in mechanical engineering from Birmingham University in the U.K.
Johann Lombardi is a senior principal engineer in the Cloud & Enterprise Solution Group (CESG) at Intel. He started to work on Lustre in 2003 and led the sustaining team in charge of the Lustre file system worldwide support for more than 5 years. He then transitioned to research programs (Fast Forward, ESSIO, CORAL & Path Forward) to lead the development of a storage stack for Exascale HPC, Big Data and AI called DAOS.
Yasmeen Alufaisan is an IT System Analyst in Saudi Aramco EXPEC Computer Center. She previously served as an Assistant Professor in the College of Computer Engineering and Science at Prince Mohammad bin Fahd University. She holds a Ph.D. in Computer Science from the University of Texas at Dallas. Her research interests are accountability and privacy issues in data mining and machine learning models.
Saeed Al-Zahrani is an IT Consultant at Saudi Aramco in the EXPEC Computer Center Technology Planning Group. He holds a Bachelor of Science degree in Computer Engineering from Oregon State University and a master’s degree in Computer Science from Sheffield University. Saeed has more than 20 years of experience in the IT industry, mainly in High Performance Computing, and has been part of multiple HPC projects at Saudi Aramco.
Mohammed Ahmed Humaid Al-Amri
Director of the Numerical Weather Prediction Department at the National Center for Meteorology
Master’s degree in meteorology
17 years of experience
DNA profiling is a widely used method in many important application areas, including paternity testing, disaster victim identification, missing person investigations, mapping genetic diseases, and criminal investigations. A prevalent task in forensic DNA profiling is determining the number of contributors in a DNA mixture. The computational complexity of this problem can grow exponentially with the number of unknowns in the mixture, and the accuracy and computational cost of existing methods remain problematic. To address these limitations, machine learning (ML) methods have been proposed. To date, only two works have used ML; neither used HPC, and both expressed concerns about the computational complexity of the training phase.
By developing a new technique based on machine learning and HPC, we expect to improve both the speed and the accuracy of methods for determining the number of contributors in a DNA profile, especially in the training stage. This is the approach we are currently working on.
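To make the framing concrete, the following is a minimal, hedged sketch of how estimating the number of contributors could be posed as supervised classification in Python with scikit-learn. The simulated per-locus features and their names are illustrative placeholders, not the actual pipeline described above; training such a model on large numbers of simulated mixtures is the stage where HPC resources would come into play.

```python
# Illustrative only: a toy classifier for "number of contributors" estimation.
# The feature simulator below is a hypothetical placeholder, not real DNA data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate_profile(n_contributors, n_loci=20):
    # Placeholder per-locus summary features (e.g. observed allele counts and
    # peak-height statistics); a real pipeline would derive these from
    # electropherogram data.
    allele_counts = rng.poisson(lam=1.5 * n_contributors, size=n_loci)
    peak_stats = rng.random(n_loci) / n_contributors
    return np.concatenate([allele_counts, peak_stats])

labels = rng.integers(1, 6, size=5000)                  # 1 to 5 contributors
X = np.array([simulate_profile(k) for k in labels])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1)  # n_jobs=-1: use all cores
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```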
Hamdah Alotaibi is currently a master’s student in the Department of Computer Science at FCIT, King Abdulaziz University (KAU). She obtained her bachelor’s degree from Umm Al-Qura University. Her research interests are in the fields of High-Performance Computing and Machine Learning.
Machine Learning has been transformative in solving previously intractable business problems in computer vision, time series analysis and natural language processing. Similarly, Quantum Computing is emerging as a third type of computing paradigm (after Boolean and probabilistic computing). Recent research is exploring how these two technologies can be used together, in the field of Variational Quantum Circuits and complementary topics such as Quantum Support Vector Machines. In this talk we review what is happening in this area and how ML researchers can practically implement Quantum Machine Learning solutions in their research.
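As a flavor of what a variational quantum circuit looks like in practice, here is a minimal sketch using PennyLane on a simulated device. The two-qubit circuit, the embedding, and the cost function are illustrative choices made for this sketch, not a recipe prescribed by the talk.

```python
# A tiny variational quantum circuit (VQC) sketch on a state-vector simulator.
import pennylane as qml
from pennylane import numpy as np

n_wires = 2
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev)
def circuit(weights, x):
    qml.AngleEmbedding(x, wires=range(n_wires))                   # encode classical features
    qml.StronglyEntanglingLayers(weights, wires=range(n_wires))   # trainable layers
    return qml.expval(qml.PauliZ(0))                              # scalar model output

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_wires)
weights = np.random.random(shape, requires_grad=True)
x = np.array([0.1, 0.7], requires_grad=False)

def cost(w):
    # Push the circuit output toward a target label of +1
    return (circuit(w, x) - 1.0) ** 2

opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(25):
    weights = opt.step(cost, weights)
print("circuit output:", circuit(weights, x))
```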
This presentation will cover KSL’s educational, training and consulting services for the Kingdom’s
engineering community. This outreach program has two streams: modelling & simulation and deep
learning for engineers. Since 2017, KSL has conducted seven training workshops in collaboration with
ANSYS. The workshops were attended by researchers and students from KAUST, Saudi universities and
industry. In total, 408 people attended the workshops. Based on the feedback from the participants, the
training themes evolved with time from basic CFD/CSM to multiphase flows, electromagnetism and
multiphysics simulations. Hands-on sessions on workstations and Shaheen II are the key components of the
workshops. In 2019, KSL started a certification program which became quite popular among Saudi
students. Since the start of this certification program, 188 students have been certified in CFD and CSM.
Participants of the workshops showed keen interest in transitioning from workstations to Shaheen II. In
order to expedite this transition and provide project-based support services, KSL recently started a
consulting program for in-Kingdom engineers. KSL has arranged contractors from Fluid Codes to provide
the additional manpower required to execute this program. The consultancy program provides two services,
namely, a CFD helpline and scientific project support. Any in-Kingdom researcher or student can simply
send their questions to CFD-Helpline@hpc.kaust.edu.sa or apply for project-based support.
The convergence of scientific computing and data driven science opens up new opportunities for the
engineering community. Deep learning has been applied successfully in the fields of computer vision,
robotics, gaming, and medical science, but there are limited examples of applying it to engineering
problems. In order to train Saudi engineers in basic deep learning techniques, KSL teamed up with
Mathworks and organized a one-day training event last year. This online event was attended by 192 people.
The attendees were trained with hands-on sessions on classification, images, transfer learning and time
series. Through live demos, the attendees were shown that deep learning workflow runs much faster on
GPUs compared to CPUs.
The desire to move from data to intelligence has become a trend that is pushing the world we live in fast forward. Artificial intelligence (AI) techniques are being used as important tools to unlock the wealth of voluminous amounts of data owned by organizations. While we are mesmerized by the impressive achievements of these smart algorithms, we have overlooked an important issue, the need for explainable AI, and this oversight potentially conceals the limitations and risks of these algorithms. In this presentation, we highlight explainable AI, which unmasks the incomprehensible reasoning that many AI techniques deservedly take the blame for.
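As one concrete example of the kind of post-hoc explanation technique this abstract alludes to, the sketch below applies SHAP feature attributions to a toy tree-based model. It is a generic illustration using a public dataset, not the speaker’s specific method or data.

```python
# Generic post-hoc explainability sketch with SHAP on a toy model.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Attribute each prediction to individual input features
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])

# Global view of which features drive the model's decisions
shap.summary_plot(shap_values, X.iloc[:200])
```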
The Distributed Asynchronous Object Storage (DAOS, see http://daos.io) is an open source scale-out storage system designed from the ground up to deliver high bandwidth, low latency, and high I/O operations per second (IOPS) to HPC applications. It enables next-generation data-centric workflows that combine simulation, data analytics, and AI. This talk will first provide an overview of DAOS capabilities, then introduce how DAOS can accelerate Oil & Gas workloads and finally present the roadmap and future features.
The presenter will give an introduction to the role of the NCM and its current and previous HPC capabilities, followed by a description of the current model line-up, including the models used for atmospheric simulation, ocean and sea current modelling, and wave modelling.
Talking Points:
1- Introduction to National Center for Meteorology
2- Modelling and Simulation needs
3- HPC history
4- Current HPC specifications
5- Benchmarks
6- Models: (WRF, RAMS, NEMO, WAM, WRF-CHEM)
7- Models verification and results
8- Future expansion (Stage II)
Research computing historically has been the purview of a few fields within engineering and the applied sciences, with a focus on access to and use of High Performance Computing (HPC) systems. More recently, however, other disciplines such as the social sciences and humanities have ventured into data-intensive research, which requires additional resources and support.
To cater for this expansion and growth, universities should not only grow their computing and data storage resources but also introduce new services such as consulting & professional services, application development, and data science services, including analytics, visualization, big data, data management and the use of artificial intelligence (AI) techniques such as machine learning, natural language processing and computer vision.
This talk will provide insight into the Center for Research Computing at New York University Abu Dhabi (NYUAD): the infrastructure, applications, tools, governance, staff and the skills needed to manage and support all computational and data-intensive research activities carried out at NYUAD.
The Arabic language is spoken by millions of people around the world, and one of its most well-known characteristics is that it is syntactically and morphologically much more complex than many other languages, which makes understanding and implementing its rules a tough task for many native and non-native speakers. Automatic proofreading tools for Arabic texts have been in high demand in recent years due to this complexity. Automating the proofreading task will allow a wider range of writers to correct their texts at lower cost. Furthermore, having correct texts will reduce the input noise within many Natural Language Processing (NLP) applications, thereby increasing their reliability. This concept is established in the research community as Grammar Error Correction (GEC), defined as “the task of automatically detecting and correcting grammatical, spelling, and word choice errors in written text”.
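In practice, GEC is often cast as sequence-to-sequence text rewriting. The sketch below shows one way that could look with the Hugging Face transformers library; the model identifier is a hypothetical placeholder, and no specific Arabic GEC model is implied by the abstract.

```python
# GEC as seq2seq rewriting; "org/arabic-gec-model" is a hypothetical placeholder.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "org/arabic-gec-model"  # placeholder, not a real checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

def correct(text: str) -> str:
    # Encode the (possibly erroneous) sentence and generate its corrected form
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(correct("جملة تحتوي على اخطاء املائية"))
```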
CloudLabeling is an online platform that aims to deliver “object detection as a service”. CloudLabeling leverages AWS SageMaker scalability and the latest advances in object detection on images to train, evaluate and deploy deep learning models trained on customer data. Our online tool enables an online active learning process, allowing any user to upload their own images, annotate them (or part of them) and train a custom detection model from custom data. The model is deployed for remote inference through APIs and used on newly uploaded images to grow the dataset and assist the annotation process. CloudLabeling is available worldwide and is currently used in KSA, France and Romania. It already empowers applications in fish behavior tracking, endangered species census, seed counting and satellite imaging, among others.
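For readers unfamiliar with SageMaker, the following is a rough sketch of how a custom detection model might be trained and deployed with the SageMaker Python SDK’s generic Estimator. The container image, IAM role and S3 paths are placeholders; this does not describe CloudLabeling’s internal implementation.

```python
# Hedged sketch of training/deploying a custom model with the SageMaker SDK.
# All identifiers (image URI, role ARN, S3 paths) are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/detector:latest",  # placeholder
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    output_path="s3://example-bucket/models/",           # placeholder
    hyperparameters={"epochs": 20, "batch_size": 8},
    sagemaker_session=session,
)

# Train on the user-annotated images, then expose an inference endpoint
estimator.fit({"train": "s3://example-bucket/annotated-images/"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.g4dn.xlarge")
```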
We design and develop a new high performance implementation of a fast direct LU-based solver using low-rank approximations on massively parallel systems.
The LU factorization is the most time-consuming step toward solving systems of linear equations in the context of analyzing acoustic scattering from large 3D objects.
The matrix equation is obtained by discretizing the boundary integral of the exterior Helmholtz problem using a higher-order Nyström scheme. The main idea is to exploit the inherent data sparsity of the matrix operator by performing local tile-centric approximations while still capturing the most significant information.
In particular, the proposed LU-based solver leverages the Tile Low-Rank (TLR) data compression format as implemented in the Hierarchical Computations on Manycore Architectures (HiCMA) library to decrease the complexity of “classical” dense direct solvers from cubic to quadratic order.
We taskify the underlying boundary integral kernels to expose fine-grained computations. We then employ the dynamic runtime system StarPU to orchestrate the scheduling of computational tasks on shared- and distributed-memory systems. The resulting asynchronous execution compensates for the load imbalance due to the heterogeneous ranks, while mitigating the overhead of data motion. We assess the robustness of our TLR LU-based solver and study the qualitative impact of using different numerical accuracies. The new TLR LU factorization outperforms state-of-the-art dense factorizations by up to an order of magnitude on various parallel systems, for the analysis of scattering from large-scale 3D synthetic and real geometries.
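To illustrate the core idea behind tile low-rank compression (independently of HiCMA’s actual implementation), the NumPy sketch below truncates the SVD of a single “data sparse” off-diagonal tile to a requested accuracy. The tile size and kernel are arbitrary illustrative choices.

```python
# Illustrative NumPy sketch of the tile low-rank (TLR) idea: replace an
# off-diagonal tile by a truncated factorization U @ V kept only to a
# requested accuracy. Conceptual only, not the HiCMA/StarPU implementation.
import numpy as np

def compress_tile(tile, eps=1e-6):
    """Return (U, V) such that U @ V approximates the tile to accuracy eps."""
    U, s, Vt = np.linalg.svd(tile, full_matrices=False)
    rank = max(1, int(np.sum(s > eps * s[0])))   # numerical rank at tolerance eps
    return U[:, :rank] * s[:rank], Vt[:rank, :]

# Toy "data sparse" tile: smooth kernel evaluated on two well-separated point sets
x = np.linspace(0.0, 1.0, 256)
y = np.linspace(5.0, 6.0, 256)
tile = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

U, V = compress_tile(tile, eps=1e-8)
print("tile rank after truncation:", U.shape[1])       # far below 256
print("max error:", np.max(np.abs(tile - U @ V)))      # within the requested accuracy
```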
This talk presents current developments in weather applications at the NCM. Two main challenges lie ahead: the large amount of weather data being collected and the large parallelization capabilities available in modern HPC platforms. The presentation describes a few solutions developed at the NCM that attempt to address these challenges through artificial intelligence and through algorithmic and mathematical advances in computational science.
Talking Points:
1- Current challenges (Amount of data and scalability of simulations)
2- Challenge 1: AI driven forecasts
3- Challenge 1: Declutter of Radar Images
4- Challenge 1: Interactive Visualization of Simulation Products
5- Challenge 2: The Saudi Arabian Mesoscale-Limited Area Model
6- Dynamics
7- Horizontal Discretization
8- Time Integration
9- Early Results
Edge-detection of 3D seismic data is a key process for identifying subsurface boundaries in
hydrocarbon reservoirs. It is generally performed with coherence algorithms (which are sensitive to
random noise). Smoothing algorithms are often run to reduce noise prior to computation with edge
detection algorithms. However, the smoothing algorithm must be edge preserving and three-
dimensional (3D). 3D Edge preserving smoothing (3D EPS) algorithms are effective in suppressing
noise while enhancing the edges, but are compute intensive and inefficient even on a multicore CPU with multithreading parallelism. To overcome this issue, we parallelize the 3D EPS algorithm on the GPU, where massive complex multiplications and convolutions are performed iteratively on a large number of parallel cores. We tested our GPU implementation on a 3D seismic dataset, and our initial
study demonstrates that the NVIDIA GPU implementation (Tesla V100-SXM2) is about thirteen
times faster than the CPU multithreading platform (Intel Xeon Gold 6248 2.50GHz, 16 threads). The
optimized 3D EPS filter provides the opportunity to generate and efficiently analyze huge 3D seismic
volumes, which in turn will help to optimize well locations and reduce drilling risk.
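As a rough illustration of the CPU-versus-GPU workflow (not the actual 3D EPS code), the sketch below runs a simple 3D median smoothing filter on a toy volume with SciPy on the CPU and CuPy on the GPU. The plain median filter is only a stand-in for the edge-preserving smoother, and the GPU path assumes a CUDA-capable device.

```python
# Hedged sketch: the same 3D smoothing filter on CPU (SciPy) and GPU (CuPy).
import numpy as np
import cupy as cp
from scipy.ndimage import median_filter as median_filter_cpu
from cupyx.scipy.ndimage import median_filter as median_filter_gpu

volume = np.random.rand(128, 128, 128).astype(np.float32)    # toy seismic cube

smoothed_cpu = median_filter_cpu(volume, size=3)              # runs on the CPU

volume_gpu = cp.asarray(volume)                               # copy cube to the GPU
smoothed_gpu = median_filter_gpu(volume_gpu, size=3)          # same filter on GPU cores
cp.cuda.Stream.null.synchronize()                             # wait for the GPU kernel

# The two results should agree closely; timing both paths is how one would
# measure the GPU speed-up reported in the abstract.
print(np.allclose(smoothed_cpu, cp.asnumpy(smoothed_gpu), atol=1e-6))
```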
A National Big Data project supported by the regional government and the European Commission is building in Bologna one of the main HPC hubs at the international level. Computational methods are at the heart of an ecosystem of institutions, universities and research centers that use HPC enabling technologies for technological breakthroughs, innovation and scientific discoveries. CINECA is the keystone of this system, and its supercomputing infrastructure is a resource for the whole ecosystem. The presentation will discuss the development strategies of the technopole system and the HPC enabling infrastructure.