Dr. Almosallam holds a Ph.D. from the University of Oxford in Artificial Intelligence, specializing in Machine Learning. He previously worked as a visiting scholar at Stanford University and at Columbia University in New York City. He is currently an Assistant Research Professor at the National Center for Artificial Intelligence and Big Data Technologies at King Abdulaziz City for Science and Technology in Riyadh, Saudi Arabia. He has more than ten years of experience in AI research in areas such as Natural Language Processing, Social Network Analysis, Speech Processing, and AI governance/safety. His current research interests are astroinformatics and quantum machine learning.
Majdi Baddourah is a Senior Consultant in simulation and high-performance computing, holding B.S., master's, and Ph.D. degrees in Civil Engineering. Prior to joining Aramco in 2003, he worked at Lawrence Berkeley Lab, USA, carrying out research and development for the US Department of Energy utilizing high-performance computers. Majdi also worked at NASA Langley as a high-performance-computing specialist. Majdi provides key support for POWERS and GigaPOWERS and supports Aramco's strategic studies. He also works with ECTD and ECOD and is an active technologist in evaluating and deploying state-of-the-art high-performance-computing solutions at minimal cost. Majdi has authored and co-authored many technical papers and patents, and has worked with and mentored many young professionals at Saudi Aramco.
In reservoir simulation, high-resolution models have become the norm for modeling the detailed characteristics of fluid flow in hydrocarbon-bearing reservoirs. Massively parallel reservoir simulators exhibit MPI communication overhead that can be reduced to improve application runtime. This challenge was addressed by reducing the number of MPI_Barrier calls without changing the simulation results. This optimization showed up to a 4% speedup in overall runtime and an 8% speedup in MPI time. In addition, Intel MPI collective algorithms helped in the optimization: four heavily used collective operations within the simulator application were optimized by selecting the best algorithm for each, yielding a 3% speedup. The cumulative speedup of both optimizations is up to 6%. This presentation will describe the process used to identify communication hotspots through profiling, and the procedure used to overcome the communication bottlenecks.
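The Intel MPI Library exposes this kind of per-collective algorithm selection through environment variables. A minimal config sketch of the mechanism follows; the variable names are real Intel MPI tuning knobs, but the numeric algorithm IDs shown are illustrative only, not the values found optimal in this work:

```shell
# Illustrative Intel MPI tuning: each I_MPI_ADJUST_* variable selects one of
# the library's documented algorithms for that collective operation.
# The IDs below are placeholders; the best choice is found by profiling.
export I_MPI_ADJUST_ALLREDUCE=2
export I_MPI_ADJUST_BCAST=1
export I_MPI_ADJUST_REDUCE=1
export I_MPI_ADJUST_BARRIER=4

mpirun -n 512 ./simulator
```

In practice one profiles the application (e.g., with a tool such as Intel Trace Analyzer), identifies the dominant collectives, and sweeps candidate algorithm IDs for each.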
Our societies depend on our capability to collect massive data sets and make sense of that tsunami of data. There is an imperative requirement for exponential compute capability, but our old technologies and infrastructures cannot respond to these challenges. Our labs demonstrated years ago that we had to holistically rethink our hardware and software stacks to pass the Exascale frontier within a power envelope of 20 MW. At the same time, the Artificial Intelligence revolution and Edge and Cloud Computing are transforming our ecosystems and challenging our infrastructures even more. We will show why Gen-Z, a vision shared by a large number of partners, is so fundamental to succeeding in the next decades, not only to develop the next generations of HPC systems, but also to respond to the challenges of a world where everything computes.
Patrick has worked for HPE since 1980. As a Distinguished Technologist, he helps strategic customers use the disruptive technologies that HPE and its partners are developing, such as photonics, new storage-class memories, and new accelerators. Patrick is an expert in HPC application tuning and microarchitectures. He is engaged in developing the Gen-Z ecosystem for Exascale systems and Edge Computing, and is also involved in the Big Data and Artificial Intelligence transformations of workloads. Patrick studies subjects related to the limits of computation and explores the potential of even more exotic computing technologies such as Neuromorphic Computing and Quantum Computing.
Alaa Alahmadi is an Assistant Professor at the College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University. She received her Ph.D. from the University of Limerick, Ireland, in 2016. Her research areas are Machine Learning, Text Classification, and AI.
We have made great strides in computation over the past 70 years: what started as bulky vacuum tubes in the 1940s has now reached nano-scale transistors approaching the size of an atom. The number of transistors has been doubling every two years, as predicted by Gordon Moore in the 1960s in what is now known as Moore's law, but this law will soon expire. As we enter the subatomic world, which is governed by different laws of physics, i.e. quantum physics, quantum phenomena such as quantum tunneling will start to kick in, making it much harder to hold on to a particle. Thus, classical computing will soon reach its physical limit. Quantum computing aims to use quantum phenomena such as tunneling, superposition, and entanglement to its advantage and to design a quantum information processing model based on the principles of quantum mechanics. It is not simply an evolution of classical computers; it is a paradigm shift that will help cross not only the physical barrier but also the theoretical computational limits of classical information processing. In fact, this is what prompted research in quantum computing in the 1980s, after people realized that simulating quantum systems on a classical computer is intractable, yet nature does it effortlessly. There are known problems in computer science so complex that no classical computer can solve them in any practical time, such as factoring large numbers into primes, which quantum computers could solve efficiently. We have come a long way since the 1980s, and quantum computers are now a reality. Commercial quantum computers are now available from IBM and D-Wave Systems, and more companies are starting to research the technology's potential for their businesses, including in oil and gas, such as the collaboration between IBM and ExxonMobil announced in January of this year.
This talk will focus on what quantum computers are, how they differ from classical computers, and the current state of the technology and its adoption in industry.
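As a concrete illustration of the superposition and entanglement mentioned above, here is a minimal state-vector sketch in NumPy (a pedagogical model, not tied to any vendor's hardware):

```python
import numpy as np

# Single-qubit basis state |0> and the Hadamard gate H.
ket0 = np.array([1.0, 0.0])
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

# Superposition: H|0> gives equal probability of measuring 0 or 1.
plus = H @ ket0
probs = plus ** 2          # measurement probabilities [0.5, 0.5]

# Entanglement: applying CNOT to (H|0>) x |0> yields the Bell state
# (|00> + |11>)/sqrt(2) -- the two qubits' measurement outcomes are correlated.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
bell = CNOT @ np.kron(plus, ket0)
print(probs)   # [0.5 0.5]
print(bell)    # amplitudes ~0.707 on |00> and |11>, zero elsewhere
```

Measuring the first qubit of the Bell state instantly determines the second, which is the correlation classical bits cannot reproduce.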
MNIST (“Modified National Institute of Standards and Technology”) is the de facto “hello world” dataset of computer vision. Since its release in 1999, this classic dataset of handwritten digit images has served as the basis for benchmarking classification algorithms. As new machine learning techniques emerge, MNIST remains a reliable resource for researchers and learners alike.
In this hands-on tutorial, we will walk participants through the development of several standard deep learning pipelines, using the popular PyTorch framework, capable of correctly identifying digits from the MNIST dataset. Participants will learn how to customize the standard deep learning pipelines to improve model performance. Once participants have successfully trained their custom model, we will show them how to submit their model’s predictions to Kaggle for scoring. Time and resources permitting, we will also show participants how training can be accelerated using GPUs and how training of deep neural networks can be distributed across a cluster.
Participants are expected to bring their laptops and will need to download free, open-source software before arriving for the workshop.
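The core pipeline the tutorial builds (data → model → loss → gradient update) can be sketched framework-agnostically. Below is a minimal NumPy softmax-regression sketch — the workshop itself uses PyTorch, and the synthetic 28×28 inputs here are a stand-in for the real MNIST files:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MNIST: 256 flattened 28x28 "images", 10 digit classes.
X = rng.normal(size=(256, 784))
y = rng.integers(0, 10, size=256)

# Softmax (multinomial logistic) regression: the simplest classification model.
W = np.zeros((784, 10))
b = np.zeros(10)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss(X, y):
    """Mean cross-entropy of the true class under the model."""
    p = softmax(X @ W + b)
    return -np.log(p[np.arange(len(y)), y]).mean()

lr = 0.1
first = loss(X, y)
for _ in range(100):
    p = softmax(X @ W + b)
    p[np.arange(len(y)), y] -= 1.0         # dL/dlogits = probs - one_hot(y)
    W -= lr * X.T @ p / len(y)             # gradient step on weights
    b -= lr * p.mean(axis=0)               # gradient step on bias
final = loss(X, y)
print(first, "->", final)                  # training loss decreases
```

A deep learning framework replaces the hand-derived gradient with automatic differentiation and the linear model with stacked layers, but the loop structure is the same.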
Infrastructure refers to the fundamental systems and facilities that an organization, city, or country needs to function. It must be understood in the context of the evolution of our societies, recent trends in urbanization, and broader modern life. We define smart infrastructure as, “knowledge-based, collaborative, converged, ubiquitous, self-aware, adaptive, resilient, digitally-enabled, and self-governing foundational structure; comprising hard, soft, virtual, and digital facilities and systems, and intellectual and social capital; enabling social, environmental and economic sustainability; enabling innovation and competitiveness; facilitating personalization in all aspects of modern-day and future living, the aspects including transportation, healthcare, entertainment, work, businesses, social interactions, and governance; to meet societal, economic and other demands of organizations, cities or countries”. Smart infrastructure would include the Internet of Things (IoT) to monitor and actuate. High-performance computing (HPC), big data, artificial intelligence, cloud, fog, and edge computing will be needed to provide the necessary intelligence, storage, compute, and communication resources for the smart infrastructure.
In this talk, I will review some of the research at KAU on building smart infrastructure, with a focus on converging IoT, big data, HPC, AI, fog, edge, and cloud computing.
Reverse Time Migration (RTM) is a powerful seismic imaging approach, widely used for migrating areas with complex structures such as steep dips and subsalt regions, despite its high computational cost. Following our previous work, which showed FD kernel optimizations and different techniques for keeping snapshots (with or without IO or compression), we now present results on boundary conditions, taking into consideration the trade-off between geophysical effects and utilization of computational elements.
Some approaches attempt to mimic the absorption of all incoming energy at the boundaries of the computational grid, imitating a real-life infinite medium, e.g., sponge, PML, random boundaries, etc.
Here, we review two RTM implementations. The first uses the standard approach (two propagations with in-memory snapshots of the full wavefield, with IPP compression), covering sponge and CPML boundary conditions, while the second uses random velocity boundaries, which almost avoids IO but involves an extra propagation. We previously demonstrated the efficiency trade-off of these implementations, so we can now balance the number of grid points in the damping area to find the best combination with respect to the computational efficiency of the RTM kernels. Moreover, we present a complete comparison of the damped energy with varying boundary lengths.
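To make the damping-area trade-off concrete, a sponge boundary can be sketched as an exponential taper applied to the wavefield inside the absorbing layer at every time step. This is a 1-D illustrative sketch; the layer width `nb` and damping strength `a` are hypothetical tuning parameters, exactly the quantities balanced against kernel efficiency above:

```python
import numpy as np

nx, nb = 200, 30           # grid points and damping-layer width (illustrative)
a = 0.015                  # damping strength (hypothetical value)

# Cerjan-style taper: 1.0 in the interior, smoothly decaying toward the edges.
taper = np.ones(nx)
edge = np.exp(-(a * (nb - np.arange(nb))) ** 2)   # smallest at the outer edge
taper[:nb] = edge
taper[-nb:] = edge[::-1]

# At each time step, multiplying the wavefield by the taper absorbs energy
# in the layer that would otherwise reflect off the grid boundary.
wavefield = np.ones(nx)
for _ in range(50):
    wavefield *= taper
print(wavefield[0], wavefield[nx // 2])   # edge decays; interior is untouched
```

Widening `nb` or raising `a` absorbs more energy but enlarges the grid (more compute per step), which is the efficiency trade-off measured in the comparison.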
In this presentation, Rick Koopman will discuss how Lenovo is addressing the convergence of HPC and AI, outlining areas of convergence such as the applications and technologies of HPC and AI.
The convergence of HPC and AI brings new challenges in computation, storage, communication and workload management to HPC. This talk will cover Huawei’s perspective on the synergy of HPC and AI.
The confluence of massive data with massive compute is unprecedented. Coupled with recent algorithmic breakthroughs, this puts us at the cusp of a major transformation. This transformation has the potential to disrupt the long-held balance between humans and machines, where all forms of number crunching are left to computers and most forms of decision-making are left to us humans. It is spurring a virtuous cycle of computing that will impact not just how we do computing, but what computing can do for us. In this talk, I will discuss some of the application-level opportunities and system-level challenges that lie at the heart of this intersection of traditional high-performance computing with emerging data-intensive computing.
Rashid Mehmood is the Research Professor of Big Data Systems and the Director of Research, Training, and Consultancy at the High-Performance Computing Centre, King Abdulaziz University, Saudi Arabia. He has gained qualifications and academic work experience from universities in the UK, including Swansea, Cambridge, Birmingham, and Oxford. Rashid has over 20 years of research experience in computational modeling and simulation systems, coupled with his expertise in high-performance computing. His broad research aim is to develop multi-disciplinary science and technology to enable a better quality of life and a Smart Economy, with a focus on real-time intelligence and dynamic system management. He has published over 150 research papers, including 6 edited books. He has organized and chaired several international conferences and workshops, including EuropeComm, Nets4Car, SCE, SCITA, and HPC Saudi. He has led and contributed to academia-industry collaborative projects funded by EPSRC, the EU, UK regional funds, and the Technology Strategy Board UK, with a value of over £50 million. He is a founding member of the Future Cities and Community Resilience (FCCR) Network. He is a member of ACM and OSA, a Senior Member of IEEE, and a former Vice-Chairman of the IET Wales SW Network.
Prof. Dr. Mohamed Abouelhoda is an associate professor of Bioinformatics. Dr. Abouelhoda is currently leading the bioinformatics team of the Saudi Human Genome Program. He has been an Associate Professor of bioinformatics at Cairo University. Dr. Abouelhoda studied at Bielefeld and Ulm Universities in Germany and obtained his Ph.D. in 2005 from Bielefeld University. He has been part of many international projects and is a regular reviewer for journals and conferences in the field. He has also been awarded many academic and industrial awards and funds during his career.
Mrs. Shaima Alsaif is a senior storage administrator in Saudi Aramco’s EXPEC Computer Center, where she supports and develops data storage technologies for upstream users. She has been working at Saudi Aramco since 2011, with experience in systems analysis, systems administration, and storage administration. Since she joined Saudi Aramco, her career has focused on automation and orchestration of processes, developing new processes, and re-engineering the way multi-discipline divisions collaborate. Shaima has collaborated with notable service providers to design and implement solutions for data management in both general-purpose and high-performance storage. She received her Bachelor’s Degree in Electrical Engineering from Boston University in 2011 and her MBA in Entrepreneurial Management from the Australian Institute of Business in 2015. When not traveling, Shaima enjoys solving puzzles and creative writing.
Eng. Mohamed ElKalioby is the software engineering group leader in the Saudi Human Genome Program and a research associate at the Genetics Research Department, King Faisal Specialist Hospital & Research Center. El-Kalioby received his MSc in Software Engineering from Nile University in 2013 and his BSc in Biomedical Engineering in 2008. He worked in a multi-national academic research project involving Nile University, Harvard Medical School, USA, Imperial College, UK, and Bielefeld University in Germany. He has held professional certificates in Health Informatics (CPHIMS) and Software Engineering since September 2018, has been a member of the Saudi Council of Engineers since 2017, and has been a Microsoft Certified Application Developer since 2006.
Bernard Ghanem is currently an Associate Professor in the CEMSE division and a theme leader at the Visual Computing Center at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Before that, he was a Senior Research Scientist at the University of Illinois Urbana-Champaign (UIUC) in Singapore. His research interests lie in computer vision, machine learning, and optimization geared towards real-world applications. He received his Bachelor’s degree from the American University of Beirut (AUB) in 2005 and his MS/PhD from UIUC in 2010. His work has received several awards and honors, including the Henderson Graduate Award from UIUC, CSE fellowship awards from UIUC, two Best Paper Awards (CVPRW 2013 and ECCVW 2018), a two-year KAUST Seed Fund, and a Google Faculty Research Award in 2015 (the first and only one in the MENA region for Machine Perception). He has co-authored more than 75 peer-reviewed conference and journal papers in his field, as well as two issued patents.
Visit ivul.kaust.edu.sa and www.bernardghanem.com for more details.
Pradeep K. Dubey, Ph.D.
Intel Fellow, and Director, Parallel Computing Lab
Dr. Pradeep Dubey is an Intel Fellow and Director of Parallel Computing Lab (PCL), part of Intel Labs. His research focus is computer architectures to efficiently handle new computer- and data-intensive application paradigms for the future computing environment.
He previously worked at IBM’s T.J. Watson Research Center and Broadcom Corporation. He has made contributions to the design, architecture, and application performance of various microprocessors, including the IBM PowerPC, the Intel i386, i486, Pentium, and Xeon, and the Xeon Phi line of processors.
He holds over 36 patents, has published more than 100 technical papers, won the Intel Achievement Award in 2012 for Breakthrough Parallel Computing Research and was honored with Purdue University’s 2014 Outstanding Electrical and Computer Engineer Award. Dr. Dubey received a Ph.D. in electrical engineering from Purdue University. He is a Fellow of IEEE.
Dr. Zhaohui Ding has been carrying out research and developing systems and products in the area of HPC for 15 years. He worked at SDSC as a visiting scholar three times during 2005–2007. He then joined the development organization of Platform Computing Inc. (acquired by IBM in 2012), where he contributed to multiple generations of LSF products as LSF chief product architect.
Currently, Dr. Zhaohui Ding is Chief Scientist of the HPC Lab at Huawei. His team is responsible for research and development of HPC software. Over his career, he has published more than ten peer-reviewed scholarly publications.
Rick Koopman is the Lenovo HPC Technical Leader for the EMEA region. In this role, he is responsible for business and technical sales strategy development, channel and field enablement, and technical-level relationships with many of Lenovo’s Alliance Partners, in addition to being actively involved with customers on their new or future HPC solutions.
With over 30 years of experience in the IT industry, Rick served in several international business development and technical roles in IBM Sales and Distribution throughout Europe, the Middle East, and Africa before his current role at Lenovo.
During his career, Rick operated within IBM’s consultancy organization and the IBM Innovation Center in La Gaude as a Global Subject Matter Expert for Media and Entertainment solutions; much of IBM’s HPC portfolio was then part of the solutions delivered to the M&E industry. He has also led IBM’s EMEA High-Performance Center of Competency, developing HPC skills and solutions within IBM and now Lenovo, and working closely with the partners engaged in driving innovation in the HPC marketplace.
Essam Algizawy leads a team of parallel programming and code optimization engineers at Brightskies. Essam and his team work on taking advantage of new HPC infrastructure to enhance the performance of existing applications, focusing on algorithmic optimization in the fields of seismic exploration and reservoir simulation. His areas of expertise span multiple levels, including heterogeneous computing, instruction-level optimization, vectorization, distributed computing using MPI, and threading using OpenMP, TBB, POSIX threads, etc.
Essam is also an Assistant Professor in the Department of Computer and Systems Engineering, Faculty of Engineering, Banha University, Egypt. He received his B.Sc. and M.Sc. degrees from Banha University, Cairo, Egypt, in 2009 and 2013, respectively. He obtained the national MOHE Egypt Ph.D. fellowship for three consecutive years, as well as a prestigious visiting research fellowship at the Kasahara/Kimura Lab, Department of Computer Science, Waseda University, Japan. Essam holds a Ph.D. in Computer Science and Engineering from JUST University, Egypt, and has published many papers in prestigious venues in computer science and knowledge discovery from data. His research interests include parallel processing, big data, distributed systems, and ubiquitous computing.
Dr. James Maltby is a Solution Architect for Cray, Inc. and specializes in mapping scientific and business applications to new computer architectures. He has an academic background in physics and engineering, specializing in radiation transport. He has worked for Cray since 2000, developing software for the massively multithreaded Cray XMT as well as other Cray systems. He also led the Bioinformatics practice at Cray for several years, using HPC to solve Life Science problems, and wrote a highly parallel in-memory Semantic Graph Database for the XMT architecture. His most recent project involved developing a Scalable Deep Learning and Analytics package for the Cray XC series of supercomputers, now available as Urika-XC.
Boris Tvaroska has more than 20 years of IT leadership experience across Europe, Asia, Africa, and North America. He is currently Global Artificial Intelligence Architect for Lenovo. Previously, Boris was CTO of Sidekick.chat, a startup with a mission to automate repetitive tasks in project management. Before his startup work, he ran Solution Architecture for HP in Central & Eastern Europe, the Middle East, Africa, and India. Boris holds a Master’s in Computer Science from Comenius University in Bratislava, Slovakia, and a Master’s in Business Administration from Erasmus University in Rotterdam, Netherlands. He has more than 20 years of experience developing new technologies and building new markets.
Dr. David R. Pugh is a Staff Scientist in the Visualization Core Lab at KAUST (King Abdullah University of Science and Technology) specializing in Data Science and Machine Learning. David is also a certified Software and Data Carpentry Instructor and Instructor Trainer and is the lead instructor of the Introduction to Data Science Workshop series at KAUST.
Glendon Holst is a Visualization Scientist in the Visualization Core Lab at KAUST (King Abdullah University of Science and Technology) specializing in HPC workflow solutions that improve simulation efficiency, enable large-scale image processing, and support deep learning.