The growing disparity between CPU speed and memory speed, known as the memory wall, has been one of the most critical and long-standing challenges in the computing industry. To address it, memory systems have become increasingly complex in recent years, with new memory technologies and architectures being introduced into conventional memory hierarchies. This added memory complexity, combined with existing programming complexity and architecture heterogeneity, makes utilizing high performance computer systems extremely challenging. Performance optimization has thus shifted from computation to data access, especially for data-intensive applications. A significant amount of user effort is often spent on optimizing local and shared data access with respect to the memory hierarchy, rather than on decomposing and mapping task parallelism onto hardware. This increase in memory optimization complexity also demands significant system support, from tools to compiler technologies, and from modeling to new programming paradigms. Explicitly or implicitly, to address the memory wall performance bottleneck, the development of programming interfaces, compiler tool chains, and applications is becoming memory oriented, or memory centric.
The organizers of this workshop believe it is important to elevate the notion of memory-centric programming in order to exploit modern memory systems of unprecedented and still-growing complexity. Memory-centric programming refers to the notion and techniques of exposing the hardware memory system and its hierarchy (NUMA regions, shared and private caches, scratchpad memory, 3-D stacked memory, and non-volatile memory) to the programmer for extreme-performance programming, via portable abstractions and APIs for explicit memory allocation, data movement, and consistency enforcement between memories. The concept has gradually been adopted in mainstream programming interfaces; examples include the use of place in OpenMP and X10 and locale in Chapel to represent memory regions in a system, the shared modifier in CUDA and the cache directive in OpenACC for representing scratchpad SRAM on GPUs, the memkind library and the recent OpenMP memory management effort for supporting 3-D stacked memory (HBM or HMC), and the PMEM library for persistent memory programming. The MCHPC workshop aims to bring together computer and computational science researchers, from industry and academia, who are concerned with the programmability and performance of existing and emerging memory systems. Performance of a memory system is a broad term: it covers latency, bandwidth, power consumption, and reliability at the level of hardware memory technologies, as well as how these properties manifest in application performance.
The topics of interest for the workshop include, but are not limited to:
November 12th 2017 - MCHPC2017 Workshop
Authors are invited to submit manuscripts in English, structured as technical papers of up to 8 pages or short papers of up to 5 pages, in letter-size (8.5in x 11in) format and including figures, tables, and references, using the IEEE format for conference proceedings. Submissions not conforming to these guidelines may be returned without review. Reference style files are available at http://www.ieee.org/conferences_events/conferences/publishing/templates.html.
All manuscripts will be reviewed and judged on correctness, originality, technical strength, significance, quality of presentation, and interest and relevance to the workshop attendees. Submitted papers must represent original unpublished research that is not currently under review for any other conference or journal. Papers not following these guidelines will be rejected without review, and further action may be taken, including (but not limited to) notification of the heads of the authors' institutions and the sponsors of the conference. Submissions received after the due date, exceeding the length limit, or not appropriately structured may also not be considered. At least one author of an accepted paper must register for and attend the workshop. Authors may contact the workshop organizers for more information.
Papers should be submitted electronically at: https://easychair.org/conferences/?conf=mchpc2017.
Vivek Sarkar is a Professor of Computer Science at Georgia Institute of Technology, where he holds the Stephen Fleming Chair for Telecommunications in the College of Computing. Earlier, he held the E.D. Butcher Chair in Engineering at Rice University from 2007 to 2017. Prior to joining Rice in 2007, Vivek was Senior Manager of Programming Technologies at IBM Research. His past research projects include the Habanero Extreme Scale Software Research and Open Community Runtime (OCR) projects, the X10 programming language, the Jikes Research Virtual Machine for the Java language, the ASTI optimizer used in IBM's XL Fortran product compilers, and the PTRAN automatic parallelization system. Vivek became a member of the IBM Academy of Technology in 1995, was named to the E.D. Butcher Chair in Engineering at Rice University in 2007, and was inducted as an ACM Fellow in 2008. He has served as a member of the US Department of Energy's Advanced Scientific Computing Advisory Committee (ASCAC) since 2009, and on CRA's Board of Directors since 2015.
In this talk, Andy will describe the emerging Persistent Memory technology and how it can be applied to HPC-related use cases. Andy will also discuss some of the challenges of using Persistent Memory, and the ongoing work the ecosystem is doing to mitigate those challenges.
Andy Rudoff is a Senior Principal Engineer at Intel Corporation, focusing on Non-Volatile Memory programming. He is a contributor to the SNIA NVM Programming Technical Work Group. His more than 30 years of industry experience include design and development work in operating systems, file systems, networking, and fault management at companies large and small, including Sun Microsystems and VMware. Andy has taught various Operating Systems classes over the years and is a co-author of the popular UNIX Network Programming textbook.