The Transformation of Science with HPC, Big Data, and AI

Presenter Information

Jeffrey Kirk, Dell EMC

Start Date

10-17-2017 3:15 PM

Description

High performance computing has matured into an indispensable tool not only for academic research and government labs and agencies, but also for many industry sectors: energy, manufacturing, healthcare, financial services, even digital content creation. More recently, the advent of Big Data has enabled the use of HPC techniques for large-scale data analysis, expanding the scope and reach of HPC into more research and enterprise use cases. Since 2012, a new regime of data-driven analytics, deep learning, has erupted in popularity, fueled both by massive performance increases in HPC technologies and by the explosive rate at which digital data is being generated, collected, and managed. While data analytics, including deep learning, will never eliminate the need for HPC-enabled simulations in research, the emergence of deep learning will enable both researchers and enterprises to accomplish discovery and innovation in new ways: ways that complement, extend, and sometimes even substitute for more traditional HPC simulation techniques. Together, HPC and AI will enable the transformation of science to continue and will drive a new explosion of enterprise and consumer applications.

Speaker's Bio

Jeffrey Kirk has spent his career working at the leading edge of compute and networking. Jeff currently works in the Server Office of the CTO as the HPC and AI Technology Strategist, where he has helped grow the HPC program with a new vision, strategy, business development efforts, and new partnerships and solutions. He is now also one of the leaders of AI strategy development for Dell EMC and is working on new AI solutions. Prior to joining Dell EMC, Jeff worked at several cutting-edge semiconductor companies. At AMD he specialized in superscalar RISC and x86 platforms for high performance computation (1999). At Mellanox he worked on some of the first InfiniBand HPC installations, including the Virginia Tech cluster ("Big Mac"), built from Apple workstations, that reached number three on the TOP500 (2004). While at Mellanox he also supported Dr. D.K. Panda and the first implementation of MVAPICH at his alma mater, The Ohio State University. Later, at Solarflare, his focus was OpenOnload technology and financial markets (2010). After moving to Dell, Jeff worked in Dell Networking implementing its first Fibre Channel over Ethernet (FCoE) systems, and he holds several patents on FCoE (2013). His interest in supercomputing was sparked while working on the number-three Virginia Tech cluster, but his interest in data science is encouraged and fueled by his daughter, a PhD statistician and data scientist at the FDA.
