Intel® Hands-on Developer Workshop for Technical Computing and Artificial Intelligence – Paris, 20 & 21 November 2018
Join us for two days of hands-on coding sessions on Parallel Programming, Performance Optimization, Artificial Intelligence, and Machine & Deep Learning. This is a unique opportunity to try out the latest tools and learn advanced coding techniques and best practices for getting started with AI on your own laptop, guided by experts from Intel and partners.
If you are a software developer, a data scientist or an engineer working on projects in the data center, in the cloud, at the edge, or on the Internet of Things (IoT), with knowledge of C/C++ and/or Python, don’t miss this opportunity to dive deep into the latest tips and tools from Intel® Software.
Please bring your own Intel®-based laptop. We will provide all required software and technology. Detailed technical requirements will be sent to registered attendees.
AGENDA DAY 1
08:00–09:00 Registration with light breakfast
09:00–17:30 Hands-on coding sessions on your own laptop PC
17:30–18:30 Networking evening with drinks & food
PARALLELISM, PERFORMANCE & OPTIMIZATION ON INTEL® ARCHITECTURE – WHAT YOU SHOULD KNOW! (30 min) The Intel® architecture offers parallelism at many different levels. Here we present the seven different levels of parallelism you should be thinking about when you write or optimize code.
SESSION 1: A STEP-BY-STEP APPROACH TO APPLICATION TUNING WITH INTEL® COMPILERS (3 hours HANDS-ON) In this first hands-on session, we follow the steps recommended by Intel® compiler engineers to get the best performance out of the Intel® Compiler. Practical topics include general optimization, processor-specific options, interprocedural optimization, profile-guided optimization, compiler optimization reports, and support for thread parallelism.
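The tuning steps listed above can be sketched as a sequence of compiler invocations. This is a minimal illustration only, assuming the classic Intel® C/C++ Compiler driver (`icc`) on Linux; the source file names are placeholders, and the session may use different flags or examples.

```shell
# 1. Baseline with general optimization:
icc -O2 myapp.c -o myapp
# 2. Processor-specific code generation for the build host:
icc -O2 -xHost myapp.c -o myapp
# 3. Interprocedural optimization across translation units:
icc -O3 -xHost -ipo main.c kernels.c -o myapp
# 4. Profile-guided optimization: instrument, run a training workload,
#    then recompile using the collected profile:
icc -prof-gen myapp.c -o myapp && ./myapp
icc -prof-use -O3 -xHost myapp.c -o myapp
# 5. Ask the compiler what it did (optimization report, e.g. vectorization):
icc -O3 -xHost -qopt-report=5 -qopt-report-phase=vec myapp.c -o myapp
# 6. Enable OpenMP thread parallelism:
icc -O3 -xHost -qopenmp myapp.c -o myapp
```

Each step builds on the previous one, which mirrors the session's step-by-step methodology: establish a baseline first, then add one optimization class at a time so its effect can be measured in isolation.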
SESSION 2: INTEGRATING VTUNE™ AND ADVISOR IN YOUR OPTIMIZATION PROCESS (3 hours HANDS-ON) Starting from an unoptimized wave-propagation kernel, we describe a general methodology for characterizing and profiling your code. We will see how threading, vectorization, and memory optimizations affect the behaviour of your programs. The roofline model, dependency analysis, and memory access pattern analysis from Intel® Advisor, as well as the HPC Performance Characterization and Memory Access analyses from Intel® VTune™ Amplifier, will be used to guide the developer toward more performant code.
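For orientation, the analyses named above can be driven from the command line as well as from the GUIs. A rough sketch, assuming the 2018-era command-line drivers (`advixe-cl` for Intel® Advisor, `amplxe-cl` for Intel® VTune™ Amplifier) and a placeholder binary name `./wave`:

```shell
# Intel Advisor: survey plus trip counts/FLOP data builds the roofline model:
advixe-cl -collect survey -project-dir ./adv -- ./wave
advixe-cl -collect tripcounts -flop -project-dir ./adv -- ./wave
# Dependency and memory access pattern analyses on loops of interest:
advixe-cl -collect dependencies -project-dir ./adv -- ./wave
advixe-cl -collect map -project-dir ./adv -- ./wave
# Intel VTune Amplifier: HPC characterization and memory access analyses:
amplxe-cl -collect hpc-performance -result-dir ./vtune_hpc -- ./wave
amplxe-cl -collect memory-access -result-dir ./vtune_mem -- ./wave
```

In the session the same collections are launched from the graphical tools; the CLI form is convenient once the workflow moves to remote or batch systems.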
AGENDA DAY 2
08:00–09:00 Registration with light breakfast
09:00–17:30 Hands-on AI coding sessions on your own laptop PC
17:30–18:30 Networking evening with drinks & food
INTEL® DEVELOPER ZONE (Intel® DZ) – YOUR ONLINE SUPPORT TOOL (20 mins) A brief overview of training resources, tool downloads, and programs such as the Ambassador program and the AI Builder program.
AI CONCEPTS (40 mins) In this session, we will explore the concepts and applications of Deep Learning, with a focus on real-world applications using Intel® processors for training and inference.
SESSION 1: GETTING STARTED ON INTEL® ARCHITECTURE (2 hours HANDS-ON) Work through a series of Deep Learning programs in the cloud using the Intel® AI DevCloud, running on the latest generation of Intel® processors. For this session you will need an SSH client installed on your laptop.
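Checking the SSH requirement ahead of time takes a minute. The commands below are a sketch only: the username and hostname are placeholders, not real DevCloud endpoints; registered attendees will receive the actual credentials and connection details.

```shell
# Verify that an OpenSSH-compatible client is available on your laptop:
ssh -V
# Connect with the credentials provided at registration
# (user and host below are placeholders):
ssh u12345@devcloud.example.com
# Files can be copied to and from the remote machine the same way:
scp my_script.py u12345@devcloud.example.com:~/
```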
INTRODUCTION TO OPENVINO™ (30 mins) An introduction to the Intel® OpenVINO™ toolkit, which lets you extend workloads heterogeneously across Intel® hardware and maximize performance.
COMPUTER VISION ACCELERATION ON FPGA USING OPENVINO™ (1 hour) A live demo showing how to accelerate Deep Learning inference tasks on FPGA using the Intel® OpenVINO™ toolkit.
SESSION 2: TRAINING AND DEPLOYING USING OPENVINO™ (2.5 hours HANDS-ON) In this session, we discuss the advantages of using Caffe* optimized for Intel® Architecture and show how to train deep network models using one or more compute nodes. In the practical part of this session, you will then get the chance to run a pre-trained model, first on an Intel® CPU and then on the Intel® Movidius™ Neural Compute Stick.
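The CPU-then-Movidius™ deployment flow described above typically has two stages: converting the trained Caffe* model to the OpenVINO™ intermediate representation (IR) with the Model Optimizer, then running the same IR on different targets by switching the device plugin. A rough sketch, where the model, image, and sample-application names are placeholders:

```shell
# Stage 1: convert a trained Caffe model to IR with the Model Optimizer
# (mo.py ships with the OpenVINO toolkit; file names are placeholders):
python3 mo.py --input_model model.caffemodel --input_proto deploy.prototxt \
              --output_dir ./ir
# Stage 2: run the resulting IR with one of the toolkit's sample apps,
# selecting the target with the -d (device) option:
./classification_sample -m ./ir/model.xml -i image.png -d CPU     # Intel CPU
./classification_sample -m ./ir/model.xml -i image.png -d MYRIAD  # Movidius NCS
```

The key point the session demonstrates is that only the `-d` argument changes between targets; the converted model and application code stay the same.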