Understanding the dynamics of GPU utilization and workloads in containerized systems is critical to building efficient software systems. We create a set of dashboards to monitor and evaluate GPU performance in the context of TensorFlow. We monitor performance in real time to gain insight into GPU load, GPU memory, and temperature metrics in a GPU-enabled Kubernetes system. Visualizing TensorFlow training job metrics in real time with Prometheus allows us to tune and optimize GPU usage. Also, because TensorFlow jobs can have both GPU and CPU implementations, it is useful to view detailed real-time performance data from each implementation and choose the better one. To illustrate our system, we show a live demo gathering and visualizing GPU metrics on a GPU-enabled Kubernetes cluster with Prometheus and Grafana.
Monitoring of GPU Usage with TensorFlow Models Using Prometheus
1. MONITORING OF GPU USAGE WITH TENSORFLOW MODEL TRAINING USING PROMETHEUS
Diane Feddema, Principal Software Engineer
Zak Hassan, Senior Software Engineer
#RED_HAT #AICOE #CTO_OFFICE
2. YOUR SPEAKERS
DIANE FEDDEMA
PRINCIPAL SOFTWARE ENGINEER - ARTIFICIAL INTELLIGENCE CENTER OF EXCELLENCE, CTO OFFICE
● Currently focused on developing and applying Data Science and Machine Learning techniques for performance
analysis, automating these analyses and displaying data in novel ways.
● Previously worked as a performance engineer at the National Center for Atmospheric Research, NCAR, working on
optimizations and tuning in parallel global climate models.
ZAK HASSAN
SENIOR SOFTWARE ENGINEER - ARTIFICIAL INTELLIGENCE CENTER OF EXCELLENCE, CTO OFFICE
● Leading the log anomaly detection project within the AIOps team and building a user feedback service for improved
accuracy of machine learning predictions.
● Developing data science apps and working on improved observability of machine learning systems such as Spark and
TensorFlow.
3. Outline
● Story
● Concepts
○ Comparing CPU vs GPU
○ What is CUDA and the anatomy of CUDA on Kubernetes
○ Monitoring GPU and custom metrics with Pushgateway
○ TF with Prometheus integration
○ What are TensorFlow and PyTorch
○ A PyTorch example from MLPerf
○ TensorFlow Tracing
● Examples:
○ Running Jupyter (CPU, GPU, targeting a specific GPU type)
○ Mounting training data into a notebook/TF job
○ Uses of nvidia-smi
● Demo
○ Running Detectron on a Tesla V100 with Prometheus & Grafana monitoring
4. “Design the factory like you would design an advanced computer… In fact use engineers that are used to doing that and have them work on this.”
-- Elon Musk (2016)
https://youtu.be/f9uveu-c5us
Source: https://flic.kr/p/chEftd
5. WHY IS DEEP LEARNING A BIG DEAL?
Mobile
• unlocking phones
Online
• Netflix.com
• Amazon.com
• Targeted ads
Automotive
• self driving
• voice assistant
8. PARALLEL PROCESSING
MOST LANGUAGES SUPPORT IT
● MODERN HARDWARE SUPPORTS EXECUTION OF PARALLEL PROCESSES/THREADS AND HAS APIS TO SPAWN PROCESSES IN PARALLEL
● YOUR ONLY LIMIT IS HOW MANY CPU CORES YOU HAVE ON YOUR MACHINE
● THE CPU USED TO BE A KEY COMPONENT OF HPC
● THE GPU HAS A DIFFERENT ARCHITECTURE & # OF CORES
[Diagram: simplified CPU architecture — control unit, arithmetic logic unit, instruction memory, data memory, input/output]
15. WHAT IS CUDA?
PROPRIETARY TOOLING
● Hardware/software platform for HPC
● Prerequisite: an NVIDIA CUDA-supported graphics card
● ML frameworks like TensorFlow, Theano, and PyTorch use CUDA for hardware acceleration (see the sketch below)
● You may get up to 10x faster performance for machine learning jobs by utilizing CUDA
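As a quick illustration of how a framework picks up CUDA, here is a minimal sketch of our own (assuming a CUDA-enabled TensorFlow 2.x build) that lists the GPUs TensorFlow can see and places a matrix multiply on the first one:

import tensorflow as tf

# GPUs only show up here if TensorFlow was built with CUDA and the NVIDIA
# driver/CUDA libraries are available inside the container
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)

if gpus:
    with tf.device("/GPU:0"):          # explicit placement on the first GPU
        a = tf.random.uniform((1000, 1000))
        b = tf.random.uniform((1000, 1000))
        c = tf.matmul(a, b)            # executed on the GPU through CUDA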
16. ANATOMY OF A CUDA WORKLOAD ON K8S
[Diagram: layered stack — a container running Jupyter, TensorFlow, and the CUDA libs on top of the container runtime; the host OS with the NVIDIA libs and /dev/nvidiaX device files; and the server/GPU hardware underneath]
19. Idle GPU Alert
● Alertmanager can notify via:
○ Slack chat notification
○ email
○ webhook
○ more
● Get notified when your GPU isn't being utilized, and shut down your VMs in the cloud to save on cost.
groups:
- name: nvidia_gpu.rules
  rules:
  - alert: UnusedResources
    expr: nvidia_gpu_duty_cycle == 0
    for: 10m
    labels:
      severity: critical
    annotations:
      description: GPU is not being utilized; you should scale down your GPU node
      summary: GPU node isn't being utilized
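The same Prometheus setup can also receive custom training metrics through the Pushgateway. The sketch below is our own example (the Pushgateway address, job name, and metric name are assumptions); it uses the prometheus_client library to push a training-loss gauge from a TensorFlow job so it can be graphed in Grafana alongside the GPU metrics:

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
loss_gauge = Gauge("tf_training_loss", "Current training loss", registry=registry)

# inside the training loop, after each epoch:
loss_gauge.set(0.42)  # replace with the real loss value
push_to_gateway("pushgateway.monitoring.svc:9091", job="tf-training", registry=registry)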
30. Mounting Training Data
● Use persistent volume claims to access your data
● In this example we use NFS, but you can choose another type (a read example follows the pod spec below)
apiVersion: v1
kind: Pod
metadata:
  name: jp-notebook
spec:
  containers:
  - name: jp-notebook
    image: tensorflow/tensorflow:nightly-gpu-py3-jupyter
    volumeMounts:
    - name: my-pvc-nfs
      mountPath: "/tf/data"
  volumes:
  - name: my-pvc-nfs
    persistentVolumeClaim:
      claimName: nfs
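Inside the notebook, the mounted PVC is just a directory. A minimal sketch of our own for reading it with TensorFlow (assuming, as an illustration, that training images are organized in class subfolders under /tf/data/train):

import tensorflow as tf

# /tf/data is the mountPath from the pod spec above
train_ds = tf.keras.utils.image_dataset_from_directory(
    "/tf/data/train",
    image_size=(224, 224),
    batch_size=32)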
31. Additional Tips
● Kubernetes doesn't support sharing GPUs
● If you're running in the cloud, stop your VM when no workloads are running and restart it when you need it; the costs can add up.
● Use volumes to mount your data for training and share it across your environment
32. Monitoring and Performance of ML on GPUs
● Benchmarking ML on GPUs
○ Monitoring
○ Performance
● Example using MLPerf together with Prometheus and Grafana
● Computing requirements & why GPUs for ML
33. Why do we need GPUs to solve these problems?
● Neural networks rely heavily on floating-point matrix multiplication (see the sketch below)
● These algorithms also require a lot of training data, large memory (GBs), and high-speed networks to complete in a reasonable amount of time
● Faster deep learning training
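To make the matrix-multiplication point concrete, here is a small sketch of our own (the matrix size is illustrative, not a benchmark) that times the same float32 matmul on CPU and GPU in TensorFlow:

import time
import tensorflow as tf

def time_matmul(device, n=4096):
    # build two random float32 matrices and multiply them on the given device
    with tf.device(device):
        a = tf.random.uniform((n, n))
        b = tf.random.uniform((n, n))
        start = time.time()
        tf.matmul(a, b).numpy()   # .numpy() forces the computation to finish
        return time.time() - start

print("CPU:", time_matmul("/CPU:0"))
if tf.config.list_physical_devices("GPU"):
    print("GPU:", time_matmul("/GPU:0"))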
34. Nvidia DGX-2
[Diagram: DGX-2 topology — 16 Tesla V100 GPUs with their DRAM banks]
Source: Nvidia
35. Benchmarks in MLPerf
Application Area: Vision | Language | Commerce | Reinforcement Learning
Problem: Image classification, Object detection (lightweight and heavyweight) | Translation | Recommendations | Games (Go)
Datasets: ImageNet, COCO | WMT English-German | MovieLens-20M | Go
Models: ResNet-50, Detectron | Transformer, OpenNMT | Neural Collaborative Filtering | Mini Go
Metrics: Prediction accuracy, COCO mAP | BLEU | Prediction accuracy | Win/Loss
37. What is TensorFlow?
● Open-source Python library used to implement deep neural networks (released by Google in 2015)
● A machine learning framework
● Tools to write your own models in Python, JavaScript, or Swift
● Collection of datasets ready to use with TensorFlow
● TF runs in Eager and Graph modes (see the sketch below)
● TF can run on CPUs or GPUs
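A small sketch of our own showing the same computation in eager mode and compiled into a graph with tf.function:

import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# Eager mode: ops execute immediately and return concrete values
print(tf.square(x))

# Graph mode: tf.function traces the Python function into a graph
@tf.function
def square(t):
    return tf.square(t)

print(square(x))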
38. What is PyTorch?
● Python-based open-source deep learning library
● Used to build neural networks
● A replacement for NumPy for use with GPUs
● Can run on CPUs or GPUs
● Uses GPUs to accelerate numerical computations (see the sketch below)
● PyTorch performs computations eagerly, building a dynamic computation graph
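A minimal sketch of our own for moving a PyTorch tensor computation onto a GPU when one is available:

import torch

# pick the GPU when CUDA is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

a = torch.rand(1000, 1000, device=device)
b = torch.rand(1000, 1000, device=device)
c = a @ b            # runs on the GPU if device is "cuda"
print(c.device)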
42. MLPerf Results - Single Node
[Chart: single-node MLPerf results]
Source: Nvidia Developer News Dec 2018
43. How to monitor GPUs with nvidia-smi
$ nvidia-smi \
    --query-gpu=timestamp,name,pci.bus_id,driver_version,pstate,pcie.link.gen.max,pcie.link.gen.current,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used \
    --format=csv -l 5
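For clusters without a dedicated GPU exporter, here is a rough sketch of our own (in practice an exporter that provides the nvidia_gpu_* metrics used in the alert rule above is preferable) that runs the same kind of nvidia-smi query from Python and exposes the result for Prometheus to scrape:

import subprocess, time
from prometheus_client import Gauge, start_http_server

util_gauge = Gauge("gpu_utilization_percent",
                   "GPU utilization reported by nvidia-smi", ["gpu"])

start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
while True:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index,utilization.gpu",
         "--format=csv,noheader,nounits"], text=True)
    for line in out.strip().splitlines():
        idx, util = [field.strip() for field in line.split(",")]
        util_gauge.labels(gpu=idx).set(float(util))
    time.sleep(5)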