HPC needs for AI - Stéphane Canu, INSA Rouen - Normandy University, ORAP 42nd Forum

HPC needs for AI

Stéphane Canu, INSA Rouen – Normandy University
   github.com/StephaneCanu/Deep_learning_lecture

                ORAP - 42nd Forum

                CNRS, November 5
Road map

1   My HPC experience

2   Today's AI: deep learning?

3   What's new with deep learning?

4   CPU vs GPU

5   Conclusion
My (HP)C & machine learning experience

1989 Neural networks for prediction
        -   many architectures to compare

1989 C → parallel Fortran on an Alliant
        -   6 months of development
        -   never used: too specific

1991 C → Matlab
        -   fast iteration
        -   efficient on matrices
        -   data exploration

1998 a fast SVM solver (Matlab)
My (HP)C & machine learning experience
1998 a fast SVM solver (Matlab)

2005 Matlab → C
       -   SVM on 8M examples
       -   a success . . . useless

2014 Matlab → Python
       -   because of deep learning
       -   and its community
       -   Theano, scikit-learn

2016 Theano → Keras (GAFAM)
       -   because of its simplicity
       -   transparent GPU use

2019 Keras → PyTorch, CuPy, Dask
Google Colab – https://colab.research.google.com

                               https://colab.research.google.com/drive/1pJ20J4I4bgxnxQJ08VncJj9aTGS9cI48

Jupyter-like notebook (Python)
    accessible via the web with a (permanent) login
    collaborative
    GPU and TPU support
    free
Machine learning and (HP)C

  a need for experimentation
    -   comparing different solutions

  a need for prototyping
    -   (online) interactivity
          *   Matlab
          *   Python
          *   R
    -   collaborative work
    -   easy access (Dask)

  a fast-moving field
    -   no bespoke, domain-specific development
    -   performance still required
Road map

1   My HPC experience

2   Today's AI: deep learning?

3   What's new with deep learning?

4   CPU vs GPU

5   Conclusion
Deep learning for turning text into speech (and vice versa)

Baidu Deep Speech 2 (2015) and Deep Voice (2017)
   Trained on 9,400 hours of labeled audio with 11 million utterances.
                       2.5 TB of raw data
Deep learning for healthcare
                       Skin cancer classification
                       130,000 training images
                       validation error rate: 28 % (human: 34 %)

                       the Digital Mammography DREAM Challenge
                       640,000 mammograms (1,209 participants)
                       5 % fewer false positives

                       heart rate analysis
                       500,000 ECGs
                       precision: 92.6 % (human: 80.0 %), sensitivity: 97 %
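
The precision and sensitivity figures quoted above follow directly from a confusion matrix. A minimal sketch of the two definitions (the counts below are invented for illustration, not the studies' data):

```python
def precision(tp, fp):
    # fraction of positive predictions that are actually positive
    return tp / (tp + fp)

def sensitivity(tp, fn):
    # fraction of actual positives that are detected (a.k.a. recall)
    return tp / (tp + fn)

# hypothetical counts, for illustration only
tp, fp, fn = 926, 74, 30
print(f"precision   = {precision(tp, fp):.1%}")    # 92.6%
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 96.9%
```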

Statistical machine learning: retrieving correlations
                with an end-to-end deep learning architecture
                    "April showers bring May flowers"
Deep learning success in playing GO

                  Mastering the game of Go without human knowledge D. Silver et al. Nature, 550, 2017
Deep learning progresses in playing Dota 2

                                          a separate LSTM for each hero

                                          180 years of games per day,
                                          played against itself

                                          Proximal Policy Optimization

                                          256 GPUs and 128,000 CPU cores

                                          OpenAI Five is very much
                                          still a work in progress

                                  https://blog.openai.com/the-international-2018-results/
Deep learning (limited) success in NLP

Learning to translate with 36 million sentences
    Near Human-Level Performance in Grammatical Error Correction
    Achieving Human Parity on Automatic News Translation

                                                  https://devblogs.nvidia.com/author/kcho/
What about personal assistants?

           Text understanding: context + common sense
Deep learning to drive: the Rouen autonomous lab

Driving Video Database = 100,000 videos – 120 million images
    When It Comes to Safety, Autonomous Cars Are Still "Teen Drivers"
    companies are developing many different levels of automation

                                           https://www.rouennormandyautonomouslab.com/?lang=en
                                                                  http://bdd-data.berkeley.edu
So far, so good

    Deep learning performance breakthroughs
      -   Low-level perception tasks: speech, image and video processing,
          natural language processing, games. . .
      -   . . . and specific tasks in health care, astronomy. . .

    It requires
      -   Big data
      -   Big computers
      -   Specific tasks

    next steps
      -   NLP: prior knowledge
      -   common sense: unsupervised learning
      -   provide guarantees
Road map

1   My HPC experience

2   Today's AI: deep learning?

3   What's new with deep learning?

4   CPU vs GPU

5   Conclusion
What’s new with deep learning
   a lot of data (big data)
   big computing resources (hardware & software),
   big models (deep vs. shallow)
     → new architectures
     → new learning tricks

                    from Recent advances in convolutional neural networks Gu et al. Pattern Recognition, 2017
Big data: a lot of available training data


    ImageNet: a 1,200,000 × 256 × 256 × 3 block of pixels (about 200 GB)

    MS COCO for supervised learning
      -   Multiple objects per image
      -   More than 300,000 images
      -   More than 2 million instances
      -   80 object categories
      -   5 captions per image

    YFCC100M for unsupervised learning

    Google Open Images, 9 million URLs to images annotated over 6000
    categories

    Visual genome: data + knowledge http://visualgenome.org/
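
The ImageNet size quoted above is straightforward arithmetic, assuming one byte per color channel; checking it:

```python
# 1.2M images of 256x256 pixels, 3 color channels, 1 byte each
n_images, height, width, channels = 1_200_000, 256, 256, 3
total_bytes = n_images * height * width * channels
print(f"{total_bytes / 1e9:.0f} GB")  # 236 GB, i.e. "about 200 GB"
```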
Big architectures
Road map

1   Mon expérience HPC

2   l’IA ds’aujourd’hui : le deep learning ?

3   Quoi de neuf avec le deep learning?

4   CPU vs GPU

5   Conclusion
GPUs: 10 times faster

              https://github.com/StephaneCanu/Deep_learning_lecture/blob/master/jupyter_notebooks/TP1_MNIST.ipynb
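
The linked notebook times MNIST training on CPU vs GPU; the speed-up comes from the fact that deep learning's dominant operation, matrix multiplication, is data-parallel: each output entry is independent of the others, which is exactly what a GPU's thousands of cores exploit. A pure-Python sketch of that structure (an illustration, not a benchmark):

```python
def matmul(A, B):
    # C[i][j] depends only on row i of A and column j of B:
    # all n*p output entries are independent, so a GPU can
    # compute them in parallel across its cores
    n, k, p = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(p)]
            for i in range(n)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```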
Which GPU?

                                What Makes One GPU Faster Than Another?
                http://timdettmers.com/2018/11/05/which-gpu-for-deep-learning/
AI supercomputers around the world (2017)

                                      Rapport sur une Infrastructure de recherche pour l’intelligence artificielle, 2018
       T. Kurth et al. Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific data, 2017
AI supercomputers in 2018

Japan: 54 petaflops                                        USA: 200 petaflops (AI?)

                      https://devblogs.nvidia.com/summit-gpu-supercomputer-enables-smarter-science/
                                                      https://www.top500.org/green500/lists/2018/06/
Road map

1   My HPC experience

2   Today's AI: deep learning?

3   What's new with deep learning?

4   CPU vs GPU

5   Conclusion
Needs

Access to one or more GPUs, with two modes
     interactive (cf. Colab)
     batch (cf. AWS)

A dedicated competence center for
     easing access
     hardware/software maintenance
     managing and provisioning data for AI

Secure and scalable
Open issues in deep learning: a critical appraisal

For most problems where deep learning has enabled transformationally
better solutions (vision, speech), we’ve entered diminishing returns territory
in 2016-2017.
                           Francois Chollet, Google, author of Keras neural network library Dec. 2017
10 limits on the scope of deep learning

   it is data hungry
   specialized training (no learning)
   it has no natural way to deal with hierarchical structure
   it has struggled with open-ended inference
   it is not sufficiently transparent
   it is not well integrated with prior knowledge (NLP)
   it cannot distinguish causation from correlation
   it assumes stationarity
   it can be fooled (it is not robust)
   it is difficult to engineer with
To go further
    books
      -   I. Goodfellow, Y. Bengio & A. Courville, Deep Learning, MIT Press, 2016
          http://www.deeplearningbook.org/
      -   Gitbook leonardoaraujosantos.gitbooks.io/artificial-inteligence/
    conferences
      -   NIPS, ICLR, ICML, AIStats
    journals
      -   JMLR, Machine Learning, Foundations and Trends in Machine
          Learning, the machine learning surveys list http://www.mlsurveys.com/
    lectures
      -   Deep Learning: course by Yann LeCun at the Collège de France in 2016
          college-de-france.fr/site/en-yann-lecun/inaugural-lecture-2016-02-04-18h00.htm
      -   Convolutional Neural Networks for Visual Recognition (Stanford)
      -   DeepMind blog (https://deepmind.com/blog/)
      -   CS 229: Machine Learning at Stanford, by Andrew Ng
    blogs
      -   Andrej Karpathy's blog (http://karpathy.github.io/)
      -   http://deeplearning.net/blog/
      -   https://computervisionblog.wordpress.com/category/computer-vision/