Applied Deep Learning - Umberto Michelucci

Applied Deep Learning (eBook)

A Case-Based Approach to Understanding Deep Neural Networks
eBook Download: PDF
2018 | 1st ed.
XXI, 410 pages
Apress (publisher)
978-1-4842-3790-8 (ISBN)
System requirements
€66.99 incl. VAT
(CHF 65,45)
eBooks are sold by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately

Work with advanced topics in deep learning, such as optimization algorithms, hyper-parameter tuning, dropout, and error analysis, as well as strategies to address typical problems encountered when training deep neural networks. You'll begin by studying activation functions (ReLU, sigmoid, and Swish), working mostly with a single neuron, then see how to perform linear and logistic regression using TensorFlow and how to choose the right cost function. 
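To make the single-neuron part concrete, here is a minimal NumPy sketch, not code from the book, of the three activation functions named above applied to one neuron's weighted sum; the input, weight, and bias values are invented purely for illustration.

import numpy as np

def relu(z):
    # ReLU: max(0, z)
    return np.maximum(0.0, z)

def sigmoid(z):
    # sigmoid: 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def swish(z, beta=1.0):
    # Swish (Ramachandran et al., 2017): z * sigmoid(beta * z)
    return z * sigmoid(beta * z)

# A single neuron computes a weighted sum of its inputs plus a bias and then
# applies an activation function. All numbers below are hypothetical.
x = np.array([0.5, -1.2, 3.0])   # input features
w = np.array([0.1, 0.4, -0.2])   # weights
b = 0.05                         # bias
z = np.dot(w, x) + b
print(relu(z), sigmoid(z), swish(z))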

The next section covers more complex neural network architectures with several layers and neurons and explores the problem of random weight initialization. An entire chapter is dedicated to a complete overview of neural network error analysis, with examples of solving problems that originate from variance, bias, overfitting, and datasets coming from different distributions. 
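As a small illustration of why random initialization matters, the following NumPy sketch, written under my own assumptions rather than taken from the book, contrasts all-zero initialization, which leaves every neuron in a layer computing the same thing, with small random He-scaled initialization, which breaks that symmetry.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 784, 128   # hypothetical layer sizes

# With all weights at zero, every neuron in the layer produces the same output
# and receives the same gradient, so the neurons never learn different features.
W_zero = np.zeros((n_out, n_in))

# Small random values break this symmetry; the sqrt(2 / n_in) scaling
# (He initialization) keeps activations from shrinking or blowing up with ReLU.
W_he = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in))
b = np.zeros(n_out)      # biases can safely start at zero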

Applied Deep Learning also discusses how to implement logistic regression completely from scratch, without using any Python library except NumPy, to let you appreciate how libraries such as TensorFlow enable quick and efficient experiments. Case studies for each method are included to put all the theoretical information into practice. You'll discover tips and tricks for writing optimized Python code (for example, vectorizing loops with NumPy). 
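To give a taste of what "from scratch with NumPy" and "vectorizing loops" mean in practice, here is a minimal, self-contained sketch of vectorized logistic regression trained with gradient descent. It is an illustration in the spirit of the book's approach, not the book's code, and the synthetic data and hyperparameters are made up.

import numpy as np

rng = np.random.default_rng(42)
m, n = 200, 3                              # 200 examples, 3 features (hypothetical)
X = rng.normal(size=(m, n))
true_w = np.array([1.5, -2.0, 0.7])
y = (X @ true_w + 0.3 > 0).astype(float)   # synthetic binary labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, lr = np.zeros(n), 0.0, 0.1
for _ in range(500):
    # Forward pass on all m examples at once: no Python loop over samples.
    y_hat = sigmoid(X @ w + b)
    # Gradients of the cross-entropy cost, also fully vectorized.
    dw = X.T @ (y_hat - y) / m
    db = np.mean(y_hat - y)
    w -= lr * dw
    b -= lr * db

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")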

What You Will Learn

  • Implement advanced techniques in the right way in Python and TensorFlow
  • Debug and optimize advanced methods (such as dropout and regularization)
  • Carry out error analysis (to determine whether you have a bias problem, a variance problem, a data offset problem, and so on); a rough sketch follows this list
  • Set up a machine learning project focused on deep learning on a complex dataset
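
The following rough sketch, my own illustration rather than anything from the book, shows the kind of check that sits behind such an error analysis: comparing training and dev errors against a reference error level to decide whether bias or variance dominates. The thresholds and numbers are hypothetical.

def diagnose(train_error, dev_error, target_error=0.02):
    # target_error stands in for an estimate of the achievable (e.g. human-level)
    # error; the 0.05 gaps are arbitrary thresholds for illustration only.
    if train_error - target_error > 0.05:
        print("High bias: the model underfits even the training data.")
    if dev_error - train_error > 0.05:
        print("High variance: the model does much worse on dev than on train.")
    if dev_error - target_error <= 0.05 and dev_error - train_error <= 0.05:
        print("Errors look balanced; further gains may need more or better data.")

diagnose(train_error=0.01, dev_error=0.12)   # flags a variance problem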

Who This Book Is For

Readers with an intermediate understanding of machine learning, linear algebra, calculus, and basic Python programming. 




Umberto is currently Head of Innovation in BI & Analytics at a leading health insurance company in Switzerland, where he leads several strategic initiatives dealing with AI, new technologies, and machine learning. He has worked as a data scientist and lead modeller on several large healthcare projects and has extensive hands-on experience in programming and algorithm design. Before that, he managed BI and data-warehouse projects, enabling data-driven solutions to be implemented in complex production environments. Over the last two years he has worked extensively with neural networks and has applied deep learning to several problems linked to insurance and client behaviour (such as customer churn). He has presented his deep learning results at international conferences and has gained an internal reputation for his extensive experience with Python and deep learning. 




Chapter 1: Introduction
Chapter goal: Describe the book and the TensorFlow infrastructure, and give instructions on how to set up a system for deep learning projects.
Estimated pages: 30-50
Sub-topics:
  • Goal of the book
  • Prerequisites
  • Introduction to TensorFlow and Jupyter Notebooks
  • How to set up a computer to follow the book (Docker image?)
  • Tips for TensorFlow development and the libraries needed (NumPy, matplotlib, etc.)
  • The problem of vectorizing code and calculations
  • Additional resources

Chapter 2: Single Neurons
Chapter goal: Describe what you can achieve with neural networks of just one neuron.
Estimated pages: 50-70
Sub-topics:
  • Overview of the different parts of a neuron
  • Activation functions (ReLU, sigmoid, modified ReLU, etc.) and their differences (which one suits which task better)
  • The new Google activation function Swish (https://arxiv.org/abs/1710.05941)
  • Discussion of the optimization algorithm (gradient descent)
  • Linear regression
  • Basic TensorFlow introduction
  • Logistic regression
  • Regression (linear and logistic) with TensorFlow
  • A practical case discussed in detail
  • The difference between regression and classification for one neuron
  • Tips for TensorFlow implementation

Chapter 3: Fully Connected Neural Networks with More Neurons
Chapter goal: Describe what a fully connected neural network is, how to implement one (with one or more layers, etc.), and how to perform classification (binary and multi-class) and regression.
Estimated pages: 30-50
Sub-topics:
  • What a tensor is
  • Dimensions of the tensors involved (weights, inputs, etc.), with tips on the TensorFlow implementation
  • The distinction between features and labels
  • The problem of weight initialization (random, constant, zeros, etc.)
  • Second TensorFlow tutorial
  • A practical case discussed in detail
  • Tips for TensorFlow implementation
  • Classification and regression with such networks and how the output layer differs
  • Softmax for multi-class classification
  • Binary classification

Chapter 4: Neural Network Error Analysis
Chapter goal: Describe the problem of identifying the sources of errors (variance, bias, skewed data, not enough data, overfitting, etc.).
Estimated pages: 50-70
Sub-topics:
  • Train, dev, and test datasets: why do we need three? Do we need four? What can we detect with the different datasets, and how should we use and size them?
  • Sources of errors (overfitting, bias, variance, etc.)
  • What overfitting is: a discussion
  • Why overfitting matters with neural networks
  • A practical case discussion
  • A guide to performing error analysis
  • A practical example with a complete error analysis
  • The problem of the datasets (train, dev, test, etc.) coming from different distributions
  • Data augmentation techniques and examples
  • How to deal with too little data
  • How to split the datasets (train, dev, test): not 60/20/20 but closer to 98/1/1 when we have a lot of data
  • Tips for TensorFlow implementation

Chapter 5: The Dropout Technique
Chapter goal: Describe what dropout is and when to employ it (a rough sketch of the idea follows this outline).
Estimated pages: 30-50
Sub-topics:
  • What dropout is
  • When we need to employ dropout
  • How the use of dropout differs between training and test sets
  • How to optimize the dropout parameters
  • TensorFlow implementation
  • A practical case discussed
  • Tips for TensorFlow implementation

Chapter 6: Hyperparameter Tuning
Chapter goal: Explain what hyperparameters are, which ones are usually tuned, and what "hyperparameter optimization" means.
Estimated pages: 30-50
Sub-topics:
  • What hyperparameters are
  • Which hyperparameters are usually tuned in a deep learning project
  • How to set up a machine learning project in TensorFlow so that this optimization is easy
  • Practical tips
  • Visualization tips for hyperparameter optimization
  • Tips for TensorFlow implementation

Chapter 7: TensorFlow and Optimizers (Gradient Descent, Adam, Momentum, etc.)
Chapter goal: Analyze the problem of optimizers and their implementation in TensorFlow.
Estimated pages: 50-60
Sub-topics:
  • Overview of the different optimization algorithms (gradient descent, Adam, momentum, etc.), including the mathematics
  • Speed of convergence of the different algorithms
  • Hyperparameters that determine the behavior of those optimizers
  • Which of those hyperparameters need tuning
  • Comparison of the performance of the different algorithms
  • Strategies for dynamically adapting the learning rate
  • Practical examples
  • Tips for TensorFlow implementation

Chapter 8: Convolutional Networks and Image Recognition
Chapter goal: Give readers a good basis in convolutional networks and how to implement them in TensorFlow.
Estimated pages: 30-50
Sub-topics:
  • What a convolutional network is
  • When to use them
  • How to develop them with TensorFlow
  • A practical case explained in detail
  • Tips for TensorFlow implementation

Chapter 9: Recurrent Neural Networks
Chapter goal: Give readers a good basis in recurrent neural networks and how to implement them in TensorFlow.
Estimated pages: 30-50
Sub-topics:
  • What an RNN is
  • When to use them
  • How to develop them with TensorFlow
  • A practical case explained in detail
  • Tips for TensorFlow implementation

Chapter 10: A Complete Practical Example from Scratch (Putting Everything Together)
Chapter goal: Bring together everything explained before in a real-life machine learning project, with all aspects included.
Estimated pages: 30-50
Sub-topics:
  • Discussion of the dataset (not a simple one, but something with real deep learning potential)
  • Clean-up and preparation of the dataset
  • Complete code implementation
  • Analysis and discussion of the results
  • Error analysis
  • Conclusions
  • Tips for TensorFlow implementation

Chapter 11: Logistic Regression Implemented from Scratch in TensorFlow Without Libraries
Chapter goal: Give readers a sense of the complexity of implementing a simple method completely from scratch, so they understand how easy it is to work with TensorFlow.
Estimated pages: 20-30
Sub-topics:
  • Complete implementation of logistic regression in TensorFlow from scratch and an analysis of the code
  • A practical example
  • Comparison of the sklearn and TensorFlow implementations
  • Tips for TensorFlow implementation
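
As a pointer to what the dropout chapter is about, here is a minimal NumPy sketch of "inverted" dropout at training time, written under my own assumptions rather than taken from the book: a random fraction of a layer's activations is zeroed out and the survivors are rescaled so their expected value is unchanged, while at test time nothing is dropped.

import numpy as np

rng = np.random.default_rng(0)
keep_prob = 0.8                         # hypothetical probability of keeping a unit
a = rng.normal(size=(5, 4))             # activations of one layer (toy values)

mask = rng.random(a.shape) < keep_prob  # True where the unit is kept
a_train = a * mask / keep_prob          # training: drop units and rescale the rest
a_test = a                              # test/inference: use the activations as-is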

Publication date (per publisher): 7 September 2018
Additional information: XXI, 410 p., 178 illus., 7 illus. in color
Place of publication: Berkeley
Language: English
Subject areas: Mathematics / Computer Science > Computer Science > Databases
Mathematics / Computer Science > Computer Science > Programming Languages / Tools
Computer Science > Theory / Studies > Artificial Intelligence / Robotics
Keywords: convolutional neural networks • Deep learning • Dropout • Neuron Activation Functions • Python • Recurrent Neural Networks • Regularization • Scikit-learn • TensorFlow
ISBN-10 1-4842-3790-0 / 1484237900
ISBN-13 978-1-4842-3790-8 / 9781484237908
PDF (digital watermark)
Size: 13.0 MB

DRM: digital watermark
This eBook contains a digital watermark and is therefore personalized to you. If the eBook is passed on to third parties without authorization, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suitable for technical books with columns, tables, and figures. A PDF can be displayed on almost any device, but is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read with (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a PDF viewer, e.g. the free Adobe Digital Editions app.

Buying eBooks from abroad
For tax law reasons, we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfil eBook orders from other countries.
