Neural computing – Data science Certificate for Alessandro Navone
Certificate ID: 663387
Authentication Code: 89089
Certified Person Name: Alessandro Navone
Trainer Name: Anna Kotarba
Duration Days: 2
Duration Hours: 14
Course Name: Neural computing – Data science
Course Date: 2021-12-07 09:00 to 2021-12-08 16:00
Course Outline:
- Overview of neural networks and deep learning
  - The concept of Machine Learning (ML)
  - Why do we need neural networks and deep learning?
  - Selecting networks for different problems and data types
  - Learning and validating neural networks
  - Comparing logistic regression to a neural network
- Neural networks
  - Biological inspirations for neural networks
  - Neural networks – neuron, perceptron and MLP (multilayer perceptron model)
  - Learning an MLP – the backpropagation algorithm
  - Activation functions – linear, sigmoid, tanh, softmax
  - Loss functions appropriate to forecasting and classification
  - Parameters – learning rate, regularization, momentum
  - Building neural networks in Python (see the sketch after this section)
  - Evaluating the performance of neural networks in Python
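A minimal sketch of the two Python items above (building and evaluating a neural network). The library choice (scikit-learn), the dataset (the built-in digits set) and the hyperparameters are illustrative assumptions, not taken from the course materials.

```python
# Minimal sketch: build, train and evaluate an MLP in Python.
# scikit-learn, the digits dataset and all hyperparameters are assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 64 neurons with ReLU activation; the learning rate
# and the L2 penalty (alpha) are the tunable parameters named in the outline.
mlp = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                    solver="adam", learning_rate_init=0.001,
                    alpha=1e-4, max_iter=300, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```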
- Basics of deep networks
  - What is deep learning?
  - Architecture of deep networks – parameters, layers, activation functions, loss functions, solvers
  - Restricted Boltzmann Machines (RBMs)
  - Autoencoders
- Deep network architectures
  - Deep Belief Networks (DBN) – architecture, applications
  - Autoencoders
  - Restricted Boltzmann Machines
  - Convolutional Neural Networks
  - Recursive Neural Networks
  - Recurrent Neural Networks
- Overview of libraries and interfaces available in Python
  - Caffe
  - Theano
  - TensorFlow
  - Keras
  - MXNet
  - Choosing the appropriate library for a given problem
- Building deep networks in Python
  - Choosing the appropriate architecture for a given problem
  - Hybrid deep networks
  - Learning a network – choosing a library, defining the architecture
  - Tuning a network – initialization, activation functions, loss functions, optimization method
  - Avoiding overfitting – detecting overfitting in deep networks, regularization (see the sketch after this section)
  - Evaluating deep networks
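A hedged illustration of the tuning and overfitting items above. It assumes a Keras/TensorFlow stack and synthetic placeholder data; it shows initialization, L2 regularization, dropout and early stopping on the validation loss, which is where overfitting typically shows up.

```python
# Sketch: limiting overfitting in a small deep network.
# Keras/TensorFlow, the synthetic data and all hyperparameters are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic data standing in for a real training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("int32")

model = keras.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu",
                 kernel_initializer="he_normal",                   # initialization
                 kernel_regularizer=keras.regularizers.l2(1e-4)),  # L2 regularization
    layers.Dropout(0.3),                                           # dropout regularization
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

# Overfitting appears as validation loss rising while training loss keeps
# falling; early stopping halts training when validation loss stops improving.
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                           restore_best_weights=True)
history = model.fit(X, y, validation_split=0.2, epochs=100,
                    batch_size=32, callbacks=[early_stop], verbose=0)
print("stopped after", len(history.history["loss"]), "epochs")
```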
- Case studies in Python
  - Image recognition – CNN (see the sketch after this outline)
  - Detecting anomalies with autoencoders
  - Forecasting time series with an RNN
  - Dimensionality reduction with an autoencoder
  - Classification with an RBM
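For the image-recognition case study, the following is a minimal sketch assuming a Keras/TensorFlow stack and the built-in MNIST digits dataset; the architecture and hyperparameters are illustrative only and are not taken from the course materials.

```python
# Sketch: image recognition with a small CNN on MNIST.
# Keras/TensorFlow, the dataset and the architecture are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Load and normalise the MNIST images to [0, 1], adding a channel axis.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., np.newaxis].astype("float32") / 255.0
x_test = x_test[..., np.newaxis].astype("float32") / 255.0

# Two conv/pool blocks followed by a dense softmax classifier.
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dropout(0.5),                     # regularization against overfitting
    layers.Dense(10, activation="softmax"),  # 10 digit classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, batch_size=128, epochs=2, validation_split=0.1)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```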