Certificate Authentication

Artificial Neural Networks, Machine Learning, Deep Thinking Certificate for KAMIL CIOMCIA

Certificate ID: 604079
Authentication Code: 8122b
Certified Person Name: KAMIL CIOMCIA
Trainer Name: Tomasz Zając
Duration Days: 3
Duration Hours: 21
Course Name: Artificial Neural Networks, Machine Learning, Deep Thinking
Course Date: 20 November 2019 09:00 to 22 November 2019 16:00
Venue: Warszawa
Course Outline: 

DAY 1 - ARTIFICIAL NEURAL NETWORKS

Introduction and ANN Structure.

  • Biological neurons and artificial neurons.
  • Model of an ANN.
  • Activation functions used in ANNs.
  • Typical classes of network architectures.
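
As a point of reference for the neuron model and activation functions above, here is a minimal NumPy sketch of a single artificial neuron (illustrative only; the inputs, weights and choice of functions are made up, not taken from the course materials):

```python
import numpy as np

# Common activation functions used in ANNs.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    return np.tanh(z)

def relu(z):
    return np.maximum(0.0, z)

def neuron(x, w, b, activation=sigmoid):
    """A single artificial neuron: weighted sum of inputs plus bias,
    passed through an activation function, y = f(w.x + b)."""
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.0, 2.0])   # example inputs (illustrative values)
w = np.array([0.1, 0.4, -0.3])   # example weights (illustrative values)
print(neuron(x, w, b=0.2))             # sigmoid output
print(neuron(x, w, b=0.2, activation=relu))  # same neuron with ReLU
```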

Mathematical Foundations and Learning mechanisms.

  • Re-visiting vector and matrix algebra.
  • State-space concepts.
  • Concepts of optimization.
  • Error-correction learning.
  • Memory-based learning.
  • Hebbian learning.
  • Competitive learning.
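
A brief sketch of two of the learning mechanisms above, Hebbian learning and error-correction (delta-rule) updates, with made-up data purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)            # input pattern (toy data)
d = 1.0                           # desired output for the error-correction case
w = rng.normal(size=3) * 0.1      # small random initial weights
eta = 0.1                         # learning rate

# Hebbian learning: strengthen weights when input and output are active together.
y = w @ x
w_hebb = w + eta * y * x

# Error-correction (delta rule): move weights to reduce the output error (d - y).
y = w @ x
w_delta = w + eta * (d - y) * x
```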

Single-layer perceptrons.

  • Structure and learning of perceptrons.
  • Pattern classifier - introduction and Bayes' classifiers.
  • Perceptron as a pattern classifier.
  • Perceptron convergence.
  • Limitations of the perceptron.
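
A minimal perceptron-learning sketch on a linearly separable toy problem (the AND function); illustrative only, and the mistake-driven updates it uses are exactly what the perceptron convergence theorem above guarantees will terminate:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])     # AND gate with +/-1 labels

w = np.zeros(2)
b = 0.0
eta = 1.0                          # learning rate

for epoch in range(100):
    errors = 0
    for xi, target in zip(X, y):
        pred = 1 if (w @ xi + b) >= 0 else -1
        if pred != target:         # update only on misclassified points
            w += eta * target * xi
            b += eta * target
            errors += 1
    if errors == 0:                # converged: every point classified correctly
        break

print(w, b)
```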

Feedforward ANN.

  • Structures of Multi-layer feedforward networks.
  • Backpropagation algorithm.
  • Backpropagation - training and convergence.
  • Functional approximation with backpropagation.
  • Practical and design issues of backpropagation learning.
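
A compact NumPy sketch of backpropagation in a two-layer feedforward network on the XOR problem (illustrative code with arbitrary hyperparameters, not the course's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)    # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the mean squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # should approach [0, 1, 1, 0]
```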

Radial Basis Function Networks.

  • Pattern separability and interpolation.
  • Regularization Theory.
  • Regularization and RBF networks.
  • RBF network design and training.
  • Approximation properties of RBF.
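
An illustrative sketch of a Gaussian RBF network fitted by regularised least squares; the centres, width and ridge term are hand-picked assumptions, not values prescribed by the course:

```python
import numpy as np

# Toy regression data: a noisy sine wave.
x = np.linspace(-3, 3, 40)
y = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=x.size)

centres = np.linspace(-3, 3, 10)   # fixed grid of RBF centres (an assumption)
sigma = 0.7                        # common width of the Gaussian basis functions

# Design matrix of radial basis activations phi_j(x) = exp(-(x - c_j)^2 / (2 sigma^2)).
Phi = np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2 * sigma ** 2))

# Output weights from regularised least squares; the ridge term plays the role
# of the regularisation discussed above.
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(centres.size), Phi.T @ y)

y_hat = Phi @ w   # network output at the training points
```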

Competitive Learning and Self-Organizing ANNs.

  • General clustering procedures.
  • Learning Vector Quantization (LVQ).
  • Competitive learning algorithms and architectures.
  • Self-organizing feature maps.
  • Properties of feature maps.
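
A small sketch of winner-take-all competitive learning on three toy clusters (illustrative data and learning rate; a full self-organizing map would additionally update the winner's neighbours on the map grid):

```python
import numpy as np

rng = np.random.default_rng(0)
# Three toy clusters in the plane.
data = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
                  for c in ([0, 0], [3, 3], [0, 3])])

prototypes = rng.normal(size=(3, 2))   # randomly initialised prototype vectors
eta = 0.05                              # learning rate

for epoch in range(20):
    for x in rng.permutation(data):
        # The closest prototype wins the competition...
        winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
        # ...and only the winner is moved towards the input.
        prototypes[winner] += eta * (x - prototypes[winner])

print(prototypes)   # should settle near the three cluster centres
```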

Fuzzy Neural Networks.

  • Neuro-fuzzy systems.
  • Background of fuzzy sets and logic.
  • Design of fuzzy systems.
  • Design of fuzzy ANNs.
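
An illustrative sketch of triangular fuzzy membership functions, the kind of fuzzy-set building block a neuro-fuzzy system combines with an ANN; the variable and the breakpoints are invented for the example:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, then falls to zero at c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Fuzzy sets over a toy temperature variable (degrees Celsius).
temps = np.linspace(0, 40, 9)
cold = triangular(temps, a=-10, b=0, c=20)
warm = triangular(temps, a=10, b=20, c=30)
hot  = triangular(temps, a=20, b=40, c=50)

print(np.column_stack([temps, cold, warm, hot]).round(2))
```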

Applications

  • A few examples of neural network applications will be discussed, along with their advantages and problems.

DAY 2 - MACHINE LEARNING

  • The PAC Learning Framework
    • Guarantees for finite hypothesis set – consistent case
    • Guarantees for finite hypothesis set – inconsistent case
    • Generalities
      • Deterministic vs. stochastic scenarios
      • Bayes error and noise
      • Estimation and approximation errors
      • Model selection
  • Rademacher Complexity and VC Dimension
  • Bias-variance tradeoff
  • Regularisation
  • Over-fitting
  • Validation
  • Support Vector Machines
  • Kriging (Gaussian Process regression)
  • PCA and Kernel PCA
  • Self-Organising Maps (SOM)
  • Kernel-induced vector spaces
    • Mercer kernels and kernel-induced similarity metrics
  • Reinforcement Learning
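
A short sketch tying together several of the Day 2 topics: an RBF-kernel support vector machine whose regularisation strength C is chosen by cross-validation. scikit-learn and the toy dataset are assumptions for illustration, not tools prescribed by the outline:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy two-class problem that is not linearly separable.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Validation in action: 5-fold cross-validation over a few regularisation settings.
for C in (0.1, 1.0, 10.0):
    scores = cross_val_score(SVC(kernel="rbf", C=C), X, y, cv=5)
    print(f"C={C}: mean CV accuracy = {scores.mean():.3f}")
```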

DAY 3 - DEEP LEARNING

This will be taught in relation to the topics covered on Day 1 and Day 2.

  • Logistic and Softmax Regression
  • Sparse Autoencoders
  • Vectorization, PCA and Whitening
  • Self-Taught Learning
  • Deep Networks
  • Linear Decoders
  • Convolution and Pooling
  • Sparse Coding
  • Independent Component Analysis
  • Canonical Correlation Analysis
  • Demos and Applications
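
A brief NumPy sketch of PCA whitening, one of the Day 3 preprocessing topics: the data are rotated onto their principal axes and each axis is rescaled to unit variance. The random data and the epsilon value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated toy data: random samples pushed through a fixed linear mixing matrix.
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.0, 0.5, 0.3]])
X = X - X.mean(axis=0)                      # centre the data

cov = X.T @ X / X.shape[0]
eigvals, eigvecs = np.linalg.eigh(cov)      # eigendecomposition of the covariance
eps = 1e-5                                  # guards against near-zero eigenvalues
X_white = (X @ eigvecs) / np.sqrt(eigvals + eps)

print(np.cov(X_white, rowvar=False).round(2))   # approximately the identity matrix
```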