NEURAL NETWORKS

Description

Syllabus

Beginners

  • Unit-1: Introduction, Motivation and History of Neural Networks
    • Neural Networks
    • Applications
      • Classification of Data
      • Anomaly Detection
      • Speech Recognition
      • Audio Generation
      • Time Series Analysis
      • Spell Checking
      • Character Recognition
      • Machine Translation
      • Image Processing
    • General Structure
    • Perceptron
    • Steps Involved in a Neural Network (see the sketch after this unit)
      • Feedforward
      • Backpropagation
    • Why do we need Backpropagation?
    • Basic Flow of Neural Networks
    • The 100-step rule
    • Simple Application Examples
    • The Classical Way
    • The Way of Learning
    • A Brief History of Neural Networks
    • The Beginning
    • Golden Age
    • Long Silence and Slow Reconstruction
    • Renaissance
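
    A minimal sketch of the feedforward step listed above, assuming a tiny
    2-2-1 network with sigmoid activations; all weights are made-up,
    illustrative values rather than course material:

      import math

      # Feedforward: every neuron forms a weighted sum of its inputs and
      # passes it through the sigmoid activation function.
      def sigmoid(x):
          return 1.0 / (1.0 + math.exp(-x))

      def feedforward(x, W_hidden, W_out):
          hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in W_hidden]
          return sigmoid(sum(w * h for w, h in zip(W_out, hidden)))

      W_hidden = [[0.5, -0.4], [0.3, 0.8]]   # hypothetical hidden-layer weights
      W_out = [1.2, -0.7]                    # hypothetical output weights
      print(feedforward([1.0, 0.0], W_hidden, W_out))

    Backpropagation, treated in Unit-5, adjusts exactly these weights from
    the output error.
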
Intermediate

  • Unit-2: Biological Neural Networks
    • Biological Overview
    • The Vertebrate Nervous System
    • Peripheral and Central Nervous System
    • The Cerebrum is Responsible for Abstract Thinking Processes
    • The Cerebellum Controls and Coordinates Motor Functions
    • The Diencephalon Controls Fundamental Physiological Processes
    • The Brainstem Connects the Brain with the Spinal Cord and Controls Reflexes
    • Neurons are Information Processing Cells
    • Components of a Neuron
    • Synapses Weight the Individual Parts of Information
    • Neurotransmitters
    • Dendrites Collect all Parts of Information
    • In the Soma, the Weighted Information is Accumulated
    • The Axon Transfers Outgoing Pulses
    • Electrochemical Processes in the Neuron and Its Components
    • Neurons Maintain Electrical Membrane Potential
    • Membrane Potential
    • The Neuron is Activated by Changes in the Membrane Potential
    • Threshold and Resting State
    • Initiation of Action Potential Over Time
    • In the Axon, a Pulse is Conducted in a Saltatory Way
    • Receptor Cells are Modified Neurons
    • There are Different Receptor Cells for Various Types of Perceptions
    • Information is Processed on Every Level of the Nervous System
    • The Information Processing is Entirely Decentralized
    • An Outline of Common Light Sensing Organs
    • Compound Eyes and Pinhole Eyes Only Provide High Temporal or Spatial Resolution
    • Single Lens Eyes Combine the Advantages of the Other Two Eye Types, but They are More Complex
    • The Retina does not Only Receive Information but is also Responsible for Information Processing
    • Steps of Information Processing
    • Horizontal and Amacrine Cells
    • The Number of Neurons in Living Organisms at Different Stages of Development
    • Transition to Technical Neurons: Neural Networks are a Caricature of Biology (see the sketch after this unit)
    • Through Radical Simplification, we Briefly Summarize the Conclusions Relevant for the Technical Part
    • Our Brief Summary Corresponds Exactly with the Few Elements of Biological Neural Networks we Want to Take Over into the Technical Approximation
    • Exercises
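
    The transition from biological to technical neurons above boils down to
    a weighted sum with a threshold; a minimal sketch of that caricature,
    with made-up numbers:

      # Synapses weight the inputs, the soma accumulates them, and the axon
      # emits an all-or-nothing pulse once the threshold is exceeded.
      def technical_neuron(inputs, weights, threshold):
          soma = sum(w * x for w, x in zip(weights, inputs))  # accumulation
          return 1 if soma >= threshold else 0                # outgoing pulse

      print(technical_neuron([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.6))  # -> 1
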
Advanced

  • Unit-3: Components of Artificial Neural Networks
    • The Concept of Time in Neural Networks
    • Components of Neural Networks
    • Data Processing of a Neuron
    • Connections Carry Information That is Processed by Neurons
    • The Propagation Function Converts Vector Inputs to Scalar Network Inputs
    • The Activation is the “Switching Status” of a Neuron
    • Neurons Get Activated if the Network Input Exceeds Their Threshold Value
    • The Activation Function Determines the Activation of a Neuron Dependent on Network Input and Threshold Value
    • Common Activation Functions (see the sketch after this unit)
    • An Output Function May be Used to Process the Activation Once Again
    • Learning Strategies Adjust a Network to Fit Our Needs
    • Network Topologies
    • Feed-Forward Networks Consist of Layers and Connections Towards Each Following Layer
    • Feed-Forward Network
    • Shortcut Connections Skip Layers
    • Direct Recurrences Start and End at the Same Neuron
    • Indirect Recurrences can Influence Their Starting Neuron Only by Making Detours
    • Lateral Recurrences Connect Neurons Within One Layer
    • Completely Linked Networks Allow any Possible Connection
    • The Bias Neuron is a Technical Trick to Consider Threshold Values as Connection Weights
    • Representing Neurons
    • Take Care of the Order in Which Neuron Activations are Calculated
    • Synchronous Activation
    • Asynchronous Activation
    • Random Order
    • Random Permutation
    • Topological Order
    • Fixed Orders of Activation During Implementation
    • Communication with the Outside World: Input and Output of Data in and from Neural Networks
    • Exercises
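
    A minimal sketch of the propagation function, three common activation
    functions and the bias-neuron trick from this unit; the input, weights
    and threshold are illustrative:

      import math

      def net_input(x, w):               # propagation: vector input -> scalar net input
          return sum(wi * xi for wi, xi in zip(w, x))

      def heaviside(net, theta=0.0):     # binary threshold function
          return 1.0 if net >= theta else 0.0

      def sigmoid(net):                  # logistic function
          return 1.0 / (1.0 + math.exp(-net))

      def tanh_act(net):                 # hyperbolic tangent
          return math.tanh(net)

      # Bias-neuron trick: append a constant input 1 with weight -theta,
      # then compare the net input against zero instead of the threshold.
      x, w, theta = [0.2, 0.7], [0.5, 0.5], 0.4
      assert heaviside(net_input(x, w), theta) == heaviside(net_input(x + [1.0], w + [-theta]))
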

Professional

  • Unit-4: Fundamentals of Learning and Training Samples
    • Learning and Training
    • The Most Interesting Characteristic of Neural Networks is Their Capability to Learn
    • Different Paradigms of Learning
    • About Neuron Functions
    • Learning Algorithm
    • Training Set
    • Unsupervised Learning
    • Supervised Learning Methods
    • Reinforcement Learning
    • Offline or Online Learning
    • Training Patterns and Teaching Input
    • Error Vector
    • Using Training Samples
    • Division of Training Samples
    • Training Sample Lesson
    • Order of Pattern Representation
    • Learning Curve and Error Measurement
    • Specific Error
    • Root Mean Square and Total Error
    • When to Stop Learning
    • Gradient Optimization Procedures
    • Gradient Dimension
    • Errors During a Gradient Descent
    • Gradient Descent (see the sketch after this unit)
    • Gradient Descents can Converge to Suboptimal Minima
    • Flat Plateaus on the Error Surface may Slow Down Training
    • Even if Good Minima are Reached, They may be Left Afterwards
    • Steep Canyons in the Error Surface may Cause Oscillations
    • Exemplary Problems Allow for Testing Self-Coded Learning Strategies
    • Boolean Functions
    • The Parity Function
    • The 2-Spiral Problem
    • The Checkerboard Problem
    • Samples for the Checkerboard Problem
    • The Identity Function
    • Other Exemplary Problems
    • The Hebbian Learning Rule
    • Original Rule
    • Generalized Form
    • Exercises
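
    A minimal sketch of gradient descent on a one-dimensional error surface,
    E(w) = (w - 2)^2; the learning rate and start value are illustrative:

      # The gradient points uphill, so each step goes against it.
      def gradient(w):
          return 2.0 * (w - 2.0)          # dE/dw

      w, eta = -1.0, 0.1                  # start value and learning rate
      for _ in range(50):
          w -= eta * gradient(w)
      print(round(w, 4))                  # close to the minimum at w = 2

    With a learning rate that is too large, the same loop oscillates instead
    of converging, which mirrors the canyon problem listed above.
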
  • Unit-5: The Perceptron, Backpropagation & its Variants
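
    A minimal sketch of the perceptron learning rule at the heart of this
    unit, trained on the AND function; the learning rate and epoch count are
    illustrative choices:

      # Weights include a bias term w[0]; prediction thresholds the net input.
      samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
      w = [0.0, 0.0, 0.0]
      eta = 0.1

      def predict(x):
          net = w[0] + w[1] * x[0] + w[2] * x[1]
          return 1 if net >= 0 else 0

      for _ in range(20):                 # a few epochs suffice for AND
          for x, target in samples:
              err = target - predict(x)   # adjust weights by the error
              w[0] += eta * err
              w[1] += eta * err * x[0]
              w[2] += eta * err * x[1]

      print([predict(x) for x, _ in samples])   # expect [0, 0, 0, 1]
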
  • Unit-6: Radial Basis Functions (RBF)
    • Introduction
    • Components & Structure of an RBF Network
    • Center of an RBF Neuron
    • RBF Neuron
    • RBF Output Neuron
    • RBF Network
    • RBF Network with Input Neurons
    • Individual One- or Two-Dimensional Gaussian Bells
    • Information Processing of an RBF Network (see the sketch after this unit)
    • Different Gaussian Bells
    • Information Processing in RBF Neurons
    • Gaussian Bells in Two-Dimensional Space
    • Gaussian Bell
    • Analytical Thoughts
    • Equations of Weights
    • Generalization on Several Outputs
    • Computational Effort and Accuracy
    • Combinations of the Equation System
    • Fixed Selection & Conditional Fixed Selection
    • 2-D Input Space
    • Uneven Coverage of a 2-D Input Space
    • Growing RBF Networks
    • Neurons are Added to Places with Large Error Values
    • Limiting the Number of Neurons
    • Less Important Neurons are Deleted
    • Comparing RBF Networks & Multilayer Perceptrons
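
    A minimal sketch of the information processing of an RBF network, as
    referenced above: Gaussian bells over fixed centres, combined linearly
    by the output neuron; centres, width and weights are hypothetical:

      import math

      centres = [0.0, 1.0, 2.0]          # centres of the RBF neurons
      sigma = 0.5                        # common width of the Gaussian bells
      out_w = [1.0, -0.5, 0.25]          # output-layer weights

      def rbf_output(x):
          bells = [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centres]
          return sum(w * h for w, h in zip(out_w, bells))   # linear output neuron

      print(rbf_output(0.8))

    Because the output neuron is linear, the weights out_w can also be found
    analytically from an equation system, as the unit discusses.
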
  • Unit-7: Recurrent Perceptron-like Networks
    • Recurrent Neural Networks
    • Jordan Networks
    • Elman Networks (see the sketch after this unit)
    • Training Recurrent Networks
    • Unfolding in Time
    • Teacher Forcing
    • Recurrent Backpropagation
    • Training with Evolution
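
    A minimal sketch of an Elman-style step, as referenced above: the hidden
    activations of the previous time step are fed back through context
    units; for brevity each hidden neuron sees only its own context value,
    and all weights are hypothetical:

      import math

      def elman_step(x, context, W_in, W_ctx, W_out):
          hidden = [math.tanh(wi * x + wc * c)
                    for wi, wc, c in zip(W_in, W_ctx, context)]
          y = sum(wo * h for wo, h in zip(W_out, hidden))
          return y, hidden               # hidden becomes the next context

      context = [0.0, 0.0]
      for x in [1.0, 0.5, -0.5]:         # a short input sequence
          y, context = elman_step(x, context, [0.6, -0.3], [0.4, 0.9], [1.0, -1.0])
          print(round(y, 4))
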
  • Unit-8: Hopfield Networks
    • Hopfield Networks
    • Hopfield Networks are Inspired by Particles in a Magnetic Field
    • In a Hopfield Network, all Neurons Influence Each Other Symmetrically
    • State of a Hopfield Network
    • Input and Output of a Hopfield Network
    • Significance of Weights
    • A Neuron Changes its State
    • The Weight Matrix is Generated Directly out of the Training Patterns (see the sketch after this unit)
    • Learning Rule for Hopfield Networks
    • Autoassociation and Traditional Application
    • Pattern Recognition
    • Heteroassociation and Analogies to Neural Data Storage
    • Hopfield Network
    • Generating the Heteroassociative Matrix
    • Heteroassociative Matrix
    • Stabilizing the Heteroassociations
    • The Biological Motivation of Heteroassociation
    • Continuous Hopfield Networks
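
    A minimal sketch of generating the weight matrix directly from the
    training patterns (Hebbian outer-product rule with zero diagonal) and of
    recalling a stored pattern from a disturbed input; the patterns are toy
    values:

      patterns = [[1, -1, 1, -1], [1, 1, -1, -1]]   # neuron states in {-1, +1}
      n = len(patterns[0])

      # w_ij = sum over patterns of p_i * p_j, with no self-connections.
      W = [[0 if i == j else sum(p[i] * p[j] for p in patterns)
            for j in range(n)] for i in range(n)]

      def recall(state, sweeps=10):
          for _ in range(sweeps):
              for i in range(n):          # asynchronous updates, fixed order
                  net = sum(W[i][j] * state[j] for j in range(n))
                  state[i] = 1 if net >= 0 else -1
          return state

      # First stored pattern with its second component flipped:
      print(recall([1, 1, 1, -1]))        # converges back to [1, -1, 1, -1]
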
  • Unit-9: Learning Vector Quantization
    • Introduction of Learning Vector Quantization
    • About Quantization
    • LVQ Divides the Input Space into Separate Areas
    • Quantization of a Two-Dimensional Input Space
    • Using Codebook Vectors: the Nearest One is the Winner
    • Adjusting Codebook Vectors
    • The Procedure of Learning
    • LVQ Learning Procedure (see the sketch after this unit)
    • Learning Process
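
    A minimal sketch of one LVQ learning step, as referenced above: the
    nearest codebook vector wins and is moved towards the sample if the
    classes match, away from it otherwise; vectors, labels and learning rate
    are illustrative:

      def lvq_step(codebooks, labels, x, x_label, eta=0.1):
          dists = [sum((ci - xi) ** 2 for ci, xi in zip(c, x)) for c in codebooks]
          win = dists.index(min(dists))             # the nearest one is the winner
          sign = 1.0 if labels[win] == x_label else -1.0
          codebooks[win] = [ci + sign * eta * (xi - ci)
                            for ci, xi in zip(codebooks[win], x)]
          return codebooks

      codebooks = [[0.0, 0.0], [1.0, 1.0]]
      labels = ["A", "B"]
      print(lvq_step(codebooks, labels, x=[0.2, 0.1], x_label="A"))
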
  • Unit-10: Self-Organizing Feature Maps
    • Unsupervised Learning
    • Structure of a Self-Organizing Map
    • One-Dimensional Grid
    • Self-Organizing Map
    • Topology
    • SOMs Always Activate the Neuron with the Least Distance to an Input Pattern
    • Training
    • Adapting the Centres
    • SOM Learning Rule (see the sketch after this unit)
    • The Topology Function Defines how a Learning Neuron Influences its Neighbours
    • Introduction of Common Distance and Topology Functions
    • Decrease Monotonically
    • Gaussian Bell, Cone Function, Cylinder Function and the Mexican Hat Function
    • Learning Direction
    • Our Topology Function
    • The Learning Rate
    • Topological Defects
    • The Behaviour of a SOM
    • End States of One-Dimensional and Two-Dimensional SOMs
    • Topological Defect in a Two-Dimensional SOM
    • Adjusting the Resolution of Certain Areas in a SOM
    • Training of a SOM
    • SOMs can be Used to Determine Centres for RBF Neurons
    • Neural Gas
    • A Figure Filled by a SOM
    • Multi-SOM
    • Multi-Neural Gas
    • Growing Neural Gases
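
    A minimal sketch of the SOM learning rule referenced above, on a
    one-dimensional grid: the neuron with the least distance to the input
    wins, and a Gaussian bell as topology function also pulls its grid
    neighbours towards the input; all numbers are illustrative:

      import math

      centres = [[0.0], [0.5], [1.0], [1.5]]       # neuron centres, in grid order
      eta, width = 0.3, 1.0                        # learning rate, neighbourhood width

      def som_step(x):
          dists = [sum((c - xi) ** 2 for c, xi in zip(cen, x)) for cen in centres]
          winner = dists.index(min(dists))
          for k, cen in enumerate(centres):
              h = math.exp(-((k - winner) ** 2) / (2 * width ** 2))  # topology function
              centres[k] = [c + eta * h * (xi - c) for c, xi in zip(cen, x)]

      som_step([1.2])
      print(centres)

    Letting eta and width decrease monotonically over time freezes the map
    and helps avoid topological defects.
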
  • Unit-11: Adaptive Resonance Theory
    • Introduction of Adaptive Resonance Theory
    • Task and Structure of an ART Network
    • Resonance Takes Place by Activities Being Tossed and Turned
    • Top-Down & Bottom-Up Learning
    • Pattern Input & Top-Down Learning
    • Resonance and Bottom-Up Learning (see the sketch after this unit)
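
    A heavily simplified, ART1-flavoured sketch of the resonance idea: a
    bottom-up pass picks a matching prototype, a top-down vigilance test
    decides whether resonance occurs, and only then does learning take
    place; the vigilance value and patterns are illustrative, not the full
    ART dynamics:

      prototypes = [[1, 1, 0, 0]]        # binary prototypes learned so far
      rho = 0.6                          # vigilance parameter

      def present(x):                    # x is a binary list with at least one 1
          for proto in prototypes:
              overlap = sum(p & xi for p, xi in zip(proto, x))
              if overlap / sum(x) >= rho:           # vigilance passed: resonance
                  proto[:] = [p & xi for p, xi in zip(proto, x)]  # top-down learning
                  return proto
          prototypes.append(list(x))     # no resonance: open a new category
          return prototypes[-1]

      print(present([1, 1, 1, 0]))       # resonates with the existing category
      print(present([0, 0, 1, 1]))       # fails vigilance, becomes a new one
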
  • Appendix-A: Excursus: Cluster Analysis and Regional & Online Learnable Fields
    • Introduction
    • Metric
    • A.1 k-Means Clustering Allocates Data to a Predefined Number of Clusters (see the sketch after this appendix)
    • A.2 k-Nearest Neighbouring Looks for the k Nearest Neighbours of Each Data Point
    • A.3 ε-Nearest Neighbouring Looks for Neighbours within the Radius ε for Each Data Point
    • A.4 The Silhouette Coefficient Determines how Accurate a Given Clustering is
    • A.5 Regional and Online Learnable Fields are a Neural Clustering Strategy
      • A.5.1 ROLFs Try to Cover Data with Neurons
      • A.5.2 A ROLF Learns Unsupervised by Presenting Training Samples Online
    • ROLF Neuron and Perceptive Surface
    • Structure of a ROLF Neuron
    • Accepting Neuron
    • Both Positions and Radii are Adapted Throughout Learning
    • The Radius Multiplier Allows Neurons Not Only to Shrink but Also to Grow
    • As Required, New Neurons are Generated
    • Evaluating a ROLF
    • ROLF
    • Comparison with Popular Clustering Methods
    • Initializing Radii, Learning Rates and the Multiplier is Not Trivial
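
    A minimal sketch of k-means clustering, as referenced above, on
    one-dimensional toy data: points are assigned to their nearest centre,
    then each centre moves to the mean of its points:

      def kmeans(points, centres, iterations=10):
          for _ in range(iterations):
              clusters = [[] for _ in centres]
              for p in points:                      # assignment step
                  d = [(p - c) ** 2 for c in centres]
                  clusters[d.index(min(d))].append(p)
              centres = [sum(cl) / len(cl) if cl else c      # update step
                         for cl, c in zip(clusters, centres)]
          return centres

      print(kmeans([1.0, 1.2, 0.8, 5.0, 5.3, 4.9], centres=[0.0, 6.0]))
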
  • Appendix-B: Excursus: Neural Networks Used for Prediction
    • Introduction
    • About Time Series
    • One-Step-Ahead Prediction (see the sketch after this appendix)
    • Moving Average Procedure
    • Two-Step-Ahead Prediction
    • Recursive Two-Step-Ahead Prediction
    • Direct Two-Step-Ahead Prediction
    • Additional Optimization Approaches for Prediction
    • Changing Temporal Parameters
    • Heterogeneous Prediction
    • Remarks on the Prediction of Share Prices
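
    A minimal sketch of one-step-ahead prediction with the moving average
    procedure, as referenced above; the series and window length are
    synthetic:

      def one_step_ahead(series, n=3):
          window = series[-n:]                      # the n most recent values
          return sum(window) / len(window)

      series = [2.0, 2.2, 2.1, 2.4, 2.3]
      p1 = one_step_ahead(series)
      print(p1)                                     # prediction for the next step
      # Recursive two-step-ahead: feed the prediction back in as an observation.
      print(one_step_ahead(series + [p1]))
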
  • Appendix-C: Excursus: Reinforcement Learning
    • Introduction
    • Reinforcement Learning
    • System Structure
    • Grid World
    • Agent and Environment
    • In the Grid World
    • Environment
    • States, Situations and Actions
    • Reward and Return
    • Closed-Loop Policy
    • Exploitation vs. Exploration
    • Learning Process
    • Rewarding Strategies
    • Avoidance Strategy
    • The State-Value Function
    • Policy Evaluation
    • Policy Improvement
    • Monte Carlo Method
    • Temporal Difference Learning
    • The Action-Value Function
    • Q-Learning (see the sketch after this appendix)
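
    A minimal sketch of tabular Q-learning, as referenced above, on a tiny
    one-dimensional grid world: states 0 to 4, actions left and right, and a
    reward only for reaching state 4; learning rate, discount and
    exploration rate are illustrative:

      import random

      n_states, actions = 5, (-1, 1)
      Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
      alpha, gamma, epsilon = 0.5, 0.9, 0.2

      def greedy(s):                      # best known action, ties broken randomly
          best = max(Q[(s, a)] for a in actions)
          return random.choice([a for a in actions if Q[(s, a)] == best])

      for _ in range(200):                # training episodes
          s = 0
          while s != n_states - 1:
              # Exploitation vs. exploration: mostly greedy, sometimes random.
              a = random.choice(actions) if random.random() < epsilon else greedy(s)
              s2 = min(max(s + a, 0), n_states - 1)
              r = 1.0 if s2 == n_states - 1 else 0.0
              # Move Q(s, a) towards reward plus discounted best next value.
              Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
              s = s2

      print([greedy(s) for s in range(n_states - 1)])   # learned policy: all 1 (right)
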