How to Get Artificial Neural Networks by B. Yegnanarayana PDF for Free
Artificial Neural Networks by B. Yegnanarayana: A Comprehensive Guide
If you are looking for a book that covers the fundamentals and applications of artificial neural networks (ANN) in a clear and concise manner, then you should check out Artificial Neural Networks by B. Yegnanarayana. The book is designed as an introductory-level textbook on ANN for postgraduate and senior undergraduate students in any branch of engineering, but it can also serve as a reference for researchers and practitioners in the field. In this article, we will give you an overview of what the book has to offer and how you can download it as a PDF file.
Introduction
What are artificial neural networks and why are they important?
Artificial neural networks are computational models that are inspired by the structure and function of biological neural networks. They consist of a large number of interconnected processing units called neurons that can perform parallel computations on data. They can learn from data and adapt to changing environments by adjusting their synaptic weights through various learning algorithms. They can also handle complex and nonlinear problems that are difficult or impossible to solve by conventional methods.
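To make this concrete, here is a minimal Python sketch (our illustration, not taken from the book) of a single artificial neuron: it forms a weighted sum of its inputs and passes the result through a threshold activation.

```python
import numpy as np

# A single artificial neuron: weighted sum of inputs plus a bias,
# passed through a threshold (step) activation function.
def neuron_output(inputs, weights, bias):
    net = np.dot(weights, inputs) + bias   # net input to the neuron
    return 1 if net > 0 else 0             # binary output, McCulloch-Pitts style

x = np.array([0.5, -1.0, 0.25])   # example input pattern
w = np.array([0.8, 0.2, -0.5])    # synaptic weights (adjusted by learning in practice)
print(neuron_output(x, w, bias=0.1))   # prints 1 for this input
```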
Artificial neural networks have many applications in various domains such as speech processing, image processing, artificial intelligence, pattern recognition, data mining, control systems, optimization, robotics, bioinformatics, and more. They can perform tasks such as classification, regression, clustering, dimensionality reduction, feature extraction, association, prediction, generation, and more. They can also model complex phenomena such as chaos, memory, cognition, emotion, and creativity.
What are the main features of this book and who is the author?
This book is written by B. Yegnanarayana, who is a professor in the Department of Computer Science and Engineering at the Indian Institute of Technology Madras. He has over four decades of teaching and research experience in the areas of speech processing, image processing, artificial intelligence, and neural networks. He has published over 300 papers in reputed journals and conferences and has authored several books and monographs. He is a fellow of the IEEE, the Indian National Academy of Engineering, the Indian Academy of Sciences, and the National Academy of Sciences India.
This book is based on his lectures and research on artificial neural networks. It highlights the need for new models of computing based on the fundamental principles of neural networks. It gives a masterly analysis of such topics as basics of artificial neural networks, functional units of artificial neural networks for pattern recognition tasks, feedforward and feedback neural networks, competitive learning neural networks, architectures for complex pattern recognition tasks, and applications of ANN. It also provides appendices on mathematical preliminaries, an overview of current trends in neural networks, and a bibliography of over 400 references.
This book is self-contained and well-organized. It uses simple and consistent notation throughout. It provides illustrative examples and exercises at the end of each chapter. It also provides MATLAB codes for some of the algorithms and simulations discussed in the book. It is suitable for both beginners and advanced readers who want to learn about the theory and practice of artificial neural networks.
Basics of Artificial Neural Networks
Characteristics of neural networks
In this chapter, the author introduces the basic characteristics of neural networks such as parallelism, distributed representation, adaptation, fault tolerance, generalization, and emergence. He also compares them with conventional computing systems such as serial processors, centralized representation, fixed programs, brittle performance, overfitting, and reductionism. He explains how these characteristics make neural networks suitable for solving complex and dynamic problems that require learning from data and dealing with uncertainty and noise.
Historical development of neural network principles
In this chapter, the author traces the historical development of neural network principles from the early studies of biological neurons by Ramón y Cajal, Sherrington, Hebb, McCulloch and Pitts, to the modern theories of artificial neurons by Rosenblatt, Widrow and Hoff, Minsky and Papert, Hopfield, Rumelhart and McClelland, Kohonen, Grossberg, Jordan and Elman, Werbos, LeCun, Hinton, Bengio, Schmidhuber, and others. He also discusses the major milestones and challenges in the field of neural network research such as perceptrons, adaptive linear elements (ADALINE), multilayer perceptrons (MLP), backpropagation algorithm, associative memory models (AMM), self-organizing maps (SOM), adaptive resonance theory (ART), recurrent neural networks (RNN), radial basis function networks (RBFN), convolutional neural networks (CNN), deep learning models (DLM), generative adversarial networks (GAN), and more.
Artificial neural networks terminology
Models of neuron
In this chapter, the author describes some of the basic models of neuron that are used to construct artificial neural networks. He explains the structure and function of biological neurons and how they can be simplified and abstracted into mathematical models. He discusses the linear threshold model, the sigmoid model, the hyperbolic tangent model, the radial basis function model, the softmax model, and the rectified linear unit model. He also compares their properties and applications in different types of neural networks.
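For illustration, here is a short Python sketch (ours, not the book's MATLAB code) of the activation functions behind these neuron models.

```python
import numpy as np

def threshold(x):            # linear threshold (McCulloch-Pitts) model
    return np.where(x > 0, 1.0, 0.0)

def sigmoid(x):              # logistic sigmoid model, output in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh_unit(x):            # hyperbolic tangent model, output in (-1, 1)
    return np.tanh(x)

def rbf(x, center=0.0, width=1.0):   # radial basis function (Gaussian) model
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

def relu(x):                 # rectified linear unit model
    return np.maximum(0.0, x)

def softmax(v):              # softmax over a vector of net inputs
    e = np.exp(v - np.max(v))        # subtract the max for numerical stability
    return e / e.sum()

x = np.linspace(-3, 3, 7)
print(sigmoid(x), relu(x), softmax(x), sep="\n")
```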
Topology
In this chapter, the author introduces the concept of topology in artificial neural networks. He defines topology as the arrangement and connectivity of neurons in a network. He explains how topology affects the performance and complexity of neural networks. He discusses some of the common topologies such as feedforward, feedback, fully connected, partially connected, layered, hierarchical, modular, recurrent, convolutional, and self-organizing. He also gives examples of neural networks that use these topologies for different tasks.
Basic learning laws
In this chapter, the author presents some of the basic learning laws that are used to adjust the synaptic weights of artificial neural networks. He defines learning as the process of modifying the network parameters based on the input-output data. He explains how learning can be supervised, unsupervised, or reinforcement-based. He discusses some of the common learning laws such as Hebbian learning, delta rule, gradient descent rule, least mean square (LMS) rule, Widrow-Hoff rule, perceptron learning rule, backpropagation learning rule, contrastive divergence rule, and more. He also compares their advantages and disadvantages in different types of neural networks.
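As a hedged illustration of two of these laws (a sketch in our notation, not the book's), the Hebbian law strengthens a weight in proportion to the product of input and output, while the perceptron law corrects weights only when a pattern is misclassified.

```python
import numpy as np

eta = 0.1  # learning rate

def hebbian_update(w, x, y):
    # Hebbian law: delta_w = eta * y * x (correlate output with input)
    return w + eta * y * x

def perceptron_update(w, x, target):
    # Perceptron law: move weights only when the thresholded output is wrong;
    # replacing the thresholded y with the linear output w.x gives the LMS (delta) rule.
    y = 1 if np.dot(w, x) > 0 else 0
    return w + eta * (target - y) * x

w = np.zeros(3)
x = np.array([1.0, 0.5, -0.5])
w = hebbian_update(w, x, y=1.0)
print(perceptron_update(w, x, target=1))
```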
Activation and Synaptic Dynamics
Activation dynamics models
In this chapter, the author explores some of the activation dynamics models that are used to describe the temporal behavior of artificial neural networks. He defines activation dynamics as the change in the activation value of a neuron over time. He explains how activation dynamics can be deterministic or stochastic. He discusses some of the common activation dynamics models such as linear model, nonlinear model, discrete-time model, continuous-time model, discrete-state model, continuous-state model, deterministic model, stochastic model, Markov model, and more. He also gives examples of neural networks that use these models for different tasks.
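A minimal continuous-time example (an illustrative sketch, not the book's formulation): a leaky-integrator neuron whose activation decays toward zero while being driven by a weighted external input, integrated here with Euler's method.

```python
import numpy as np

# Continuous-time activation dynamics: dx/dt = -x + w . s,
# integrated with a simple Euler step (a deterministic model).
def simulate_activation(w, s, x0=0.0, dt=0.01, steps=1000):
    x = x0
    for _ in range(steps):
        dxdt = -x + np.dot(w, s)   # leaky integration of the net input
        x += dt * dxdt
    return x                        # approaches the equilibrium value w . s

w = np.array([0.5, -0.2])
s = np.array([1.0, 2.0])
print(simulate_activation(w, s))   # converges toward 0.5*1.0 - 0.2*2.0 = 0.1
```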
Synaptic dynamics models
In this chapter, the author explores some of the synaptic dynamics models that are used to describe how the synaptic weights of a network change over time. He discusses some of the common synaptic dynamics models such as discrete-state model, continuous-state model, deterministic model, stochastic model, Hebbian model, anti-Hebbian model, and more. He also gives examples of neural networks that use these models for different tasks.
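As an illustrative sketch of deterministic synaptic dynamics (our equations, not the book's exact ones): a Hebbian weight that grows with the input-output correlation, with a passive decay term that keeps it bounded; flipping the sign of the correlation term gives the anti-Hebbian case.

```python
import numpy as np

# Synaptic dynamics: dw/dt = -alpha * w + eta * y * x
# (Hebbian growth with passive decay; use a negative eta for anti-Hebbian).
def simulate_weight(x, y, alpha=0.5, eta=1.0, dt=0.01, steps=2000):
    w = np.zeros_like(x)
    for _ in range(steps):
        dwdt = -alpha * w + eta * y * x
        w += dt * dwdt
    return w                     # settles near (eta / alpha) * y * x

x = np.array([1.0, -0.5])
print(simulate_weight(x, y=1.0))  # approximately [2.0, -1.0]
```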
Learning methods
In this chapter, the author reviews some of the learning methods that are used to optimize the synaptic weights of artificial neural networks. He defines learning methods as the algorithms that implement the learning laws. He explains how learning methods can be classified into local or global, online or offline, batch or incremental, and more. He discusses some of the common learning methods such as gradient descent method, Newton's method, conjugate gradient method, Levenberg-Marquardt method, genetic algorithm, particle swarm optimization, simulated annealing, and more. He also compares their efficiency and effectiveness in different types of neural networks.
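A minimal sketch of the most common of these methods, gradient descent on a mean-squared-error cost for a single linear unit (our illustration; the book treats the general case):

```python
import numpy as np

# Batch gradient descent for a linear unit y = X w, minimizing the MSE.
def gradient_descent(X, d, eta=0.1, epochs=500):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        y = X @ w
        grad = X.T @ (y - d) / len(d)   # gradient of 0.5 * mean((y - d)^2)
        w -= eta * grad                  # step against the gradient
    return w

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = np.array([1.0, 2.0, 3.0])
print(gradient_descent(X, d))            # approaches the exact solution [1.0, 2.0]
```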
Stability and convergence
In this chapter, the author analyzes some of the stability and convergence properties of artificial neural networks. He defines stability as the ability of a network to maintain a consistent output for a given input. He defines convergence as the ability of a network to reach a desired output for a given input. He explains how stability and convergence can be affected by the network parameters, topology, activation dynamics, synaptic dynamics, and learning methods. He discusses some of the common criteria and techniques for measuring and ensuring stability and convergence such as Lyapunov function, energy function, fixed point theorem, contraction mapping theorem, Banach fixed point theorem, and more. He also gives examples of neural networks that exhibit different degrees of stability and convergence for different tasks.
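One of these ideas can be shown in a few lines (our sketch, not a result reproduced from the book): if the recurrent map x ← tanh(Wx + b) is a contraction, for example when the largest singular value of W is below 1 (the slope of tanh never exceeds 1), then by the Banach fixed point theorem repeated application converges to a unique fixed point from any starting state.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W *= 0.9 / np.linalg.norm(W, 2)     # scale so the spectral norm is 0.9 < 1
b = rng.normal(size=4)

def iterate_to_fixed_point(x, steps=200):
    for _ in range(steps):
        x = np.tanh(W @ x + b)       # contraction mapping, so iterates converge
    return x

x1 = iterate_to_fixed_point(rng.normal(size=4))
x2 = iterate_to_fixed_point(rng.normal(size=4))
print(np.allclose(x1, x2))           # True: both starts reach the same fixed point
```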
Recall in neural networks
In this chapter, the author examines some of the recall mechanisms that are used to retrieve the stored information from artificial neural networks. He defines recall as the process of generating an output from a network based on an input. He explains how recall can be direct or associative. He discusses some of the common recall mechanisms such as feedforward recall, feedback recall, autoassociative recall, heteroassociative recall, bidirectional associative recall, content-addressable recall, and more. He also gives examples of neural networks that use these recall mechanisms for different tasks.
Functional Units of ANN for Pattern Recognition Tasks
Pattern recognition problem
In this chapter, the author formulates the pattern recognition problem and its constituent tasks such as feature extraction and classification. He discusses some of the common challenges and issues in pattern recognition such as data quality, data quantity, data dimensionality, data variability, data complexity, data ambiguity, and more. He also gives examples of pattern recognition tasks such as speech recognition, face recognition, handwriting recognition, fingerprint recognition, and more.
Basic functional units
In this chapter, the author describes some of the basic functional units that are used to construct artificial neural networks for pattern recognition tasks. He defines functional units as the building blocks of neural networks that perform specific operations on the input patterns. He explains how functional units can be classified into linear or nonlinear, static or dynamic, memoryless or memory-based, and more. He discusses some of the common functional units such as linear combiner, threshold logic unit (TLU), sigmoid unit, radial basis function unit (RBFU), softmax unit, rectified linear unit (ReLU), and more. He also compares their properties and applications in different types of neural networks.
Pattern recognition tasks by the functional units
In this chapter, the author demonstrates how the basic functional units can be combined and arranged to form artificial neural networks for different pattern recognition tasks. He explains how different network architectures can perform different functions such as pattern association, pattern classification, and pattern mapping. He discusses some of the common network architectures such as single-layer perceptron (SLP), multilayer perceptron (MLP), radial basis function network (RBFN), self-organizing map (SOM), adaptive resonance theory (ART), and more. He also gives examples of neural networks that use these architectures for different pattern recognition tasks.
Feedforward Neural Networks
Analysis of pattern association networks
In this chapter, the author analyzes some of the feedforward neural networks that are used for pattern association tasks. He defines pattern association as the task of associating an input pattern with a desired output pattern based on some similarity or correlation measure. He explains how pattern association can be autoassociative or heteroassociative. He discusses some of the common feedforward neural networks that are used for pattern association tasks such as linear autoassociator (LAA), nonlinear autoassociator (NAA), linear heteroassociator (LHA), nonlinear heteroassociator (NHA), Hopfield network (HFN), bidirectional associative memory (BAM), and more. He also compares their performance and limitations in different pattern association tasks.
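A minimal sketch of a linear heteroassociator built by Hebbian outer-product storage (our illustration; the recall is exact here because the stored input patterns are orthonormal):

```python
import numpy as np

# Store input -> output pairs in a weight matrix by summing outer products.
A = np.array([[1.0, 0.0, 0.0],       # input patterns (rows), orthonormal
              [0.0, 1.0, 0.0]])
B = np.array([[1.0, -1.0],           # associated output patterns (rows)
              [-1.0, 1.0]])

W = B.T @ A                           # W = sum_k b_k a_k^T

def recall(a):
    return W @ a                      # linear heteroassociative recall

print(recall(A[0]))                   # recovers B[0] = [ 1. -1.]
print(recall(A[1]))                   # recovers B[1] = [-1.  1.]
```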
Analysis of pattern classification networks
In this chapter, the author analyzes some of the feedforward neural networks that are used for pattern classification tasks. He defines pattern classification as the task of assigning an input pattern to one of a finite set of classes. He explains how pattern classification problems can be linearly separable or nonlinearly separable. He discusses some of the common feedforward neural networks that are used for pattern classification tasks such as single-layer perceptron (SLP), multilayer perceptron (MLP), radial basis function network (RBFN), support vector machine (SVM), and more. He also compares their performance and limitations in different pattern classification tasks.
Analysis of pattern mapping networks
In this chapter, the author analyzes some of the feedforward neural networks that are used for pattern mapping tasks. He defines pattern mapping as the task of transforming an input pattern into a desired output pattern based on some function or relation. He explains how pattern mapping can be linear or nonlinear, deterministic or stochastic, static or dynamic, and more. He discusses some of the common feedforward neural networks that are used for pattern mapping tasks such as linear mapping network (LMN), nonlinear mapping network (NMN), multilayer perceptron (MLP), radial basis function network (RBFN), and more. He also compares their performance and limitations in different pattern mapping tasks.
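As an illustrative sketch (hypothetical layer sizes and learning rate, not the book's MATLAB code), here is a multilayer perceptron trained by backpropagation on the XOR mapping, a classic nonlinear pattern mapping task that a single-layer network cannot learn:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns
D = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer (4 sigmoid units)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer (1 sigmoid unit)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

eta = 0.5
for _ in range(20000):
    H = sigmoid(X @ W1 + b1)                 # forward pass
    Y = sigmoid(H @ W2 + b2)
    dY = (Y - D) * Y * (1 - Y)               # backpropagate the squared error
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= eta * H.T @ dY; b2 -= eta * dY.sum(axis=0)   # batch weight updates
    W1 -= eta * X.T @ dH; b1 -= eta * dH.sum(axis=0)

Y = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(Y.ravel(), 2))   # typically close to the XOR targets 0 1 1 0
```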
Feedback Neural Networks
Analysis of linear autoassociative FF networks
In this chapter, the author analyzes linear autoassociative feedforward networks, which serve as a starting point for the study of feedback networks. He defines linear autoassociation as the task of reconstructing an input pattern from a corrupted or incomplete version of it based on a linear transformation. He explains how linear autoassociation can be used for noise reduction, data compression, dimensionality reduction, and more. He discusses some of the common networks that are used for linear autoassociative tasks such as the pseudoinverse network (PIN), singular value decomposition network (SVDN), principal component analysis network (PCAN), and more. He also compares their performance and limitations in different linear autoassociative tasks.
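A short sketch in this spirit (our illustration, not the book's derivation): with the stored patterns as columns of a matrix A, the weight matrix W = A·A⁺ (with A⁺ the pseudoinverse) projects any input onto the subspace spanned by the stored patterns, which is why recall suppresses noise.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(8, 3))            # 3 stored patterns as columns in R^8

W = A @ np.linalg.pinv(A)              # projector onto the pattern subspace

a = A[:, 0]                            # one of the stored patterns
noisy = a + 0.3 * rng.normal(size=8)   # corrupted version of it

print(np.linalg.norm(noisy - a))       # error before recall
print(np.linalg.norm(W @ noisy - a))   # typically much smaller after recall
```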
Analysis of pattern storage networks
In this chapter, the author analyzes some of the feedback neural networks that are used for pattern storage tasks. He defines pattern storage as the task of storing and retrieving a set of patterns in a network based on some memory capacity and recall quality measures. He explains how pattern storage can be used for associative memory, content-addressable memory, distributed memory, and more. He discusses some of the common feedback neural networks that are used for pattern storage tasks such as Hopfield network (HFN), bidirectional associative memory (BAM), brain-state-in-a-box network (BSB), and more. He also compares their performance and limitations in different pattern storage tasks.
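A minimal Hopfield-style sketch (illustrative sizes, not code from the book): bipolar patterns are stored by the Hebbian outer-product rule and recalled by repeated threshold updates, which drive a corrupted probe back to the nearest stored pattern.

```python
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]], dtype=float)

# Outer-product (Hebbian) storage with zero self-connections.
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0)

def recall(state, steps=20):
    state = state.copy()
    for _ in range(steps):
        for i in range(len(state)):              # asynchronous threshold updates
            state[i] = 1.0 if W[i] @ state >= 0 else -1.0
    return state

probe = patterns[0].copy()
probe[0] = -probe[0]                             # corrupt the probe by one bit
print(recall(probe))                             # recovers patterns[0]
```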
Stochastic networks and simulated annealing
In this chapter, the author explores some of the stochastic networks and simulated annealing techniques that are used for optimization tasks. He defines stochastic networks as the networks that have random elements in their activation dynamics or synaptic dynamics. He defines simulated annealing as a technique that uses a controlled random search to find a global optimum solution in a complex search space. He explains how stochastic networks and simulated annealing can be used for combinatorial optimization, global optimization, constrained optimization, and more. He discusses some of the common stochastic networks and simulated annealing techniques such as Boltzmann machine (BM), Gibbs sampling, Metropolis algorithm, simulated annealing algorithm, and more. He also compares their performance and limitations in different optimization tasks.
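A minimal simulated annealing sketch (illustrative cost function and cooling schedule, chosen by us): a Metropolis acceptance step with a slowly decreasing temperature lets the search accept occasional uphill moves and escape local minima.

```python
import numpy as np

rng = np.random.default_rng(3)

def cost(x):
    return x**2 + 4 * np.sin(3 * x)    # multimodal 1-D cost with local minima

x = 3.0                                # start near a local minimum
T = 2.0                                # initial temperature
for _ in range(5000):
    x_new = x + rng.normal(scale=0.5)  # random candidate move
    dE = cost(x_new) - cost(x)
    # Metropolis rule: always accept downhill; accept uphill with prob exp(-dE/T).
    if dE < 0 or rng.random() < np.exp(-dE / T):
        x = x_new
    T *= 0.999                          # geometric cooling schedule

print(x, cost(x))                       # typically ends near the global minimum
```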
Boltzmann machine
In this chapter, the author examines the Boltzmann machine as a stochastic feedback network that can be used for pattern storage, pattern completion, optimization, and more. He discusses some of the properties and algorithms of the Boltzmann machine such as energy function, equilibrium distribution, thermal equilibrium, Boltzmann distribution, Boltzmann learning rule, contrastive divergence learning rule, and more. He also gives examples of Boltzmann machine applications such as image restoration, image generation, and more.
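An illustrative Gibbs sampling sketch for a tiny Boltzmann machine (hypothetical weights, not from the book): each binary unit is resampled from its conditional distribution, and the chain of visited states approaches the equilibrium Boltzmann distribution over the energies E(s) = -1/2 sᵀWs.

```python
import numpy as np

rng = np.random.default_rng(4)
W = np.array([[0.0, 1.5, -1.0],
              [1.5, 0.0, 0.5],
              [-1.0, 0.5, 0.0]])      # symmetric weights, zero diagonal

def gibbs_sample(steps=10000, T=1.0):
    s = rng.integers(0, 2, size=3).astype(float)   # binary state in {0, 1}
    counts = {}
    for _ in range(steps):
        i = rng.integers(0, 3)                     # pick a unit at random
        net = W[i] @ s                             # net input to unit i
        p_on = 1.0 / (1.0 + np.exp(-net / T))      # conditional P(s_i = 1)
        s[i] = 1.0 if rng.random() < p_on else 0.0
        counts[tuple(s)] = counts.get(tuple(s), 0) + 1
    return counts

counts = gibbs_sample()
print(max(counts, key=counts.get))    # the lowest-energy state occurs most often
```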
Competitive Learning Neural Networks
Competitive learning principle
In this chapter, the author introduces the competitive learning principle and how it can be implemented by artificial neural networks. He defines competitive learning as a type of unsupervised learning that involves a competition among the neurons in a network to respond to an input pattern. He explains how competitive learning can be used for clustering, vector quantization, feature extraction, and more. He discusses some of the properties and requirements of competitive learning such as winner-take-all mechanism, lateral inhibition, adaptation rule, and more.
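A minimal winner-take-all sketch (illustrative data and learning rate, chosen by us): the neuron whose weight vector lies closest to the input wins the competition, and only the winner's weights move toward the input, so the weight vectors drift toward the cluster centers.

```python
import numpy as np

rng = np.random.default_rng(5)
# Two Gaussian clusters of input patterns.
data = np.vstack([rng.normal([0, 0], 0.2, size=(100, 2)),
                  rng.normal([3, 3], 0.2, size=(100, 2))])
rng.shuffle(data)

W = rng.normal(size=(2, 2))            # one weight vector per competing neuron
eta = 0.1
for x in data:
    winner = np.argmin(np.linalg.norm(W - x, axis=1))   # winner-take-all
    W[winner] += eta * (x - W[winner])                   # move only the winner
print(W)   # rows typically end up near the cluster centers (0,0) and (3,3)
```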
Analysis of competitive learning networks
In this chapter, the author analyzes some of the competitive learning networks that are used for different tasks. He defines competitive learning networks as the networks that use the competitive learning principle to adjust their synaptic weights. He explains how competitive learning networks can be classified into hard or soft, online or offline, supervised or unsupervised, and more. He discusses some of the common competitive learning networks such as k-means network (KMN), learning vector quantization network (LVQN), counterpropagation network (CPN), fuzzy c-means network (FCMN), and more. He also compares their performance and limitations in different tasks.
Self-organizing feature maps (SOFM)
In this chapter, the author examines the self-organizing feature maps as a special case of competitive learning networks. He defines self-organizing feature maps as the networks that use a spatially organized array of neurons to map an input pattern onto a lower-dimensional output space while preserving the topological features of the input space. He explains how self-organizing feature maps can be used for data visualization, dimensionality reduction, feature extraction, and more. He discusses some of the properties and algorithms of self-organizing feature maps such as neighborhood function, learning rate function, Kohonen's algorithm, batch algorithm, and more. He also gives examples of self-organizing feature map applications such as color quantization, image compression, image segmentation, and more.
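A minimal Kohonen-style sketch for a one-dimensional map (illustrative parameters, not the book's algorithm listing): besides the winner, neighboring neurons on the map also move toward each input, weighted by a Gaussian neighborhood function that shrinks over time, which is what produces the topology-preserving ordering.

```python
import numpy as np

rng = np.random.default_rng(6)
data = rng.uniform(0, 1, size=(2000, 2))   # inputs in the unit square

n = 10                                      # neurons on a 1-D map
W = rng.uniform(0, 1, size=(n, 2))          # weight vector per map position
positions = np.arange(n)

for t, x in enumerate(data):
    eta = 0.5 * (1 - t / len(data))                        # decaying learning rate
    sigma = max(0.5, 3.0 * (1 - t / len(data)))            # shrinking neighborhood
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    h = np.exp(-((positions - winner) ** 2) / (2 * sigma**2))  # neighborhood function
    W += eta * h[:, None] * (x - W)                        # update winner and neighbors

print(W)   # weight vectors trace an ordered curve through the input space
```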
Architectures for Complex Pattern Recognition Tasks
Modular networks and mixture of experts (MOE)
In this chapter, the author introduces modular networks and the mixture of experts (MOE) approach as architectures for improving the performance, robustness, scalability, and interpretability of neural networks. He discusses some of the properties and algorithms of modular networks and mixture of experts such as module selection, module combination, module specialization, module cooperation, module competition, gating networks, and expert networks.
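As an illustrative sketch of the gating idea (hypothetical sizes and linear experts, not the book's formulation), here is a mixture-of-experts forward pass in which a softmax gating network weights the outputs of the expert networks, so different experts can specialize on different regions of the input space:

```python
import numpy as np

rng = np.random.default_rng(7)
d_in, d_out, n_experts = 4, 2, 3

# Each expert is a simple linear network; the gate is linear + softmax.
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
gate = rng.normal(size=(d_in, n_experts))

def moe_forward(x):
    logits = x @ gate
    g = np.exp(logits - logits.max()); g /= g.sum()   # softmax gating weights
    outputs = np.stack([x @ E for E in experts])       # each expert's output
    return g @ outputs                                 # gated combination

x = rng.normal(size=d_in)
print(moe_forward(x))
```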