AI Research Answer
How do neural networks work?
4 cited papers · March 16, 2026 · Powered by Researchly AI
🧠
TL;DR
Neural networks (NNs) are computational systems inspired by the human brain, using multiple layers of units to process and learn from data [1][2]. Deep neural networks (DNNs) employ highly optimized algorithms and architectures to extract hierarchical representations from multi-dimensional training data, overcoming the limitations of earlier shallow networks [1].
- Deep Neural Network (DNN) — Uses multiple (deep) layers of units with highly optimized algorithms to improve accuracy and reduce training time, enabling hierarchical feature abstraction from raw data.
- Backpropagation (BP) — The dominant training algorithm that assigns blame for errors by multiplying error signals with synaptic weights across layers; however, it requires a precise, symmetric backward connectivity pattern.
- Hebbian Plasticity — A biologically inspired local learning rule where synaptic weights are updated based on correlated activity between neurons, offering an alternative to BP (Stricker et al., 2024).
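As a concrete illustration of the Hebbian rule above, here is a minimal sketch in plain Python; the toy activity values, weight shapes, and learning rate are hypothetical, not taken from Stricker et al. (2024).

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Local Hebbian rule: delta_w[i][j] = lr * post[i] * pre[j].

    Each weight changes using only the activity of the two neurons
    it connects -- no error signal is propagated backward.
    """
    return [[w[i][j] + lr * post[i] * pre[j] for j in range(len(pre))]
            for i in range(len(post))]

w = [[0.0, 0.0], [0.0, 0.0]]   # 2 postsynaptic x 2 presynaptic weights
pre = [1.0, 0.0]               # only the first input neuron fires
post = [0.5, 1.0]              # postsynaptic firing rates
w = hebbian_update(w, pre, post)
# Only synapses from the active input strengthen:
# w[0][0] == 0.05, w[1][0] == 0.10; the second column stays 0.
```

Because the rule is purely local, "fire together, wire together" needs no backward connectivity at all, which is what makes it attractive as a BP alternative.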
Diagram
INPUT LAYER         HIDDEN LAYERS              OUTPUT LAYER

[x1] ──────┐   ┌─────────────────┐
[x2] ──────┼──▶│     Layer 1     │
[x3] ──────┤   │ (neurons + ReLU │
[...] ─────┘   │   activation)   │
               └────────┬────────┘
 Raw Input              │
 (dim: N)               ▼
               ┌─────────────────┐
               │     Layer 2     │
               │   (neurons +    │
               │   activation)   │
               └────────┬────────┘
                        │
                        ▼
               ┌─────────────────┐
               │     Layer L     │
               │  (deep feature  │
               │   abstraction)  │
               └────────┬────────┘
                        │
                        ▼
               ┌─────────────────┐
               │  OUTPUT LAYER   │──▶ [ŷ]
               │ (Softmax/Linear │    Prediction
               │     dim: C)     │
               └────────┬────────┘
                        │
            ┌───────────▼──────────┐
            │     LOSS / COST      │
            │       FUNCTION       │
            │       L(ŷ, y)        │
            └───────────┬──────────┘
                        │
        ┌───────────────▼───────────────┐
        │        BACKPROPAGATION        │
        │   ∂L/∂W propagated backward   │
        │      through all layers       │
        └───────────────┬───────────────┘
                        │
        ┌───────────────▼───────────────┐
        │         WEIGHT UPDATE         │
        │       W ← W − η·∂L/∂W         │
        │      (Gradient Descent)       │
        └───────────────────────────────┘
                        ▲
              Repeat until convergence
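The loop in the diagram (forward pass → loss → backpropagation → weight update) can be sketched in plain Python for a toy 2-2-1 network; the architecture, initial weights, target, and learning rate below are illustrative assumptions, not values from the cited papers.

```python
# Toy 2-2-1 network trained with the loop shown in the diagram:
# forward pass -> loss L(y_hat, y) -> backpropagation -> W <- W - lr*dL/dW.

def relu(z):
    return z if z > 0 else 0.0

def relu_grad(z):
    return 1.0 if z > 0 else 0.0

W1 = [[0.5, 0.3], [0.8, 0.2]]   # hidden weights (2 neurons x 2 inputs)
w2 = [0.4, 0.6]                 # output weights (1 output x 2 hidden)
lr = 0.1                        # learning rate (the eta in the diagram)
x, y = [1.0, 2.0], 2.0          # one training example

for step in range(100):
    # forward pass: hidden layer with ReLU, then a linear output
    z = [sum(W1[i][j] * x[j] for j in range(2)) for i in range(2)]
    h = [relu(zi) for zi in z]
    y_hat = sum(w2[i] * h[i] for i in range(2))
    loss = 0.5 * (y_hat - y) ** 2
    # backpropagation: error signal flows back through w2, then the ReLU
    d_out = y_hat - y                                   # dL/dy_hat
    d_z = [d_out * w2[i] * relu_grad(z[i]) for i in range(2)]
    # gradient-descent weight update, W <- W - lr * dL/dW
    for i in range(2):
        w2[i] -= lr * d_out * h[i]
        for j in range(2):
            W1[i][j] -= lr * d_z[i] * x[j]
# after repeated iterations the loss has shrunk toward zero
```

Each pass through the `for` body is one trip around the diagram; "repeat until convergence" is simply the loop.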
Deep learning encompasses a series of hierarchically organized non-linear transformations, with deep neural networks representing the predominant form of contemporary deep learning methodologies [1]. The backpropagation algorithm assigns blame for prediction errors by multiplying error signals with all synaptic weights on each neuron's axon and further downstream; however, this requires a precise, symmetric backward connectivity pattern thought to be impossible in the brain [3]. As an alternative, feedback alignment demonstrates that even random synaptic feedback weights can transmit teaching signals across multiple layers and perform as effectively as backpropagation on a variety of tasks, reopening questions about biologically plausible learning [3].
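A minimal sketch of the feedback-alignment idea in plain Python: the backward pass multiplies the error by a fixed random feedback vector `b` instead of the forward weights `w2`, so no symmetric connectivity is required. The network size, weights, target, and learning rate are illustrative assumptions, not the setup used by Lillicrap et al. (2016).

```python
import random

random.seed(0)

def relu(z):
    return z if z > 0 else 0.0

def relu_grad(z):
    return 1.0 if z > 0 else 0.0

W1 = [[0.5, 0.3], [0.8, 0.2]]                  # hidden weights
w2 = [0.4, 0.3]                                # forward output weights
b = [random.uniform(-1, 1) for _ in range(2)]  # fixed random feedback
lr = 0.05
x, y = [1.0, 2.0], 2.0

for step in range(300):
    z = [sum(W1[i][j] * x[j] for j in range(2)) for i in range(2)]
    h = [relu(zi) for zi in z]
    y_hat = sum(w2[i] * h[i] for i in range(2))
    e = y_hat - y
    # hidden error uses the random vector b, NOT the transpose of w2
    d_z = [e * b[i] * relu_grad(z[i]) for i in range(2)]
    for i in range(2):
        w2[i] -= lr * e * h[i]
        for j in range(2):
            W1[i][j] -= lr * d_z[i] * x[j]
# the error still shrinks: the forward weights drift into rough
# alignment with the fixed feedback, so b carries useful teaching signals
```

The only change from standard backpropagation is the single line computing `d_z`; that one substitution is what makes the scheme biologically more plausible.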
- Backpropagation imposes constraints that deviate from biological realism, and networks trained with it still fall short of the brain's generalization ability, motivating the search for more biologically plausible alternatives.
- Deep neural networks use multiple layers to extract hierarchical, distributed features from raw data, outperforming traditional shallow networks.
- Backpropagation remains the dominant training algorithm but requires symmetric weight connectivity that is biologically implausible.
- Random feedback weights can substitute for symmetric backpropagation weights and still enable effective multi-layer learning.
- Biologically inspired rules like Hebbian plasticity and homeostatic mechanisms offer promising alternatives to BP for sparse, realistic neural models.
Cited Papers
1. Shrestha, A. & Mahmood, A. (2019). Review of Deep Learning Algorithms and Architectures. IEEE Access.
2. Cuomo, S., Schiano Di Cola, V., et al. (2022). Scientific Machine Learning Through Physics-Informed Neural Networks: Where we are and What's Next. Journal of Scientific Computing.
3. Lillicrap, T., Cownden, D., et al. (2016). Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications.
- "How does backpropagation algorithm work mathematically in deep neural networks?"
- "Biologically plausible learning rules as alternatives to backpropagation in neural networks"
- "Convolutional neural networks vs recurrent neural networks: architecture and applications"