Training Neural Networks
Today we’re going to talk about how the neurons in a neural network learn by having their weights adjusted, a process called backpropagation, and how we can optimize networks by finding the combinations of weights that minimize error. Then we’ll send John Green Bot into the metaphorical jungle to find where this error is smallest, known as the global optimal solution, as opposed to where it is only relatively small, called a local optimal solution, and we'll discuss some strategies we can use to help neural networks find these optimized solutions more quickly.
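The jungle metaphor can be made concrete with a tiny sketch (not from the video): gradient descent on a one-dimensional "error landscape" that has both a deep valley (the global minimum) and a shallower one (a local minimum). The loss function here is an arbitrary illustrative choice, not anything a real network would use.

```python
def loss(w):
    # A made-up error landscape: w^4 - 3w^2 + w has two valleys,
    # one deeper (global minimum) and one shallower (local minimum).
    return w**4 - 3*w**2 + w

def grad(w):
    # Derivative of the loss with respect to the weight w.
    return 4*w**3 - 6*w + 1

def descend(w, lr=0.01, steps=2000):
    # Repeatedly nudge the weight downhill along the gradient.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Different starting weights end up in different valleys:
w_left = descend(-2.0)   # slides into the deeper, global minimum
w_right = descend(2.0)   # gets stuck in the shallower, local minimum
print(w_left, w_right, loss(w_left), loss(w_right))
```

Both runs stop where the slope is zero, but only the left one finds the lowest point overall; which valley you land in depends entirely on where you start, which is why strategies like trying multiple random starting weights help.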