# But what is a Neural Network? | Deep learning, chapter 1

Oct 5, 2017
9,089,943 views

Help fund future projects: www.patreon.com/3blue1brown
Additional funding for this project provided by Amplify Partners
An equally valuable form of support is to simply share some of the videos.
Special thanks to these supporters: 3b1b.co/nn1-thanks
Full playlist: 3b1b.co/neural-networks
Typo correction: At 14 minutes 45 seconds, the last index on the bias vector is n, when it should in fact be a k. Thanks for the sharp eyes that caught that!
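For anyone double-checking the dimensions in that correction, here is a minimal sketch of one layer's computation (assuming NumPy and the video's notation, where layer 0 has n neurons and layer 1 has k neurons; the specific sizes below are just the video's example, not anything canonical):

```python
import numpy as np

n, k = 784, 16               # neurons in layer 0 and layer 1 (the video's example sizes)
W = np.random.randn(k, n)    # weight matrix: one row per layer-1 neuron
a0 = np.random.rand(n)       # activations of layer 0, each in [0, 1]
b = np.random.randn(k)       # bias vector: one bias per layer-1 neuron, hence k entries

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

a1 = sigmoid(W @ a0 + b)     # activations of layer 1
assert a1.shape == (k,)      # W @ a0 is k-dimensional, so b must be k-dimensional too
```

Since `W @ a0` has k entries, the bias vector added to it must also have k entries, which is why the last index should be k rather than n.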
For those who want to learn more, I highly recommend the book by Michael Nielsen introducing neural networks and deep learning: goo.gl/Zmczdy
There are two neat things about this book. First, it's available for free, so consider joining me in making a donation Nielsen's way if you get something out of it. And second, it's centered around walking through some code and data which you can download yourself, and which covers the same example that I introduce in this video. Yay for active learning!
github.com/mnielsen/neural-networks-and-deep-learning
I also highly recommend Chris Olah's blog: colah.github.io/
For more videos, Welch Labs also has some great series on machine learning:
nlworld.info/key/video/ynB1a5Wnonp9noA
nlworld.info/key/video/w7CWZLmQjW-JiKo
For those of you looking to go *even* deeper, check out the text "Deep Learning" by Goodfellow, Bengio, and Courville.
Also, the publication Distill is just utterly beautiful: distill.pub/
Lion photo by Kevin Pluck
-----------------
Timeline:
0:00 - Introduction example
1:07 - Series preview
2:42 - What are neurons?
3:35 - Introducing layers
5:31 - Why layers?
8:38 - Edge detection example
11:34 - Counting weights and biases
12:30 - How learning relates
13:26 - Notation and linear algebra
15:17 - Recap
16:27 - Some final words
17:03 - ReLU vs Sigmoid
------------------
Animations largely made using manim, a scrappy open source python library. github.com/3b1b/manim
If you want to check it out, I feel compelled to warn you that it's not the most well-documented tool, and has many other quirks you might expect in a library someone wrote with only their own use in mind.
Music by Vincent Rubinetti.
vincerubinetti.bandcamp.com/album/the-music-of-3blue1brown
Stream the music on Spotify:
open.spotify.com/album/1dVyjwS8FBqXhRunaG5W5u
If you want to contribute translated subtitles or to help review those that have already been made by others and need approval, you can click the gear icon in the video and go to subtitles/cc, then "add subtitles/cc". I really appreciate those who do this, as it helps make the lessons accessible to more people.
------------------
3blue1brown is a channel about animating math, in all senses of the word animate. And you know the drill with NLworld, if you want to stay posted on new videos, subscribe, and click the bell to receive notifications (if you're into that).
If you are new to this channel and want to see more, a good place to start is this playlist: 3b1b.co/recommended
Various social media stuffs:
Website: www.3blue1brown.com
Patreon: patreon.com/3blue1brown
Reddit: www.reddit.com/r/3Blue1Brown

Comments
• Don't watch, guys, I can't recognize my own handwriting now. Thanks to this guy's explaining it, I am mesmerized

jdragon jds, 1 day ago
• The vector [b] should be a k × 1 matrix rather than an (n+1) × 1 matrix, right?

• Yes, correct

Rohan Bingi, 17 hours ago
• Yeah, but now we are really just passing numbers back and forth until the right number is found... But when you don't know what the correct value should be, how do you use the NN? I think the NN needs to be reactionary, taking in data... finding the best reaction... while then adding extra organelles to the NN, like core, muscle memory and main memory, attached to each node... but that would mean each neuron has a different analysis function...

Ambient Soda, 3 days ago
• you are on a different level bro..

Harsh Agarwal, 4 days ago
• bro hats off to your smooth explanation

• At 10:32, it's mentioned that sigmoid is used to transform the weighted sum into a number in the range 0 to 1. However, at the end of the video, ReLU is used, which is max(0, a). By using ReLU, will the program run into the problem of having a number outside the range 0 to 1?

Chan Kin Sung, 5 days ago
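On that question: ReLU outputs are indeed unbounded above, and in practice that's fine, because later layers simply learn weights scaled to those values; only an output layer needs a bounded activation if you want its values read as 0-to-1 scores. A quick side-by-side of the two functions, as a sketch assuming plain NumPy:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 3.0, 10.0])
print(sigmoid(z))  # every value squished into (0, 1)
print(relu(z))     # negatives clipped to 0, positives passed through unchanged
```

So ReLU activations do leave the (0, 1) range, but nothing downstream requires them to stay in it.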
• May I ask, is there a typo at 14:42? The column vector for the biases should go from b0 to bk, rather than b0 to bn, as there are a total of k neurons in the 2nd layer, rather than n neurons.

Chan Kin Sung, 5 days ago
• Cool really helped me learn

NotOkYes Animations, 6 days ago
• Absolutely the way to train someone! Good Job

Abraham George Chackungal, 6 days ago
• If I have 5000 training examples of handwritten digits at 20 by 20 pixels: each 20-by-20 image gives a matrix that I need to convert (unroll) into a 400-dimensional vector, and each single pixel of my 400-pixel image gets its own input neuron out of the 400 in the first layer. So when 5000 images go through the input layer, it will still have 400 input nodes in the first layer, but every digit from the 5000 will go through the input layer separately

Learn to code, 7 days ago
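That's the idea: each image is flattened independently, and in practice you usually stack the flattened rows into one 5000 × 400 matrix so the whole training set can be fed through in batches. A small illustration, assuming NumPy (the sizes match the comment above, not the video's 28×28 example):

```python
import numpy as np

images = np.random.rand(5000, 20, 20)   # 5000 grayscale images, 20x20 pixels each
X = images.reshape(5000, 400)           # unroll each image into a 400-dim row vector

# The input layer still has just 400 neurons; each row of X is one example
# passing through that same layer, one at a time (or as a batch).
print(X.shape)  # (5000, 400)
```

NumPy's reshape is row-major, so each row of `X` is exactly the corresponding image read left-to-right, top-to-bottom.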
• This is the best explanation I’ve seen on the subject. Excellent. Subscribed.

Youssif Hassanein, 8 days ago
• First time someone has actually convinced me to subscribe to a channel by asking in the video

Iuri Guilherme, 8 days ago
• Absolutely nailed it. Professors with years of experience and PhDs can't explain this. I watched my professor's video at least 3 times but still can't understand it

anpowersoft19 powersoft19, 8 days ago
• Awesome explanation 👏

Namita Gaud, 8 days ago
• SUDOKU

Lucian Maximus, 10 days ago
• super comprehensive

Aylín Mena, 10 days ago
• If anyone is trying to understand how the bias plays its role: it behaves the same way as a "shifting function".

nightRanger0077, 11 days ago
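That shifting picture matches the video: the bias slides the sigmoid left or right, setting how large the weighted sum must be before the neuron meaningfully activates. A tiny numeric illustration in plain Python (the numbers are made up for the example):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

weighted_sum = 2.0
print(sigmoid(weighted_sum))        # ~0.88: the neuron is fairly active
print(sigmoid(weighted_sum - 5.0))  # a bias of -5 shifts the threshold: ~0.05, nearly inactive
```

Same weighted sum, different bias: the bias alone decides where the "on" threshold sits.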
• This video is incredible!

Anton Ledo, 11 days ago
• NOW I CAN CRACK THE CAPTCHA CODE thanks for sharing this info

Chitransh Tiwari, 11 days ago
• Hey, I would like to cite you in my master's thesis. How should I do it?

Igor Olczak, 12 days ago
• SUDOKU

Lucian Maximus, 12 days ago
• Hey pal, did you get a load of the nerd? 🤓

Guy Incognito, 12 days ago
• Thank you for clearing up the concepts. Every second of your lectures is precious. Stay blessed. (from Pakistan)

Saran zaib, 13 days ago
• I FIND THIS THE BEST EXPLANATION ON NLworld, BUT STILL A STUPID EXPLANATION.... at some points it contradicts itself

emil, 13 days ago
• Thnx man.. it was a very easy and simple explanation I understood neural networks right away!!

Akshat Kumar Mishra, 14 days ago
• Dear Grant, I worked out y = x^x (x to the power of x) and found that for x = 0, y approaches 1. Also for x = 1, y will be 1. For x = 1/e ≈ 0.3678, y has its minimum, which is 0.6922. Beyond x = 1, y increases continuously toward +infinity. Here's my problem, and I need your help, perhaps a video: for x = -infinity, y = 0. For all even negative integers, y > 0. For all odd negative integers, y

Behnam Ashjari, 15 days ago
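The positive-x part of that comment checks out: since d/dx x^x = x^x (ln x + 1), the derivative vanishes at x = 1/e, giving the minimum the commenter found. A quick verification in plain Python:

```python
import math

x_min = 1.0 / math.e          # ~0.3679, where ln(x) + 1 = 0
y_min = x_min ** x_min        # equivalently exp(-1/e)
print(round(x_min, 4), round(y_min, 4))  # 0.3679 0.6922

# sanity check: nearby points sit above the minimum
assert 0.5 ** 0.5 > y_min and 0.25 ** 0.25 > y_min
```

(The negative-x behavior the comment asks about is genuinely trickier, since x^x is only real-valued there at integer points.)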
• Am I the only one who thinks you actually need 11 outputs: 0-9 for recognized digits and an 11th to report erroneous input? If you look at the next video, you can see that without teaching it what is not a number, you can get false-positive answers.

• legends say that every second is important in this video

Shreyas KUlkarni, 15 days ago
• So I came up with this analogy today. I'm no math genius or scientist. I'm not even sure if this is right, but I think it might be: An artist paints a highly abstract painting of his own dog. Then he takes his approximation to his friend's house and shows him the abstract painting... His neighbour shouts out "wow, you painted my dog!" The artist exclaims "no, it's my dog!". So the artist and the neighbour are the neural net's outputs, both looking at their dogs and comparing them to the image and realising it's a dog. The data in the net is the painting and the inputs are the actual dogs themselves. It's the randomisation and abstraction of the original input that allows it to be interpreted in different ways and compared with multiple things. The uncertainty of what the image is, is what is creating some sense of certainty. Am I way off with this? It's quite a heavy topic 😅

Gavin Piliczky, 15 days ago
• @Amirali mo Cool, thanks! Do you mind if I ask what your background in the topic is? Personally I'm just a computer tech with a keen interest in this stuff

Gavin Piliczky, 15 days ago
• Cool explanation; coming from an expert, you are quite right.

Amirali mo, 15 days ago
• Making a list of appropriate software , hardware , and laboratory resources that are most appropriate to supplement quantum human medical science in the environment of a vehicle capable of quantum networking and navigation about the solar system/universe

Skool Scribe, 15 days ago
• Fourier Vectors for neural networks

Skool Scribe, 15 days ago
• Thank you very much for this video! It was really well visualized :)

Tymothy Lim, 16 days ago
• For 15:00, should the last entry of the bias vector be b_k rather than b_n? Because it looks like there are n+1 neurons in layer 0 and k+1 neurons in layer 1, and the biases are for the neurons in layer 1, so there should be k biases, right?

Jaekyung Song, 16 days ago
• Thank you..

Jamey Brown, 16 days ago
• Thanks for the translation into Arabic

XM Global Music, 17 days ago
• you are generous god, thanks a lot

Erick Valencia, 17 days ago
• @7:07 it would've been nicer if you'd used 187. just saying.

M H, 17 days ago
• What are the future lottery numbers?

Joachim Dietl, 17 days ago
• This is what schools should be teaching.

Abhinav Chaudhary, 17 days ago
• Awesome. This is exactly how we might just create our destruction. The nukes at least still needed someone to press the button.

elaichiChai, 17 days ago
• Best explanation eveeerrr

Alvaro Humeres, 18 days ago
• I think my neurons are all sigmoid, whereas those genius kids got ReLU brains.

Baalaaxa, 18 days ago
• At 15:00, I believe the dimension of bias should be k by 1 instead of n by 1.

Ye Xu, 18 days ago
• Amazing vid! Question at 14:46, shouldn‘t the bias vector be of dimension k?

Tilio Schulze, 20 days ago
• Maybe I'm missing something, but shouldn't the bias vector at 19:13 go from b_0 to b_k? The result of multiplying the weights (which are k x n) by the input vector (n x 1) will be a (k x 1) column vector - or are we adding the bias prior to multiplication?

DjAperson, 21 days ago
• sir, I need diabetic retinopathy using cnn project explanation.... can you please help me... thanking you sir

Satya Virtual Gaming, 21 days ago
• Fantastic

Sam Peterson, 21 days ago
• Incredible. As always

Science Casey, 22 days ago
• I had not understood anything; it's not for humans

Mahmoud Abdlshafi, 22 days ago
• sigmoid squishification :D:D @11:16

• Any fellow AI devs in 2021 watching this?

S Y, 23 days ago
• If u want A Human Neuron Experiment Go Check Vsause lol

Sᖽᐸᖻ ᒪᗅᘉᗫS, 23 days ago
• Someone used this video series to make a neural network in SCRATCH. Yeah, Scratch the block-based kid programming language. His name is nishpish, you can search him up there.

Cool Scratcher, 23 days ago
• Absolutely the best explanation of this common visualization and how neural networks work.

Scott Williams, 25 days ago
• Currently doing my capstone on deep learning and this is among the best, and easiest to understand descriptions I have seen.

churchofmarcus, 25 days ago
• I love how smooth the sine waves you use for the animation movement are (when the pis move their eyes from one side to the other, the acceleration is determined by a sine function, if you didn't know)

bit_mate, 26 days ago
• Electrical engineers: ramp
Computer scientists: *_ＲＥＣＴＩＦＩＥＤ　ＬＩＮＥＡＲ　ＵＮＩＴ_*

紅樓鍮, 28 days ago
• If the activation range of the input layer is 0 to 1.0 where zero is black and 1 is white, how does using a negative weight for the pixels just outside of an edge increase the activation of the edge-detecting neuron in the second layer? If the activation in the input layer can be a negative value then I can see how using a negative weight would work to increase the activation in the edge-detecting second layer neuron (negative * negative = positive). Is there an error in your video or am I missing something? Love this series.

Percy Segui, 28 days ago
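One way to see the answer to that question: the negative weights don't need negative activations to matter, because they *subtract* from the weighted sum whenever the surrounding pixels are bright. The sum is therefore large only when the center is bright *and* the surround is dark, i.e., an edge. A sketch with a made-up 1-D "image" (assuming NumPy; these weights are illustrative, not taken from the video):

```python
import numpy as np

# weights for a tiny edge detector: positive center, negative surround
w = np.array([-1.0, 2.0, -1.0])

edge    = np.array([0.0, 1.0, 0.0])   # bright pixel with dark neighbors
uniform = np.array([1.0, 1.0, 1.0])   # uniformly bright patch

print(w @ edge)      # 2.0: dark surround contributes nothing, bright center fires
print(w @ uniform)   # 0.0: bright surround cancels the bright center
```

With zero surround weights instead, the uniform patch would also score 2.0, so the neuron could not tell an edge from a flat bright region; the negative weights are what make the distinction.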
• But does this work with Chinese ideograms?

L DS, 29 days ago
• No one can make it simpler than this.

Hiteshwar Singh, 29 days ago
• help, why isn't my brain working

MR. E, 1 month ago
• Many of us have a good school. Many of us have a good education. Many of us have a good lecturer. But just a few of us have a *great* one of those things. When I saw this video, everything *great* came in 19 minutes. God bless you and whoever is in charge of evolving this channel. Thank you from Indonesia

Joshua Fransisco Delpiero, 1 month ago
• This is fantastically well explained, thanks so much!

Peridorito The Mighty, 1 month ago
• Oh, where are we going to be in a decade when neural networks are applied to defense, economics, science of all kinds, and manufacturing? I would like to upload my brain as a neural network, please.

Buck Rothschild, 1 month ago
• Wakey wakey! We're already there.

Brauggi the bold, 1 month ago
• Am I the only person who took more than an hour to watch and understand this...? lol

Victoria Leigh, 1 month ago
• 11:10 This is my third watch and now I finally understand what bias for inactivity is.......... Thank you so much 3B1B

Victoria Leigh, 1 month ago
• TYSM YOU SIMPLY EXPLAINED IT

Nila KS, 1 month ago
• I need a Math's Major girlfriend

Ale Gh, 1 month ago
• 8 seconds in, well.... it's a/the representation of an English Three of what humans once created of something that we had discovered.. but okay, I'll carry on watching >,

programming With Logic, 1 month ago
• Man, I can't thank you enough for this video really helpful.

Mohammed Al-lami, 1 month ago
• I'm new to neural networks, but there seems to be a form of over-complication. If we consider that, very loosely, neural networks represent a human brain and the learning process is backpropagation, aren't we overcomplicating it somewhat? When we teach a child to recognise numbers, we don't show the child one thousand number nines; we teach it that there is a circle with a vertical line to the right of it. I suppose my point is, shouldn't we add some basic rule set for a specific pattern, so that if we have a circle with a vertical line to the right of it, it could be a nine? By using a neural network to identify specific patterns and then applying these rule sets, theoretically it would be less complex and more accurate? Excuse my ignorance if it's a silly question; I'm new to the field but HIGHLY interested. By the way, your explanations are a sheer masterpiece: you turn such a complex subject into simple digestible bytes that everyone can understand. Love your work and please give us more.😁😁😁👊👊👊

Manny Manny, 1 month ago
• Piano Transcription app for Android uses Magenta

Piano Transcription Android, 1 month ago
• this was beautiful

• Beast

Nicolás Merchán, 1 month ago
• pog

Jhareign Solidum, 1 month ago
• According to the video at various points where the intersections of the neural network are crossed, if not all the intersections is where chaos is produced. As long as the neural network has an end, chaos can be even fun, also if the neural network connections have no common end between them again chaos also fun but they are not the expected results, another connection method being preferable without forgetting the different databases, enough is enough libraries now have another atypical system we are closed. Downloading any book, document by different means is really complicated in the "net"!

201 201, 1 month ago
• i like how we ourselves don’t understand how neural networks do their thing

aresu, 1 month ago
• I remember when he uploaded this and I thought... "Ah, I think I'll skip this one for once." Would ya look at that, looks like I need it now. May as well watch the whole series.

Rayshaun Preston, 1 month ago
• A bookmark: 10:46

Tina L, 1 month ago
• omg this is the first YT Video which I did not skip in between🖤

Saicharan Sigiri, 1 month ago
• A stunning beautiful video that truly simplifies a really complicated problem.

Robert Tang, 1 month ago
• 17:00 When you are just skimming your eyes through the patrons' names and suddenly you see Ryan Dahl, the creator of Node.js

• Ryan loves math as he loves programming, and Grant's work with this channel is pure art, because he explains math the way it should be explained.

Everton Almeida, 1 month ago
• You are teaching, but I don't know what you are teaching because of your rapid-fire English.....

my hobby study, 1 month ago
• 28x28 is pretty high resolution for a single character....

Jacob Powers, 1 month ago
• Simple and effective explanation... thank you!

Humberto Pedraza, 1 month ago
• One could easily use some Python and DSP designs to apply this to EKGs

• awesome!! just started learning and it went straight into the mind via neurons. Thank you so much for your work.

• Can someone explain to me why we need 'negative' weights for pixels surrounding pixels of interest (example at 10:03) in the first place? Wouldn't having 'zero' weights (as described in earlier bits of the explanation) be enough from the perspective of creating/interpreting distinction?

Abhyudaya Ranglani, 1 month ago
• Who tf dislikes this vid?

Sidharth Singh, 1 month ago
• Can we get a comprehensive series for neural networks and deep learning?

Sidharth Singh, 1 month ago
• That's a great explanation! Thank you so much!

Filipe Analytics, 1 month ago
• This is fantastic! Thank you so much!

Filipe Oliveira, 1 month ago
• This is a great video! Thank you, very well delivered.

collierew1, 1 month ago
• What would happen if you mirror the network, using the numbers as inputs and the image as output?

Domnibus, 1 month ago