
Better training of our neural network – Let’s code a neural network in plain JavaScript Part 3


Sometimes it seems like the four examples that we're using are not enough to teach it correctly. Let's explore this a bit more and generate a bunch more example data.

So let's actually make a function out of this. Copy this, and I'm going to call it the generatePoints function, there. Then we're going to use that here: generatePoints. Cool, we still get a random set of points here. And then, going to the training set here, instead of the manually generated things, I'm going to do... I mean, was generateRandomPoints the name of it? No, no, it was just generatePoints. generatePoints, and for... oops. We've got the points, and then we're going to have the actual team, which is the team that we extract from the point. Cool. And we're going to call these examples. Let's see what that looks like: return examples. Let's comment this out for now because it's broken. So these examples are now 200 correct points.
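Roughly sketched, assuming the random() helper and the team() function from the earlier episodes, it looks something like this:

    // Generate a batch of random points...
    const generatePoints = () =>
      Array.from({ length: 200 }, () => ({ x: random(), y: random() }));

    // ...and label each point with its actual team to form the training examples.
    // random() and team() are assumed from the earlier episodes; team() is the
    // "actual logic" we want the network to learn.
    const examples = generatePoints().map(point => ({
      point,
      team: team(point),
    }));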
All right, so let's train the AI with every example: for (const example of examples). Yeah, train. Let's call it currentWeights, and initially it's going to be just random weights. Now we're going to train it on the current weights, and on the example point and the example team, so that we're telling it: this point is going to have this team. And once we're done, we just return the current weights. There we go. And delete this, because it's no longer used.
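Sketched out, the training function now looks something like this (randomWeights() and train() are assumed from the earlier parts of the series):

    const trainWeights = () => {
      let currentWeights = randomWeights(); // start from random weights
      for (const example of examples) {
        // Nudge the weights using this example's point and its known team
        currentWeights = train(currentWeights, example.point, example.team);
      }
      return currentWeights;
    };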
All right, so the network is now kind of off. I would like to actually visualize the training sequence, so that we can see what is going wrong. I just want to pause the rendering for every training point, so that we can visualize this as we go. I need to write a little sleep function first, somewhere here. Like that, I think. Let's go to trainWeights here, and for every training step we're going to await sleep for one second, and we also want to yield the weights on every loop. Let's see what that looks like.

Oh hey, I'm not actually reassigning currentWeights; that's why the neural network is staying still. So yeah, interactivity is really good for debugging. Let's see... like that. Now we should see it jump around a little bit.
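As a sketch, the visualized version turns trainWeights into an async generator, something like this:

    const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

    async function* trainWeights() {
      let currentWeights = randomWeights();
      for (const example of examples) {
        await sleep(1000); // pause so we can watch each training step
        // This reassignment was the bug: without it the network stays still
        currentWeights = train(currentWeights, example.point, example.team);
        yield currentWeights; // let the notebook re-render the weights so far
      }
    }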
So for every training point, you see it jumping around a bit, and after a while it looks like it stabilizes... and there it jumped out completely, whoof, and then it jumped back in. Let's make this ten times faster so that we can see more. Oh, there we go: jump, jump, jump, jump.

So our neural network is sort of okay, but you see these points here? They're not quite as smart as the actual logic. Our neural network is kind of smart now, but it still has a little bit of a problem with these things here; they're kind of off, and I think it's because we're adjusting too much for every training step. Imagine a car that, whenever it's slightly off course, makes huge turns every time. It's like a drunk driver overreacting to every single deviation: "Oh my god, I have to steer right!" A normal driver is more like: okay, I'm getting a little close to the edge, I'm adjusting a little bit; okay, I'm still getting closer to the edge, I'm adjusting a little bit more; okay, now we're fine. So let's add something like that. In neural networks, this is called the learning rate.
It's a multiplier. I think we just add it here: learningRate, we multiply the adjustment by the learning rate. All right... I'm bad at math, like the operator order... actually, we don't really need the parens there. The learning rate itself is just kind of arbitrary: an adjustment knob for how small the adjustments we make are.
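As a rough sketch of where the learning rate slots in (the exact update comes from the earlier episodes, so the shape here is my guess):

    const learningRate = 0.1; // arbitrary knob for how big each adjustment is

    // Hypothetical shape of the per-example update; nn(), the guessing
    // function, is assumed from the earlier episodes. The error-based
    // adjustment is now scaled down by the learning rate.
    const train = (weights, point, team) => {
      const error = team - nn(weights, point);
      return {
        x: weights.x + point.x * error * learningRate,
        y: weights.y + point.y * error * learningRate,
      };
    };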
Hang on, the sleep is still way too long; we need to shorten it, to something like 25 milliseconds. Yeah, it's training, and it ends up somewhere... what? That's horribly wrong.
Let's remove the yield as well, because at this point it's just being a little bit distracting. By the way, if the yield and the await here confuse you, you need to watch the async generators episode, or the series that leads up to explaining them; you can find it here. In Observable, pretty much everything works as an async generator, so you can think of cells as async generators. Oops, I can't type, sorry about that. Written out, it looks like this, but there's a little bit of syntactic sugar that allows us to just write this, which is super handy.
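A minimal illustration of that sugar, not the exact code on screen:

    const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

    // In an Observable cell you can write, directly in the cell body:
    //   await sleep(25);
    //   yield "some value";
    // Written out by hand, that cell is equivalent to an async generator:
    async function* cell() {
      await sleep(25);
      yield "some value";
    }

    // ...whose values the consumer pulls with for-await-of:
    // for await (const value of cell()) { /* re-render with value */ }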
But anyway, our neural network here is... I don't know if it's actually smarter. Let's see if we can adjust the learning rate, to perhaps 0.2. Is that better now? No, absolutely not. 0.3? 0.3 seems to be a good learning rate... nah, it's different every time: every time I press Enter here, it generates a new learning set, so it still jumps around quite a bit. I'm thinking perhaps we just need more training points. Let's go to trainWeights and generate... hang on, generatePoint? Yeah, generatePoints. Let's give it a num parameter here, and a range, and I want it to be like 200 random points, and in trainWeights I want it to have a thousand. Machine learning tends to need a lot of data to be accurate.
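A sketch of the parameterized version; what range does exactly is my guess here:

    // num: how many points to generate; range is assumed to bound the coordinates
    const generatePoints = (num, range = 100) =>
      Array.from({ length: num }, () => ({
        x: Math.random() * range * 2 - range,
        y: Math.random() * range * 2 - range,
      }));

    // The chart keeps its 200 random points, while trainWeights asks for more:
    // const examples = generatePoints(1000).map(point => ({ point, team: team(point) }));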
Hmm, that got worse there. What if I change the learning rate... does that improve things? Oh yeah, it actually does, it actually does. It seems like 0.1 is... no, that is too little; it means it jumps around quite a bit. Wait, why did it break before? What if I make the learning rate 1 instead? Yeah, you see that? That means the weights can jump around quite a bit.

Let's see... there, now. See, sometimes we get data that is... you get perhaps a point that is really, really far off, and that skews the learning way too much. We need to give less credence, less authority, to each individual point. So let's go back to 0.1 and see if... yeah, now we don't see it jumping around as much. Oh, sorry, it still jumps, but it does so less often, I imagine. It's way less... maybe, I don't know about the learning rate, really.
Perhaps there are so many examples now that the learning rate doesn't really matter. But if I go down to 100 points... oh yeah, with a hundred points it's very, very jumpy; it jumps around a lot now. Look at this case: wow, that's a huge jump. Let's see, if I adjust it down to 0.1, do we see an equally large deviance? No, I don't think we do. With a lower learning rate, the skew becomes clearly smaller; we get fewer crazy brains. Just a moment ago you could see that these points were totally off, almost nothing was right, but with the learning rate it becomes slightly more robust. Let's generate a lot of points, like 10,000 points, and see what that makes of our bird brain.
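For reference, the scale-up is just one argument changing, following the earlier sketches:

    // Scaling the training set up; the numbers are as narrated:
    const examples = generatePoints(10000).map(point => ({ point, team: team(point) }));
    // ...then generatePoints(100000), and finally generatePoints(1000000).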
Okay, you see? Wow, the brain is now way more stable. Way more stable; it almost doesn't move at all. And if I have 100,000 points, then the neural network is pretty much perfect: a little bit of jumping over here, a little bit of jumping over there. Let's have a million points. Now, that seems to be enough to make the network near perfect; it determines pretty much all the points correctly every time. And that is it: we've written our little AI that we can teach things. There's absolutely more to this, and we're going to explore this little AI a lot more, I think, but this is a good start.
The code is linked in the episode description, and if you have any questions, and boy, you should have, because these things are weird, then please post them down below in the comments. Or, if you are a patron, there is a link in the episode description to the dedicated discussion topic for this episode on the Fun Fun Forum. You have just watched an episode of Fun Fun Function. I release these every Monday morning, 08:00 GMT. You can subscribe here so that you don't miss it, or you can just watch another episode right now by clicking here.