
GOTO 2017 • Machine Learning with TensorFlow • Robert Saxby & Rokesh Jankie


So, I'm Robert, and this is Rokesh. What we're going to do today is go through TensorFlow. We're going to start off with a little bit of background: why machine learning, why now, and some of the steps we need to be able to actually run machine learning in production. Then we're going to go through some examples and do some demoing in the second half of the session, where we'll look at how we can take our model, deploy it, train it, and make predictions. So we're very much interested in what it takes to get machine learning into production. I've got a clicker somewhere... actually, where is that... okay, there we go, thank you.
So, machine learning at Google. I don't know how many of you are familiar with Google: most people know Google as a search company, as an advertising company, and we're getting more and more of a name now as a cloud company; we're working very hard to build up that name. But this photo best describes what happens at Google. I would say more than 50% of the people who work at Google are engineers, and most of those engineers are busy making sure that this happens: that we have data infrastructure across the planet. It's a big part of the story I'm going to tell you.

Just to give you an idea, this is our network globally, and these are the data centers we use for cloud. Why am I telling you this? Well, our experience at Google is basically that if you want to do machine learning, if you want to do something intelligent with your data, you need access to your data, and that data needs to live somewhere. So it all really starts at the infrastructure level. Before we can get anything into production we need to make sure all of this is in order, and we'll look at one of the challenges there today. Before we get to the machine learning components, we'll look very briefly at how we process data to get to a place where we can actually start doing something with it.
I like this slide, because when you're trying to do anything with data or machine learning, if you're not really thinking from the ground up about how you're going to put this stuff together, you're going to end up in the situation on the left. We've had a lot of experience in the company learning what it takes to build this stuff up, and TensorFlow is actually part of that story.

So, Google Cloud, our cloud platform; we'll talk about what we do with TensorFlow there in a moment. Our objective with Google Cloud is to give you an option to compute in a different way, rather than just in a different place. For example, with TensorFlow or Kubernetes, what we're trying to do is make sure we have open source platforms that you can use to develop your code, and then we give you a space within Google Cloud to run it as a managed service. So when you're running in production you don't need to worry about scaling up, you don't need to worry about downtime, those kinds of problems; you can really focus on the functionality of your application. This is a very brief introduction, and I'm moving through this part a little quickly so we can get on to the meat of it, but that's a very brief overview of what Google Cloud is.
You can see some of our big focus areas are data analytics and machine learning. Today, in the demo, we're going to look at deep neural networks; TensorFlow is particularly good for creating deep neural networks. Can I have a quick hands up in the audience: who's actually working on deep neural networks, or has done some stuff with them? And of those people, have you done some stuff with TensorFlow? Okay, a few. Any other frameworks you're using out there, maybe MXNet? Anyone want to shout out a framework they're using at the moment? Okay, the Cognitive Toolkit... Keras? Okay, Keras, I'm glad you mentioned that one, thank you; we'll come back to that in a little while.
I guess you've all seen this resurgence of machine learning in these last few years. Why is it happening now? A lot of these concepts have been around for quite a while; it's not that we only started on machine learning and neural networks recently; some of this stuff has been around since the 60s. That's also why we bring these presentations along with cloud: the cloud is playing a very important part in this whole movement to actually do machine learning. If you think about the ingredients for good machine learning, it's obviously having good models, and we've seen in recent years that those models have improved as well; obviously, once you can actually run more and more models, you have more opportunity to improve them. You also need to be able to store large amounts of data, and you need the processing power. I don't know if anyone can identify this chip on the right... a TPU, good. We've got a much better picture of it later; something that makes me quite happy.
So in Google Cloud we have two ways of tackling machine learning: one is with TensorFlow, and the other is our managed APIs. With the managed APIs, what we're trying to do is democratize machine learning. For common problems, think about things like OCR, think about speech (and we're going to look at a demo in a moment with video): for things where we can create a model, train it, and bring it to the market, so that people can use it in an easy way; say, a developer embedding it in their application. It could be natural language processing, for example translation. We basically want to serve that by creating easy-to-use open APIs. But for the cases where you're actually building your own models, our approach is to say: okay, let's have an open source framework. We'll look in a moment at what TensorFlow is and at the ideas behind its architecture, so that you can take your model, build it wherever you want, train it wherever you want, and then also place it wherever you want to do your predictions. And what we do in the cloud space is offer a managed service where you can take your TensorFlow models, run them, train them in the cloud, and also make predictions; we'll be looking at that too. So I'm going to give a little demo of one of those APIs, and then I'll show you the architecture behind it as well.
Of course, I went the wrong way; one second. The demo effect, before the demo has even started, wonderful. This is one of those new Macs; I've been giving them a hard time about it all day, so I'll continue the theme now. Here's an application that a couple of colleagues of ours put together using our Video API, so we can make videos searchable. In this example I'm going to search for "dog", and it's going to give me all the videos that have dogs. We'll click through to this video, and I'll search for "dog" here as well; there we go. And if I go to this point in the video, I should be able to see there's a dog. In the same way, if I go to this point in the video here, I should be able to see there's a cake; there we go. I could go on, but you get the idea. So what does it take to build an application like that? How long do you think it took to put this together? Hours? Days? That's definitely not the answer we were looking for; no, you've got to work really hard for that one, sorry. Did we do tests on this one? The guys who put it together are from dev rel; I'm not going to say they did a lot of testing. No: a couple of hours. Let's get back to the presentation.
This is the architecture we used. I mean, tests or not, of course you can write your tests if an application is going into production, but what we're looking at here is what you can do with an open API, with one of our machine learning APIs. Basically, what we've done is upload videos to Cloud Storage, which is just object storage. We have serverless functions listening there, so when a video arrives, one fires off an API call to process the video and get back the metadata, which consists of the content of that video and things like the timestamps where you can find it. With that metadata we can then serve it up: we're using App Engine here, our platform-as-a-service, to create a small web application, and we're tying it together with Elasticsearch so you can actually search through those videos. So this is really an example of something you can put together (and obviously, if you're going to build this for production, write your tests and everything else), but it shows what you could create, without any knowledge of machine learning, by using one of these APIs. For the rest of this talk we're going to look at what it takes to actually make those models yourself, and more importantly, what it takes to get those models to production.

This is our take on what the popular imagination of machine learning is: lots of data, difficult maths, and magical results. It's quite a bad interpretation of what happens with machine learning; it doesn't do justice to all the hard work that goes into it. The reality is this: collecting data costs a lot of effort, organizing that data costs a lot of effort, and before you can even think about creating a model you really need to do those first two steps. Then we have the creation of the model, then we have to think about where we're going to train that model, having the data available, and so on; and finally, at the end, we have something we can deploy to actually make predictions.
We're going to come back to this a few times today, but this is an architecture for how we might tackle a problem where we want to create our own model and bring it to production. There are two lines here that I'd ask you to look at: the top and the bottom. At the top we're basically looking at what it takes to train, and at the bottom we're looking at what it takes to actually serve. There are a few things here worthy of note. We're using the Cloud ML service both to train our TensorFlow model and to make our predictions. We could have those predictions made from a mobile phone as well: instead of going via an API call to get our prediction, we could deploy the compiled model to a phone, for example, if we wanted to do something offline. But in this particular case we're going to look at making that API call to get our result. On the top we're going to be using Dataflow; we'll examine what Dataflow is, let's say a MapReduce-like framework for unified batch and streaming processing, and I'll come to that in a moment. You'll see that at the bottom we're also using Dataflow, and the idea here is that anything you do to prepare your training data and extract your features, you're probably going to need to do the same when you're actually making a prediction. So if you think about where that data is coming from and what you need to do to extract the necessary features to make that request: wouldn't it be good to have some kind of framework where, whether I'm working in batch mode or in streaming mode, I could use the same pipeline?
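That shared-pipeline idea can be sketched in plain Python (the names here are illustrative, not the actual Dataflow/Beam API): one feature-extraction function feeds both the batch training job and the single-record prediction path, so training and serving always see identical features.

```python
# Sketch: share one feature-extraction step between batch training
# and online prediction. Illustrative only; in the talk's
# architecture this role is played by Apache Beam pipelines
# running on Cloud Dataflow. Field names are assumptions.

def extract_features(record):
    """The one place where raw records become model features."""
    return {
        "age": float(record["age"]),
        "hours_per_week": float(record["hours_per_week"]),
        "is_married": 1.0 if record["marital_status"] == "married" else 0.0,
    }

def prepare_training_set(records):
    # Batch path: run the same function over the whole dataset.
    return [extract_features(r) for r in records]

def prepare_prediction_request(record):
    # Serving path: run it on a single incoming record.
    return extract_features(record)

batch = prepare_training_set([
    {"age": "39", "hours_per_week": "40", "marital_status": "married"},
    {"age": "28", "hours_per_week": "50", "marital_status": "single"},
])
single = prepare_prediction_request(
    {"age": "39", "hours_per_week": "40", "marital_status": "married"}
)
assert batch[0] == single  # training and serving features agree
```

The point is that the transformation lives in exactly one place, so the model can never be trained on features that differ from the ones it is served.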
So let's look first at the collecting and organizing of data. I'm just going to take an example of one of the difficult problems we have when it comes to actually collecting data, and how we might solve it. For this diagram, let me ask people to put their hands up: hands up if you're already working with streaming, real-time data; and hands up if you're doing stuff with batch. Now put your hands down if you're happy with how you're processing data, and keep them up if you're not... so we've got a few unhappy hands there. Most of the time when I ask about batch, people are quite happy, because when you're processing batch, everything's already laid out for you: your data set is complete, the entire population is there, you know what's going on, you don't have to wait for things to arrive, things don't come in the wrong order; it's quite comfortable to process. When you move into a streaming space, the problem is you don't know when your data is going to arrive, you don't know if it's coming in the right order, and you don't know if you're missing data. So how are you going to tackle those things before you can start processing?

In this example, we're going to look at processing time and event time. Along the bottom is event time, which is when something actually happens, and on the processing-time axis is when we receive it and decide to do something. All those little dots up at the top are difficult for us, because they arrived much later than they occurred. So how are we going to deal with those? This is something we'd normally explain over 10 or 20 slides, and I'm going to attempt to do it in one.
If you look at the ideal watermark, that's the dashed line: that's where event time and processing time happen at the same moment. Now, the scores; I should explain those. Let's imagine this data is coming from somebody playing a game, and we're looking at one person playing: every time they score some points, we send that to our server, and then we want to process it. The first thing we do here is create a heuristic watermark (the heuristic watermark is in green), and then we chop this into windows that we want to process. Those windows can be fixed-time, they can be sliding, they can be sessions; it depends what you're dealing with. In this particular example we've got fixed-time windows. As soon as a window closes, let's say as the watermark passes the right-hand edge of the window, you can basically do something with your data. So in the first window, for example, as soon as that window closes, we can say we've got a score of five; the second window closes, and so on. What Apache Beam allows you to do is take that idea of windowing and then take it further with the idea of triggering, for things that happen too late or too early. For example, those nine points we scored there actually happened at 2:02, but we didn't process them until much later. What are you going to do in that moment? In this case, what we're going to do is accumulate the data in that pane and then add it up to get the complete result; you could also discard it, it really depends on your use case. What we want to show here is that Apache Beam is an open source framework that we can use, again with a managed service, Dataflow, to collect our data and do some MapReduce-like processing, so that we have the data sorted out and can actually do our machine learning on it later. I just wanted to highlight one particular challenge, because these things typically take up a lot more time than people expect: people want to get busy with the actual machine learning part, and in our experience, the collection and processing of data takes a considerable amount of effort. So now we're going to move on to the second part of the chain, which is the actual creating of the model and then using it. How am I doing for time?
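The fixed-window idea can be shown in a few lines of plain Python (this is deliberately not the Apache Beam API; Beam's `FixedWindows` plus triggers handle this for you on Dataflow, including when to emit a pane): each event carries an event time, and scores are summed into the window that event time implies, no matter how late the event arrives in processing time.

```python
from collections import defaultdict

# Sketch of fixed-time event windows with late data accumulated into
# the pane it belongs to. Illustrative only; the window size and the
# event stream below are made up.

WINDOW_SIZE = 120  # two-minute windows, in seconds of event time

def window_start(event_time, size=WINDOW_SIZE):
    """Map an event time to the start of its fixed window."""
    return event_time - (event_time % size)

def score_by_window(events):
    """events: (event_time_seconds, score) pairs, in *arrival* order.
    Late arrivals still land in the window their event time implies."""
    panes = defaultdict(int)
    for event_time, score in events:
        panes[window_start(event_time)] += score
    return dict(panes)

# Arrival order is processing time; the 9 points from t=130 show up last,
# but they are still accumulated into the 120-240 window.
arrivals = [(10, 5), (125, 3), (250, 4), (130, 9)]
print(score_by_window(arrivals))  # {0: 5, 120: 12, 240: 4}
```

The accumulate-or-discard choice the talk mentions is exactly the decision of whether a late `(130, 9)` is added into its pane, as here, or dropped.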
Good, okay, cool. Let's come back to the pattern again, and now move on to TensorFlow. To start with, this is a very brief history of TensorFlow; I'm not going to read it all out, but what I am going to tell you about is the reason we created it. Within our own company we had researchers working on machine learning problems; they would come up with a great model, a great solution, and then we would have to take that to production, and there was this big effort to take what had been done in research and bring it into production. So we needed a framework, a project, that let us take the stuff we were doing in the research space, in the training space, and bring it all the way through to production, to the prediction space. That was one of the key ideas behind TensorFlow. Another key idea was to democratize it: to open it up and get as much collaboration and contribution from the community as a whole, because that way you start to see things grow, ideas improving, and so on.
On the architecture side, TensorFlow has to take you from the space where you're playing with a model for the first time, to the space where you're training that model, to the place where you're actually going to make predictions, so you have to think about what kind of architecture you want behind that. In one case you might want to do it on your laptop, say in a Jupyter notebook, playing around with a model for the first time. But as soon as you've done that you might say: actually, this looks interesting, let's start training on some data, and suddenly your laptop isn't good enough anymore. So you want to run it on a cluster; you want to take advantage of GPUs or TPUs, which we'll look at as well. Once you've trained your model, you want to serve it: deploy it somewhere and make predictions with it. So that same framework should be able to run in a space where you can make predictions, say from a cloud platform in a scalable fashion, but it should also be able to deploy to a small device like a Raspberry Pi or a phone, so you can actually use that network you've made to do something useful. Those were the ideas in the background of how we were going to create TensorFlow.
A few things about TensorFlow itself. It's another dataflow system, so this should look quite familiar to most people: a graph of interconnected nodes that each perform certain operations, where all of the edges are n-dimensional arrays, or tensors; hence the name TensorFlow. What's a tensor? A tensor is a multi-dimensional array. Why is that useful? Well, if you think about something like a convolutional network, one of the things you want to do there is retain the shape of the data as it goes from one node to another. If you were just dealing with a grayscale image, you might have one channel, so a matrix; if you're dealing with a color image, you might have three channels; and depending on the kind of data you're processing, this can grow quite considerably. If you weren't able to transport that data between one node and the next and keep its shape, you wouldn't be able to do things like convolutional sampling; you'd have to do a hell of a lot of work to put it all back together again if you were moving that data around as flat arrays, for example. Another thing is having state: when we train, there are things like biases we need to update, and we need to retain that state, so that's something else TensorFlow takes care of. And the last part of the TensorFlow architecture is that it's distributed, so we can run some of those operations, some of those nodes, on different machines or different chips. That's a really high-level overview of the architecture behind TensorFlow itself. TensorFlow is written in C++; we have APIs in C++ and Python, and Python is obviously the most popular one.
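The channel point can be made concrete with NumPy standing in for TensorFlow tensors, since a tensor is just an n-dimensional array (the 28×28 image size and batch size here are arbitrary):

```python
import numpy as np

# Tensors are n-dimensional arrays; the shape is exactly what a
# convolutional network needs to keep intact between graph nodes.
grayscale = np.zeros((28, 28))       # one channel: a plain matrix
color = np.zeros((28, 28, 3))        # three channels: RGB
batch = np.zeros((32, 28, 28, 3))    # a batch of 32 color images

print(grayscale.ndim, color.ndim, batch.ndim)  # 2 3 4
```

Flattening `batch` to a 1-D array of 75,264 numbers would lose exactly the structure a convolution needs, which is why the edges of the graph carry shaped tensors rather than flat arrays.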
This slide shows the architecture of how we actually use TensorFlow: the part in orange is what we just looked at, and we're going to concentrate now on the Python front end. There are a number of layers you can work at here. We already mentioned Keras; Keras sits at the same kind of level as our estimators. At the layers level you can go in and create all kinds of networks, whatever you want, but if you're working at that level you also need to think about how you're going to distribute the work, how you're going to parallelize it, and for bigger models that becomes complicated quite quickly. If you move up a layer to the estimators, we already have some common building blocks, things like LSTM cells or convolutional layers, but you also have the ability to create a custom estimator, and if you use a custom estimator you don't need to worry about how to distribute or parallelize your network when it comes to actually training it. Keras operates at that same kind of level. Above that, and this is quite new, we have canned estimators: basically fully fledged estimators that you can use out of the box for a particular regression or classification problem, and we'll have more and more of those. They're for very common problems, like: I just want to run a regression on this, I don't want to do all the work, this is my data, this is what it looks like. You just have to map out the shape, and then you can start using them.
Going back to the TPU again: one of the great things we can offer in this space is to go beyond the GPU. A TPU runs at, I think, 180 teraflops per chip, which matters if you're really going to train your models at scale. The first generation of TPUs we only used for inference; the second generation is available in the cloud for both training and inference. So you could take a TensorFlow model that maybe you're running on your own server, say "actually, I need some more power for this", and then look at using TPUs in the cloud. To give you an example, because what does 180 teraflops actually look like: some of our translation models used to take us a few days to train, and now with these TPUs we can do that in hours. So it's a real game changer at that level.
And with that, we're going to move on to the demo. So what I'm going to show you is indeed coding; we're all developers, if I'm not mistaken, right? By the way, just to give you an idea: who was at I/O this year, Google I/O? Nobody? That's unfortunate, because if you'd seen this TPU, it's really this big, about this high, and it's a processor; imagine that compute power. Okay, what we're going to do now, what I'm going to show you, is how TensorFlow can be used in a best-practice situation, with what we call the Google Cloud ML Engine. We're going to more or less develop a model, train it, and test it; and by testing I don't mean in the unit-test sense, but seeing whether a prediction works. So it's time for some live stuff; let's see, next slide.
What I'm going to do today is focus on a data set that's available, the United States Census income data set: about 32,000 rows in CSV format. It's the kind of data set you see here, and what we're going to show is how you can use it for yourself; imagine any data set, and you can start using TensorFlow in a way that allows you to go to production quite fast. There's a little mistake on that slide, by the way: it shouldn't say "convolutional layer"; that's a copy/paste error, in case it confuses anyone. So, just to give you an idea, this is the data set you see here; it's a bit big, but that's for you guys, so that it's readable. What we actually try to do is classify whether a certain person earns less than 50K per year or more than 50K per year. That's what we're trying to do here, and this is the data set we're going to use.
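To make the task concrete, here is a tiny sketch of what such data looks like once read in. The column names and the two sample rows are my own stand-ins modeled on the public census income data, not the demo's actual code (which feeds the CSV to TensorFlow input functions):

```python
import csv
import io

# Two hypothetical rows in the shape of the census income data;
# the real set has ~32,000 rows and many more columns.
sample = io.StringIO(
    "39,Bachelors,40,<=50K\n"
    "52,Masters,45,>50K\n"
)
columns = ["age", "education", "hours_per_week", "income_bracket"]

rows = [dict(zip(columns, r)) for r in csv.reader(sample)]
# The label the model learns to predict: does this person earn >50K?
labels = [r["income_bracket"] == ">50K" for r in rows]
print(labels)  # [False, True]
```

Everything except the last column is a feature; the last column is the binary label the classifier is trained on.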
What we normally do, and I think this is advisable for everybody, instead of starting to code in Visual Studio Code or vi or Emacs or whatever, is to start using Cloud Datalab. Datalab is nothing else than a Jupyter notebook; who's familiar with Jupyter notebooks? Okay, cool. For the people who aren't: it's in fact an interactive Python page where you can mix documentation, in Markdown most of the time, with Python code, and change the code and see the changes happening live. What I'm going to show you now is that this is immediately available in the Google Cloud console; the cloud console is the environment where you can start up your services. Let me see, is it readable? Yeah, more or less. What you can do here, at the top, is activate Cloud Shell, which in fact creates a small Linux instance where you can fire off commands; it's provisioning now, and once it's there I can start the Datalab instance and connect to it. So, yes, here I am: I have a command line in my browser. It's `datalab connect` plus the instance name (that's a randomly generated name, by the way), and I press Enter; it asks for verification, bang, and here on port 8081, the port where it's hosted, I end up in a Datalab environment where we can see the notebook itself. So this is all provisioned in the cloud; I didn't install anything locally, I just connect.
So, just to give you an idea of the code that's involved; let's see if it's a bit faster than this... yep. This is a Jupyter notebook, and here you can see part of the code. I can run it, but the whole idea is to show you that we have an environment where you can start experimenting with your TensorFlow application, because it's not something you write in one go: you iterate, you test, you try things out, and a Jupyter notebook is something you can use for that. And just to give you a little idea of what an estimator can in fact look like, this is an example code sample: you make a deep neural network with a specific classifier, with a couple of parameters, and that's it. What we're creating here is apparently 100 hidden units, followed by 70, followed by 50, followed by 25; that's quite deep already. Imagine having to program that fully connected network all by hand; here it's just this one statement. That's quite nice, and productivity-wise also a very good thing.
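The one-statement network Rokesh describes looked roughly like the `tf.estimator.DNNClassifier` call in the comment below (the 2017-era API from the talk). For something runnable today I've added an equivalent stack of fully connected layers in Keras; the layer sizes follow the talk, everything else (input width, activations) is my assumption:

```python
# The canned-estimator one-liner from the demo was roughly:
#
#   model = tf.estimator.DNNClassifier(
#       feature_columns=feature_columns,    # defined elsewhere
#       hidden_units=[100, 70, 50, 25],     # the stack described
#       n_classes=2)                        # <=50K vs >50K
#
# tf.estimator is the 2017-era API; below is the same hidden-layer
# stack expressed in Keras, which current TensorFlow recommends.
import numpy as np
from tensorflow import keras

model = keras.Sequential(
    [keras.layers.Dense(n, activation="relu") for n in [100, 70, 50, 25]]
    + [keras.layers.Dense(1, activation="sigmoid")]  # income > 50K?
)
model.compile(optimizer="adam", loss="binary_crossentropy")

# A dummy batch of 8 examples with 12 features, to show it wires up.
out = model.predict(np.zeros((8, 12), dtype="float32"))
print(out.shape)  # (8, 1)
```

Either way, the framework does the tangling-together of the fully connected network for you; you only declare the sizes.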
So let's go back. The next step is: okay, how do I train my application, the code you just saw? How do you train it using best practices, using the Google Cloud Machine Learning Engine? We explicitly set it up like this: you have a situation where you say "local training", which means on my machine only, and we have a path. There's a reason why we set these environment entries, like `$TRAIN_DATA` and `$EVAL_DATA`: we did it for flexibility, and I'll show you in a bit what that's all about. So, just one statement, and I can show it live: just to show you that we're replacing the train data with a dynamic value, a file on my local file system, and then I say run. So I'm actually running the code at this moment, and I'll show you a bit more in a second. This is also very interesting: you see warnings here. They're not errors; they're giving you a signal that you're using a default TensorFlow build. TensorFlow is open source: you can download it and compile it yourself so that all these warnings go away, and the good part is that it will then be faster at runtime, because it's tuned for your system. At this moment, apparently, I'm missing a couple of CPU flags on my system. And, as I mentioned, there's another thing coming up,
called TensorBoard. Here in the middle somewhere you see a message: use TensorBoard on that log directory to get an output. So what does that mean, what is it in the end? TensorBoard, which I'm going to start up, is nothing else than a small Python application packaged together with TensorFlow. Let's see, it's open already; it's on port 6006. I copy this, go back to my browser instance... where is it... and this is actually quite interesting. Now I have a tool, TensorBoard, which gives me more insight into what's actually happening with my application. What's the accuracy? Accuracy is one of the important terms in machine learning: if it's high enough, you'll get a better score in the end, closer to the real-life situation. So you get some graphs here, and at this moment it's still trying to improve a little bit, but it's not that good yet. These are some of the ways to start investigating your code and understanding what's happening, and whether it's giving a better result.
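Those accuracy charts come from summary events written to the log directory. A minimal sketch of that mechanism in current TensorFlow (the accuracy values here are made up; in the demo the Estimator writes these events for you automatically):

```python
import os
import tempfile
import tensorflow as tf

# Write a scalar series that TensorBoard can chart. Hypothetical
# values; the demo's trainer logs accuracy like this under the hood.
logdir = tempfile.mkdtemp()
writer = tf.summary.create_file_writer(logdir)
with writer.as_default():
    for step, acc in enumerate([0.61, 0.70, 0.74]):
        tf.summary.scalar("accuracy", acc, step=step)
writer.flush()

# TensorBoard pointed at logdir ("tensorboard --logdir ...") reads
# the event file this produced.
print(any(f.startswith("events") for f in os.listdir(logdir)))  # True
```

TensorBoard itself just tails these event files, which is why it works the same whether training ran locally or on Cloud ML Engine.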
Another thing that's quite important: as Robert mentioned, it's a graph that you're trying to compute in the end. A moment ago we talked about estimators; the estimator was that one statement we had, the deep neural network, and that's here: this is in fact the graph of execution. If you click on it, you get a much better view of what's happening in the graph: this is indeed a hidden layer, then layer upon layer, and so on, until you get a really fully connected, quite big network; and that's something you get quite fast. The good part is you can follow the flow, you can see if the network is behaving like you want, and you can also start thinking about how you should tune it. So this is TensorBoard; this is something you also get for free with the whole thing. What I just did is train locally, and with the output created I can run TensorBoard and look into it.
but of course we want to have more speed
we want to have more output or better
output so what we can do also is to be
just one flag
make it a dish beautiful so what its
gonna do it’s gonna try to figure out
okay what CPUs head do I have available
and what can I use or what GPUs etc
right but just one flag not changing the
application I’m also not changing the
Train data etcetera it’s all on my local
system so the next step after this is
Okay, now it gets interesting: I've realized I can do it locally, but my data set is growing. So another thing I can do is start running this in Google's cloud. Google Cloud ML Engine has a concept called jobs, and in fact everything that you have, your TensorFlow application, your parameters, your data set, you package up, more or less, and execute in the cloud. What you get is that you finally have the benefit of the cloud in the background supporting your calculations, which can be quite heavy depending on, how to say, the size of your data. And in this case the training data and the eval data are no longer on your local file system; they're in a Google Cloud Storage location.
So what we've done is upload the files to that location and said: from now on, because you're in a cloud environment and you want fast access rather than network latency, execute over there. So what you're going to get, if you do it a couple of times, let me see... What you see here is an overview of the jobs that have been executed. A job is nothing else than a REST request that we're making to the server; that's it. In the end it starts executing, and it writes its output to Google Cloud Storage.
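Since a job really is just a REST request, it can be sketched as the JSON body you would POST to the Cloud ML Engine jobs endpoint. This is a hedged reconstruction from the v1 API of that period; the job name, bucket, and package paths are made up:

```python
import json

job = {
    "jobId": "census_training_1",  # made-up job name
    "trainingInput": {
        "pythonModule": "trainer.task",                    # placeholder
        "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
        "region": "us-central1",
        "scaleTier": "STANDARD_1",
        "args": ["--train-files", "gs://my-bucket/adult.data.csv"],
    },
}

# POSTed to something like:
#   https://ml.googleapis.com/v1/projects/<project>/jobs
body = json.dumps(job)
print(body)
```

Note that the training data referenced in `args` now lives on Cloud Storage, matching the upload step described above.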
And there's a reason why you don't want to stay in single-worker training mode: of course there's a way to do this in a distributed way as well, and distributed computing is very interesting in the cloud, because you can quite easily decide to start scaling up to certain scale tiers that we know work. This is a default scale tier, for a situation with a deep neural network like this and a modest set of workers and parameter servers; you can start by using that standard distributed one. We also have other configurations, so with just one flag you can say, okay, I'll start using GPUs, and you can actually start using TPUs as well if you want to.
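Here the "one flag" is the job's scale tier. A hedged sketch using tier names from the Cloud ML Engine documentation of that era (BASIC_GPU requests a GPU worker; the module and package flags are placeholders):

```python
def submit_training_cmd(scale_tier):
    """Same training code; only the hardware behind it changes."""
    return [
        "gcloud", "ml-engine", "jobs", "submit", "training", "census_job",
        "--module-name", "trainer.task",   # placeholder
        "--package-path", "trainer/",      # placeholder
        "--region", "us-central1",
        "--scale-tier", scale_tier,
    ]

print(" ".join(submit_training_cmd("STANDARD_1")))  # CPU cluster
print(" ".join(submit_training_cmd("BASIC_GPU")))   # GPU worker
```

The application itself is untouched; the tier just tells the service what machines to run it on.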
So the interesting result is that...
[Music]
there might be some noise-cancellation situation here. Ten minutes to go? Oh yeah, I think that's the cue. So what you see here is that I ran the census job once as a single run, and you can also see that it took about 10 to 12 minutes; but with just one flag I made it possible to run the distributed version of the same code in six minutes. So imagine the time savings. And we were talking about the case where Google Translate took about a week to train the version that you're all using; with TPUs you could do it in a couple of hours. That's really reducing days to hours at that scale, and that's just by setting a flag.
So you experiment, you play with it, and you see the output. Then the next step is: okay, I finally have a distributed model trained, now what? (Can you go to the next slide? One more. Sorry, one more.) So these are the pods of TPUs that we can actually offer in the cloud. They're eleven and a half petaflops per pod, so if you're really going to train something, this is the way to go. And this is, you know, the powerful thing about TensorFlow: you can just take what you've created and move it onto these different architectures, just by changing those flags that Rokesh just showed you.
Okay, so now we have a trained model; what do you want to do with it? Put it in production, right? And that should be easy, as easy as, you all know how to do Java web application development, or Node.js, or whatever; it should be that easy. So what we said is: first of all, you create a model in your cloud environment. That's the first statement you have, create model, with a model name, which can be anything, and a region. The reasons for the US region here have to do with latency, but that's a different story we can talk about later on. And then what you do is create a version of your model, because you might be tuning it, right? You might be playing around with it, changing the format, changing the network. So in the end it's possible for you to have a trained model deployed in the cloud that you can start using for execution.
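The two deployment statements just described can be sketched as gcloud commands (the model name, region, and exported model path are placeholders; the real path would come out of the training job):

```python
MODEL = "census"
EXPORT_DIR = "gs://my-bucket/census/export/1"  # placeholder export path

# Step 1: create the model container in a region.
create_model = (
    "gcloud ml-engine models create " + MODEL + " --regions us-central1"
)

# Step 2: create a version of it, pointing at the exported model.
create_version = (
    "gcloud ml-engine versions create v1 --model " + MODEL
    + " --origin " + EXPORT_DIR
)

print(create_model)
print(create_version)
```

The model is just a named container; the version is the thing that actually holds trained weights, which is why a deployed model always has a version behind it.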
So what does it look like? Here I have a model, the census model, and it has a specific version. I don't think you will ever see a model without a version; there's always a version behind it. And in the end it's deployed, it's available, and you can start using it. So what does it mean to use it? The interesting part is this: I showed you how to go from code in Datalab, to training locally, to training in the cloud, to a deployed version; the next step is to test the prediction. And the nice part is that it's all REST-based, so if you have an application that runs in any language other than Python, no problem: you have a REST endpoint, you call it, you get an answer back, and you can start using it, start showing it to people, or use it for whatever calculations you like. But in this case I'm going to
show you live as well. Let me close this one. If I run test predict, I'm going to fire it off now, it's in fact sending a test file with just one entry, in this case the entry of a person in the US with those fields in it, to the Google cloud to make a prediction; I've also specified which version number, so you can play with that. And in a couple of seconds, there it is: it predicted that it's in class zero. If you look at the code, instead of saying larger than 50k or smaller than 50k, it categorizes the output into the numbers zero and one, and with a quite high probability of 0.99 it says that this entry is in class zero, so less than 50k. So I guess that's my demo, right? Yeah.
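That prediction call can be sketched as the JSON body sent to a version-specific predict endpoint. The field names here are a made-up subset of the census columns, and the project, model, and version in the URL are placeholders:

```python
import json

# One instance per prediction; the service returns a class and probabilities.
request_body = {
    "instances": [
        {"age": 25, "workclass": "Private",
         "hours_per_week": 40, "native_country": "United-States"},
    ]
}

# POSTed to something like:
url = ("https://ml.googleapis.com/v1/projects/my-project"
       "/models/census/versions/v1:predict")

print(json.dumps(request_body))
```

Because it is plain JSON over HTTP, any language with an HTTP client can call the deployed model, which is exactly the point being made above.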
Okay, let's just move back to the architecture. How much time have we got left on the clock? Six minutes, so we'll do this very quickly. As Rokesh just showed you, we've basically gone through this whole pipeline. At the beginning we looked at how we could use something like Dataflow to actually process the data and do that feature engineering, and we've just looked at how we can use Cloud ML. The really important part I want to emphasize is that we're making a model in TensorFlow, but we're using the Cloud ML API to run and train that model, both locally and in the cloud, and to make the predictions. Using the Cloud ML API really makes that hurdle of going from your own local instance to a cloud or distributed setup very easy. And again, if you want to do something with the distribution, you don't want to do all that plumbing; if you go down into those lower layers you're going to do a lot more plumbing. So you use the estimators, and if you don't find an estimator that fits what you want to use, you create a custom estimator. There's a whole tutorial which supports this as well, so we can put the links on the slides and share those out too. So if you want to actually go through this example: it's not the best example for learning what a neural network looks like or how to make one, but it really shows you how to actually get a neural network from, you know, your local machine to training it in the cloud and being able to make predictions with it as well. And I think we're good.