From the course: Microsoft Azure Data Scientist Associate (DP-100) Cert Prep
Configure compute for a job run
- [Instructor] One of the more important things that happens when you're dealing with complex packaging in a machine learning operation is that you need to bootstrap your compute. If you're running PyTorch, or some other machine learning library, and it isn't installed on your compute node, the run will fail. So whether you're running a job, an endpoint, or a notebook, you need to get that new library or new configuration onto your compute node. One way to do this, for, say, a worker node or a notebook, is to define a script inside the compute configuration and attach it so that it runs either every time the machine launches or once, on creation. Another thing you can do is customize an environment: take a base Docker image, tweak it a little bit, and use that for your worker nodes. So let's go ahead and take a look at how this would work inside of…
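The two approaches described above can be sketched as Azure ML CLI v2 YAML definitions. This is a minimal sketch, not the instructor's exact files: the resource names, VM size, `setup.sh`, and `conda.yml` are assumptions for illustration.

```yaml
# compute-instance.yml -- bootstrap a compute instance with a setup script
# that runs once when the machine is created (names are hypothetical)
$schema: https://azuremlschemas.azureedge.net/latest/computeInstance.schema.json
name: ci-pytorch
type: computeinstance
size: Standard_DS3_v2
setup_scripts:
  creation_script:
    path: setup.sh        # e.g. a script that runs "pip install torch"
---
# environment.yml -- customize an environment from a base Docker image
$schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
name: pytorch-env
image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04
conda_file: conda.yml     # conda spec pinning PyTorch and its dependencies
```

A job or endpoint can then reference the registered environment (e.g. `azureml:pytorch-env@latest`), so the worker node pulls the customized image instead of failing on a missing library.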
Contents
- Configure compute for a job run (2m 48s)
- Consume data from a data asset in a job (11m 48s)
- Run a script as a job by using Azure Machine Learning (1m 44s)
- Use MLflow to log metrics from a job run (3m 24s)
- Describe MLflow model output (2m 1s)
- Identify an appropriate framework to package a model (5m 26s)
- Describe MLflow model workflow in Databricks (5m 42s)