Custom Vision

Wildlife Classification with Custom Vision in Azure ML Studio

Now that you have set up your Custom Vision resources and gathered your credentials, it's time to work with the Jupyter Notebook in Azure ML Studio.

Configure the Jupyter Notebook

Go back to https://ml.azure.com and select (or create) a workspace.

Select Launch studio.

On the leftmost pane select Notebooks.

Select the camera_trap_classification.ipynb notebook that you uploaded earlier.

Scroll down the notebook and, in the third code cell under the Azure Custom Vision Configuration heading, enter your endpoint, both keys (training and prediction), and your prediction resource ID between the double quotes.

Azure Custom Vision Configuration cell showing where to enter your credentials
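If it helps to see the shape of that cell before you open the notebook, a Custom Vision configuration cell typically looks something like the sketch below. The variable names and placeholder values here are assumptions, so match them to the names the notebook actually uses; the looks_filled helper is a hypothetical addition for catching credentials left unfilled.

```python
# Hypothetical variable names — use the names in the notebook's configuration cell.
ENDPOINT = "https://example.cognitiveservices.azure.com/"  # your Custom Vision endpoint
TRAINING_KEY = "0123456789abcdef0123456789abcdef"          # key for the training resource
PREDICTION_KEY = "fedcba9876543210fedcba9876543210"        # key for the prediction resource
PREDICTION_RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<group>"
    "/providers/Microsoft.CognitiveServices/accounts/<prediction-resource>"
)

def looks_filled(value: str) -> bool:
    """Hypothetical sanity check: value is non-empty and not still a <placeholder>."""
    return bool(value) and not (value.startswith("<") and value.endswith(">"))

# Prints True once the endpoint and both keys have real values in them.
print(all(looks_filled(v) for v in (ENDPOINT, TRAINING_KEY, PREDICTION_KEY)))
```

Running a check like this before the rest of the notebook makes authentication errors easier to diagnose: an empty string or leftover placeholder fails here rather than deep inside an API call.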

Working with the Notebook

From this point on, you can work entirely in Azure ML. The Jupyter Notebook contains full instructions and code on how to proceed. If you haven't completed the ML Studio module, head there first for an overview of the ML Studio platform.

Important: The code is pre-written and extensive. Whether you are a Python expert or have never used the language, this resource is written so that you only need to run the cells in order: each cell carries out one set of operations, building towards the goal of making predictions on test images with a trained model, entirely outside Custom Vision's web UI.
That said, we highly encourage anyone with some Python experience to experiment with the code. You can always download a fresh copy of the notebook from the scryptIQ GitHub repository if you need to. Feel free to add your own images to the training and testing dictionaries, and create multiple projects of your own, to see if you can make independent use of Custom Vision.
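As a concrete starting point for that experimentation, the sketch below shows one plausible shape for a training dictionary mapping tags to image paths. The actual structure in the notebook may differ, and the tag names and file paths here are made up for illustration.

```python
# Hypothetical tag-to-image-paths mapping — mirror the structure the notebook uses.
training_images = {
    "zebra":    ["images/train/zebra_01.jpg", "images/train/zebra_02.jpg"],
    "elephant": ["images/train/elephant_01.jpg", "images/train/elephant_02.jpg"],
}

# Adding your own class is just another dictionary entry.
training_images["warthog"] = ["images/train/warthog_01.jpg"]

# Quick summary of what would be uploaded per tag.
for tag, paths in sorted(training_images.items()):
    print(f"{tag}: {len(paths)} image(s)")
```

Keeping the images organised by tag like this means new classes and new images can be added without touching any of the upload or training code further down the notebook.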