Azure Machine Learning using Cognitive Services

Learn the Azure Machine Learning Studio, Azure Bot Service, Video Indexing service, Computer Vision for OCR and more!
4.2 (162 ratings)
Course Ratings are calculated from individual students’ ratings and a variety of other signals, like age of rating and reliability, to ensure that they reflect course quality fairly and accurately.
1,887 students enrolled
Last updated 2/2020
English
English [Auto]
Current price: $69.99 Original price: $99.99 Discount: 30% off
30-Day Money-Back Guarantee
This course includes
  • 17.5 hours on-demand video
  • 1 article
  • 2 downloadable resources
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What you'll learn
  • By the end of the course you'll be able to add powerful Machine Learning functionality to your apps with simple REST requests!
  • You will know how Microsoft Cognitive Services provide advanced Machine Learning functionality to any kind of app.
  • You will be able to create amazing apps (or improve those you’ve already created) by adding superior functionality and superpowers such as language understanding for executing user requests, verifying users through speech, detecting faces, identifying people in images, and much more!
  • You will be able to bring Machine Learning models into play to simplify business processes when moderating content.
  • You’ll be able to create better experiences for your users by adding more interfaces and functionalities such as bots.
Course content
88 lectures 17:36:15
+ Intro to Machine Learning
5 lectures 47:09

In this lecture you will learn about some machine learning models and the two categories into which machine learning is classified: supervised and unsupervised learning.

Preview 15:36

In this lecture we will talk about the differences between artificial intelligence and machine learning, and even talk about deep learning and how it compares to the other two, as well as walk through a scenario describing what each may be able to accomplish.

Artificial Intelligence vs Machine Learning
07:00

In this lecture we will take a look at some common applications and talk about how they use machine learning and artificial intelligence to improve the user experience, as well as get a sense of what you can accomplish with ML and AI.

Examples of Apps that Use Machine Learning
06:24

In this lecture we will explore the Cognitive Services that you will learn to use and the tasks they can help with, as well as the Azure Machine Learning Studio and how it works to provide powerful custom ML models to you.

Preview 09:32

In this lecture you will follow the steps required to get your Azure subscription ready along with 200 USD of credit for your first 30 days of usage, as well as access to some services for the first 12 months. 

Getting your Azure Subscription Ready
08:37
+ Intro to Python
5 lectures 44:48

After this lecture you will understand the main reasons why we will use Python, as well as a bit more about how we will work throughout the course and the options you have in case you want to use a different language to test the Azure services.

Why will we use Python?
07:38

In this lecture you will create your Azure Notebooks account so you have access to the service in which we will be coding our Python functionality tests, as well as create your first Python notebook in the form of an .ipynb file.

Azure Notebooks
05:44

In this lecture you will practice creating variables with Python, take a look at how different types cause operations to behave differently, and look at if statements, along with elif and else, to make boolean evaluations on some values.

Variables and Statements
13:15
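As a quick illustration of what this lecture covers, here is a minimal sketch of variables of different types and an if/elif/else statement (the values are arbitrary examples):

```python
# Variables of different types and a simple if/elif/else decision.
price = 9.99          # float
quantity = 3          # int
total = price * quantity

if total > 50:
    print("Free shipping")
elif total > 20:
    print("Discounted shipping")
else:
    print("Standard shipping")
```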

In this lecture you will learn about lists in Python, how to define them, what values can be added to them, and how to access the values that are inside the lists themselves, as well as identify potential errors when handling them. 

Working with Lists
08:01
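For reference, a short sketch of the kind of list operations described above (the values are arbitrary):

```python
# Defining a list, reading by index, and a common error to watch for.
scores = [80, 95, 72]
scores.append("N/A")      # lists can hold mixed types
print(scores[0])          # 80 — the first element
print(scores[-1])         # "N/A" — the last element
# print(scores[10])       # would raise IndexError: list index out of range
```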

In this lecture you will create your own function with Python and learn how to use it and return values, as well as how to use negative indexes to get values from a list, perform slicing, and understand Python lists better. 

Creating Functions and Slicing Lists
10:10
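A small sketch of a function definition, negative indexing, and slicing, along the lines of what the lecture covers (the list and function are arbitrary examples):

```python
def average(values):
    """Return the average of a list of numbers."""
    return sum(values) / len(values)

temperatures = [21, 23, 19, 25, 24]
print(average(temperatures))     # 22.4
print(temperatures[-1])          # 24 — negative indexes count from the end
print(temperatures[1:4])         # [23, 19, 25] — slicing a sub-list
print(temperatures[:2])          # [21, 23] — the first two values
```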
+ Text Analytics
5 lectures 01:17:32

In this lecture you will provision an Azure service from the Cognitive Services pool that will allow us to perform text analysis. Among the things this service allows are sentiment analysis, language detection, and entity detection.

The Text Analytics API
11:31

In this lecture you will use the Text Analytics API to perform sentiment analysis on some text. You will make the request through an HTTP POST, setting the body, the headers, and some parameters, and then evaluate the result to better understand the sentiment reflected in the text.

Performing Sentiment Analysis
23:53
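To give a sense of what such a request looks like, here is a hedged sketch against the Text Analytics v2.1 sentiment endpoint; the key, region (westus), and sample texts are placeholders, so adjust them to your own resource:

```python
import requests

subscription_key = "YOUR_TEXT_ANALYTICS_KEY"   # hypothetical placeholder
endpoint = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.1/sentiment"

headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/json",
}
body = {
    "documents": [
        {"id": "1", "language": "en", "text": "I loved this course, it was great!"},
        {"id": "2", "language": "en", "text": "The service was slow and disappointing."},
    ]
}

response = requests.post(endpoint, headers=headers, json=body)
for doc in response.json()["documents"]:
    # Scores close to 1 are positive, close to 0 are negative.
    print(doc["id"], doc["score"])
```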

In this lecture you will use an artificial intelligence model that extracts key phrases from text documents, so humans and machines can easily understand the context of these documents by simply glancing over a few phrases.

Preview 08:20

In this lecture you will learn about removing clutter such as stopwords, punctuation, and numbers to make the text easier to read and to process by a machine.

Removing Stopwords and Other Techniques
14:32
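One common way to do this in Python is with NLTK; the exact library used in the lecture may differ, but the idea looks roughly like this (the sample sentence is arbitrary):

```python
import string

import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download("stopwords")
nltk.download("punkt")

text = "This is a simple example of removing clutter from a piece of text in 2020."
tokens = word_tokenize(text.lower())

stop_words = set(stopwords.words("english"))
cleaned = [
    t for t in tokens
    if t not in stop_words           # drop stopwords like "this", "is", "a"
    and t not in string.punctuation  # drop punctuation
    and not t.isdigit()              # drop numbers
]
print(cleaned)
```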

In this lecture you will process a body of text through some stemming so that identifying the key words and topics inside the document becomes more efficient. You will also chart the frequency distribution of the words in the form of a bar chart using Python.

The Stemming Technique
19:16
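A rough sketch of the same idea using NLTK's PorterStemmer and FreqDist with a matplotlib bar chart; the sample sentence is arbitrary and the lecture's exact code may differ:

```python
import matplotlib.pyplot as plt
import nltk
from nltk.probability import FreqDist
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

nltk.download("punkt")

text = "Runners run quickly because running keeps the runner healthy"
stemmer = PorterStemmer()
stems = [stemmer.stem(t) for t in word_tokenize(text.lower())]

# Frequency distribution of the stemmed words, charted as a bar plot.
fdist = FreqDist(stems)
words, counts = zip(*fdist.most_common(10))
plt.bar(words, counts)
plt.xlabel("stem")
plt.ylabel("frequency")
plt.show()
```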
+ Machine Learning for Spell Checking
3 lectures 31:28

In this lecture you will set up a Bing Spell Check service that will be helpful when checking for spelling and grammar errors in text. Even when there are no spelling errors, based on context, this service can identify possible errors and offer recommendations.

Creating a Spell Checking Service
06:47

In this lecture we will take a look at how to implement the Bing Spell Check functionality through an Azure Python notebook. By sending the correct parameters through the URL, instead of through the body, we will receive a JSON-formatted result with a series of suggestions on how to correct a misspelled word.

Preview 18:11
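A hedged sketch of such a request against the Bing Spell Check v7.0 endpoint, with the parameters passed in the URL; the key and sample text are placeholders:

```python
import requests

subscription_key = "YOUR_SPELL_CHECK_KEY"   # hypothetical placeholder
endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/spellcheck"

params = {
    "text": "Machien lerning is fun",  # the text to check, sent as a URL parameter
    "mode": "spell",
    "mkt": "en-US",
}
headers = {"Ocp-Apim-Subscription-Key": subscription_key}

response = requests.get(endpoint, headers=headers, params=params)
for token in response.json()["flaggedTokens"]:
    suggestions = [s["suggestion"] for s in token["suggestions"]]
    print(token["token"], "->", suggestions)
```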

In this lecture you will add some context awareness to the service so it can also identify words that may need to be changed, even if they are not misspelled. 

Spell Checking with Context Awareness
06:30
+ Text and Speech Translation
7 lectures 01:50:48

In this lecture you will be setting up the Text Translator service from Azure Cognitive Services and taking a look at its documentation, familiarizing yourself with the required parameters, the headers, and some options that will help you better understand how this service translates text to almost any language.

Text Translator Service Setup
10:10

In this lecture you will implement the Text Translation service by sending some translation requests from an automatically detected language to another. You will even learn how, in one single request, you can translate more than one text to more than one language at the same time.

Implementing Text Translation
17:23
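As an illustration, a sketch of a single request to the Translator Text API v3.0 that sends two texts and asks for two target languages; the key, region, and texts are placeholders:

```python
import requests

subscription_key = "YOUR_TRANSLATOR_KEY"   # hypothetical placeholder
endpoint = "https://api.cognitive.microsofttranslator.com/translate"

params = {"api-version": "3.0", "to": ["es", "fr"]}   # two target languages at once
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Ocp-Apim-Subscription-Region": "westus",          # needed for regional resources
    "Content-Type": "application/json",
}
body = [
    {"Text": "Hello, how are you?"},      # more than one text in a single request
    {"Text": "Machine learning is fun."},
]

response = requests.post(endpoint, params=params, headers=headers, json=body)
for item in response.json():
    print(item["detectedLanguage"]["language"],
          [t["to"] + ": " + t["text"] for t in item["translations"]])
```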

In this lecture you will use the Text Analytics API again to identify languages from text. Later on, the identified language will be passed as the value of the from parameter to the Text Translator service that we used in the previous lecture, so you can see that the result will be the same.

Identifying languages
14:24
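A short sketch of language detection with the Text Analytics v2.1 languages endpoint; the key, region, and text are placeholders:

```python
import requests

subscription_key = "YOUR_TEXT_ANALYTICS_KEY"   # hypothetical placeholder
endpoint = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.1/languages"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
body = {"documents": [{"id": "1", "text": "Bonjour tout le monde"}]}

response = requests.post(endpoint, headers=headers, json=body)
detected = response.json()["documents"][0]["detectedLanguages"][0]
print(detected["iso6391Name"])   # e.g. "fr" — can be used as the Translator's "from" value
```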

In this lecture we will create a new Azure Cognitive Service in the form of the Speech Translator service, which will help us add functionality for translating voice almost in real time, and even create some audio with the translated text spoken back to us in a robotic voice that we can choose!

The Speech Translator Service
08:22

In this lecture you will install Jupyter notebooks directly on your computer so they can use the microphone to receive the speech that will be streamed to the service. You will also install a couple of packages that will come in handy when using the microphone from a new Python notebook.

Preparing for Speech and Microphone Use
08:09

In this lecture you will set up the functions that will create the headers necessary for our example to stream audio to the Speech Translation Cognitive Service, as well as set up the web socket request.

Making Speech Translation Requests - Part 1
25:17

In this lecture we will create the callback functions that a web socket service is going to call when it is opened, when some error occurs, when data is received, and finally when it is closed. These functions will contain the functionality that sets up the communication between the client and the server, executes the streaming, and creates the WAV audio files with the received translation.

Making Speech Translation Requests - Part 2
27:03
+ Language Understanding Intelligent Service
4 lectures 50:35

In this lecture you will create a Language Understanding Service from Azure, as well as set up a new account in the Language Understanding (LUIS) Portal, where you will be creating all the apps, their intents and entities to work with this intelligent service. 

The Language Understanding Service
10:50

In this lecture you will be creating intents from the LUIS Portal after creating the Language Understanding app that we will work with. After the intents, you will assign some entities to the utterances, which will be used to identify objects or subjects in the commands.

Creating Intents and Entities
08:49

In this lecture you will be training the model with the previous information about the intents, entities, and utterances. Then, you will test the service directly from the LUIS Portal and identify some other utterances that the service does understand, and some that it doesn't. Finally, we will improve the service based on the utterances it did not get and publish it so it is available as a REST service.

Train and Publish the Service
10:42

In this lecture you will be consuming the service from a Python notebook, learning how to request text input through the input function and, more importantly, how to retrain the LUIS Cognitive Service based on user inputs so it becomes more intelligent as it receives more data, as any machine learning model should.

Consuming the LUIS Service
20:14
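A hedged sketch of querying a published LUIS v2.0 endpoint from Python; the app id, key, region, and example utterance are placeholders, and the retraining step is not shown here:

```python
import requests

# Hypothetical placeholders — use your own app id, key, and region.
app_id = "YOUR_LUIS_APP_ID"
prediction_key = "YOUR_LUIS_KEY"
endpoint = f"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{app_id}"

query = input("What would you like to do? ")   # e.g. "turn on the kitchen lights"
params = {"subscription-key": prediction_key, "q": query}

response = requests.get(endpoint, params=params)
result = response.json()
print("Intent:", result["topScoringIntent"]["intent"])
for entity in result["entities"]:
    print("Entity:", entity["entity"], "->", entity["type"])
```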
+ Speech to Text, Text to Speech
10 lectures 01:59:52

In this lecture you will create a new Cognitive Service from Bing that will allow you to make speech-to-text and text-to-speech conversions, generating both audio and text files for you to use inside your applications.

Creating the Azure Speech Service
06:11

In this lecture you will be converting spoken audio to text that you could use as an automatic transcription app that uses either the microphone to detect speech or an audio file that you may already have with recorded audio. 

Getting Text from Speech
15:01
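The exact endpoint depends on the speech service flavor and region you provisioned; as a rough sketch under those assumptions, the short-audio REST endpoint of the Azure Speech service can be called like this (the key, region, and audio file are placeholders):

```python
import requests

# Hypothetical placeholders; adjust to your resource's region and service.
subscription_key = "YOUR_SPEECH_KEY"
region = "westus"
endpoint = (f"https://{region}.stt.speech.microsoft.com/"
            "speech/recognition/conversation/cognitiveservices/v1")

headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
}
params = {"language": "en-US"}

# A short WAV recording (16 kHz, mono, PCM) made with the microphone or taken from disk.
with open("recording.wav", "rb") as audio_file:
    response = requests.post(endpoint, params=params, headers=headers, data=audio_file)

print(response.json().get("DisplayText"))   # the transcribed text
```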

In this lecture you will learn how to request an authorization token from an Azure service, which will be needed in the next lectures to make requests to the service correctly. You will do this through a Python function that will either generate a token or reuse a previous one if it is still valid (hasn't expired).

Issuing an Authorization Token
07:42
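A minimal sketch of such a helper against the sts issueToken endpoint, assuming a westus resource and a placeholder key; tokens are valid for roughly ten minutes, so the function refreshes a bit earlier:

```python
import time

import requests

subscription_key = "YOUR_SPEECH_KEY"     # hypothetical placeholder
token_endpoint = "https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken"

_cached_token = None
_token_issued_at = 0


def get_token():
    """Return a bearer token, reusing the cached one while it is still valid."""
    global _cached_token, _token_issued_at
    # Refresh after 9 minutes to stay safely inside the ~10 minute validity window.
    if _cached_token is None or time.time() - _token_issued_at > 9 * 60:
        response = requests.post(
            token_endpoint,
            headers={"Ocp-Apim-Subscription-Key": subscription_key},
        )
        _cached_token = response.text
        _token_issued_at = time.time()
    return _cached_token
```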

In this lecture you will be sending text to a service and requesting it to return an MP3 file with that text turned into speech, with the help of the same Bing Speech API. That MP3 file will be saved locally from the Jupyter notebook and played to listen to the results.

Getting Audio from Text
08:31
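A hedged sketch of a text-to-speech request that reuses the get_token() helper from the previous lecture; the region, voice name, and output format are assumptions, so check the service documentation for the ones available to you:

```python
import requests

# Hypothetical placeholders; get_token() is the token helper from the previous lecture.
region = "westus"
tts_endpoint = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1"

headers = {
    "Authorization": "Bearer " + get_token(),
    "Content-Type": "application/ssml+xml",
    "X-Microsoft-OutputFormat": "audio-16khz-128kbitrate-mono-mp3",
    "User-Agent": "azure-ml-course-demo",
}
# The text to speak is wrapped in SSML, including the (assumed) voice name.
ssml = """
<speak version='1.0' xml:lang='en-US'>
  <voice xml:lang='en-US' name='en-US-JessaRUS'>
    Hello! This text will come back as an MP3 file.
  </voice>
</speak>
"""

response = requests.post(tts_endpoint, headers=headers, data=ssml.encode("utf-8"))
with open("speech.mp3", "wb") as audio_file:
    audio_file.write(response.content)   # play this file to hear the result
```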

In this lecture you will create a new service and explore its documentation to understand what will be possible when using it. This Cognitive Service will allow users to be verified through speech, and to be identified in any audio file that may be recorded.

The Speaker Recognition Service
05:24

In this lecture you will learn a bit more about how this service is going to work. To verify users, they will be required to speak a specific phrase, and in this lecture we will get the list of phrases that users may use to be enrolled and later verified by this Azure Cognitive Service.

The Verification Phrases
12:43

In this lecture we will create a profile for the Speaker Recognition API to be able to eventually verify users. This profile will be created through the verification profile endpoint, which returns a unique id that will later be used by the enrollment endpoint to enroll users through audio.

Performing the Enrollment of Users - Part 1
12:16

In this lecture you will continue the enrollment of users by setting up a new audio file recorded through the microphone that should contain the user speaking one of the verification phrases. This audio file will be sent to the service, which will return the remaining enrollments (the Speaker Verification API requires 3) as well as whether this user is still enrolling or is already successfully enrolled.

Performing the Enrollment of Users - Part 2
25:24

In this lecture you will be using a couple of endpoints to get the verification profile that was previously created, as well as delete the verification profiles that are not going to be useful and that were created by mistake 

Preview 13:22

In this lecture you will generate a couple of functions that will help you verify users. First, a function that creates an audio file and saves it to the current working directory. Then a function that will get the profile id against which the user must be verified, and finally simply calling the service that will make the evaluation against the previous enrollments.

Verifying Users through Audio Files
13:18
+ Computer Vision
6 lectures 01:30:52

In this lecture you will be creating a new service in the form of the new Computer Vision API, updated to version 2.0. You will identify the things that you can do with it, such as describing images, getting categories and tags, and analyzing text in images (OCR), even handwritten text.

The Computer Vision API
05:51

In this lecture you will use the analysis endpoint from the Computer Vision API to identify landmarks in images, get descriptions and tags, get categories, and even identify people, if they are celebrities. This analysis will even return some information that can be useful to draw squares around the faces that were identified in the picture 

Analyzing an Image
23:36
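A sketch of a call to the Computer Vision v2.0 analyze endpoint; the key, region, image URL, and the particular visualFeatures requested are placeholders:

```python
import requests

subscription_key = "YOUR_COMPUTER_VISION_KEY"   # hypothetical placeholder
endpoint = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"

params = {
    "visualFeatures": "Description,Tags,Categories,Faces",
    "details": "Celebrities,Landmarks",
}
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/json",
}
body = {"url": "https://example.com/some-image.jpg"}   # hypothetical image URL

analysis = requests.post(endpoint, params=params, headers=headers, json=body).json()

print(analysis["description"]["captions"][0]["text"])   # a natural-language description
print([t["name"] for t in analysis["tags"]])
for face in analysis["faces"]:
    print(face["age"], face["gender"], face["faceRectangle"])  # useful for drawing boxes
```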

In this lecture you will be retrieving descriptions of the images that you send to the service, along with some tags and categories, and the possibility to establish how many descriptions you need. With this endpoint from the Computer Vision API, it is easy to describe what is happening in your images.

Getting an Image Description
08:15
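A short sketch against the v2.0 describe endpoint, asking for several candidate descriptions; the key, region, and image URL are placeholders:

```python
import requests

subscription_key = "YOUR_COMPUTER_VISION_KEY"   # hypothetical placeholder
endpoint = "https://westus.api.cognitive.microsoft.com/vision/v2.0/describe"

params = {"maxCandidates": 3}    # how many candidate descriptions to return
headers = {"Ocp-Apim-Subscription-Key": subscription_key,
           "Content-Type": "application/json"}
body = {"url": "https://example.com/some-image.jpg"}   # hypothetical image URL

description = requests.post(endpoint, params=params, headers=headers, json=body).json()["description"]
print(description["tags"])
for caption in description["captions"]:
    print(caption["text"], caption["confidence"])
```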

The OCR endpoint of the Azure Computer Vision API Cognitive Service is going to allow you to identify words in an image, and in this lecture you will implement its functionality. You will see how it returns the regions, lines, and words that are identified, and that it contains information about the location and size of these chunks of words.

Implementing Optical Character Recognition
11:39
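A sketch of an OCR request and of walking the regions/lines/words hierarchy in the response; the key, region, and image URL are placeholders:

```python
import requests

subscription_key = "YOUR_COMPUTER_VISION_KEY"   # hypothetical placeholder
endpoint = "https://westus.api.cognitive.microsoft.com/vision/v2.0/ocr"

params = {"language": "unk", "detectOrientation": "true"}
headers = {"Ocp-Apim-Subscription-Key": subscription_key,
           "Content-Type": "application/json"}
body = {"url": "https://example.com/sign.jpg"}   # hypothetical image with printed text

ocr_result = requests.post(endpoint, params=params, headers=headers, json=body).json()

# The response is nested: regions contain lines, and lines contain words.
for region in ocr_result["regions"]:
    for line in region["lines"]:
        text = " ".join(word["text"] for word in line["words"])
        print(text, "| boundingBox:", line["boundingBox"])   # "x,y,width,height"
```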

In this lecture you will take the JSON response from the OCR endpoint of the Computer Vision API from the Azure Cognitive Services and draw the bounding boxes for regions, lines, and words in the image. You will first learn to download an image with the help of Python, and then draw on it, ending up with an image in which the boxes around the identified text are marked.

Drawing Lines in an Image
23:25
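A rough sketch of the drawing step using Pillow, assuming ocr_result is the JSON returned by the OCR call in the previous lecture and image_url points to the same (hypothetical) image:

```python
from io import BytesIO

import requests
from PIL import Image, ImageDraw

# Download the same image that was sent to the OCR endpoint.
image_url = "https://example.com/sign.jpg"
image = Image.open(BytesIO(requests.get(image_url).content))
draw = ImageDraw.Draw(image)

for region in ocr_result["regions"]:
    for line in region["lines"]:
        # boundingBox comes back as an "x,y,width,height" string.
        x, y, w, h = (int(v) for v in line["boundingBox"].split(","))
        draw.rectangle([x, y, x + w, y + h], outline="red", width=2)

image.save("annotated.png")   # the image now has boxes around the identified text
```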

In this lecture you will be using the recognition endpoint of the Computer Vision API Cognitive Service, which will allow you to identify printed and handwritten text in images. This service will require you to first get one of the headers from the response, and then make another request to the URL in that header, for you to finally get the identified text along with its bounding boxes.

Implementing Handwriting Recognition
18:06
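A hedged sketch of that two-step flow against the v2.0 recognizeText endpoint: submit the image, read the Operation-Location header, then poll it until the operation finishes (key, region, and image URL are placeholders):

```python
import time

import requests

subscription_key = "YOUR_COMPUTER_VISION_KEY"   # hypothetical placeholder
endpoint = "https://westus.api.cognitive.microsoft.com/vision/v2.0/recognizeText"

headers = {"Ocp-Apim-Subscription-Key": subscription_key,
           "Content-Type": "application/json"}
params = {"mode": "Handwritten"}
body = {"url": "https://example.com/handwritten-note.jpg"}   # hypothetical image

# First request: the service answers with an Operation-Location header to poll.
submit = requests.post(endpoint, params=params, headers=headers, json=body)
operation_url = submit.headers["Operation-Location"]

# Second request(s): poll that URL until the recognition has finished.
while True:
    result = requests.get(operation_url, headers=headers).json()
    if result.get("status") in ("Succeeded", "Failed"):
        break
    time.sleep(1)

for line in result["recognitionResult"]["lines"]:
    print(line["text"], line["boundingBox"])
```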
+ Face Detection
7 lectures 01:16:49

In this lecture you will create a new Azure Cognitive Service in the form of the Face API, and explore what is possible with this service. You will learn that this service is able to identify faces, but also identify the people in pictures, and classify them into different groups that you may create. 

The Face API
06:40

In this lecture we will use the Face API's detect endpoint to identify certain attributes of a face that exists inside an image. In addition to these attributes, the service will be able to identify the landmarks of the face, so you get information about whether this face has makeup, whether it has glasses, the age and gender of the person, and where the eyes and the mouth are: very accurate information regarding the face.

Detecting Faces
10:24
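A sketch of a detect call requesting landmarks and a few attributes; the key, region, image URL, and attribute list are placeholders:

```python
import requests

subscription_key = "YOUR_FACE_API_KEY"   # hypothetical placeholder
endpoint = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

params = {
    "returnFaceLandmarks": "true",
    "returnFaceAttributes": "age,gender,glasses,makeup,emotion",
}
headers = {"Ocp-Apim-Subscription-Key": subscription_key,
           "Content-Type": "application/json"}
body = {"url": "https://example.com/portrait.jpg"}   # hypothetical image URL

faces = requests.post(endpoint, params=params, headers=headers, json=body).json()
for face in faces:
    print(face["faceRectangle"])                      # where the face is in the image
    print(face["faceAttributes"]["age"],
          face["faceAttributes"]["gender"],
          face["faceAttributes"]["glasses"])
    print(face["faceLandmarks"]["pupilLeft"])         # e.g. the left eye's position
```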

In this lecture you will be creating a people group inside your Azure Face API Cognitive Service so that, eventually, the people you add to the group can be identified in pictures, along with their names and the group they belong to.

Creating a People Group
11:20

In this lecture you will be adding people to a group so the Face API Cognitive Service can classify images into groups and the people who are in them, as well as identify the names of people that have been previously uploaded through the method that we will use in this lecture.

Adding a Person to a Group
07:19

In this lecture you will continue to set up the identification of people in your Face API Cognitive Service by assigning faces to people, uploading images to the service using Python. After this, the model should be ready to be trained.

Adding Faces to a Person
17:16

In this lecture, finally, after registering groups within the Face API Cognitive Service, adding people to those groups, and assigning faces to those people, we will use the detect endpoint to get a face id from an image, and then use that face id with the identify endpoint to get information about who is in that image, based on the previously registered people.

Identifying People
19:26
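Putting the pieces together, a hedged sketch of the detect-then-identify flow against the Face API v1.0, assuming a placeholder key, a group id of my-team (created and trained in the previous lectures), and a hypothetical image URL:

```python
import requests

subscription_key = "YOUR_FACE_API_KEY"   # hypothetical placeholder
base = "https://westus.api.cognitive.microsoft.com/face/v1.0"
headers = {"Ocp-Apim-Subscription-Key": subscription_key,
           "Content-Type": "application/json"}
group_id = "my-team"                      # the people group created earlier

# 1. Detect: get face ids from the image we want to identify.
detect = requests.post(f"{base}/detect", headers=headers,
                       json={"url": "https://example.com/unknown-person.jpg"}).json()
face_ids = [face["faceId"] for face in detect]

# 2. Identify: ask who those face ids belong to within the group.
identify = requests.post(f"{base}/identify", headers=headers,
                         json={"personGroupId": group_id, "faceIds": face_ids}).json()

for result in identify:
    for candidate in result["candidates"]:
        # 3. Look up the person's name from the returned personId.
        person = requests.get(
            f"{base}/persongroups/{group_id}/persons/{candidate['personId']}",
            headers=headers).json()
        print(person["name"], "confidence:", candidate["confidence"])
```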

In this lecture we will quickly test our Face API cognitive service for identifying a different person, as well as implementing some emotion detection through the same detect endpoint that we used previously. With Python, we will continue to iterate through the registered people but also check the JSON for emotion values on happiness, sadness, contempt and others. 

Detecting Emotion in Faces
04:24
+ Content Moderation
8 lectures 01:24:24

In this lecture you will be creating a brand new Azure Cognitive Service in the form of the Content Moderator service, which will enable your applications to moderate violent text, profanity, pornography in videos, etc 

Content Moderator Service
06:58

In this lecture you will be implementing the Content Moderator Cognitive Service to identify personal information such as email and phone number that may be required in a marketplace scenario (and many others such as social apps for kids or corporate emails) to prevent users from sharing this kind of data. 

Moderating Personal Information
10:42
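A sketch of a ProcessText/Screen call with PII detection turned on; the key, region, and sample text are placeholders:

```python
import requests

subscription_key = "YOUR_CONTENT_MODERATOR_KEY"   # hypothetical placeholder
endpoint = ("https://westus.api.cognitive.microsoft.com/"
            "contentmoderator/moderate/v1.0/ProcessText/Screen")

params = {"PII": "true", "language": "eng"}
headers = {"Ocp-Apim-Subscription-Key": subscription_key,
           "Content-Type": "text/plain"}
text = ("Contact me at jane.doe@example.com or call 555-123-4567 "
        "to close the deal outside the marketplace.")

screen = requests.post(endpoint, params=params, headers=headers, data=text).json()
# Emails, phone numbers, addresses, and IP addresses found in the text.
print(screen.get("PII"))
```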

In this lecture you will implement, in your Azure notebook code, a new parameter for the Content Moderator Cognitive Service to return information about profanity in the form of a terms list of the bad words that your text includes. For this example, you will hide these terms by replacing them with asterisks.

Moderating Profanity
12:57

In this lecture you will use another one of the endpoints available inside the Content Moderator Cognitive Service that will allow you to identify racy and adult content in images, so you can decide to exclude them from your community. By sending an image to the service, you will be able to detect whether this image should be reviewed or removed from your storage right away.

Moderating Images
15:15
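A sketch of an Evaluate call for a (hypothetical) image URL, plus a simple policy on top of the returned flags; the key and region are placeholders:

```python
import requests

subscription_key = "YOUR_CONTENT_MODERATOR_KEY"   # hypothetical placeholder
endpoint = ("https://westus.api.cognitive.microsoft.com/"
            "contentmoderator/moderate/v1.0/ProcessImage/Evaluate")

headers = {"Ocp-Apim-Subscription-Key": subscription_key,
           "Content-Type": "application/json"}
body = {"DataRepresentation": "URL",
        "Value": "https://example.com/user-upload.jpg"}   # hypothetical image URL

evaluation = requests.post(endpoint, headers=headers, json=body).json()
print("Adult:", evaluation["IsImageAdultClassified"], evaluation["AdultClassificationScore"])
print("Racy: ", evaluation["IsImageRacyClassified"], evaluation["RacyClassificationScore"])

# A simple policy sketch: remove clearly adult images, send racy ones to review.
if evaluation["IsImageAdultClassified"]:
    print("Remove image")
elif evaluation["IsImageRacyClassified"]:
    print("Send to human review")
```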

In this lecture you will take a look at the Microsoft Content Moderator Portal, or Review Tool, which allows humans to verify the machine learning classification and make sure that content is correctly classified. This portal, or review tool, will be connected to the Azure Content Moderator Cognitive Service and will eventually be used by human reviewers to assess results.

The Content Moderator Portal
07:57

In this lecture, from the Content Moderator Portal, you will be creating a workflow that will be executed every time you upload an image to the service, for it to determine if a human should review an image. This will give your team greater control over what content gets posted in your application, without having to depend entirely on what a machine learning model from the Content Moderator Cognitive Service identifies. 

Creating a Moderation Workflow
10:17

In this lecture you will be creating a new job, which means executing a workflow, programmatically. This job will immediately return an id, but will take a few seconds to be ready. Once it is ready, you will be able to see whether or not a new review was created for a human reviewer to analyze, directly from the Content Moderator site.

Executing a Workflow Programmatically
07:37

After sending an image to a workflow, and after the workflow creates a review for a human to analyze it from the Content Moderator Review Tool, we will be able to get the result from this job and from the review. In this lecture, precisely this entire process will be executed: from sending an image to the Cognitive Service, to the human reviewing its content, to the review result being received back in Python code for analysis.

Getting a Job's Result
12:41
Requirements
  • Basic skills in any programming language.
  • No previous experience with Python is necessary. You will learn the basics of Python programming throughout the course.
Description

Has Microsoft’s Cognitive services piqued your interest, but you haven't been able to find a decent course that will teach you how to use those services effectively?

Or maybe you have just recognised how a valuable skill like machine learning can open up big opportunities for you as a developer.

Perhaps you just wanted to find out how to add "superpowers" to your programs to do amazing things like face detection, but had no idea how to go about it.

Whatever the reason that has brought you to this page, one thing is for sure; the information you are looking for is contained in this course!


Why learn about Azure Machine Learning?

Machine Learning is not only a hot topic but more excitingly, Python Developers who understand how to work with Machine Learning are in high demand!

Azure, combined with Microsoft Cognitive Services, is a huge opportunity for developers.

In this course you will learn how to add powerful Machine Learning functionality to your applications.

You’ll learn how Microsoft Cognitive Services provide advanced machine learning functionality for any kind of application

You will be able to create amazing apps that add “superpowers”, such as language understanding for executing user-requests, verification of users through speech, face detection, identification of people in images, and much more!

You will learn how to bring into play Machine Learning models to simplify business processes when moderating content.

And you will find out how you can create better experiences for your users by adding more interfaces and functionalities such as bots.

Adding these skills to your résumé will greatly boost your future job or freelancer opportunities.


Why choose this course?

  • This course covers a much wider range of Cognitive Services than other similar courses.

  • It guides you step by step through the usage of these services, instead of just covering their creation inside Azure.

  • Your instructor, Eduardo Rosas, has been working with Azure services for 4 years.

  • He has created many apps that leverage Azure services, including one with the implementation of Machine Learning models and image analysis that got him to the Microsoft Imagine Cup World-Wide finals.


The Key Topics Covered include :

  • The Azure Machine Learning Studio - how to create your own machine learning models with drag and drop interfaces.

  • The Azure Bot Service - how to create conversational bots that can be connected with Messenger, Slack, Skype, Telegram, and more.

  • The Video Indexing service - how to identify people and actions in a video, get a timestamped transcript of the conversation, translate it, and more.

  • Computer Vision for OCR - handwritten text recognition, image analysis.

  • Custom Vision for your own image classification model tailored completely to your needs.

  • Plus an additional nine (9) Cognitive Services!

You'll come away with a concrete understanding of Azure Machine Learning and how to maximize it to create superior apps with amazing functionalities!

The ideal student would be someone who has a basic knowledge of programming and wants to learn about machine learning using Azure and Microsoft Cognitive Services.

If you’re ready to take your skills and app functionalities to the next level, then today is the best day to get started!


Click the enroll button to sign up for the course and we look forward to seeing you on the inside!


Who this course is for:
  • This course is suitable for any developer who has some basic knowledge of programming (any language).
  • This course is for you if you want to quickly add Machine Learning functionality to your own programs. This course is not for developers wanting to learn the ins and outs of ML.
  • Anyone who finds complex math uncomfortable but is interested in Machine Learning and how to apply it to problems.