- 17.5 hours on-demand video
- 1 article
- 2 downloadable resources
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion
- By the end of the course you'll be able to add powerful Machine Learning functionality to your apps with simple REST requests!
- You will know how Microsoft Cognitive Services provide advanced Machine Learning functionality to any kind of app.
- You will be able to create amazing apps (or improve those you’ve already created) by adding superior functionalities and superpowers such as language understanding for executing user requests, verification of users through speech, face detection, identification of people in images, and so much more!
- You will be able to bring Machine Learning models into play to simplify business processes when moderating content.
- You’ll be able to create better experiences for your users by adding more interfaces and functionalities such as bots.
In this lecture you will learn about some machine learning models and the two categories into which machine learning is classified: supervised and unsupervised learning.
In this lecture we will explore the Cognitive Services that you will learn to use and the tasks they can help with, as well as the Azure Machine Learning Studio and how it works to provide you with powerful custom ML models.
In this lecture you will use the Text Analytics API to perform sentiment analysis on some text. You will make a POST HTTP request, setting the body, the headers and some parameters, and then evaluate the result to better understand the sentiment reflected in the text.
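A request of this kind could be sketched as follows. The endpoint URL and subscription key are placeholders for your own Azure resource values, and the exact route may vary by region and API version:

```python
# Sketch of a sentiment request to the Text Analytics API.
# ENDPOINT and KEY are placeholders -- substitute your own resource values.
import requests

ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"
KEY = "<your-subscription-key>"

def build_sentiment_payload(texts, language="en"):
    """Build the documents body the sentiment endpoint expects."""
    return {
        "documents": [
            {"id": str(i), "language": language, "text": t}
            for i, t in enumerate(texts, start=1)
        ]
    }

def analyze_sentiment(texts):
    """POST the documents and return the parsed JSON response."""
    headers = {
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    }
    response = requests.post(ENDPOINT, headers=headers,
                             json=build_sentiment_payload(texts))
    # Scores close to 1.0 indicate positive sentiment, close to 0.0 negative.
    return response.json()
```

The body shape (a `documents` list with `id`, `language` and `text` per entry) follows the pattern used throughout the Text Analytics endpoints.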
In this lecture you will use an artificial intelligence model that extracts key phrases from text documents, so that humans and machines can easily understand the context of these documents by simply glancing over a few phrases.
In this lecture we will take a look at how to implement the Bing Spell Check functionality through an Azure Python Notebook. By sending the correct parameters through the URL, instead of through the body, we will receive a JSON-formatted result with a series of suggestions on how to correct a misspelled word.
In this lecture you will set up the Text Translator service from Azure Cognitive Services and take a look at its documentation, familiarizing yourself with the required parameters, the headers and some options that will help you better understand how this service will translate text into almost any language.
In this lecture you will implement the Text Translation service by sending some translation requests from an automatically detected language to another. You will even learn how, in a single request, you can translate more than one text into more than one language at the same time.
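The single-request, multi-text, multi-language idea can be sketched like this. It assumes the v3 Translator endpoint, where each target language is a repeated `to` query parameter and the body is a list of texts; the subscription key is a placeholder:

```python
# Sketch of a multi-text, multi-language request to the Translator Text API (v3).
# The key is a placeholder; some resources also require a region header.
import requests

BASE_URL = "https://api.cognitive.microsofttranslator.com/translate"

def build_translate_request(texts, to_languages):
    """Build query params and body: several texts, several targets, one call."""
    params = [("api-version", "3.0")] + [("to", lang) for lang in to_languages]
    body = [{"Text": t} for t in texts]
    return params, body

def translate(texts, to_languages, key):
    params, body = build_translate_request(texts, to_languages)
    headers = {"Ocp-Apim-Subscription-Key": key,
               "Content-Type": "application/json"}
    # No 'from' parameter is sent, so the service auto-detects the source language.
    return requests.post(BASE_URL, params=params, headers=headers, json=body).json()
```

Because `from` is omitted, the response also reports which language was detected for each input text.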
In this lecture you will use the Text Analytics API again, this time to identify languages from text. Later on, the identified language will be passed as the value of the from parameter to the Text Translator service that we used in the previous lecture, so you can see that the result will be the same.
In this lecture we will create a new Azure Cognitive Service in the form of the Speech Translator service, which will help us add functionality for translating voice almost in real time, and even create audio with the translated text spoken to us in a robotic voice that we can choose!
In this lecture you will install Jupyter Notebooks directly on your computer so that they can use the microphone to receive the speech that will be streamed to the service. You will also install a couple of packages that will come in handy when using the microphone from a new Python notebook.
In this lecture we will create the callback functions that a web socket service is going to call when it is opened, when an error occurs, when data is received and, finally, when it is closed. These functions will contain the functionality that sets up the communication between the client and the server, executes the streaming and creates the WAV audio files with the received translation.
In this lecture you will train the model with the previous information about the intents, entities and utterances, then test the service directly from the LUIS Portal, identifying some utterances that the service does understand and some that it doesn't. Finally, we will improve the service based on the utterances it did not get, and publish it so that it becomes available as a REST service.
In this lecture you will consume the service from a Python Notebook, learning how to request text input through the input function and, more importantly, how to retrain the LUIS Cognitive Service based on user inputs so that it becomes more intelligent as it receives more data, as any machine learning model should.
In this lecture you will learn how to request an authorization token from an Azure service, which will be needed in the next lectures to make requests to the service correctly. You will do this through a Python function that will either generate a new token or reuse a previous one if it is still valid (hasn't expired).
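The reuse-until-expired pattern can be sketched as a small cache. The ten-minute token lifetime is an assumption (Azure issues short-lived tokens), and `fetch_token` stands in for the actual POST to the token endpoint:

```python
# Sketch of the token-caching pattern described above. We reuse a token
# until shortly before it expires, then fetch a fresh one.
# `fetch_token` is an injected callable, e.g. a POST to the issueToken endpoint.
import time

class TokenCache:
    def __init__(self, fetch_token, lifetime_seconds=540):
        self._fetch = fetch_token
        self._lifetime = lifetime_seconds  # renew a bit before the real expiry
        self._token = None
        self._expires_at = 0.0

    def get(self):
        """Return the cached token, or fetch a fresh one if it has expired."""
        if self._token is None or time.time() >= self._expires_at:
            self._token = self._fetch()
            self._expires_at = time.time() + self._lifetime
        return self._token
```

Injecting the fetch function keeps the caching logic testable without touching the network.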
In this lecture you will learn a bit more about how this service works: to verify users, they will be required to speak a specific phrase. We will get the list of phrases that users may speak in order to be enrolled, and later verified, by this Azure Cognitive Service.
In this lecture we will create a profile for the Speaker Recognition API so that it can eventually verify users. This profile will be created through the verification profile endpoint, which returns a unique id that the enrollment endpoint will later use to enroll users through audio.
In this lecture you will continue the enrollment of users by setting up a new audio file, recorded through the microphone, that should contain the user speaking one of the verification phrases. This audio file will be sent to the service, which will return the number of remaining enrollments (the Speaker Verification API requires 3) as well as whether this user is still enrolling or has already been successfully enrolled.
In this lecture you will use a couple of endpoints to get the verification profile that was previously created, as well as delete the verification profiles that are not going to be useful and that were created by mistake.
In this lecture you will write a couple of functions that will help you verify users. First, a function that creates an audio file and saves it to the current working directory; then a function that gets the profile id against which the user must be verified; and finally a simple call to the service that will make the evaluation against the previous enrollments.
In this lecture you will create a new service in the form of the Computer Vision API, updated to version 2.0. You will identify the things that you can do with it, such as describing images, getting categories and tags, and analyzing text in images (OCR), even handwritten text.
In this lecture you will use the analyze endpoint of the Computer Vision API to identify landmarks in images, get descriptions and tags, get categories, and even identify people, if they are celebrities. This analysis will also return information that can be used to draw squares around the faces that were identified in the picture.
In this lecture you will retrieve descriptions of the images that you send to the service, along with some tags and categories, and you will be able to establish how many descriptions you need. With this endpoint of the Computer Vision API, it is easy to describe what is happening in your images.
The OCR endpoint of the Azure Computer Vision API is going to allow you to identify words in an image, and in this lecture you will implement its functionality. You will see how it returns the regions, lines and words that are identified, and how it includes information about the location and size of these chunks of text.
In this lecture you will take the JSON response from the OCR endpoint of the Computer Vision API and draw the bounding boxes for regions, lines and words in the image. First you will learn to download an image with the help of Python, and then, by actually drawing on the image, you will end up with an image where the identified text is marked with boxes.
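The box-drawing step could be sketched like this. It assumes the OCR response shape documented for this endpoint, where each `boundingBox` is a "left,top,width,height" string nested under regions, lines and words:

```python
# Sketch of parsing the OCR response into drawable boxes. The service returns
# "boundingBox" as a "left,top,width,height" string; drawing libraries such as
# Pillow expect corner coordinates, so we convert.
def parse_box(bounding_box):
    """'x,y,w,h' string -> (left, top, right, bottom) tuple."""
    x, y, w, h = (int(v) for v in bounding_box.split(","))
    return (x, y, x + w, y + h)

def word_boxes(ocr_response):
    """Collect (text, box) pairs for every identified word."""
    pairs = []
    for region in ocr_response.get("regions", []):
        for line in region.get("lines", []):
            for word in line.get("words", []):
                pairs.append((word["text"], parse_box(word["boundingBox"])))
    return pairs

# With Pillow, each box could then be drawn on the downloaded image with:
#   ImageDraw.Draw(image).rectangle(box, outline="red")
```

Region and line boxes carry the same string format, so the same `parse_box` helper covers all three levels.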
In this lecture you will use the recognition endpoint of the Computer Vision API, which will allow you to identify printed and handwritten text in images. This service requires you to first get one of the headers from the response, and then make another request to the URL in that header, so that you finally get the identified text along with its bounding boxes.
In this lecture you will create a new Azure Cognitive Service in the form of the Face API, and explore what is possible with this service. You will learn that this service is able to identify faces, but also identify the people in pictures, and classify them into different groups that you may create.
In this lecture we will use the Face API's detect endpoint to identify certain attributes of a face that exists inside an image. In addition to these attributes, the service will be able to identify the landmarks of the face, so you get very accurate information about it: whether the face has makeup, whether it has glasses, the age and gender of the person, and where the eyes and the mouth are.
In this lecture you will add people to a group so that the Face API Cognitive Service can classify images into groups and the people who are in them, as well as identify the names of people who have been previously uploaded through the method that we will use in this lecture.
In this lecture, finally, after registering groups within the Face API Cognitive Service, adding people to those groups and assigning faces to those people, we will use the detect endpoint to get a face id from an image, and then use that face id with the identify endpoint to get information about who is in that image, based on the previously registered people.
In this lecture we will quickly test our Face API Cognitive Service by identifying a different person, as well as implement some emotion detection through the same detect endpoint that we used previously. With Python, we will continue to iterate through the registered people, but we will also check the JSON for emotion values such as happiness, sadness and contempt.
In this lecture you will implement the Content Moderator Cognitive Service to identify personal information such as email addresses and phone numbers, which may be required in a marketplace scenario (and many others, such as social apps for kids or corporate email) to prevent users from sharing this kind of data.
In this lecture you will add to your Azure notebook code a new parameter for the Content Moderator Cognitive Service, so that it returns information about profanity in the form of a terms list of the bad words that your text includes. For this example, you will hide these terms by replacing them with asterisks.
In this lecture you will use another of the endpoints available in the Content Moderator Cognitive Service, one that allows you to identify racy and adult content in images so you can decide to exclude them from your community. By sending an image to the service, you will be able to detect whether this image should be reviewed or removed from your storage right away.
In this lecture you will take a look at the Microsoft Content Moderator Portal, or Review Tool, which allows humans to verify the machine learning classification and make sure that content is correctly classified. This portal will be connected to the Azure Content Moderator Cognitive Service and will eventually be used by human reviewers to assess results.
In this lecture, from the Content Moderator Portal, you will be creating a workflow that will be executed every time you upload an image to the service, for it to determine if a human should review an image. This will give your team greater control over what content gets posted in your application, without having to depend entirely on what a machine learning model from the Content Moderator Cognitive Service identifies.
In this lecture you will create a new job, which means executing a workflow, programmatically. This job will immediately return an id, but will take a few seconds to be ready. Once it is ready, you will be able to see whether or not a new review was created for a human reviewer to analyze, directly from the Content Moderator site.
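Because the job id comes back before the job is ready, the client typically polls until it completes. A minimal sketch, with `get_status` standing in for a GET on the job endpoint (the "Complete" status value is an assumption about the response):

```python
# Sketch of polling a Content Moderator job until it is ready.
# `get_status` is injected, e.g. a GET on the job endpoint returning its status.
import time

def wait_for_job(get_status, interval=2, max_attempts=10):
    """Poll until the job reports 'Complete'; return the final status seen."""
    status = None
    for _ in range(max_attempts):
        status = get_status()
        if status == "Complete":
            return status
        time.sleep(interval)  # give the service a moment before retrying
    return status  # e.g. still pending after max_attempts
```

Capping the attempts keeps the notebook from hanging forever if a job never finishes.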
After sending an image to a workflow, and after the workflow creates a review for a human to analyze from the Content Moderator Review Tool, we will be able to get the result from the job and from the review. In this lecture precisely this entire process will be executed: from sending an image to the Cognitive Service, to the human reviewing its content, to the review being received back in Python code so the actual result can be analyzed.
- Basic skills in any programming language.
- No previous experience with Python is necessary. You will learn the basics of Python programming throughout the course.
Has Microsoft’s Cognitive services piqued your interest, but you haven't been able to find a decent course that will teach you how to use those services effectively?
Or maybe you have just recognised how a valuable skill like machine learning can open up big opportunities for you as a developer.
Perhaps you just wanted to find out how to add "superpowers" to your programs to do amazing things like face detection, but had no idea how to go about it.
Whatever the reason that has brought you to this page, one thing is for sure; the information you are looking for is contained in this course!
Why learn about Azure Machine Learning?
Machine Learning is not only a hot topic but more excitingly, Python Developers who understand how to work with Machine Learning are in high demand!
Azure, combined with Microsoft Cognitive Services, is a huge opportunity for developers.
In this course you will learn how to add powerful Machine Learning functionality to your applications.
You’ll learn how Microsoft Cognitive Services provide advanced machine learning functionality for any kind of application.
You will be able to create amazing apps that add “superpowers”, such as language understanding for executing user-requests, verification of users through speech, face detection, identification of people in images, and much more!
You will learn how to bring Machine Learning models into play to simplify business processes when moderating content.
And you will find out how you can create better experiences for your users by adding more interfaces and functionalities such as bots.
Adding these skills to your résumé will greatly boost your future job or freelancer opportunities.
Why choose this course?
This course covers a much wider range of Cognitive Services than other similar courses.
It guides you step by step through the usage of these services instead of just covering their creation inside Azure.
Your instructor, Eduardo Rosas, has been working with Azure services for 4 years.
He has created many apps that leverage Azure services, including one with the implementation of Machine Learning models and image analysis that got him to the Microsoft Imagine Cup World-Wide finals.
The Key Topics Covered include :
The Azure Machine Learning Studio - how to create your own machine learning models with drag and drop interfaces.
The Azure Bot Service - how to create conversational bots that can be connected with Messenger, Slack, Skype, Telegram, and more.
The Video Indexer service - how to identify people and actions in a video, get a timestamped transcript of the conversation, translate it, and more.
Computer Vision for OCR - handwritten text recognition, image analysis.
Custom Vision for your own image classification model tailored completely to your needs.
Plus an additional nine (9) Cognitive Services!
You'll come away with a concrete understanding of Azure Machine Learning and how to maximize it to create superior apps with amazing functionalities!
The ideal student would be someone who has a basic knowledge of programming and wants to learn about machine learning using Azure and Microsoft Cognitive Services.
If you’re ready to take your skills and app functionalities to the next level, then today is the best day to get started!
Click the enroll button to sign up for the course and we look forward to seeing you on the inside!
- This course is suitable for any developer who has some basic knowledge of programming (any language).
- This course is for you if you want to quickly implement Machine Learning functionality to your own programs. This course is not for developers wanting to learn the ins and outs of ML.
- Anyone who finds complex math uncomfortable but is interested in Machine Learning and how to apply it to problems.