Keras 2.x Projects
- 5.5 hours on-demand video
- 1 downloadable resource
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion
- Study in detail the process used to develop deep learning applications
- Discover how optical character recognition works
- Control the movements of a robot using Deep Q-Network (DQN)
- Explore and apply various reinforcement learning techniques
- Label sentences in the Reuters newswire dataset with a Keras deep neural network
- Analyze, understand, and generate texts using Natural Language Toolkit
- Sound knowledge of Python and machine learning, along with basic familiarity with the Keras library, will help you easily grasp the concepts explained in this course.
Keras is a Python library that provides a simple and clean way to create a range of deep learning models. This course introduces you to Keras and shows you how to create applications with maximum readability.
You take your first steps by getting introduced to Keras, its benefits, and its applications. As you get comfortable with Keras, you will learn how to predict business outcomes using time series data and various forecasting techniques. By learning the basic concepts of reinforcement learning, you will be able to create algorithms that can learn and adapt to environmental changes and control your robots. Then, you will learn various natural language processing techniques and use the Natural Language Toolkit to analyze, classify, and tag text.
By the end of the course, you’ll have the skills and the confidence to work on existing deep learning projects or create your own applications.
About the Author
Giuseppe Ciaburro holds a PhD in environmental technical physics and two master's degrees. His research focused on machine learning applications in the study of urban sound environments. He works at the Built Environment Control Laboratory of the Università degli Studi della Campania Luigi Vanvitelli (Italy). He has over 15 years of professional experience in programming (Python, R, and MATLAB), first in the field of combustion and then in acoustics and noise control. He has several publications to his credit.
Nimish Narang has a degree in biology and computer science and has worked in application development and machine learning. His most recent achievement was building one of the largest mobile machine learning courses, which implements many machine learning and deep learning models in Python and translates them into Android and iOS applications.
- If you are a data scientist, machine learning engineer, deep learning practitioner, or an AI engineer who wants to build speedy intelligent applications with minimal lines of code, then this course is ideal for you.
Keras is written in Python, so a working Python installation is required (Keras is compatible with Python 2.7-3.6). Any platform that supports a Python development environment can support Keras as well. Furthermore, before installing Keras, you need to install a backend engine, as well as some optional dependencies that are useful for implementing machine learning models.
The functional API is much better suited when you want to diverge from the basic pattern of an input, a stack of layers, and an output: for example, models with multiple inputs, multiple outputs, or a more complex internal structure, such as feeding the output of a given layer into several layers or, conversely, combining the outputs of different layers to use them together as the input of another layer.
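As a sketch of this idea (the layer sizes and variable names here are illustrative, not taken from the course), a two-input, two-output model can be built with the functional API like so:

```python
from keras.layers import Input, Dense, concatenate
from keras.models import Model

# Two separate inputs (sizes are illustrative)
input_a = Input(shape=(16,))
input_b = Input(shape=(8,))

# Process each input with its own dense layer
branch_a = Dense(32, activation='relu')(input_a)
branch_b = Dense(32, activation='relu')(input_b)

# Combine the outputs of the two branches into a single tensor
merged = concatenate([branch_a, branch_b])

# Two outputs computed from the merged representation
out_main = Dense(1, activation='sigmoid')(merged)
out_aux = Dense(4, activation='softmax')(merged)

model = Model(inputs=[input_a, input_b], outputs=[out_main, out_aux])
```

Such a model is compiled and trained like any other Keras model, passing a list of arrays for its inputs and outputs.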
Let us begin with the second lesson and understand what we are going to cover in our learning journey.
Forecasting the data and information related to the evolution of variables is of crucial importance for the setting of plans for the policies of any activity. For example, to plan the production of a company, it is not enough to know that the demand for products or services is increasing or decreasing, but it is essential to predict the trend of future demand for products, prices, and raw material costs. All of these factors are considered influential in production activity. Here are the topics that we will cover now:
How Do We Forecast?
A time series constitutes a sequence of observations on a phenomenon that's carried out in consecutive instants or time intervals that are usually, even if not necessarily, evenly spaced or of the same length. The trend of commodity prices, stock market indices, the BTP/BUND spread, and the unemployment rate are just a few examples of time series. Here are the topics that we will cover now:
Time Series Analysis
Types of Time Series
Time Series Patterns
Time Series Components
The Classical Approach to Time Series
Time Series Formulas
Estimation of the Trend Component
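As a simple illustration of trend estimation, here is a plain centered moving average in pure Python (a minimal sketch, not a formula taken from the course):

```python
def centered_moving_average(series, window):
    """Estimate the trend component with a centered moving average.

    Assumes an odd window so each average is centered on a point;
    the first and last window // 2 points get no estimate.
    """
    half = window // 2
    return [
        sum(series[i - half:i + half + 1]) / window
        for i in range(half, len(series) - half)
    ]

# Example: smoothing a short series with a window of 3
trend = centered_moving_average([1, 2, 3, 4, 5], 3)  # [2.0, 3.0, 4.0]
```

Widening the window produces a smoother trend estimate at the cost of losing more points at the ends of the series.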
In the previous section, time series analysis, we explored the basics behind time series. To perform correct predictions of future events based on what happened in the past, it is necessary to construct an appropriate numerical simulation model. Choosing an appropriate model is extremely important as it reflects the underlying structure of the series. In practice, two types of models are available: linear or nonlinear. These can be selected based on whether the current value of the series is a linear or nonlinear function of past observations. Here are the topics that we will cover now:
Time Series Models
Autoregressive (AR) Models
Moving Average (MA) Models
Autoregressive Moving Average (ARMA) Model
Autoregressive Integrated Moving Average (ARIMA) Models
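To make the autoregressive idea concrete, here is a minimal pure-Python sketch (function names and parameter values are illustrative) of simulating and forecasting an AR(1) process, where each value is a fraction phi of the previous value plus noise:

```python
import random

def simulate_ar1(phi, n, sigma=1.0, seed=42):
    """Simulate n observations of an AR(1) process with Gaussian noise."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, sigma))
    return x

def ar1_forecast(x, phi):
    """One-step-ahead AR(1) forecast: the next value is phi times the last."""
    return phi * x[-1]

series = simulate_ar1(0.8, 100)
next_value = ar1_forecast(series, 0.8)
```

MA models replace the dependence on past values with a dependence on past noise terms, and ARMA/ARIMA combine the two (the latter after differencing the series).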
An LSTM network consists of cells (LSTM blocks) that are linked together. Each cell is, in turn, composed of three types of gates: the input gate, output gate, and forget gate. They implement the write, read, and reset functions on the cell memory, respectively. So, the LSTM modules are able to regulate what is stored and what is deleted. This is possible thanks to the gates, each of which is composed of a sigmoid neural layer and a pointwise product. The output of each gate is in the range (0, 1), representing the fraction of information that flows through it. Here are the topics that we will cover now:
Long Short-Term Memory (LSTM) in Keras
Long Short-Term Memory Cell Diagram
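A minimal Keras sketch of such a network (the window of 10 time steps, 32 cells, and single feature are illustrative choices, not values from the course):

```python
from keras.layers import Input, LSTM, Dense
from keras.models import Model

# Input: sequences of 10 time steps with 1 feature each
inputs = Input(shape=(10, 1))
# An LSTM layer of 32 cells, each with input, output, and forget gates
x = LSTM(32)(inputs)
# A single linear unit for a one-step-ahead forecast
outputs = Dense(1)(x)

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
```

The gating machinery is handled internally by the `LSTM` layer; you only choose the number of cells and the shape of the input sequences.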
The stock market forecast has always been a very popular topic because stock market trends involve a truly impressive turnover. The interest that this topic arouses is clearly linked to the opportunity to get rich through good forecasting of a stock's performance: a positive difference between the price at which a stock is bought and the price at which it is sold yields a gain for the investor. But, as we know, the performance of the stock market depends on multiple factors. Here are the topics that we will cover now:
Implementing an LSTM to Forecast Stock Volatility
Our goal is to improve predictive accuracy and to prevent a feature with a large numeric range from dominating the prediction. Thus, we may need to scale the values of different features so that they fall within a common range. Through this statistical procedure, it is possible to compare identical variables belonging to different distributions, as well as different variables or variables expressed in different units.
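A minimal sketch of such scaling (a hand-rolled min-max rescaling; in practice you might use a library scaler instead):

```python
def min_max_scale(values, lo=0.0, hi=1.0):
    """Rescale values linearly so they fall in the range [lo, hi]."""
    vmin, vmax = min(values), max(values)
    span = vmax - vmin
    return [lo + (v - vmin) * (hi - lo) / span for v in values]

# Features on very different ranges end up on a common scale
prices = min_max_scale([10.0, 20.0, 30.0])   # [0.0, 0.5, 1.0]
volumes = min_max_scale([1e6, 3e6, 5e6])     # [0.0, 0.5, 1.0]
```

After scaling, a feature measured in millions no longer outweighs one measured in tens purely because of its units.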
Let us begin with the third lesson and understand what we are going to cover in our learning journey.
A robot is a machine that performs certain actions based on the commands it is provided, either under direct human supervision or independently, based on general guidelines, using Artificial Intelligence (AI) processes. These tasks are typically performed to replace or assist humans in fields such as manufacturing, construction, or the handling of heavy and dangerous materials, in environments that are prohibitive or incompatible with the human condition, or simply to free a person from commitments. Here are the topics that we will cover now:
Robot Control Overview
Three Laws of Robotics
Features of Robotics
Short Robotics Timeline
A typical feature of autonomous systems is mobility. A robot that performs the intended task needs to move physically within an environment, and must inevitably incorporate a certain autonomy that allows it to move safely, avoiding obstacles and not posing a threat to any nearby living beings.
Reinforcement learning aims to create algorithms that can learn and adapt to environmental changes. This programming technique is based on the concept of receiving external stimuli that depend on the actions chosen by the agent. A correct choice will involve a reward, while an incorrect choice will lead to a penalty. The goal of the system is to achieve the best possible result, of course. Here are the topics that we will cover now:
Reinforcement Learning Basics
Agent's Interaction with the Environment
Reinforcement Learning Terminology
Reinforcement Learning Algorithms
Dynamic Programming (DP)
Monte Carlo (MC) Methods
Temporal Difference (TD) Learning
Q-learning is one of the most widely used reinforcement learning algorithms. This is due to its ability to compare the expected utility of the available actions without requiring a model of the environment. Thanks to this technique, it is possible to find an optimal action for every given state in a finite MDP.
A general solution to the reinforcement learning problem is to estimate, through the learning process, an evaluation function. This function must be able to evaluate, through the sum of the rewards, whether a particular policy is convenient or not. In fact, Q-learning tries to maximize the value of the Q function (action-value function), which represents the maximum discounted future reward when we perform action a in state s. Here are the topics that we will cover now:
DQN to Control a Robot's Mobility
OpenAI Gym Installation and Methods
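The Q-learning update behind this can be sketched in a few lines of pure Python (the table layout, state names, and step values below are illustrative):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning step:
    Q(s, a) += alpha * (reward + gamma * max_a' Q(s', a') - Q(s, a))
    """
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# A tiny two-state action-value table
Q = {'s0': {'left': 0.0, 'right': 0.0},
     's1': {'left': 2.0, 'right': 0.0}}
q_update(Q, 's0', 'right', 1.0, 's1')  # Q['s0']['right'] becomes 0.28
```

Here alpha is the learning rate and gamma the discount factor applied to future rewards.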
Now we have to face the most demanding phase: the training of our system. In the Q-learning section, we learned that the Gym library focuses on the episodic setting of reinforcement learning. The agent's experience is divided into a series of episodes. The initial state of the agent is randomly sampled from a distribution, and the interaction proceeds until the environment reaches a terminal state. This procedure is repeated for each episode, with the aim of maximizing the total reward expectation per episode and achieving a high level of performance in the fewest possible episodes.
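That episodic loop can be sketched against Gym's classic reset/step interface; `ToyEnv` below is a hypothetical stand-in environment (five steps, reward 1 per step), not part of Gym:

```python
class ToyEnv:
    """Hypothetical stand-in environment with Gym's classic interface."""
    def reset(self):
        self.t = 0
        return self.t                  # initial observation

    def step(self, action):
        self.t += 1
        done = self.t >= 5             # terminal state after 5 steps
        return self.t, 1.0, done, {}   # observation, reward, done, info

def run_episode(env, policy):
    """Run one episode to the terminal state and return the total reward."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

total_reward = run_episode(ToyEnv(), lambda obs: 0)  # 5.0
```

With a real Gym environment, only the environment construction changes; the episode loop stays the same.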
Deep Q-learning is a reinforcement learning method based on function approximation. It represents an evolution of the basic Q-learning method: the state-action table is replaced by a neural network, with the aim of approximating the optimal value function.
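A minimal Keras sketch of such a network (the layer sizes are illustrative): the table lookup Q(s, a) becomes a network that maps a state to one Q-value per action.

```python
from keras.layers import Input, Dense
from keras.models import Model

def build_q_network(state_size, n_actions):
    """Approximate the action-value function: state in, one Q-value per action out."""
    inputs = Input(shape=(state_size,))
    x = Dense(24, activation='relu')(inputs)
    x = Dense(24, activation='relu')(x)
    outputs = Dense(n_actions, activation='linear')(x)
    model = Model(inputs, outputs)
    model.compile(optimizer='adam', loss='mse')
    return model

q_network = build_q_network(state_size=4, n_actions=2)
```

The greedy action in a given state is then simply the index of the largest output, and the network is trained toward the same target used in the tabular update.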
Let us begin with the fourth lesson and understand what we are going to cover in our learning journey.
Natural language processing (NLP) is the automatic processing, by computer, of information written or spoken in a natural language. This is made particularly difficult and complex by the intrinsic ambiguity of human language. When the machine must learn methods of interaction with the environment that are typical of humans, the question isn't so much how to store data, but how to let the machine learn to translate this data into concepts. Natural language interacts with the environment, generating predictive knowledge. Here are the topics that we will cover now:
Natural Language Processing (NLP)
Automatic Processing Problems
Information Retrieval (IR)
Information Extraction (IE)
Automatic Translation Types
The term sentiment analysis refers to the use of natural language processing, text analysis, and computational linguistics to identify and extract subjective information from written or spoken text sources. If this subjective information is extracted from large amounts of data, and therefore from the opinions of large groups of people, sentiment analysis is also called opinion mining. Here are the topics that we will cover now:
Part-of-Speech (PoS) Tagging
Named Entity Recognition
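As a toy illustration of PoS tagging (a hypothetical lexicon lookup with made-up entries; real taggers, such as NLTK's, use trained statistical models):

```python
# Hypothetical mini-lexicon mapping words to Penn Treebank-style tags
LEXICON = {'the': 'DT', 'cat': 'NN', 'sat': 'VBD', 'on': 'IN', 'mat': 'NN'}

def simple_pos_tag(tokens):
    """Tag each token by lexicon lookup, defaulting to NN (noun)."""
    return [(tok, LEXICON.get(tok.lower(), 'NN')) for tok in tokens]

tags = simple_pos_tag(['The', 'cat', 'sat', 'on', 'the', 'mat'])
# [('The', 'DT'), ('cat', 'NN'), ('sat', 'VBD'),
#  ('on', 'IN'), ('the', 'DT'), ('mat', 'NN')]
```

A pure lookup cannot resolve ambiguous words (for example, "book" as noun versus verb), which is exactly why statistical taggers exist.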
NLTK is a suite of libraries and programs for symbolic and statistical natural language processing, mainly for the English language, written in Python. It was developed by Steven Bird and Edward Loper at the University of Pennsylvania's Department of Computer and Information Science. NLTK includes graphical tools and sample data and is accompanied by a book that explains the concepts behind the natural language problems solved by the toolkit's programs, as well as a cookbook for the most common procedures.
Linguistic corpora are collections, mostly large, of oral or written texts produced in real communication contexts (recordings of speeches or newspaper articles), stored in electronic format and often accompanied by computerized consultation tools.
Stemming is the process of reducing the inflected form of a word to its root form, called the stem. The stem doesn't necessarily correspond to the morphological root (lemma) of the word: it's normally sufficient that related words are mapped to the same stem, even if that stem isn't a valid root for the word. The creation of a stemming algorithm has been a long-standing problem in computer science. The stemming process is used in search engines for query expansion and in other natural language processing problems.
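A naive suffix-stripping sketch conveys the idea (a toy rule of my own, not the Porter algorithm used in practice):

```python
def naive_stem(word, suffixes=('ing', 'ed', 'ly', 's')):
    """Strip the first matching suffix, keeping a minimum stem length."""
    for suffix in suffixes:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

stems = [naive_stem(w) for w in ['jumping', 'jumped', 'jumps']]  # all 'jump'
```

Here the stem "jump" happens to be a valid word, but, as noted above, that isn't required: it's enough that related inflected forms collapse to the same stem.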