"Junior-Level Data Scientist Median Salary: from $91,000 up to $250,000."
As an experienced Data Analyst, I understand the job market and the expectations of employers. This data science course is specifically designed with those expectations and requirements in mind. As a result, you will be exposed to the most popular data mining tools, and you will be able to leverage my knowledge to jump-start (or further advance) your career in Data Science.
You do not need an advanced degree in mathematics to learn what I am about to teach you. Where books and other courses fail, this data science course excels: each section of code is broken down in Jupyter and explained in an easy-to-digest manner. Furthermore, you will be exposed to real data and solve real problems, which gives you valuable experience!
This is an introduction to the topic of Data Science. We discuss what Data Science is and some of the buzzwords surrounding this subject.
We look at the most popular views on the Data Science Process to gain important insights into this topic. The topics include the Knowledge Discovery in Databases process (KDD), the Cross-Industry Standard Process for Data Mining (CRISP-DM) and much more.
In this lecture we will install Anaconda, which is a completely free (and popular) Python distribution.
We look at the updated version of IPython, now known as Jupyter.
In this lecture, we cover the basics of a very popular scientific library in Python, called NumPy.
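To give a taste of what the NumPy basics look like, here is a minimal sketch (the array values are made up for illustration):

```python
import numpy as np

# Create an array and apply vectorized operations to it
a = np.array([1, 2, 3, 4, 5])

print(a.mean())   # arithmetic mean of the array
print(a * 2)      # element-wise multiplication, no loop needed
print(a[a > 2])   # boolean indexing: keep only values greater than 2
```

Vectorized operations like these are what make NumPy the foundation of the scientific Python stack.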
For the purpose of creating visuals, we look at matplotlib which is a 2D plotting library.
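A minimal matplotlib sketch of the kind of plot covered here (the data points and file name are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

x = [0, 1, 2, 3, 4]
y = [v ** 2 for v in x]

fig, ax = plt.subplots()
ax.plot(x, y, marker="o")       # a simple 2D line plot
ax.set_xlabel("x")
ax.set_ylabel("x squared")
ax.set_title("A simple 2D line plot")
fig.savefig("squares.png")      # write the figure to disk
```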
Pandas "aims to be the fundamental high-level building block for doing practical, real world data analysis in Python". This is one of the most important libraries for a data analyst to be familiar with when using Python. It leverages the power of NumPy and matplotlib, among other things.
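As a quick sketch of what pandas offers, here is a tiny DataFrame built from made-up sales figures:

```python
import pandas as pd

# A small DataFrame built from a dict of columns (illustrative data)
df = pd.DataFrame({
    "city": ["Toronto", "Ottawa", "Toronto"],
    "sales": [100, 80, 120],
})

print(df["sales"].mean())                  # summarize a single column
print(df.groupby("city")["sales"].sum())   # split-apply-combine aggregation
```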
In this two part lecture on Data (or Variable) Types we look at identifying different types of variables.
In the second part, we learn numerical methods of summarizing individual variables, whether they are qualitative or quantitative.
Here we look at calculating descriptive statistics in Python.
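A minimal sketch of computing descriptive statistics in Python, using only the standard library's statistics module (the data values are made up):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

print(statistics.mean(data))    # arithmetic mean
print(statistics.median(data))  # middle value of the sorted data
print(statistics.pstdev(data))  # population standard deviation
```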
We use Excel to generate Descriptive Statistics.
This can be thought of as a bonus lecture, where we use SAS to access Descriptive Statistics.
Perhaps the most commonly used data visualization technique is the Histogram. This lecture answers: what is a Histogram, and how do you generate one in Python?
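The binning behind a histogram can be sketched with NumPy alone (the data and bin choices are illustrative):

```python
import numpy as np

data = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]

# Count how many observations fall in each of 4 equal-width bins on [1, 5]
counts, edges = np.histogram(data, bins=4, range=(1, 5))

print(counts)  # occurrences per bin
print(edges)   # bin boundaries
```

Plotting these counts as bars is exactly what a histogram does.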
Probability Mass Functions are not routinely included in statistics texts; however, they can provide you with more information than a Histogram. We look at implementing Probability Mass Functions in Python.
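A Probability Mass Function maps each value to its relative frequency; a minimal sketch using only the standard library (the data are made up):

```python
from collections import Counter

data = [1, 2, 2, 3, 3, 3]
n = len(data)

# PMF: each distinct value mapped to its proportion of the sample
pmf = {value: count / n for value, count in Counter(data).items()}
print(pmf)
```

Unlike raw histogram counts, the PMF values sum to 1, which makes samples of different sizes directly comparable.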
The next logical concept in Exploratory Data Analysis (after Probability Mass Functions) is Cumulative Distribution Functions. We use smoothing to gain insights about the underlying distribution of our empirical data.
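The empirical CDF itself is simple to sketch in plain Python (the sample values are made up):

```python
def ecdf(sample, x):
    """Fraction of observations less than or equal to x."""
    return sum(1 for v in sample if v <= x) / len(sample)

data = [1, 2, 2, 3, 5]
print(ecdf(data, 2))  # 3 of 5 values are <= 2
print(ecdf(data, 4))  # 4 of 5 values are <= 4
```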
In this lecture we look at Probability Density Functions and the difference between Empirical and Analytical distributions.
We look at the differences between Probability Density and Probability Distribution. Additionally, we look at how to generate a Kernel Density Plot in Python.
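To show the idea behind kernel density estimation, here is a hand-rolled Gaussian-kernel sketch in NumPy (in practice you would use a library routine; the sample, grid and bandwidth are made up):

```python
import numpy as np

def kde(sample, grid, bandwidth):
    """Average of Gaussian kernels centred on each observation."""
    sample = np.asarray(sample)[:, None]   # shape (n, 1)
    grid = np.asarray(grid)[None, :]       # shape (1, m)
    kernels = np.exp(-0.5 * ((grid - sample) / bandwidth) ** 2)
    kernels /= bandwidth * np.sqrt(2 * np.pi)
    return kernels.mean(axis=0)            # average over observations

data = [1.0, 2.0, 2.5, 3.0]
grid = np.linspace(0, 4, 81)
density = kde(data, grid, bandwidth=0.5)   # smooth density estimate
```

Plotting `density` against `grid` gives the Kernel Density Plot discussed in this lecture.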
We move away from analysing individual variables and look at how variables affect each other. Specifically, we look at a very common technique, the Box Plot, to examine the relationship between two variables.
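The quantities a box plot draws are just the five-number summary; a minimal sketch (the data are made up):

```python
import numpy as np

data = [7, 15, 36, 39, 40, 41]

# Quartiles define the box; min and max define the whisker extent
q1, median, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1  # interquartile range: the height of the box

print(min(data), q1, median, q3, max(data))  # five-number summary
print(iqr)
```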
We continue with the Exploratory Data Analysis techniques to visualize two variables in concert. In this lecture, we look at Scatter Plots.
Here we look at methods that quantify the relationship between two variables, specifically the two common measures known as Correlation and Covariance.
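Both measures are one-liners in NumPy; a minimal sketch with made-up data where y is an exact multiple of x:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 6, 8, 10])  # y is exactly 2x

print(np.cov(x, y)[0, 1])       # sample covariance between x and y
print(np.corrcoef(x, y)[0, 1])  # Pearson correlation: 1.0 for a perfect line
```

Covariance depends on the units of the variables, while correlation rescales it to the interval [-1, 1].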
Analyzing the relationship between two Categorical Variables can prove to be very insightful. In this lecture we look at comparing different populations, testing the difference and visualizing the relationship.
We conduct exploratory data analysis on the Titanic passenger data set made popular by Kaggle.
The Central Limit Theorem is a critical concept in statistics. The properties of this theorem allow us to make inferences about a population without knowing its true distribution. In this lecture we use simulations (in Python) to demonstrate the Central Limit Theorem (CLT) and use the CLT properties to evaluate the central tendency and variance of a non-normal (population) distribution.
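A simulation in the spirit of this lecture can be sketched in a few lines: draw repeated samples from a skewed population and watch the sample means behave normally (the population, sample sizes and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 samples of size 50 from a skewed (exponential) population
sample_means = rng.exponential(scale=1.0, size=(10_000, 50)).mean(axis=1)

# Per the CLT, the means cluster around the population mean (1.0)
# with spread close to sigma / sqrt(n) = 1 / sqrt(50)
print(sample_means.mean())
print(sample_means.std())
```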
We expand on the previous lecture about Central Limit Theorem and introduce estimation, specifically looking at the probability of correctly estimating a parameter.
In this lecture we answer:
You do not need to rely on any external packages in order to generate summary statistics. In this lecture, we discuss how matrices can be used to calculate summary statistics of one, two or many variables.
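The matrix approach can be sketched as follows, computing column means and the covariance matrix purely with matrix multiplication (the data matrix is made up):

```python
import numpy as np

# Each column of X is one variable, each row one observation
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0],
              [4.0, 40.0]])
n = X.shape[0]
ones = np.ones((n, 1))

means = (ones.T @ X) / n                  # column means via matrix product
centered = X - ones @ means               # subtract the mean from every row
cov = (centered.T @ centered) / (n - 1)   # sample covariance matrix

print(means)
print(cov)
```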
We Introduce Parametric Models (for Statistics) and extend this idea to Linear Response Modelling. Before we can apply this to popular statistical techniques such as Linear Regression, we need to discuss the assumptions of Linear Response Models.
In this lecture we define linear regression, estimate model parameters and list regression assumptions.
In this lecture we estimate regression model parameters through Ordinary Least Squares using Matrices.
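The closed-form OLS estimate discussed here can be sketched directly with NumPy matrices (the data are made up so the true coefficients are known):

```python
import numpy as np

# Data generated from y = 1 + 2x, so OLS should recover those coefficients
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x

X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
beta = np.linalg.inv(X.T @ X) @ X.T @ y     # OLS: (X'X)^-1 X'y

print(beta)  # estimated intercept and slope
```

In production code `np.linalg.lstsq` is preferred over explicitly inverting X'X, but the explicit form mirrors the matrix derivation.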
Multiple regression in Excel - we look at important regression statistics and how they can be calculated from the sample and our regression line. We also look at the implications of multiple t-tests and why the F-test is more important in terms of the Regression model.
Linear Regression forms the basis of Statistical Analysis. We use a trusted Python library to find the Ordinary Least Squares (OLS) estimate in this practical example.
In this practical example, we extend simple linear regression to multiple regression through the use of the Statsmodels Python library.
Tools you need to complete the exercises for this section are discussed in this lecture. We also look at an important learning resource for SQL.
We discuss the CREATE TABLE statement in SQL and create our demo table.
We look at the SELECT statement and the SELECT DISTINCT variation in SQL. We also look at the LIMIT clause, which is equivalent to the SELECT TOP clause in other SQL dialects.
The ORDER BY keyword is used to sort the output in SQL; we discuss its usage in this video demonstration.
Grouping is commonly used to perform aggregation, and in this lecture we discuss the usage of GROUP BY in SQL.
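The SQL statements from this section can be tried together in Python's built-in sqlite3 module; a minimal sketch with a made-up sales table:

```python
import sqlite3

# In-memory SQLite database to try the statements from this section
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("East", 100), ("West", 80), ("East", 120)])

# SELECT DISTINCT with sorted output
cur.execute("SELECT DISTINCT region FROM sales ORDER BY region")
print(cur.fetchall())

# GROUP BY to aggregate amounts per region
cur.execute("SELECT region, SUM(amount) FROM sales "
            "GROUP BY region ORDER BY region")
print(cur.fetchall())

# LIMIT to keep only the top row
cur.execute("SELECT region, amount FROM sales ORDER BY amount DESC LIMIT 1")
print(cur.fetchall())
```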
Data Integration is performed at the early stage of a data science process. This video introduces you to HDF (Hierarchical Data Format), and you will learn how to easily implement this platform-independent technology in Python.
We look at various methods available in Python for dealing with large datasets that do not fit into memory. In addition, we look at combining chunks of these datasets to generate a Data Warehouse.
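One such method is pandas' chunked CSV reading; a minimal sketch, with an in-memory string standing in for a large file on disk:

```python
import io
import pandas as pd

# Simulate a large CSV; in practice this would be a file on disk
csv = io.StringIO("value\n" + "\n".join(str(i) for i in range(100)))

total = 0
for chunk in pd.read_csv(csv, chunksize=25):   # read 25 rows at a time
    total += chunk["value"].sum()              # aggregate without loading it all

print(total)  # same answer as summing the whole file at once
```

Because each chunk is a regular DataFrame, any per-chunk transformation can be applied before the results are combined.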
This lecture contains the updated Notebook of the example discussed in the previous video. Specifically, we utilize vectorization instead of for loops for the Table solution. This lecture is completely optional.
It is a common business objective to find which products or promotions increase sales. This lecture gives you an idea about how to utilize Exploratory Data Analysis as a means of Feature Selection as well as Knowledge Discovery. We then use multiple regression to verify whether the effect really exists (based on what we learned in our Exploratory Data Analysis!).
I have an educational background in statistics, data mining and data science. In addition to being a SAS 9 Base certified programmer, I have experience with real-world data science projects and research (in the health care sector). Data Science is my passion, and I want to pass my knowledge on to like-minded people. Please review my LinkedIn page to learn more about me.