*NEW* Web Development Secrets 2020 - CRP, HTTP, AJAX & More
- 7.5 hours on-demand video
- 19 articles
- 1 downloadable resource
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion
- 120+ lectures and 7.5+ hours of well-structured content
- Download lectures for offline viewing
- Understand the DOM, CSSOM, Render Tree and Layout
- Master HTTP, HTTP/2 and AJAX
- Learn how to optimize any website for speed by writing better code
- Understand the Network Panel, Performance Panel and Lighthouse audit functions within DevTools
- Understand HTTP, TCP, Data Packets and a whole bunch more!
- Real examples of how AJAX works (we use both the XMLHttpRequest object and the newer Fetch API)
- Master the Critical Rendering Path
- Understand what Render Blocking Resources are and how we solve the problems they cause
- From beginner to expert (advanced +)
- Ongoing updates to keep you current
- You will emerge an expert
- Write your own Polyfill
- Introduction to HTTP/2 and how it improves the current HTTP/1.1 protocol
- How to use a text editor that is completely free
- Gives you depth of knowledge to boost your ability and confidence
- All the techniques used by professional programmers
- Support from me
- A strong desire to become a full stack web developer
- Desire to KNOW the full process of how your webpage works behind the scenes
- Desire to KNOW how to use DevTools – Performance and Network Panels
- A computer is required as you need to code alongside me to learn effectively
Let me share my web developer secrets with you
What does this course cover?
By the end of this course, you'll be able to “speak” CRP by gaining an understanding of how to fetch data from a server and then get that data to your user as quickly as possible. We dig deeper in every lecture, learning about things like HTTP, TCP, data packets, render blocking resources, and a whole bunch more! This course has many bonus lectures which extend your knowledge base and test your skills.
*** The most important Web Development course on Udemy ***
Successful programmers know more than a few rote-learned lines of code. They also know the fundamentals of how web development works behind the scenes. If you want to become a full stack developer, you need to know how to deal with server requests and responses, loading, scripting, rendering, layout, and the painting of the pixels to the screen.
I want you to become a successful programming Grandmaster.
I want you to be able to apply what you learn in this course to your own webpages.
This course is perfect for you.
Hi there, my name is Clyde and together we’re going to learn about the entire critical rendering path and apply it to practical situations. We're going to practice and learn and emerge confident to tackle any challenges modern programs and websites throw at us.
After completing a few university degrees and postgraduate studies, I developed a fascination for web design and software languages. For several years I have immersed myself in this field. I spent a fair amount on top courses and went on to apply the knowledge practically. I recognized gaps in some of the courses I had taken, and so this course teaches what I wish I had been taught. My intention is to share that knowledge with you in an easy-to-follow manner so that we can benefit together: you benefit from learning, and I from sharing in your success.
This course is for beginners and for intermediates.
A unique view
Understanding web development is a vast topic. To get you up to speed, I’ve spent months thinking about where to focus content and how to deliver it to you in the best possible way.
You will learn "why" things work and not just "how". Understanding the fundamentals of web development is important as it will allow you to write better code. And trust me, every website encounters bugs and slow rendering times, and without understanding the fundamentals you will be totally lost.
How is this course different?
There are lots of great courses on web development. It's a pity that they rarely get into the detail of how we get a website onto your user's screen as quickly as possible, something that spans full stack development.
In this course, I focus on true web performance. This includes server requests and responses, loading, scripting, rendering, layout, and the painting of the pixels to the screen.
Practice makes perfect
Theory is theory … but there’s nothing like getting behind your computer and typing in code. That’s why we will be coding, laughing and pulling out our hair together as we code real life websites and exercises during this course.
I love practical examples, which is why we build simple pages and analyze the CRP together using the Network Panel, Performance Panel and Lighthouse audits within DevTools.
Is this course for you?
It doesn't matter where you are in your web development journey. It's suitable for all levels.
Still unsure? If you fit in any of these categories then this course is perfect for you:
Student #1: You want to dabble in the world of programming: learning the fundamentals of HTTP, AJAX, Data Packets and Rendering will allow you to extend this knowledge to any language
Student #2: You want to gain a solid understanding of web performance
Student #3: You want to start using backend frameworks like Node.js, which are heavily dependent on having a deeper knowledge about how to make AJAX requests, manipulate the response and then deliver it to the screen
Student #4: You kinda know what the Critical Rendering Path is, but have little knowledge about how it works behind the scenes, and how to practically implement it in your code
Student #5: You have taken other courses in web development but just don’t feel like you’ve grasped it
WHY START NOW?
Right this second, your competitors are learning how to become better web developers.
Web development is a blazing hot topic at the moment. But you have a distinct advantage. This course offers memorable learning topics, actionable tactics and real-world examples.
Let's get started!
What do you get?
· Lifetime access to all tutorial videos. No fees or monthly subscriptions.
· Q&A support.
· Quizzes and challenges to help you learn.
Let's get excited about becoming a professional web developer and about confidently applying these skills to your own websites.
See you in the lectures.
- NO: This course is NOT only for beginners. It is a complete beginner-to-advanced master course that is suitable for intermediates who know the basics and have an idea about how a browser fetches data from a server and displays it on a page. Experienced students sometimes prefer to skip a section that they are very familiar with.
- YES: This course is for someone wanting to be a professional, to be expert and confident in the entire rendering process
- Those who want to learn modern coding techniques to speed up page loading without third-party libraries and frameworks
- Those interested in building their own frameworks, or being better able to learn from the source code of other well-known frameworks and libraries
- Those who have some knowledge of web development, but little knowledge about how it works behind the scenes, and how to practically implement best practices in their websites
Talking about the "old-school" way can be a little boring. But trust me, you need to understand this in order to understand how browsers interact with servers today. Enjoy!
AJAX is awesome. It allows web pages to be updated asynchronously by exchanging data with a web server behind the scenes. This means that it's now possible to update parts of a web page without reloading the whole page. Cool, hey!
We use AJAX, or in other words the XMLHttpRequest (XHR) object, to interact with servers. Using AJAX allows us to get data from a URL without having to do a full page refresh. This lecture explains how we set up the XMLHttpRequest object in 3 simple steps. Let's jump right into it!
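The three steps can be sketched in a few lines. This is a minimal sketch, not the course's exact code, and the URL in the usage comment is a placeholder:

```javascript
// A minimal sketch of the three classic XHR steps:
// 1. create the object, 2. describe the request, 3. send it.
function getData(url, onSuccess) {
  const xhr = new XMLHttpRequest();   // step 1: create the object
  xhr.open('GET', url);               // step 2: configure method and URL
  xhr.onload = function () {          // runs once the response has arrived
    if (xhr.status === 200) {
      onSuccess(xhr.responseText);
    }
  };
  xhr.send();                         // step 3: fire the request
}

// In a browser you might call:
// getData('/api/data', text => console.log(text));
```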
As developers, we find ourselves in the console a lot, specifically DevTools.
You may or may not have noticed that when you open up objects, the properties and methods on those objects appear in purple. But if you look closely, you'll notice that some are bright and others are dimmed.
This lecture attempts to explain why this is the case.
We've seen that the fetch() request returns a Response object. What you may not know is that the body of this Response object is a Stream object, which means that we need a method to return it to us. When we call the json() method, a Promise is returned since the reading of the stream will happen asynchronously.
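You can see the Promise behaviour without touching the network by constructing a Response directly (the Response constructor is available in browsers and recent Node versions; the JSON body here is made up):

```javascript
// The Response body is a stream, so json() reads it asynchronously
// and hands back a Promise rather than the parsed data itself.
const res = new Response('{"greeting": "hello"}');

const pending = res.json();                 // a Promise, not the data
console.log(pending instanceof Promise);    // true

pending.then(data => {
  console.log(data.greeting);               // "hello" - available only once the stream is read
});
```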
We've seen how a GET request works. But for the sake of completeness, let's now look at a POST request. Remember, it's not scary. A POST request asks a web server to accept the data enclosed in the body of the request message. Usually your intention is for the server to store this information. A POST request is often used when uploading a file or submitting a completed web form.
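With fetch, a POST is a GET plus a method, a body, and a header describing that body. A sketch, with a hypothetical endpoint and payload:

```javascript
// Hypothetical endpoint; shown only to illustrate the shape of a POST.
function postJSON(url, payload) {
  return fetch(url, {
    method: 'POST',                                   // ask the server to accept data
    headers: { 'Content-Type': 'application/json' },  // describe the body
    body: JSON.stringify(payload)                     // the data we want stored
  }).then(res => res.json());
}

// e.g. postJSON('/api/users', { name: 'Ada' }).then(saved => console.log(saved));
```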
According to Google's developer documentation, Fetch makes it easier to make asynchronous requests and handle responses than the old-school XMLHttpRequest object. Remember how we built a webpage using XMLHttpRequest? Now let's build the same thing using Fetch. You'll see first hand how much more intuitive the Fetch API is.
You've seen what an HTTP request is, what AJAX is, and why we use it. You also used methods to help you make requests and receive responses from a server.
Now it's time to put your knowledge to the test with these fun questions. Don't worry if you can't get them all right. The most important thing at this stage is to have fun and enjoy!
Your browser is nothing more than software used to access the internet. Your browser lets you visit websites and do things within them like log in to your accounts, view videos, move from one page to another, print, and send and receive email. The most common browsers on the market are Internet Explorer (old school now), Google's Chrome, Mozilla Firefox, Apple's Safari, and Opera. Which browsers you can use on your phone or PC depends on the operating system of your device (for example: Microsoft Windows, Linux, Ubuntu, macOS, among others).
We know that a Browser displays information to you, but who defines what the format of this information should look like and what functionality a Browser should perform? Yes, you could say Google, Mozilla, Apple, etc. all decide on this, but there is also a common set of rules that every Browser should follow. Who creates these rules? Well, the W3C. W3C is short for the World Wide Web Consortium, an international consortium of companies involved with the Internet and the Web. The organization's main goal is to develop open standards so that the Web evolves in a single direction rather than being split and scattered.
When you browse the web, you are actually issuing a list of requests to fetch content from the various resource directories or servers on which the content for that page is stored. It's a bit like a cookie recipe: you have a shopping list of ingredients (requests for content) which, combined in the correct order, bake a delicious cookie (the web page).
When you visit a web page, chances are you trigger the use of the TCP/IP protocol. This in turn triggers a chain of events. Your computer bundles data together, and that packet of data passes down the TCP/IP protocol stack on the local machine and then across the network media to the protocols on the recipient's machine. The protocols at each layer on the sending host add information to the original data.
As the user's command makes its way through the protocol stack, protocols on each layer of the sending host also interact with their peers on the receiving host.
A lot of words, I know. Let's see what this all means.
Remember how I said that the application layer is the most important one for us developers? This is because it involves the HTTP protocol. HTTP stands for HyperText Transfer Protocol, and it is the underlying protocol used by the World Wide Web. HTTP defines how messages are formatted and transmitted, and what actions web servers and browsers should take in response to various commands.
Remember, a polyfill is only used if we're trying to use new code that our Browser does not understand. Therefore, the starting point is to determine whether the forEach() method exists on our Browser.
In this case you'll see that it does exist. BUT for our purposes you need to pretend that it does not exist. That's why we're building a polyfill - because we are pretending that our Browser does not know what the forEach() method is.
Before we can build our own polyfill, we need to look at what it is we're building. forEach() is a method available on every Array that we can use to execute a function on each element of that array. When using forEach(), we simply specify a callback function that will be executed on each element in the array.
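A simplified polyfill might look like the sketch below. This is not the exact code from the lecture, just one reasonable way to do it; the standalone myForEach helper at the end is added purely so the logic can be exercised even where the native method already exists:

```javascript
// A simplified forEach polyfill, installed only when the method is missing.
// (Real engines have supported forEach for years; we pretend they don't.)
if (!Array.prototype.forEach) {
  Array.prototype.forEach = function (callback, thisArg) {
    if (typeof callback !== 'function') {
      throw new TypeError(callback + ' is not a function');
    }
    for (let i = 0; i < this.length; i++) {
      if (i in this) {                          // skip holes in sparse arrays
        callback.call(thisArg, this[i], i, this);
      }
    }
  };
}

// The same logic as a standalone helper, so we can watch it work:
function myForEach(arr, callback, thisArg) {
  for (let i = 0; i < arr.length; i++) {
    if (i in arr) callback.call(thisArg, arr[i], i, arr);
  }
}

const seen = [];
myForEach(['a', 'b', 'c'], (el, i) => seen.push(i + ':' + el));
console.log(seen.join(','));   // "0:a,1:b,2:c"
```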
Up until this point you should have a very good idea about what an HTTP request is and why it's important. Now you're ready to dive into what this course is really about: the steps the browser goes through in order to render HTML, CSS and JS onto your device. I hope you enjoy this section. Remember, although it can be tough, sit back, relax and try to enjoy every moment.
There are different execution contexts that exist in your browser. All of these environments have to work together in order to display content to your device. This lecture gives you a high level summary of the environments.
In order to become a brilliant coder, we need to understand the steps the Browser goes through in fetching data from a server, all the way to displaying the content on your screen. A large part of this process is the construction of the Document Object Model - or the DOM. This is a 2-part lecture about how the DOM is constructed. Enjoy!
We've seen that your HTML, CSS and JS files have to be sent to your computer in bytes. These bytes are then converted to characters (usually using UTF-8 encoding), but now what? Well, as I'm sure you'll agree, a whole bunch of characters (or text) given to the Browser is meaningless on its own. How does your Browser know what these characters mean? How does it know where to place them? Let's find out what happens next.
We've discussed the critical rendering path. Now it's time to see it in action by looking at the Performance tab in DevTools.
We've looked at the Call Tree ... now it's time to review the Bottom-Up and Event Log tabs.
Remember, all we're doing here is trying to look at how our website performed. In other words, how quickly it managed to fetch data from a server and then display that to our screen. There are many ways to skin a cat as they say - that's why we're looking at all the different ways we can view this critical rendering process.
Up until now we've constructed the DOM and the CSSOM. Are we done? Unfortunately not.
The next step is to combine the DOM and CSSOM into a render tree. The Render Tree is then passed to the layout phase and eventually to the paint phase which paints the actual pixels on the screen and the content is visible to you.
We've discussed the DOM, the CSSOM and the construction process of the Render Tree. But we're not done yet, are we? Nope. There are 3 stages to the render tree process: construction, layout and painting. When the render tree is created, its nodes and elements do not have a position and size. Calculating these values is called layout, or reflow.
Let's jump into the layout process, which computes the exact position and size of each node on your page.
A brief look at what we've done. I hope you've been enjoying it so far. KEEP going!
WOW. You've come a long way. Well done!
Most of us work with the web every day. We're used to getting all the information we need almost instantly. But how that web page is actually put together and delivered to us is a bit of a mystery for most people. BUT NOT YOU.
One of the best ways to solidify your knowledge is to test yourself. And remember, there's nothing to worry about. These quizzes are purely for your benefit. So have fun, enjoy and see you soon.
By default, CSS is treated as a render blocking resource. This just means that the browser will not construct the Render Tree (i.e. it will not render any content) until the CSSOM is constructed. This kinda makes sense, right? You would hate to arrive on a webpage only to see a whole bunch of ugly unstyled text. This is why both the CSS and HTML files are render blocking.
We've seen that CSS is render blocking. But sometimes we don't want ALL of our CSS files to be render blocking. Remember, some CSS files are only applicable under certain conditions - like if the screen orientation is a certain angle, or size. And this is where Media Queries come into the picture.
Media Queries are a CSS3 module allowing content rendering to adapt to conditions that we specify (such as screen size). It became a W3C recommended standard in 2012, and is a cornerstone technology of responsive web design.
This lecture shows you how to use it. Enjoy!
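The same conditions you write in CSS media queries can also be evaluated from JavaScript via window.matchMedia. A browser-only sketch (the 600px breakpoint is just an example):

```javascript
// In a browser, window.matchMedia evaluates the same condition syntax
// that CSS media queries use, and can notify us when the result changes.
function onNarrowScreen(handler) {
  const mq = window.matchMedia('(max-width: 600px)');
  handler(mq.matches);                                     // run once with the current state
  mq.addEventListener('change', e => handler(e.matches));  // and again whenever it flips
}

// onNarrowScreen(isNarrow => console.log('narrow viewport?', isNarrow));
```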
We've seen that when the browser hits a <script> tag, the parser has traditionally been blocked from continuing to read the remaining HTML until that script file has been fully executed.
The async attribute was introduced after defer, and it tells the browser to start downloading the resource right away without pausing HTML parsing (i.e. without blocking the render). However, once the resource is available, HTML parsing is paused while the script is executed. If the script is modular and does not rely on any other scripts, we typically want to use async.
By using the defer attribute, we tell the browser to download the resource in parallel with HTML parsing, but to hold off on executing it until parsing is complete. Once the browser has finished with the HTML, it executes all deferred scripts in the order in which they appear in the document. This is the cool thing with defer - it guarantees that your JS files will be executed in the same order that you defined in your markup.
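The ordering difference also shows up when injecting scripts from JavaScript. A browser-only sketch (the file paths are placeholders); note that dynamically inserted scripts behave as async by default, so we switch that off when order matters:

```javascript
// Injecting a script tag from JS. Dynamically inserted scripts default to
// async behaviour; setting async = false restores in-order execution,
// similar to what defer gives you in markup.
function loadScript(src, { ordered = false } = {}) {
  const s = document.createElement('script');
  s.src = src;
  if (ordered) {
    s.async = false;   // preserve execution order across injected scripts
  }
  document.head.appendChild(s);
  return s;
}

// loadScript('/js/analytics.js');                // order doesn't matter
// loadScript('/js/app.js', { ordered: true });   // order matters: keep it
```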
We've come a long way. Well done. But now it's time to take a step back and compare the approaches we've implemented so far. Remember what we're trying to do: we're trying to improve our site's page speed by defining when JS should be render blocking. For critical JS resources that are required to display above-the-fold content, we need render blocking. But for other, less important JS, we can defer execution until later - this allows us to get the first page rendered to our users quickly.
Loading assets (by assets I mean things like CSS, JPEGs, JS files, etc.) on a page is one of the most important parts to get right in order to achieve a fast first meaningful paint.
Usually, real world apps load multiple assets, and we've seen in the previous lectures that these assets are render-blocking by default, which negatively impacts the loading performance.
How do we solve this?
One way is by using preload. Preload lets you declare fetch requests in the HTML's <head>, specifying the assets that your page will need very soon and which you want to start loading early in the lifecycle.
Let's see how it works.
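A preload hint is normally written as a link tag in the head, e.g. `<link rel="preload" href="/css/main.css" as="style">`. The browser-only sketch below builds the same element from JavaScript; the file paths are placeholders:

```javascript
// Equivalent of <link rel="preload" href="..." as="...">, built from JS.
function preload(href, as) {
  const link = document.createElement('link');
  link.rel = 'preload';
  link.href = href;
  link.as = as;   // tells the browser what it's fetching: style, script, font...
  document.head.appendChild(link);
  return link;
}

// preload('/css/main.css', 'style');
// preload('/fonts/brand.woff2', 'font');
```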
It can be very confusing when starting out to conceptualize what the Critical Rendering Path really is, and how preload fits into all of this.
That's why I have dedicated this lecture to talking about preload.
This lecture also speaks about images and their impact on the CRP.
Browsers have traditionally been single-threaded ... so is there a way to keep downloading resources while a blocking script executes?
The good news is that there is: a few years ago browsers implemented a new approach to improve performance.
In 2008, IE introduced "the lookahead downloader", which allowed the browser to keep downloading the files that were needed while a synchronous script was being executed. Firefox, Chrome and Safari followed, and today most browsers use this technique under different names. The name we'll use in this course is Speculative Parsing.
So far in this course we've used the Performance Panel to assess our webpage activity. But there are many ways to skin a cat - and another way to assess network activity is with the Network Panel. The Network Panel is found within Google Chrome's Developer Tools, and it allows you to inspect resources as they are accessed over the network. In the upcoming lectures we're going to use the Network Panel to inspect an HTTP request and its corresponding response so that we can understand what the browser is doing.
As we will see together, there are many things we can see in the Network Panel. The two most important are:
1. the blue vertical line displayed in the waterfall timeline on the right (this represents the DOMContentLoaded event); and
2. the red vertical line (this represents the window's Load event).
These two lines help you determine the total time it takes for pages to load and gives you a starting point at which to make improvements to your pages.
So what are you waiting for? Let's jump into it.
There is a wealth of information in the Network Panel, including the ability to assess things like the Size of the transferred file, the name of the file, the status codes returned to us by the server ... and a whole bunch more!
Here we spend some time going through what each column means in the network panel.
Understanding HTTP is very important. This is why I'm taking the time to devote a quick lecture to what HTTP headers look like and where you can find them.
HTTP headers (both response and request) allow the client (browser) and the server to pass information with the HTTP protocol. An HTTP header consists of its case-insensitive name followed by a colon (:), then by its value.
Headers can be grouped into 3 broad categories (which you'll see in this lecture):
General headers - these apply to both requests and responses, but the data within this header has no relationship to the data transmitted in the body;
Request headers - are provided by the browser and contain information about the resource to be fetched, or about the client requesting the file; and
Response headers - are provided by the server and hold additional information about the response, like its location or about the server providing it.
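The name/value, case-insensitive structure is easy to see with the Headers API (available in browsers and recent Node versions). The header values below are made up for illustration:

```javascript
// Header names are case-insensitive: any casing reaches the same entry.
const headers = new Headers();
headers.set('Content-Type', 'application/json');
headers.set('X-Request-Id', 'abc123');          // a made-up custom header

console.log(headers.get('content-type'));       // "application/json" - any casing works
console.log(headers.has('X-REQUEST-ID'));       // true
```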
We now know that the Network panel records information about each network operation on a page, including detailed timing data, HTTP request and response headers, cookies, and more.
Now we're going to jump into the most important part of the Network panel - the timeline. The timeline column displays a visual waterfall of all network requests.
When we talk about a time phase of a particular resource, we mean the time it takes for the browser to initialize the request, send it to the server, wait and then download the entire file. Importantly and luckily for us, Google Chrome DevTools gives us all the time phases that we can use to analyze and speed up our site.
Before we discuss what each time phase means, I want to show you what I'm talking about ... just in case you're totally lost ;)
We've seen that all network requests are considered resources. As they are retrieved over the network, resources have distinct lifecycles expressed in terms of resource timing.
By default, Chrome breaks down the life of a request into different parts, including:
Queueing and Stalled, which show the time a request needs to wait before being acted on by the browser.
DNS Lookup, Initial Connection and SSL, which show the time spent in these respective parts of the request lifecycle.
Request Sent is the amount of time the browser takes to send the request to the server.
Waiting (TTFB) is the amount of time the browser has to wait before beginning to receive data from the server.
Content Download is the amount of time it takes to receive the entire resource from the server.
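These phases can be computed from the timestamps the browser exposes on a PerformanceResourceTiming entry. A sketch with a made-up entry; in a browser you would get real ones from performance.getEntriesByType('resource'):

```javascript
// Derive request phases from a PerformanceResourceTiming-style entry.
function timingPhases(entry) {
  return {
    dns: entry.domainLookupEnd - entry.domainLookupStart,
    connect: entry.connectEnd - entry.connectStart,
    ttfb: entry.responseStart - entry.requestStart,     // Waiting (TTFB)
    download: entry.responseEnd - entry.responseStart,  // Content Download
  };
}

// Fabricated timestamps (milliseconds) purely for illustration:
const fakeEntry = {
  domainLookupStart: 5, domainLookupEnd: 25,
  connectStart: 25, connectEnd: 70,
  requestStart: 70, responseStart: 190, responseEnd: 230,
};
console.log(timingPhases(fakeEntry));
// { dns: 20, connect: 45, ttfb: 120, download: 40 }
```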
In this course we have been using Visual Studio Code as the text editor (it's entirely free, in case you're wondering).
A plugin I've installed is Live Server, which allows me to 'serve' my files to the browser in real time. Unfortunately (or fortunately, depending on how you look at it), this means extra code is inserted into the index.html file and a web socket is established.
In our example, we also see an ng-validate file. What's this?
And finally we also see a request for a favicon.ico file. Whaaaat?
Don't worry, we're going to go into detail about what all these files are, so you can see how I go about analyzing each file and how you can go about figuring it out on your very own website too.
We've come a long way. One of the most important things you need to understand on every website you build (or own) is how many critical resources there are and how many server trips are required at a minimum in order to render that page to the screen. This is what we're going to look at now.
We go further and also identify the number of critical resources and the minimum amount of server round-trips required to get this page delivered to the user in the fastest possible time.
In this lecture we will use the async attribute.
Up until now we've taken quite a hacky approach to figuring out the CRP. We've had to use both the Performance tab and the Network tab.
But there are better, more streamlined ways.
Lighthouse is a great starting point. It is free and open-source, and fully automated. You can run it against any web page, public or requiring authentication.
HTTP was developed by Timothy Berners-Lee in 1989 as a communication standard for the World Wide Web. In other words, it is the language that allows browsers and servers to speak to each other. HTTP allows the exchange of information between a client computer and a local or remote web server. In this process, a client sends a text-based request to a server by calling a method like GET or POST. In response, the server sends a resource like an HTML page back to the client.
HTTP/2 began as the SPDY protocol, developed primarily at Google with the intention of reducing web page load latency by using techniques such as compression, multiplexing, and prioritization.
HTTP/1.1 and HTTP/2 share the same semantics. This was done so that the requests and responses traveling between the server and client in both protocols reach their destinations as familiar messages (i.e. with headers and bodies, using familiar methods like GET and POST).
But while HTTP/1.1 transfers these in boring old plain-text messages, HTTP/2 encodes them into binary, allowing for a different delivery model. At a very high level, this lets HTTP/2 break requests and responses up into smaller frames, which is very powerful.