Tracking Slow Requests and Performance Testing with Application Insights

A free video tutorial from Gergely Kalapos
Software Engineer
Rating: 4.3 out of 5 (instructor rating)
2 courses
10,964 students

Learn more from the full course

High Performance Coding with .NET Core and C#

Learn how to write high performance and scalable .NET Core and ASP.NET Core applications in C#

06:18:10 of on-demand video • Updated December 2017

Get an overview of the current state of the .NET platform, with a focus on .NET Core, ASP.NET Core, and C# 7 and their performance aspects
Get to know the tools essential for measuring the performance of a .NET Core application: Visual Studio Performance Tools, PerfView, BenchmarkDotNet, Perf and LTTng on Linux, Prefix, MiniProfiler
Performance characteristics of value types and reference types, the effect of async/await on performance, and the performance of collections in the base library
Behind-the-scenes knowledge of C# 7: you will see what the compiler generates from C# 7 code and what performance implications this has for your application
New performance-related APIs like Span<T> and ArrayPool<T>
Data access performance with Entity Framework Core
Ahead-of-time compilation for .NET Core with CrossGen, and removing dead code with the .NET IL Linker
Production monitoring for .NET Core and ASP.NET Core with Application Insights and Dynatrace
Welcome to Video 9.2, Tracking Slow Requests and Performance Testing with Application Insights. In this video we will continue to look at performance data from our production deployment. We will track specific requests, especially slow ones, and then you will see how you can performance test your application with Application Insights.

So let's go back to the Application Insights resource from the previous demo and continue exploring our production performance data. We have here the main Application Insights page in the Azure Portal, with the overview timeline. If we scroll further down, we see active users in the last 30 days. Since this is a demo application, we don't have much here, but this can also be very useful information about your production system. Below that we see the total number of server requests, broken down by request performance. Now this is very valuable information: we immediately see that most of our requests are quite fast, but some take longer than a second.

So let's click on these slow requests and try to figure out what's going on. We basically have a filter on all requests. You can, of course, change the filter here, but we leave it as it is. We immediately see the number of requests that match the filter, along with some additional statistics: Application Insights looks for common properties that may help to find a problem. For example, it tells us that most of these requests came from the US and go to one specific URL, and below that we see all our slow requests from this response-time range. As you can see at the top, we have the homepage twice; these were probably the first requests after a deployment. So let's select the third one, because that request went to /home/stockdata, and according to the statistics, most of our slow requests hit this URL. If we scroll down, we see the dependencies of this page.
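To get this kind of request and dependency telemetry flowing into the portal in the first place, the application has to be wired up with the Application Insights SDK. A minimal sketch for an ASP.NET Core app of this era (circa .NET Core 2.x) might look like the following; the instrumentation key is a placeholder and in a real deployment would come from the Azure portal or configuration:

```csharp
// Program.cs — minimal sketch, assuming the Microsoft.ApplicationInsights.AspNetCore
// NuGet package is referenced. UseApplicationInsights() turns on automatic
// collection of incoming requests and outgoing HTTP dependencies.
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        WebHost.CreateDefaultBuilder(args)
            .UseApplicationInsights() // instrumentation key read from config
            .UseStartup<Startup>()
            .Build()
            .Run();
    }
}
```

With just this in place, every incoming request and every outbound HttpClient call is timed and reported, which is exactly the data behind the response-time buckets and dependency views shown above.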
Remember that this is the frontend, which calls into the services API and also does the currency exchange. We immediately see that our service API needed more than two seconds. The two other dependencies each took just over 200 milliseconds, which we can accept, but getting the stock data out of our backend service was extremely slow. So with this we immediately know where to look in the source code for this performance problem. You can drill down further and find other requests based on this, but I guess you already got the point: you can track down specific requests and find the root cause of different problems.

Now let's move on to another feature called performance testing. I go back to the main page of Application Insights, and if we scroll down, we find Performance Testing; I click it to get to the performance testing page. What we can do here is really amazing: we can create a real performance test for our application. We can either pass a single URL, or we can upload a Visual Studio web test file, which goes through the application as we defined it within the test. Here you can select where to generate load from, and you can also select the duration and the number of users that you want to simulate. Now, typically you don't do this on a production system; it's better to use a pre-production system that basically replicates the same environment the production system uses.

I already created one load test, so let's see the result. I simulated 100 concurrent users against this specific URL. As you can see, half of the requests failed. We also see the response time, so we immediately know how the application performs under this specific load. Now let's figure out why 50% of the requests failed. I go back to the main page of Application Insights and click on the failed requests. I know that this was the point when I ran the performance test, so these failed requests must come from it.
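HTTP dependencies like the backend API call are tracked automatically by the SDK, but calls the SDK doesn't recognize (a custom protocol, a legacy client) can be reported manually so they show up in the same dependency view. A hedged sketch using `TelemetryClient.TrackDependency`; the class and URL here are illustrative names, not from the course code:

```csharp
// Sketch: manually timing a backend call and reporting it as a dependency,
// so it appears alongside the auto-collected ones in Application Insights.
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;

public class StockDataClient
{
    private static readonly HttpClient Http = new HttpClient();
    private readonly TelemetryClient _telemetry = new TelemetryClient();

    public async Task<string> GetStockDataAsync(string url)
    {
        var start = DateTimeOffset.UtcNow;
        var timer = Stopwatch.StartNew();
        var success = false;
        try
        {
            var response = await Http.GetAsync(url);
            success = response.IsSuccessStatusCode;
            return await response.Content.ReadAsStringAsync();
        }
        finally
        {
            timer.Stop();
            // Name, command, start time, duration, success flag — this is the
            // record that shows up as a dependency row in the portal.
            _telemetry.TrackDependency("BackendApi", url, start, timer.Elapsed, success);
        }
    }
}
```

The duration and success flag recorded here are exactly what the portal aggregates when it shows that the service API took more than two seconds.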
Again, you typically do this in an isolated environment, so you don't mix real user request data into the load testing results. As you can see, we have lots of failed requests to this specific page, which is exactly the one we load tested. If we go to the dependencies, we see two failing dependencies: one is our backend API, but at the top we have the free service that we use to get the currency exchange rates. As you can see, that is definitely the bottleneck here, so let's click on it. We got HTTP 429 a total of 824 times, and 429 means "too many requests". We had 824 failures because we send a request to this service every time a user comes to our page, and as you can see, with 100 concurrent users this is already a problem. So with Application Insights we were basically able to figure out where our application breaks under load.

One option to fix this issue is, for example, to cache the result of this call for a specific time frame. With that we can make sure that we only call this service, let's say, no more than once an hour. But the point is that we basically simulated high load, and with that we were able to figure out when and how our application breaks, so we can further optimize it.

Before we end this video, I would like to add a few things. Our Application Insights instance runs in Azure, but it's important to note that the application itself can in fact run anywhere; it doesn't have to run in Azure in order to use Application Insights. You have to make sure that it can send its telemetry data to Azure, but that is the only restriction. You can use your own server or your own data center. Another important point is that if you deploy a .NET Core or ASP.NET Core application to Linux, you can monitor it exactly the same way as we did here. It is also a common misconception that Application Insights only works with .NET. This isn't true.
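The caching fix suggested above could be sketched like this, using `IMemoryCache` from Microsoft.Extensions.Caching.Memory with a one-hour absolute expiry. The service name and the rate-fetching delegate are illustrative assumptions, standing in for the real HTTP call to the exchange-rate service:

```csharp
// Sketch of the proposed fix for the HTTP 429 failures: cache the exchange
// rate so the free service is called at most once per hour, not once per
// page view.
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class CachedCurrencyService
{
    private readonly IMemoryCache _cache;
    private readonly Func<Task<decimal>> _fetchRate; // wraps the real HTTP call

    public CachedCurrencyService(IMemoryCache cache, Func<Task<decimal>> fetchRate)
    {
        _cache = cache;
        _fetchRate = fetchRate;
    }

    public Task<decimal> GetExchangeRateAsync()
    {
        return _cache.GetOrCreateAsync("exchange-rate", entry =>
        {
            // Entry expires an hour after creation; only then is the
            // external service called again.
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1);
            return _fetchRate();
        });
    }
}
```

Under the 100-user load test, this would turn 824 outbound calls into a single one per hour, at the cost of rates being up to an hour stale.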
There are SDKs for other frameworks and runtimes, like Java or Node.js. We focus here on .NET Core and ASP.NET Core, but if you have, let's say, a Node.js application, you can still monitor it with Application Insights. For further details on other frameworks, you can look at this website.

So let's summarize what we saw in this demo. First, we saw how you can track slow requests based on response time, and then we saw how you can create performance tests with Application Insights. Many thanks for watching. In the next video, you are going to see how you can track custom dependencies with Application Insights.