Tracking Slow Requests and Performance Testing with Application Insights

Gergely Kalapos
A free video tutorial from Gergely Kalapos
Software Engineer
4.5 instructor rating • 2 courses • 6,372 students

Learn more from the full course

High Performance Coding with .NET Core and C#

Learn how to write high performance and scalable .NET Core and ASP.NET Core applications in C#

06:18:10 of on-demand video • Updated December 2017

  • Get an overview of the current state of the .NET platform, with a focus on .NET Core, ASP.NET Core, and C# 7 and their performance aspects
  • Get to know the tools that are essential for measuring the performance of a .NET Core application: Visual Studio Performance Tools, PerfView, BenchmarkDotNet, Perf and LTTng on Linux, Prefix, MiniProfiler
  • Performance characteristics of value types and reference types, the effect of async/await on performance, and performance of collections in the base library
  • Behind the scenes knowledge about C# 7: you will see what the compiler generates from C# 7 code and what performance implications this has on your application
  • New performance related APIs like Span<T>, ArrayPool<T>
  • Data access performance with Entity Framework Core
  • Ahead of time compilation for .NET Core with CrossGen, and removing dead code with the .NET IL Linker
  • Production monitoring for .NET Core and ASP.NET Core with Application Insights and Dynatrace
Welcome to video 9.2, Tracking Slow Requests and Performance Testing with Application Insights. In this video, we will continue to look at performance data from our production deployment. We will track specific requests, especially slow requests, and then you will see how you can performance test your application with Application Insights.

So let's go back to Application Insights in Azure from the previous demo and continue to explore our production performance data. We have here the main Application Insights page in the Azure portal with the overview timeline. If we scroll down, we see active users in the last 30 days. Since this is a demo application, we don't have too much here, but this can be very useful information from your production system. Below that, we see the total number of server requests, broken down by request performance. Now, this is very valuable information: we immediately see that most of our requests are quite fast, but there are some that take longer than a second.

So let's click on these slow requests and try to figure out what's going on with them. We basically have a filter on all requests. You can, of course, also change the filter here, but we leave it as it is for now. We immediately see the number of requests that match the filter, with some additional statistics. Application Insights looks for common properties that may help to find a problem. For example, it tells us that most of these requests came from the US and belong to these specific users. Below that, we see all our requests from this response time range. As you can see, at the top we have the homepage twice; these were probably the first requests after a deployment. So let's select the third one, because that request went to /Home/StockData, and according to the statistics, most of our slower requests went there. If we scroll down, we see the dependencies of this page.
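The dependency view works because Application Insights times each outgoing call and records its target, duration, and success flag. Conceptually it boils down to something like this minimal sketch; the type and method names here are hypothetical, for illustration only, and not the Application Insights API:

```csharp
using System;
using System.Diagnostics;

// Conceptual sketch of what a dependency record contains: the target,
// how long the call took, and whether it succeeded. Application Insights
// collects this automatically for outgoing HTTP and SQL calls.
public static class DependencyTimer
{
    public static (string Name, TimeSpan Duration, bool Success) Measure(string name, Action call)
    {
        var sw = Stopwatch.StartNew();
        bool success = true;
        try { call(); }
        catch { success = false; }   // a failed call still produces a record
        sw.Stop();
        return (name, sw.Elapsed, success);
    }

    public static void Main()
    {
        var slow = Measure("services-api/stock-data", () => System.Threading.Thread.Sleep(50));
        var bad  = Measure("currency-exchange", () => throw new Exception("HTTP 500"));
        Console.WriteLine($"{slow.Name}: {slow.Duration.TotalMilliseconds:F0} ms, ok={slow.Success}");
        Console.WriteLine($"{bad.Name}: ok={bad.Success}");
    }
}
```

With records like these, sorting by duration is exactly the drill-down the portal offers: the slowest dependency of a slow request points you at the code to inspect.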
Remember, this is the front end that calls the services API and also does the currency exchange. We immediately see that our services API needed more than two seconds. There are two other dependencies, each taking over two hundred milliseconds, which we can accept. But getting the stock data from our backend service was extremely slow. So with this, we immediately know where to look within the source code for this performance problem. You can drill down and find other requests based on this, but I guess you already got the point: with this, you can track down specific requests and find the root cause of different problems.

Now, let's move on to another feature called performance testing. I go back to the main page of Application Insights, and if we scroll down, we find Performance Testing here. I click it to get to the performance testing page. What we can do here is really amazing: we can create a real performance test for our application. We can either pass a single URL, or we can upload a Visual Studio web test file, which goes through the application as we defined it within the test. Here you can select where to generate load from, and you can also select the length of the test and the number of users you want to simulate. Now, typically you don't do this on a production system; for performance testing, it's better to use a preproduction system that basically replicates the same environment the production system uses.

I already created a load test, so let's see the result. I simulated a hundred concurrent users against this specific URL. As you can see, half of the requests failed. We also see the response times, so we immediately know how the application performs under this specific load. Now, let's figure out why fifty percent of the requests failed. I go back to the main page of Application Insights and click on the failed requests. I know that this was the point when I ran the performance test.
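What the load test does can be sketched in a few lines: fire many concurrent requests and count the failures. In this self-contained sketch (all names hypothetical), an in-process stand-in replaces the real HTTP endpoint, with a fixed capacity that makes exactly half of a hundred concurrent calls fail, mirroring the result above:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class LoadTestSketch
{
    // In-process stand-in for a rate-limited service: only the first
    // `capacity` calls succeed; the rest behave like HTTP 429.
    public static async Task<int> RunAsync(int users, int capacity)
    {
        int served = 0;
        var requests = Enumerable.Range(0, users).Select(async _ =>
        {
            await Task.Yield();                                    // let the calls run concurrently
            return Interlocked.Increment(ref served) <= capacity;  // true = 200 OK, false = 429
        });
        var results = await Task.WhenAll(requests);
        return results.Count(ok => !ok);                           // number of failed requests
    }

    public static void Main()
    {
        int failed = RunAsync(users: 100, capacity: 50).GetAwaiter().GetResult();
        Console.WriteLine($"failed: {failed} of 100");             // failed: 50 of 100
    }
}
```

A real load test, of course, sends actual HTTP requests from the locations you selected in the portal; the point of the sketch is only the shape of the measurement: concurrency in, failure count out.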
So these failed requests must be from the performance test. Again, you typically do this in an isolated environment so you don't mix real user request data into the load testing results. As you can see, we have lots of requests to this specific page; this is exactly what we tested. And if we go to the dependencies, we see the two dependencies that failed. One is our backend API, but at the top we have the free service that we use to get the currency exchange rates. As you can see, that is definitely the bottleneck here, so let's click on it. 824 times we had HTTP 429, and 429 means "too many requests". So we had 824 failures, because we send a request to this service every time a user comes to our page, and as you can see, with a hundred concurrent users this is already a problem. With Application Insights, we were basically able to figure out where our application breaks under load.

One option to fix this issue is, for example, to cache the result of this call for a specific time frame. With that, we can make sure that we only call the service, let's say, not more than once an hour. But the point is that we basically simulated high load, and with that we are able to figure out when and how our application breaks, so we can further optimize the application.

Before we end this video, I would like to add a few things. Our Application Insights resource runs in Azure, but it's important to note that the application itself can run anywhere. It doesn't have to run in Azure; the only restriction is that the application has to be able to send its telemetry data to Azure. Beyond that, you can use your own server or your own data center. Another important point is that if you deploy a .NET Core or an ASP.NET Core application to Linux, you can monitor it the exact same way as we did here.
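The caching fix mentioned above can be sketched as a small time-based cache: keep the exchange rate for a fixed interval and only call the rate-limited service when the cached value has expired. This is a minimal illustration with hypothetical names, not production code (ASP.NET Core apps would typically use a built-in memory cache instead):

```csharp
using System;
using System.Collections.Concurrent;

// Minimal time-based cache: return a cached value until its TTL expires,
// so a rate-limited service (HTTP 429) is called at most once per interval.
public class TtlCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, (TValue Value, DateTime Expires)> _entries =
        new ConcurrentDictionary<TKey, (TValue Value, DateTime Expires)>();
    private readonly TimeSpan _ttl;

    public TtlCache(TimeSpan ttl) { _ttl = ttl; }

    public TValue GetOrAdd(TKey key, Func<TKey, TValue> factory)
    {
        if (_entries.TryGetValue(key, out var entry) && entry.Expires > DateTime.UtcNow)
            return entry.Value;                        // fresh entry: skip the remote call

        var value = factory(key);                      // expired or missing: call the service
        _entries[key] = (value, DateTime.UtcNow + _ttl);
        return value;
    }
}

public static class Program
{
    public static void Main()
    {
        int calls = 0;
        var cache = new TtlCache<string, decimal>(TimeSpan.FromHours(1));

        // Hypothetical rate lookup; two requests within the hour trigger only one call.
        decimal Rate(string pair) { calls++; return 0.85m; }
        var a = cache.GetOrAdd("USD/EUR", Rate);
        var b = cache.GetOrAdd("USD/EUR", Rate);
        Console.WriteLine($"{a} {b} calls={calls}");   // prints "0.85 0.85 calls=1"
    }
}
```

With a one-hour TTL, a hundred concurrent users produce at most one request per hour to the currency service instead of one per page view, which removes the 429 failures we saw in the load test.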
It is also a common misconception that Application Insights only works with .NET. This isn't true: there are SDKs for other frameworks and runtimes, like Java or Node.js. We focus here on .NET Core and ASP.NET Core, but if you have, let's say, a Node.js application, you can still monitor it with Application Insights. For further details on other frameworks, you can look at this website.

So let's summarize what we saw in this demo. First, we saw how you can track slow requests based on response time, and then we saw how you can create performance tests with Application Insights. Many thanks for watching. In the next video, you are going to see how you can track custom dependencies with Application Insights.