Tracking Slow Requests and Performance Testing with Application Insights

Gergely Kalapos
A free video tutorial from Gergely Kalapos
Software Engineer
4.4 instructor rating • 2 courses • 5,187 students

Learn more from the full course

High Performance Coding with .NET Core and C#

Learn how to write high performance and scalable .NET Core and ASP.NET Core applications in C#

06:18:10 of on-demand video • Updated December 2017

  • Get an overview of the current state of the .NET platform with a focus on .NET Core, ASP.NET Core, and C# 7 and their performance aspects
  • Get to know the essential tools for measuring the performance of a .NET Core application: Visual Studio Performance Tools, PerfView, BenchmarkDotNet, Perf and LTTng on Linux, Prefix, MiniProfiler
  • Performance characteristics of value types and reference types, the effect of async/await on performance, and performance of collections in the base library
  • Behind the scenes knowledge about C# 7: you will see what the compiler generates from C# 7 code and what performance implications this has on your application
  • New performance related APIs like Span<T>, ArrayPool<T>
  • Data access performance with Entity Framework Core
  • Ahead of time compilation for .NET Core with CrossGen, and removing dead code with the .NET IL Linker
  • Production monitoring for .NET Core and ASP.NET Core with Application Insights and Dynatrace
Welcome to video 09: Tracking Slow Requests and Performance Testing with Application Insights. In this video we will continue to look at performance data from our production deployment. We'll drill into specific requests, especially slow requests, and then we will see how you can performance test your application with Application Insights.

So let's go back to the Application Insights resource from the previous video and continue to explore our production performance data. We have here the main Application Insights page from the portal with the overview timeline. If we scroll down, we see active users in the last 30 days. Since this is a demo application we don't have much data here, but this can also be very useful information from your production system. Below that we see the total of server requests grouped by request performance. This is very valuable information: we immediately see that most of our requests are quite fast, but there are some that take longer than a second.

So let's click on the slow requests and try to figure out what's going on with them. We basically have a filter on all requests. You can of course change the filter here, but we leave it as it is for now. We immediately see the number of requests that match the filter, and some additional statistics. Application Insights looks for common properties that may help to find a problem. For example, it tells us that most of these requests came from the US and that they go to these specific URLs. Below that we see all our slow requests from this response time range. As you can see, at the top we have the home page twice; these were probably the first requests after a deployment. So let's select the third one, because that request went to the stock data page, and according to the statistics most of our slow requests hit this URL. If we scroll down we see the dependencies of this page. Remember, this is the front end that calls into the services API.
And also the third-party currency exchange service. We immediately see that our services API needed more than two seconds. There are two other dependencies which each took over 200 milliseconds, which we can accept, but getting the stock data out of our backend service was extremely slow. So with this we immediately know where to look within the source code for this performance problem. You can drill down and find other requests this way, but I guess you already got the point: with this you can track down specific requests and find the root cause of different problems.

Now let's move on to another feature called performance testing. I go back to the main page of Application Insights, and if we scroll down we find performance testing here. I click it to get to the performance tests. What we can do here is really amazing: we can create a real performance test for our application. We can either pass a single URL, or we can also upload a Visual Studio web test file, because that way the flow through the application is defined within the test. Here you can select the region to generate load from, and you can also select the length of the test and the number of users that you want to simulate. Typically you don't do this on a production system; for performance testing it's better to use a pre-production system that basically replicates the same environment the production system uses.

I already created a load test, so let's see the result. I simulated 100 concurrent users against this specific URL, and as you can see, half of the requests failed. We also see the response time, so we immediately know how the application performs under this specific load. Now let's figure out why 50 percent of the requests failed. I go back to the main page of Application Insights and click on the failed requests. I know that this was the point when I ran the performance test, so these failed requests must be from the performance test. Again, you typically do this in an isolated environment,
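To make concrete what a load test like this does, here is a minimal, self-contained C# sketch that fires a number of concurrent requests and counts how many succeed and how many fail. This is not the Azure performance-testing feature from the video, just an illustration of the idea; the delegate-based `request` parameter is an assumption so the sketch stays runnable without a real URL.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public static class MiniLoadTest
{
    // Fire `users` concurrent invocations of `request` and count
    // successes and failures, similar in spirit to simulating
    // N concurrent users against one URL.
    public static async Task<(int Ok, int Failed)> RunAsync(
        int users, Func<Task<bool>> request)
    {
        // Start all "user" requests without awaiting in between,
        // so they run concurrently.
        var tasks = Enumerable.Range(0, users).Select(_ => request());
        var results = await Task.WhenAll(tasks);
        return (results.Count(r => r), results.Count(r => !r));
    }
}
```

In a real test the delegate would wrap an `HttpClient.GetAsync` call and map the status code to success or failure; here it is kept abstract so the behavior under load can be simulated deterministically.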
so that you don't mix real user data into the test results. As you can see, we have lots of requests to this specific page; this is exactly what we load tested. If we go to the dependencies, we see the two dependencies that we had, and both of them are failing. One is our backend API, but at the top we have the free service that we use to get the currency exchange rates, and as you can see, that is definitely the bottleneck here. So let's click on it: 824 times we got HTTP 429, and 429 means "Too Many Requests". So we had 824 failures, because we send a request to this service every time a user comes to our page, and as you can see, with 100 concurrent users this is already a problem.

So with Application Insights we can basically figure out where our application breaks under load. One option to fix this issue is, for example, to cache the result of this call for a specific time frame, and with that we can make sure that we call this service, let's say, not more than once an hour. The point is that we basically simulated high load, and with that we are able to figure out when and how our application breaks, and then we can further optimize the application.

Before we end this video, I would like to add a few things. Our application runs in Azure, but it's important to note that the application itself can in fact run anywhere; it doesn't have to run in Azure. Note that to use Application Insights you have to make sure that you can send the telemetry data to Azure, but that is the only restriction: you can use your own server or your own data center. Another important point is that if you deploy a .NET Core or an ASP.NET Core application to Linux, you can monitor it the exact same way as we did here. It is also a common misconception that Application Insights only works with .NET; this isn't true. There are SDKs for other frameworks and runtimes like Java or Node.
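As a rough illustration of the caching fix mentioned above, here is a minimal C# sketch of a time-based cache around the exchange-rate call. The type names, the one-hour window, and the `Func`-based fetch delegate are all illustrative assumptions, not code from the course; a real ASP.NET Core app might use `IMemoryCache` instead.

```csharp
using System;
using System.Collections.Concurrent;

public static class RateCache
{
    // A cached value plus the time it was fetched (illustrative shape).
    private record Entry(decimal Rate, DateTime FetchedAt);

    private static readonly ConcurrentDictionary<string, Entry> Cache = new();

    // Refresh at most once per hour, so 100 concurrent users no longer
    // translate into 100 calls to the rate-limited external service.
    private static readonly TimeSpan MaxAge = TimeSpan.FromHours(1);

    public static decimal GetRate(string currencyPair, Func<string, decimal> fetch)
    {
        if (Cache.TryGetValue(currencyPair, out var entry) &&
            DateTime.UtcNow - entry.FetchedAt < MaxAge)
        {
            return entry.Rate; // still fresh: no outbound request
        }

        // Stale or missing: call the external service once and cache the result.
        var rate = fetch(currencyPair);
        Cache[currencyPair] = new Entry(rate, DateTime.UtcNow);
        return rate;
    }
}
```

With this in place, repeated page loads within the hour hit the cache instead of the third-party service, which avoids the HTTP 429 responses seen in the load test.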
Yes, the focus here is on .NET Core and ASP.NET Core, but if you have, let's say, a Node.js application, you can still monitor it with Application Insights; for further details on other frameworks you can look at this website. So let's summarize what we saw in this demo. First we saw how to track slow requests based on response time, and then we saw how you can create performance tests with Application Insights. Many thanks for watching. In the next video you are going to see how you can track custom dependencies with Application Insights.