Analyzing Logs with Kibana Dashboards

A free video tutorial from Sundog Education by Frank Kane

Learn more from the full course

Elasticsearch 8 and the Elastic Stack: In Depth and Hands On

Complete Elasticsearch tutorial - search, analyze, and visualize big data with Elasticsearch, Kibana, Logstash, & Beats

15:26:47 of on-demand video • Updated May 2024

Install and configure Elasticsearch 7 on a cluster
Create search indices and mappings
Search full-text and structured data in several different ways
Import data into Elasticsearch using various techniques
Integrate Elasticsearch with other systems, such as Spark, Kafka, relational databases, S3, and more
Aggregate structured data using buckets and metrics
Use Logstash and the "ELK stack" to import streaming log data into Elasticsearch
Use Filebeat and the Elastic Stack to import streaming data at scale
Analyze and visualize data in Elasticsearch using Kibana
Manage operations on production Elasticsearch clusters
Use cloud-based solutions including Amazon's Elasticsearch Service and Elastic Cloud
Okay. So now that we've actually got Filebeat up and running and imported some access logs into Elasticsearch, let's go back and use Kibana to visualize that data using some of the dashboards that come with Filebeat. It'll make life a lot easier, and it's pretty fun, too. So let me show you how to get started. First, we need to install some dashboards for use with Apache access logs and whatnot, to give us nice, pretty ways of analyzing that log data. Easy to do. Let's cd into where Filebeat lives, under /usr/share/filebeat. And from here we're going to say sudo filebeat setup --dashboards, and that will just import a bunch of premade dashboards for Kibana. We'll wait for that to come down. All right, a couple of minutes later, that succeeded, so let's go and play with it. Let's go back to Kibana at 127.0.0.1:5601, and I'll select my default workspace. From here, we're going to go to Stack Management and add a new data view for that new data we just imported in the previous activity. So we go to Data Views under Kibana, and it looks like filebeat-* is already in here. So let's check it out. Cool. The timestamp field was selected already, so it looks like it just did this for me automatically. That's very kind of it, isn't it? All right, so let's play with this. Let's go to the Discover tab under Analytics, and we'll change our view from Shakespeare to filebeat-*. Now, it's a pretty common situation when you're dealing with log data to have this scary message come up saying you have no results. It kind of sounds like there's no data there, but there is. The problem, as it hints here, is that the time range just isn't right. By default, Kibana only looks at the last 15 minutes of data, so unless I just imported fresh live log data from the last 15 minutes, I'm not going to see anything. Easy to fix.
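The dashboard-import step above boils down to a couple of commands. A minimal sketch, assuming a .deb/.rpm package install of Filebeat (the tar.gz layout uses different paths) with Elasticsearch and Kibana running on their default local ports:

```shell
# Filebeat's home directory under a package install (path is an assumption;
# adjust for your install method):
cd /usr/share/filebeat

# Load the prebuilt Kibana dashboards that ship with Filebeat.
# Kibana must be up and reachable for this to succeed:
sudo filebeat setup --dashboards
```

If Kibana isn't on localhost:5601, you'd point Filebeat at it via `setup.kibana.host` in filebeat.yml before running this.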
So let's go ahead and change that date range to something that's actually covered by this set of logs. This is a very old log file; it goes back to 2017. So I'm going to change that date range to go back to the middle of 2017. To set an absolute range, I'm just going to click on where it says "15 minutes ago" and change that from relative to absolute. For the start time, which is what I'm currently editing, I'm going to change this to 2017. And this log actually starts, it turns out, on April 29th, so let's change that to April and set that to 29. Then where it says "now", I'm going to change the end time to an absolute time as well, and that is also going to be 2017, on May 6th, it turns out. So we'll set that too. All right, and let's click on Update. And now we have data. Much better. And look, it just kind of did the right thing, which is kind of neat. By default, it's giving me this really handy graph of how much traffic, how many hits, I'm getting in each time period. Here it looks like it's basically every three hours by default, so I can see what the peaks and valleys are for my traffic at three-hour intervals within that date range I just specified. That's helpful, and again, I didn't really have to do any work for that; it just did it automatically. And I can drill in on any given peak if I want to see what was going on there, and that will give me an even tighter range of data. Looks like it's on five-minute intervals now. But let's change that date range back to what it was, starting on April 29th of that year. Also note that the individual log entries are here as well for you to explore if you so desire, so you can dig into what the actual data is. What else can we do here? Let's try to narrow down errors on the website and figure out where they came from. So let's add a filter for status codes, or just explore the status codes for now.
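Under the hood, that absolute time range is just a range filter on the `@timestamp` field. A rough curl equivalent of what Kibana queries, assuming the default `filebeat-*` index pattern and a local unsecured cluster:

```shell
# Fetch one sample document within the absolute date range set in Kibana.
# Index pattern and security settings are assumptions about this demo setup:
curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/filebeat-*/_search?size=1' -d '
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2017-04-29T00:00:00",
        "lte": "2017-05-06T23:59:59"
      }
    }
  }
}'
```

The `hits.total` value in the response confirms there really is data in that window, which is the same thing the Discover histogram is showing you.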
So how do we find the status code? We've got all these 76 different field names here to choose from. Probably easiest just to search for "status" and see what we find. There it is: http.response.status_code. If we just click on that, we automatically get this convenient histogram of how those status codes break down. And fortunately, 500s didn't even make the top four here. It says top five, but for some reason it's only showing me four. Let's go ahead and visualize that while we're at it, huh? So let's add a filter, and again we're dealing with the status code. Wow, so many fields to choose from. There it is: http.response.status_code. And we want that to be 500. Let's save that filter. We can see very easily here that there's this one spike of activity in the early-morning hours on May 1st, where I got a bunch of 500s. So let's dig into that. Let's click on that column and dig in further, and we can narrow it down to around 5:40 a.m.; that's when it happened. So let's try and figure out what was going on there. Let's go back to Discover, discard changes to get out of Lens, and we're back in that time range. I'm going to add that filter back in. We'll search for that status field, which was http.response.status_code, and again set that to 500. Just narrow this view down a little bit, and there's our spike again. We can click into that, and here we can see a couple of examples of where it came from. Well, this one was a legitimate 500 error: they were requesting an actual path to a real webpage on my website and the server crashed. So that might be indicative of an actual issue on my website. This one, though, was from someone trying to log in, and I certainly wasn't trying to log in at 5 a.m. on that day.
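The same drill-down can be expressed directly in Query DSL. A sketch, assuming the ECS field names Filebeat's Apache module produces and the default index pattern: filter to status 500, then bucket the matches into a date histogram like the one Discover draws:

```shell
# Count 500 responses in three-hour buckets over the log's date range:
curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/filebeat-*/_search?size=0' -d '
{
  "query": {
    "bool": {
      "filter": [
        { "term":  { "http.response.status_code": 500 } },
        { "range": { "@timestamp": { "gte": "2017-04-29", "lte": "2017-05-07" } } }
      ]
    }
  },
  "aggs": {
    "errors_over_time": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "3h" }
    }
  }
}'
```

Each bucket in `aggregations.errors_over_time.buckets` corresponds to one bar in the Kibana histogram, so a spike like the 5:40 a.m. one shows up as one bucket with an outsized `doc_count`.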
So that is probably indicative of some sort of attack on my site, which is not cool. That's one way to drill in. Another way would be to use those dashboards we imported, so let's play with those; they're even more fun. Let's go back to the menu here and go to Dashboard, and look at all this that wasn't there before. These are all the new dashboards we added as the first thing we did in this lecture. So let's find the one about... gosh, there are so many. Let's just browse. We want Filebeat, Apache, and one that visualizes error codes. There it is: [Filebeat Apache] Access and error logs. Let's check it out. And, well, look at that. That's interesting. Let's open up our time range again here: it started on April 29th and ended on May 5th, which is close enough. Let it regenerate, and look at all this cool data. We have this geographical map showing me where my traffic's coming from. This is very much like Google Analytics or something, except we're running it ourselves on our own little computer here, for free. Good deal. You can see that we have a lot of traffic coming from the United States, a grand total of 276 unique IPs, and a lot of it for some reason is coming from Washington, or that part of the country anyway. If we zoom out a little, we can see there's also a bunch of traffic coming from what looks like China and Europe. Anything else interesting? It's pretty much scattered all over the globe, really. And again, we can break down the traffic here; we have these handy charts and pie charts explaining what's going on. We can look at our distribution of operating systems and browsers. Looks like a lot of Firefox, which is surprising, actually, and a little bit of Chrome. Granted, this is 2017, so it was a different world then. And a lot of traffic from Windows, which is not too surprising. Very cool.
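That "unique IPs" metric on the dashboard is essentially a cardinality aggregation. A sketch, assuming the ECS `source.ip` field that Filebeat's Apache module writes (the dashboard may use a related field such as `source.address`):

```shell
# Approximate count of distinct client IPs across the whole log:
curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/filebeat-*/_search?size=0' -d '
{
  "aggs": {
    "unique_ips": {
      "cardinality": { "field": "source.ip" }
    }
  }
}'
```

Note that `cardinality` is an approximate count (HyperLogLog++ under the hood), which is why large dashboards can compute it cheaply.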
So let's narrow this down as well. It turns out we can highlight specific status codes here, specific response codes, right on that graph. There's an awful lot of 301s. On top of that, you can enable or disable those dynamically. Say you want to dig into one of them in more depth: we can actually click on one of these little sections of the bar chart and explore it further. So let's see what was going on where we had a bunch of 301s for some reason. Where were those coming from? It automatically offers some filters to apply in that case, which is quite helpful. So I can narrow things down automatically by that timestamp and that status code, to dig into what was going on in that specific time slice. Let's apply that. You can see I'm now looking at a much tighter time interval, five minutes per bucket, and there's a pretty even distribution of these 301 responses. They kind of ramped up and peaked a little after midnight, it seems, and then tailed off. But what are these things? If we go down here, we can explore what those URLs were. Looks like a bunch of them came from just slash, people hitting the home page, and also requests for my RSS feed. Okay, that's interesting. So it would seem that some crawler was probably trying to hit my RSS feed and download my latest blog posts automatically, and they were getting back 301 responses from my server. So already I have some insight into what generated those responses and what caused them. And do I care about them? Well, maybe not. Someone's just trying to crawl my site, basically, and doing who knows what with it. Let's clear that filter out and reset our date range, and we're back to our full view. Very cool. But you can see there's a lot of power here, a lot of information to dig into.
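Finding which URLs drove those 301s is a terms aggregation on top of a status-code filter. A sketch, again assuming ECS field names from Filebeat's Apache module (`url.original` holds the requested path):

```shell
# Top 5 requested paths among the 301 responses:
curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/filebeat-*/_search?size=0' -d '
{
  "query": { "term": { "http.response.status_code": 301 } },
  "aggs": {
    "top_urls": {
      "terms": { "field": "url.original", "size": 5 }
    }
  }
}'
```

The `top_urls.buckets` list in the response would surface the home page and RSS-feed paths the dashboard showed, each with its hit count.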
And this is all just out of the box in those standard dashboards. They have one already made for Apache, and it's pretty neat. I do encourage you to fiddle around with this some more, explore it to see what more data you can glean, and just play around to get more familiar with this dashboard. And to make sure you do, in the next lecture I'll have a little exercise challenge for you: digging into the answer to a specific question. So stick with me, and we'll dive into that next.