# Beginning Java Data Structures and Algorithms

- Understand some of the fundamental concepts behind key algorithms
- Express space and time complexities using Big O notation
- Correctly implement classic sorting algorithms such as merge and quicksort
- Correctly implement basic and complex data structures
- Learn about different algorithm design paradigms, such as greedy, divide and conquer, and dynamic programming
- Apply powerful string matching techniques and optimize your application logic
- Master graph representations and learn about different graph algorithms

In this course, you will learn the key concepts pertaining to algorithms and data structures. Popular algorithm examples, such as bubble, quick, and merge sort, help you relate algorithms to familiar programming scenarios, such as arranging numbers in ascending or descending order. Detailed explanations of data structures such as binary trees, hash tables, and graphs will help you understand how they improve a program's efficiency.

Algorithmic complexity is a way to describe the efficiency of an algorithm as a relation of its input. It can be used to describe various properties of our code, such as runtime speed or memory requirements. It's also a very important tool programmers should understand to write efficient software. In this topic, we will start by describing a scenario, introducing the topic, and then dive into the details of the various types of complexities and the different techniques to measure them.
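To make the idea concrete, here is a minimal Java sketch (my own illustration, not code from the course) contrasting an O(n) single pass with an O(n²) nested loop:

```java
public class ComplexityDemo {
    // O(n): one pass over the input; work grows linearly with its size.
    public static long sum(int[] a) {
        long s = 0;
        for (int x : a) s += x;
        return s;
    }

    // O(n^2): nested passes; work grows with the square of the input size.
    public static int countPairs(int[] a) {
        int pairs = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                pairs++;
        return pairs;
    }
}
```

Doubling the input roughly doubles the work of `sum`, but quadruples the work of `countPairs`; Big O notation captures exactly this growth behavior.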

In the previous lesson, we saw how the intersection problem can be improved by using a sorting algorithm. This is common with many problems. If the data is organized in an ordered manner, a more efficient algorithm can be developed. In this lesson, we will start by exploring three types of sorting techniques, which are bubble, quick, and merge sorting. Later, we will learn different ways to organize data using fundamental data structures.

Although quicksort is fast on average, it still has a worst-case time complexity of O(n²). In this topic, we shall examine another sorting algorithm, called merge sort, whose worst-case time complexity is O(n log n). Like quicksort, merge sort belongs to the divide and conquer class of algorithms.
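A minimal merge sort sketch in Java might look like the following (class and helper names are my own):

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Recursively splits the array in half, sorts each half, then merges them.
    public static void mergeSort(int[] a) {
        if (a.length < 2) return;                       // base case: already sorted
        int mid = a.length / 2;
        int[] left = Arrays.copyOfRange(a, 0, mid);
        int[] right = Arrays.copyOfRange(a, mid, a.length);
        mergeSort(left);
        mergeSort(right);
        merge(a, left, right);
    }

    // Merges two sorted halves back into the original array in linear time.
    private static void merge(int[] a, int[] left, int[] right) {
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            a[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length) a[k++] = left[i++];
        while (j < right.length) a[k++] = right[j++];
    }
}
```

Each level of recursion does O(n) merging work across O(log n) levels, which is where the O(n log n) bound comes from.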

Data structures are a way to organize data so that it is efficiently accessible for the problem you are trying to solve. Choosing the right data structure will depend on the type of problem you're trying to solve, the amount of data you need to organize, and the medium you use to store your data. We have already seen and used one example of a data structure: in the preceding topics, we made extensive use of arrays. Arrays are the most primitive of data structures. They provide access to your data using an index and are fixed in size, as opposed to dynamic data structures that can grow and make more space for data whenever it's needed.
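A short sketch of this contrast (my own illustration), using a plain Java array and the dynamic `ArrayList` from the standard library:

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayVsListDemo {
    // A plain array: its capacity is fixed at creation time.
    public static int fixedLength() {
        int[] fixed = new int[3];   // can never hold more than 3 elements
        fixed[0] = 10;              // constant-time access by index
        return fixed.length;
    }

    // An ArrayList: it grows automatically as elements are appended.
    public static int dynamicLength(int n) {
        List<Integer> dynamic = new ArrayList<>();
        for (int i = 0; i < n; i++) dynamic.add(i);
        return dynamic.size();
    }
}
```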

In the preceding lesson, we introduced the concept of data structures by looking at arrays, linked lists, queues, and stacks. In this lesson, we will use some of these primitive structures to build more complex ones. We'll start the lesson by looking at hash tables, which are useful data structures for fast key-value lookup. In the second part of the lesson, we will learn about a more complex data structure that supports range queries, called binary trees.

Hash tables provide us with a fast data structure for organizing key-value pairs and implementing a data dictionary. They are useful in a wide variety of applications due to their quick lookup and ease of use for in-memory data storage. Insertion and search operations have a typical average runtime complexity of O(1). Let's look at them in more detail.
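As a quick illustration using Java's built-in `HashMap` (the example data is mine):

```java
import java.util.HashMap;
import java.util.Map;

public class HashTableDemo {
    // Builds a small data dictionary mapping names to ages.
    public static Map<String, Integer> buildAges() {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);   // average O(1) insertion
        ages.put("bob", 25);
        return ages;             // lookups by key are also average O(1)
    }
}
```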

Like hash tables, binary search trees are fast lookup data structures for organizing key-value pairs and implementing the data dictionary operations. In addition to providing insert, search, and delete, binary search trees support efficient queries such as finding the minimum and maximum, successor, and predecessor. When using balanced binary search trees, insert and search operations have a worst-case runtime complexity of O(log n). This is a big theoretical improvement over the worst-case scenario of a hash table, which is O(n).
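Java's `TreeMap` is backed by a red-black tree, a kind of balanced binary search tree, so it can illustrate these ordered queries directly (the example values are mine):

```java
import java.util.TreeMap;

public class BstDemo {
    // Builds a small balanced BST keyed by integers.
    public static TreeMap<Integer, String> build() {
        TreeMap<Integer, String> tree = new TreeMap<>();
        tree.put(20, "b");
        tree.put(10, "a");
        tree.put(30, "c");
        return tree;
        // firstKey()/lastKey() give the minimum/maximum key;
        // higherKey(k) gives the successor of k, all in O(log n).
    }
}
```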

In the last lesson, we learned about hash tables and binary search trees. Today, we will start with the algorithm design paradigms. These design patterns can be seen as the generic methods or approaches that motivate the design of a class of algorithms. Just as an algorithm is a higher abstraction than a computer program, an algorithm design paradigm is an abstraction higher than an algorithm. The choice of an algorithm paradigm is an important one when designing an algorithm.

This lesson will focus on the following three algorithm paradigms:

- Greedy
- Divide and conquer
- Dynamic programming

Algorithms typically go through a sequence of steps, and at each step you have a set of choices. Greedy algorithms, as the name suggests, follow the heuristic of making the locally optimal choice at each step, in the hope of arriving at a global optimum. To better understand what we mean by this, let's introduce a problem.
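A classic illustration (my own example, not necessarily the course's) is making change with canonical coin denominations such as {25, 10, 5, 1}, where the greedy choice of always taking the largest coin that still fits happens to be globally optimal:

```java
public class GreedyCoinChange {
    // Greedy choice: always take as many of the largest remaining coin as fit.
    // This is optimal for canonical coin systems like {25, 10, 5, 1},
    // but not for arbitrary denominations.
    public static int minCoins(int amount, int[] coinsDescending) {
        int count = 0;
        for (int c : coinsDescending) {
            count += amount / c;   // take this coin as many times as possible
            amount %= c;           // and continue with what is left
        }
        return count;
    }
}
```

For amount 63, the greedy answer is 25 + 25 + 10 + 1 + 1 + 1, six coins.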

In lesson two, we introduced, among other sorting algorithms, merge and quick sort. A peculiarity of both algorithms is that they divide the problem into subproblems that are smaller instances of the same problem, solve each of the subproblems recursively, and then combine the solutions to the subproblems into the solution for the original problem. This strategy forms the basis of the divide and conquer paradigm.
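Binary search is another textbook divide and conquer algorithm; here is a recursive sketch (my own example): divide by comparing against the middle element, then conquer by recursing into the half that can still contain the key.

```java
public class DivideAndConquerSearch {
    // Returns the index of key in sorted[lo..hi], or -1 if absent.
    public static int binarySearch(int[] sorted, int key, int lo, int hi) {
        if (lo > hi) return -1;               // empty subproblem: not found
        int mid = lo + (hi - lo) / 2;         // divide at the middle
        if (sorted[mid] == key) return mid;
        return key < sorted[mid]
                ? binarySearch(sorted, key, lo, mid - 1)   // conquer left half
                : binarySearch(sorted, key, mid + 1, hi);  // conquer right half
    }
}
```

Each recursive call halves the search range, giving O(log n) comparisons.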

After greedy and divide and conquer, we will turn our attention to dynamic programming. Dynamic programming is an algorithm design paradigm that also attempts to solve optimization problems by combining the solutions to subproblems. Unlike divide and conquer, dynamic programming applies when the subproblems overlap and exhibit optimal substructure.
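A standard first illustration of overlapping subproblems (my example, not necessarily the course's) is computing Fibonacci numbers bottom-up, storing each subproblem's answer so it is computed only once:

```java
public class DpFibonacci {
    // Bottom-up dynamic programming: each fib(i) is computed once and reused.
    // A naive recursive version would recompute the same values exponentially often.
    public static long fib(int n) {
        if (n < 2) return n;
        long[] table = new long[n + 1];   // table[i] holds the answer to subproblem i
        table[1] = 1;
        for (int i = 2; i <= n; i++) {
            table[i] = table[i - 1] + table[i - 2];
        }
        return table[n];
    }
}
```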

String matching algorithms are quite common in text-editing programs. These kinds of programs frequently need to find all occurrences of a pattern in the text, where the text is usually the document being edited and the pattern is a word supplied by the user. Since text-editing programs aim to be responsive, having efficient algorithms to solve the string matching problem is fundamental.

The Boyer-Moore string searching algorithm was introduced by Robert S. Boyer and J. Strother Moore in 1977, and builds upon the naive search algorithm by intelligently skipping certain sections of the text. One key feature of the algorithm is that it matches the pattern from right to left, instead of left to right, using to its advantage a couple of shift rules that improve its running time. To understand the effect of these rules, let's build the Boyer-Moore algorithm from our naive search algorithm.
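A simplified sketch of the idea follows, using only the bad-character shift rule (the full algorithm also uses a good-suffix rule; this version assumes 8-bit characters, and the class name is mine):

```java
import java.util.Arrays;

public class BoyerMooreDemo {
    // Simplified Boyer-Moore: right-to-left matching plus the bad-character rule.
    public static int indexOf(String text, String pattern) {
        int n = text.length(), m = pattern.length();
        if (m == 0) return 0;
        int[] last = new int[256];            // rightmost index of each char in pattern
        Arrays.fill(last, -1);
        for (int i = 0; i < m; i++) last[pattern.charAt(i)] = i;

        int s = 0;                            // current shift of the pattern over the text
        while (s <= n - m) {
            int j = m - 1;                    // compare right to left
            while (j >= 0 && pattern.charAt(j) == text.charAt(s + j)) j--;
            if (j < 0) return s;              // full match found at shift s
            // Shift so the mismatched text character lines up with its
            // rightmost occurrence in the pattern (always moving at least 1).
            s += Math.max(1, j - last[text.charAt(s + j)]);
        }
        return -1;                            // no occurrence
    }
}
```

When the mismatched character does not occur in the pattern at all, the pattern jumps past it entirely, which is what makes Boyer-Moore sublinear in practice.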

Even though the Boyer-Moore string search algorithm is the standard benchmark in the practical string-search literature, there are other string matching algorithms that are suitable for different purposes. In this short topic, we present the following three, which are among the most famous:

- Rabin-Karp
- Knuth-Morris-Pratt
- Aho-Corasick

However, we will only provide an implementation of Rabin-Karp.
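A possible Rabin-Karp sketch in Java (the hash base, modulus, and names are my own choices): it compares rolling hashes first and verifies the actual characters only on a hash hit.

```java
public class RabinKarp {
    private static final int BASE = 256;             // alphabet size for the hash
    private static final int MOD = 1_000_000_007;    // large prime modulus

    public static int indexOf(String text, String pattern) {
        int n = text.length(), m = pattern.length();
        if (m == 0) return 0;
        if (m > n) return -1;

        long high = 1;                               // BASE^(m-1) mod MOD
        for (int i = 0; i < m - 1; i++) high = (high * BASE) % MOD;

        long pHash = 0, tHash = 0;                   // hashes of pattern and window
        for (int i = 0; i < m; i++) {
            pHash = (pHash * BASE + pattern.charAt(i)) % MOD;
            tHash = (tHash * BASE + text.charAt(i)) % MOD;
        }

        for (int s = 0; s <= n - m; s++) {
            // On a hash match, verify characters to rule out collisions.
            if (pHash == tHash && text.regionMatches(s, pattern, 0, m)) return s;
            if (s < n - m) {
                // Roll the hash: drop text[s], append text[s + m].
                tHash = ((tHash - text.charAt(s) * high % MOD + MOD) * BASE
                        + text.charAt(s + m)) % MOD;
            }
        }
        return -1;
    }
}
```

The rolling update makes each window's hash O(1) to compute, so the average running time is O(n + m).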

Graph problems are very common in computer science, and they pervade many real-life applications. Anything that can be represented by entities and their relationships can ultimately be modeled as a graph. How we connect with friends on social media, how route-planning applications find the shortest route, and how e-commerce websites provide us with recommendations are all examples of problems modeled by graphs.

There are usually two standard ways to represent a graph G = (V, E) in a computer program:

- As a collection of adjacency lists
- As an adjacency matrix

You can use either way to represent both directed and undirected graphs. We'll start by looking at the adjacency list representation.
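A minimal adjacency list sketch for an undirected graph (the class design is my own):

```java
import java.util.ArrayList;
import java.util.List;

public class AdjacencyListGraph {
    // One list per vertex, holding the indices of its neighbors.
    private final List<List<Integer>> adj = new ArrayList<>();

    public AdjacencyListGraph(int vertices) {
        for (int i = 0; i < vertices; i++) adj.add(new ArrayList<>());
    }

    // Undirected edge: record it in both endpoints' lists.
    public void addEdge(int u, int v) {
        adj.get(u).add(v);
        adj.get(v).add(u);
    }

    public List<Integer> neighbors(int u) {
        return adj.get(u);
    }
}
```

For sparse graphs this uses O(V + E) space, versus O(V²) for an adjacency matrix.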

A common activity on a graph is visiting each of its vertices in a given order. We will start by introducing breadth-first search, and then follow with depth-first search. Both of these techniques form the archetype for many important graph algorithms, as we will see later with cycle detection and Dijkstra's algorithm for single-source shortest paths.
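A breadth-first search sketch over an adjacency list (my own illustration): a FIFO queue ensures vertices are visited level by level, in increasing distance from the source.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class BfsDemo {
    // Returns the vertices reachable from source, in BFS visit order.
    public static List<Integer> bfs(List<List<Integer>> adj, int source) {
        List<Integer> order = new ArrayList<>();
        boolean[] visited = new boolean[adj.size()];
        Queue<Integer> queue = new ArrayDeque<>();
        visited[source] = true;
        queue.add(source);
        while (!queue.isEmpty()) {
            int u = queue.poll();          // take the oldest discovered vertex
            order.add(u);
            for (int v : adj.get(u)) {     // discover its unvisited neighbors
                if (!visited[v]) {
                    visited[v] = true;
                    queue.add(v);
                }
            }
        }
        return order;
    }
}
```

Swapping the queue for a stack (or recursion) would turn this into depth-first search.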

A shortest path is a path between two vertices such that the sum of the weights of the edges that constitute the path is minimized. The shortest path problem has many applications in the real world, from finding directions on mapping applications to minimizing the moves needed to solve a puzzle. In this topic, we shall look at two different strategies for computing shortest paths: one that finds the shortest paths from a single source to every other vertex in the graph, and another that finds the shortest paths between all pairs of vertices in a graph.
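For the single-source case, one common strategy is Dijkstra's algorithm; here is a sketch (my own illustration) over an adjacency matrix where weights are assumed positive and 0 means "no edge":

```java
import java.util.Arrays;
import java.util.PriorityQueue;

public class DijkstraDemo {
    // Computes shortest distances from source to every vertex.
    public static int[] shortestPaths(int[][] weight, int source) {
        int n = weight.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;
        // Min-heap of {vertex, distance}, ordered by tentative distance.
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[1] - b[1]);
        pq.add(new int[]{source, 0});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int u = top[0];
            if (top[1] > dist[u]) continue;      // stale queue entry, skip
            for (int v = 0; v < n; v++) {        // relax every outgoing edge
                if (weight[u][v] > 0 && dist[u] + weight[u][v] < dist[v]) {
                    dist[v] = dist[u] + weight[u][v];
                    pq.add(new int[]{v, dist[v]});
                }
            }
        }
        return dist;
    }
}
```

For the all-pairs variant, a different strategy such as Floyd-Warshall is the usual choice.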

A prime number is a natural number greater than one whose only divisors are one and the number itself. Prime numbers play a very important role in the fundamental theorem of arithmetic: every natural number greater than one is either a prime or a product of primes. Nowadays, number-theoretic algorithms are widely used, mainly due to cryptographic schemes based on large prime numbers. Most of those cryptographic schemes are based on the fact that we can efficiently find large primes, but we cannot factor the product of those large primes efficiently.
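A simple trial-division primality test illustrates the definition (my own sketch; it checks divisors only up to the square root, which is fine for small numbers but far too slow for the cryptographic-size primes mentioned above, where probabilistic tests such as Miller-Rabin are used instead):

```java
public class PrimeDemo {
    // Trial division: n is prime iff no d with 2 <= d <= sqrt(n) divides it.
    public static boolean isPrime(long n) {
        if (n < 2) return false;           // 0 and 1 are not prime
        if (n % 2 == 0) return n == 2;     // the only even prime is 2
        for (long d = 3; d * d <= n; d += 2) {
            if (n % d == 0) return false;  // found a nontrivial divisor
        }
        return true;
    }
}
```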

- Basic knowledge of Java, mathematics, and object-oriented programming techniques

Learning about data structures and algorithms gives you better insight into how to solve common programming problems. Most of the problems programmers face every day have already been solved, tried, and tested. By knowing how these solutions work, you can ensure that you choose the right tool when you face these problems.

This course teaches you tools that you can use to build efficient applications. It starts with an introduction to algorithms and Big O notation, then explains bubble sort, merge sort, quicksort, and other popular sorting algorithms. You'll also learn about data structures such as binary trees, hash tables, and graphs. The course progresses to advanced concepts, such as algorithm design paradigms and graph theory. By the end of the course, you will know how to correctly implement common algorithms and data structures within your applications.

**About the Author**

**Kristian Secor** has been in the industry for quite a while. He holds an M.S. focusing on web development and an Ed.D. focusing on educational technology. He has been developing for about 20 years and has been teaching at the university level for about 16 years, covering frontend, backend, and mobile application courses. Currently, he is working as the Program Head for the Web Development Department at San Diego Mesa College. He is also an instructor for both user experience and frontend development at U.C. San Diego Extension.

- If you want to better understand common data structures and algorithms by following code examples in Java and improve your application efficiency, then this is the course for you.