Building a Search Engine

All algorithms and secrets revealed
2.0 (27 ratings)
700 students enrolled
  • Lectures 52
  • Length 5.5 hours
  • Skill Level All Levels
  • Languages English
  • Includes Lifetime access
    30 day money back guarantee!
    Available on iOS and Android
    Certificate of Completion


About This Course

Published 9/2013 English

Course Description

With "Building a Search Engine", you will learn everything about search engines, even if you've never built one before!

The full course has several video lectures, divided into chapters. Each chapter takes you a level deeper into search engine development, starting from the basics and working up to the most popular algorithms in use today.

"Building a Search Engine" will give you a new perspective on how the Internet works, and after you complete the course you will be able to create your own search engine with the latest technology and algorithms. Hope you enjoy!

NOTE: To keep you up to date in the world of search engine development, all chapters will be updated regularly with new lectures, projects, quizzes, and any changes in future versions of the programming languages covered in the course.

Why Learn Search Engine Development?

The Internet is the fastest-growing and largest platform ever created for people to learn, communicate, share, and build businesses of any kind, and all of this in just 15 years! It is estimated that in the next 2 or 3 years more than 80% of companies around the world will become Internet-dependent, creating huge demand for search engine developers. As the World Wide Web grows, search engines need to be upgraded proportionally.

Learning search engine development gives you a head start over your competitors by teaching you the most recent web technologies and how to apply them to your future projects. Knowing search engine development gives you the ability to control and create almost anything on the web.

How will this course help you get a job?

Search engines are currently among the fastest-growing technologies on the Internet. Google makes thousands of changes every year and employs a large number of engineers to make its search engine more efficient as the structure of the web grows larger and more complicated. Other companies employ search engine experts to optimize their websites so they appear at the top of search results.

I promise you have never had a learning experience quite like this one.

Welcome to "Building a Search Engine"

What are the requirements?

  • Internet
  • OS X, Windows or Ubuntu

What am I going to get from this course?

  • Covers all the major algorithms used in search engines
  • Introduces you to Big Data technologies and explains how to use them to build a search engine
  • Course contents are updated regularly
  • Reveals all the secret spam-fighting techniques used by Google

Who is the target audience?

  • Needs to know the basics of networking
  • Needs to know the basics of web development
  • Should be familiar with at least one programming language

What do you get with this course?

Not for you? No problem.
30 day money back guarantee.

Forever yours.
Lifetime access.

Learn on the go.
Desktop, iOS and Android.

Get rewarded.
Certificate of completion.


Section 1: Introduction to Building a Search Engine

This lecture gives you an overview of the course: what you are going to learn and why it matters for your work.

Things you will learn:

1. Search engine architecture (crawler, indexer, query processor, and parser).

2. Web crawler algorithms and efficiency.

3. Spider traps.

4. Web scraping.

5. Spam fighting.

6. Replication and sharding.

7. HTTP attacks.

8. Query understanding.

9. Spell-checking algorithms and using Apache Solr.

10. Autocomplete.

11. Big data.

12. SEO.

And much more.


This lecture tells you what you need to know to understand this course:

1. Basics of web development.

2. Basics of networking.

3. At least one programming language.

4. Basics of data structures and algorithms.

5. Basics of database management systems.

Section 2: Getting started with Search Engine

Introduces you to search engines. Discusses the difference between a search engine and a web search engine, gives an overview of the World Wide Web, and briefly defines and explains these topics.


Features of a good web search engine:

1. Indexes a large number of documents.

2. Prevents spider traps.

3. Ranks web pages using the PageRank algorithm.

4. Understands user queries.

5. Autocomplete.

6. Query clustering.

7. Better web-scraping techniques.

And much more.


This lecture gives you a brief history of search engines. It will help you get motivated for the lectures that follow.


This lecture explains the difference between a web search engine and a web directory. Google, Bing, and Yahoo are web search engines, while dmoz, ewd, etc. are web directories. Once upon a time, Yahoo itself was a web directory.


This lecture explains the difference between a metasearch engine and a web search engine. DuckDuckGo is a metasearch engine, while Google is a search engine. It's easy to create a metasearch engine: it requires fewer resources and can be rapidly created and deployed.


This lecture explains one of the most important features integrated into most search engines, called social search. This feature helps you find more organic results and makes search more meaningful. Integrating social search requires a lot of users who have put their personal information into your search engine. Social features can also be enabled using IP tracking and understanding user queries. Social search is an application of machine learning.


The filter bubble is also a search engine feature. Social search looks for related documents by running algorithms on top of the web graph, while the filter bubble uses clicks, location, bookmarks, favorites, and much more to rate and display documents.


Instead of building a search engine from scratch, it is often a good choice to use an open-source search engine. It saves time and gets your search engine built quickly. There are a large number of open-source search engines, and most of them are well documented.


This lecture gives you an overview of the common architectures used in modern search engines.

Components of a search engine:

1. Parser

2. Crawler

3. Indexer

4. Query Processor

A bad design in any one component leads to a bad search engine. Every component needs to be designed carefully and tested in every situation before deployment.
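The four-component pipeline above can be sketched end to end in a few lines. Everything here is illustrative stand-in code (the crawler returns canned HTML instead of downloading anything), not a production architecture:

```python
import re

def crawler(seed_urls):
    """Stand-in for a real downloader: returns canned HTML per URL."""
    return {url: f"<html><body>content of {url}</body></html>" for url in seed_urls}

def parser(pages):
    """Strip markup and tokenize each downloaded page."""
    return {url: re.sub(r"<[^>]+>", " ", html).split() for url, html in pages.items()}

def indexer(parsed):
    """Build an inverted index: term -> set of URLs containing it."""
    index = {}
    for url, terms in parsed.items():
        for term in terms:
            index.setdefault(term, set()).add(url)
    return index

def query_processor(index, term):
    """Look up a single term in the index."""
    return index.get(term, set())

pages = crawler(["http://example.com/a", "http://example.com/b"])
index = indexer(parser(pages))
print(query_processor(index, "content"))  # both URLs contain "content"
```

Each stage hands a plain dictionary to the next, which is exactly why a weak link anywhere (a crawler that misses pages, a parser that mangles text) degrades everything downstream.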

3 questions

Covers Sections 1 and 2

Section 3: Web Crawler

A web crawler is the component of a search engine that downloads information from the World Wide Web. Features of a good web crawler:

1. Downloads a large number of documents.

2. Takes less CPU time.

3. Consumes less bandwidth.


Two redirect status codes in the HTTP protocol matter most to crawlers:

1. 301 -> The web server responds with a 301 if the file has moved permanently.

2. 302 -> The web server responds with a 302 if the file has moved temporarily.
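A crawler might act on these two codes roughly as follows. The function name and the `permanent_moves` cache are hypothetical, and in a real crawler the status and Location would come from an HTTP response:

```python
def handle_redirect(url, status, location, permanent_moves):
    """Return the URL to fetch next, recording 301s but not 302s."""
    if status == 301:
        # Moved Permanently: remember the new address for future crawls.
        permanent_moves[url] = location
        return location
    if status == 302:
        # Found (moved temporarily): follow it, but keep the old URL on file.
        return location
    return url  # not a redirect

cache = {}
print(handle_redirect("http://old.example/a", 301, "http://new.example/a", cache))
print(cache)   # the 301 target is remembered
print(handle_redirect("http://old.example/b", 302, "http://tmp.example/b", cache))
print(cache)   # unchanged: the 302 target is not cached
```

The asymmetry is the whole point: a 301 should update the crawler's URL database, while a 302 should be followed once without forgetting the original address.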


DNS caching is a crawler optimization feature. It helps tackle these problems:

1. Bandwidth

2. Time
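A minimal version of such a cache might look like this. The `DNSCache` class and the hard-coded address are illustrative, and real caches also expire entries; the resolver is injectable so the demo runs without network access:

```python
import socket

class DNSCache:
    """Cache hostname -> IP lookups so each host is resolved only once."""

    def __init__(self, resolver=socket.gethostbyname):
        self._resolver = resolver
        self._cache = {}
        self.lookups = 0  # how many real resolutions we performed

    def resolve(self, host):
        if host not in self._cache:
            self.lookups += 1
            self._cache[host] = self._resolver(host)
        return self._cache[host]

# Fake resolver so the demo is self-contained and offline.
dns = DNSCache(resolver=lambda host: "93.184.216.34")
dns.resolve("example.com")
dns.resolve("example.com")  # served from cache, no second lookup
print(dns.lookups)  # 1
```

Since a crawler may fetch thousands of pages from the same host, skipping the repeated DNS round-trips saves both time and bandwidth, the two problems listed above.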


A good crawler fetches a large number of web pages in less time while consuming less bandwidth. Two common approaches:

1. Web pages are downloaded by multiple threads.

2. Web pages are downloaded using asynchronous sockets.
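The first approach, a pool of downloader threads, can be sketched with the standard library. `fetch` here is a stand-in for a real HTTP download so the example runs offline:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    """Stand-in for an HTTP GET; a real crawler would do network I/O here."""
    return f"<html>page at {url}</html>"

urls = [f"http://example.com/page{i}" for i in range(10)]

# Four worker threads download URLs concurrently; map preserves order.
with ThreadPoolExecutor(max_workers=4) as pool:
    pages = dict(zip(urls, pool.map(fetch, urls)))

print(len(pages))  # 10
```

Because downloading is I/O-bound, threads (or async sockets) let the crawler overlap many slow network waits instead of fetching one page at a time.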

Data Compression and Caching

robots.txt and sitemap.xml are two very important files every website should include in its root directory.

1. robots.txt contains rules for crawlers.

2. sitemap.xml gives the crawler a map of the website's directory structure.
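Python's standard library can parse robots.txt rules directly; the file contents below are made up for the example, whereas a real crawler would first download http://example.com/robots.txt:

```python
import urllib.robotparser

robots_txt = """\
User-agent: *
Disallow: /private/
Crawl-delay: 2
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("MyCrawler", "http://example.com/index.html"))  # True
print(rp.can_fetch("MyCrawler", "http://example.com/private/x"))   # False
```

A polite crawler checks `can_fetch` before every download and honors any Crawl-delay the site declares.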


There are three policies a web crawler must follow:

1. Selection policy

2. Re-visit policy

3. Politeness policy
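The politeness policy can be sketched as a per-host delay. The names and the one-second default are illustrative; the clock and sleep functions are injectable so the demo runs instantly without real waiting:

```python
import time

def polite_wait(host, last_access, delay=1.0, now=time.monotonic, sleep=time.sleep):
    """Block until at least `delay` seconds have passed since the last
    request to `host`, then record this access."""
    wait = delay - (now() - last_access.get(host, float("-inf")))
    if wait > 0:
        sleep(wait)
    last_access[host] = now()

# Demo with a fake clock: two requests to the same host 0.2 s apart.
waits = []
clock = iter([0.0, 0.0, 0.2, 0.2]).__next__
access = {}
polite_wait("example.com", access, delay=1.0, now=clock, sleep=waits.append)
polite_wait("example.com", access, delay=1.0, now=clock, sleep=waits.append)
print(waits)  # the second call had to wait 0.8 s
```

The selection and re-visit policies decide *what* and *when* to crawl; this per-host throttle is what keeps the crawler from hammering any single server.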


The User-Agent field of the HTTP protocol is used by the crawler to introduce itself to the web server. Web crawler identification helps web servers make many important decisions.

Crawling the deep web

Spider traps are techniques that can get a web crawler into trouble. A good web crawler should avoid all kinds of spider traps. Every day, hackers find new spider-trap techniques, and you should be able to catch them and update your crawler code to escape the traps.
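One classic trap is an endlessly repeating path such as /a/b/a/b/a/b/. A hedged heuristic against it: reject URLs whose path is too deep or repeats the same segment too often. The function name and thresholds below are illustrative, not standard values:

```python
from urllib.parse import urlparse

def looks_like_trap(url, max_depth=8, max_repeats=3):
    """Flag URLs with suspiciously deep or repetitive paths."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    if len(segments) > max_depth:
        return True  # deeper than any sane site structure
    return any(segments.count(s) > max_repeats for s in set(segments))

print(looks_like_trap("http://example.com/a/b/c"))        # False
print(looks_like_trap("http://example.com/x/x/x/x/x/x"))  # True
```

Heuristics like this are necessarily imperfect, which is why trap detection has to keep evolving along with the traps themselves.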

Popular libraries
Open source web crawlers
Crawler questions
5 questions
Section 4: Parser

The parser is the component of a search engine responsible for web scraping. A good parser should be able to handle many document types, such as:

1. HTML

2. PDF

3. DOC

4. PPT

5. And many more

It should also detect and reject spam, such as:

1. Invisible text.

2. Advertisement text.


Parse only what your users want. For example, if you are building an MP3 search engine, there is no need to download and parse PDF files; you only need to download .mp3 URLs. This decision is very important.
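The "parse only what your users want" idea can be sketched with Python's built-in HTML parser: extract the links from a page and keep only the .mp3 URLs. The class name is illustrative:

```python
from html.parser import HTMLParser

class Mp3LinkParser(HTMLParser):
    """Collect only href targets ending in .mp3, ignoring everything else."""

    def __init__(self):
        super().__init__()
        self.mp3_links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.endswith(".mp3"):
                    self.mp3_links.append(value)

page = '<a href="/song.mp3">song</a> <a href="/paper.pdf">paper</a>'
p = Mp3LinkParser()
p.feed(page)
print(p.mp3_links)  # ['/song.mp3']
```

Filtering at parse time like this means the crawler never wastes bandwidth downloading document types the search engine will never index.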

Spam fighting
Open source parsers
Section 5: Indexing

An index is a data structure into which documents can be inserted quickly and from which they can be retrieved quickly. Index data structures are used in almost all types of applications. A PDF reader indexes the whole document and finds the page number when you search for a word in the document. A search engine indexes documents in the same way.


Index design factors:

1. Merge factors.

2. Storage techniques.

3. Index size.

4. Lookup speed.

5. Fault tolerance.


The inverted index is the index data structure most widely used in search applications to find documents matching a piece of text. Understanding the inverted index is very important.
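A toy inverted index is easy to build: map each term to the set of documents containing it, and answer an AND query by intersecting the posting sets. This is a sketch, not a production index (no stemming, scoring, or compression):

```python
from collections import defaultdict

def build_index(docs):
    """docs: {doc_id: text}. Returns {term: set of doc_ids}."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def and_query(index, *terms):
    """Documents containing ALL query terms: intersect the posting sets."""
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

docs = {1: "web search engine", 2: "web crawler", 3: "search engine index"}
index = build_index(docs)
print(sorted(and_query(index, "search", "engine")))  # [1, 3]
```

The key property is that query time depends on the posting-list sizes of the query terms, not on the total number of documents, which is what makes searching billions of pages feasible.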

The forward index

Sharding is the best technique for splitting the inverted index across multiple computers for fast and efficient querying.
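One simple sharding scheme (illustrative, not the only one): a stable hash of the document id picks which of N index servers stores its postings. Using a stable hash such as MD5 rather than Python's built-in `hash` keeps assignments consistent across processes and restarts:

```python
import hashlib

def shard_for(doc_id, num_shards):
    """Deterministically map a document id to one of num_shards servers."""
    digest = hashlib.md5(str(doc_id).encode()).hexdigest()
    return int(digest, 16) % num_shards

# Every document lands on exactly one of four shards, always the same one.
assignments = {doc: shard_for(doc, 4) for doc in ["d1", "d2", "d3", "d4"]}
print(assignments)
```

At query time, the query is broadcast to all shards and their partial results are merged, so each machine only searches its slice of the index.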

Questions on indexing

3 questions

Section 6: Text Processing
Text Analysis and Query Processing
Section 7: Getting deep into Search Engines
Query Clustering
Spell checking
Spell checker
88 pages
3 questions
Section 8: Search Engine Storage
Parallel Computing vs Distributed Computing
Google Big Table
Google File System and MapReduce
Apache Solr
Storage questions
4 questions
Section 9: Search Engine Optimization
Introduction to SEO
White hat versus black hat techniques
13 pages
12 pages
Section 10: Apache Solr
How does Solr work?
Configuring and launching Solr
Solr Cloud and Multiple schema.xml

This document covers everything about Apache Solr in detail. If you have trouble understanding any topic, please let us know and we will make a video explaining that specific topic.

Section 11: Bye, Bye Lesson

If you need any other tutorials on this topic, please post in the questions section and I will create and upload the videos as soon as I can.

Don't forget to leave a review.



Instructor Biography

QScutter Tutorials, a place to learn technology

QScutter is an India-based company offering an ever-growing range of high-quality eLearning solutions that teach using studio-quality narrated videos backed up with practical hands-on examples. The emphasis is on teaching real-life skills that are essential in today's commercial environment. We provide tutorials for almost all IT topics.
