Find online courses made by experts from around the world.
Take your courses with you and learn anywhere, anytime.
Learn and practice real-world skills and achieve your goals.
With "Building a Search Engine", you will learn everything about Search Engines, even if you've never built one before!
The full course has several video lectures, divided into chapters. Each chapter will bring you to a new level of knowledge in Search Engine development. We'll start from the basics of Search Engine development and work up to the advanced algorithms most widely used nowadays.
"Building a Search Engine" will give you a new perspective on how the Internet works, and after you complete the course you will be able to create your own Search Engine using the latest technologies and algorithms. Hope you enjoy it!
NOTE: To keep you up to date in the world of Search Engine development, all chapters will be updated regularly with new lectures, projects, quizzes, and any changes in future versions of the programming languages covered in the course.
Why Learn Search Engine Development?
The internet is the fastest and largest platform ever created for humans to learn, communicate, share, or create businesses of any kind, and all of this in just 15 years! It is estimated that in the next 2 or 3 years more than 80% of the companies around the world will become internet dependent, which will create huge demand for Search Engine developers in this market. As the World Wide Web grows, Search Engines need to be upgraded proportionally.
Learning Search Engine Development will give you the opportunity to start ahead of other competitors by giving you knowledge of the most recent web technologies and how to better apply them to your future projects. Knowing Search Engine Development will give you the ability to control and create anything on the web.
How this course will help you to get a Job?
At present, the fastest growing technology on the Internet is Search Engines. Google makes thousands of changes every year and employs a large number of engineers to make its Search Engine more efficient as the structure of the web becomes larger and more complicated. Other companies employ Search Engine experts to optimize their websites so they appear in the top results of Search Engines.
I promise you have never had a learning experience quite like this one.
Welcome to "Building a Search Engine"
|Section 1: Introduction to Building a Search Engine|
This lecture gives you an overview of the course: the things you are going to learn and why these skills matter.
Things you will learn:
1. Search Engine architecture (crawler, indexer, query processor, and parser).
2. Web crawler algorithms and efficiency.
3. Spider traps.
4. Web scraping.
5. Spam fighting.
6. Replication and sharding.
7. HTTP attacks.
8. Query understanding.
9. Spell-checking algorithms and using Apache Solr.
10. Auto-complete.
11. Big data.
This lecture tells you what you need to know to understand this course:
1. Basics of web development.
2. Basics of networking.
3. Any one programming language.
4. Basics of data structures and algorithms.
5. Basics of database management systems.
|Section 2: Getting started with Search Engine|
Introduces you to Search Engines, explains the difference between a search engine and a web search engine, gives you an overview of the World Wide Web, and briefly defines and explains these topics.
Features of a good Web Search Engine:
1. Indexes a large number of documents.
2. Prevents spider traps.
3. Ranks webpages using the PageRank algorithm.
4. Understands user queries.
5. Auto-complete.
6. Query clustering.
7. Better web scraping techniques.
This lecture gives you a brief history of search engines. It will help motivate you to get started with the further lectures.
This lecture explains the difference between a web search engine and a web directory. Google, Bing, and Yahoo are web search engines, while DMOZ, ewd, etc. are web directories. Once upon a time, Yahoo used to be a web directory.
This lecture explains the difference between a metasearch engine and a web search engine. DuckDuckGo is a metasearch engine, but Google is a search engine. It's easy to create a metasearch engine: it requires fewer resources and can be rapidly created and deployed successfully.
This lecture explains one of the most important features integrated into most search engines, called social search. This feature helps you find more organic results and makes search more meaningful. Integrating social search requires a lot of users who use your search engine and have put their personal information into it. Social features can also be enabled using IP tracking and understanding user queries. Social search is an application of machine learning.
The filter bubble is also a search engine feature. Social search looks for related documents and runs algorithms on top of the web graph, whereas the filter bubble uses clicks, location, bookmarks, favorites, and many other signals to rate and display documents.
Instead of trying to build a search engine from scratch, it is a good choice to use an open source search engine. It will save time and you will have your search engine built quickly. There are a large number of open source search engines, and most of them are well documented.
This lecture gives you an overview of common architectures used in modern search engines.
Components of a search engine:
1. Web Crawler
2. Parser
3. Indexer
4. Query Processor
A bad design of any one component will lead to a bad search engine. Every component needs to be designed carefully and tested for every situation before deployment.
|Section 3: Web Crawler|
A web crawler is the component of a search engine that downloads information from the World Wide Web. Features of a good web crawler:
1. Downloads a large number of documents.
2. Takes less CPU time.
3. Consumes less bandwidth.
There are two redirect codes supported by the HTTP protocol:
1. 301 -> The web server responds with a 301 redirect if the file has moved permanently.
2. 302 -> The web server responds with a 302 redirect if the file has moved temporarily.
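As a sketch of how a crawler might act on these two codes (the helper name `handle_redirect` and the dictionary layout are my own illustration, not from the course):

```python
def handle_redirect(status: int, original_url: str, location: str) -> dict:
    """Decide which URL the crawler should fetch and which it should index.

    301 (moved permanently): fetch the new URL and index under it, since
    the old address is gone for good.
    302 (moved temporarily): fetch the new URL, but keep indexing the
    original one, because the move may be reverted.
    """
    if status == 301:
        return {"fetch": location, "index_as": location}
    if status == 302:
        return {"fetch": location, "index_as": original_url}
    # Not a redirect: fetch and index the URL as-is.
    return {"fetch": original_url, "index_as": original_url}
```

The key design point is that a 302 must not overwrite the stored URL, or the index would drift away from the site's canonical addresses.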
DNS caching is a crawler optimization feature. It helps tackle the cost of repeated DNS lookups for hosts the crawler visits again and again.
A good crawler fetches a large number of web pages in less time and consumes less bandwidth. Two common approaches:
1. Web pages are downloaded by multiple threads.
2. Web pages are downloaded using asynchronous sockets.
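The first approach, threaded downloading, can be sketched with the standard library's thread pool (the `crawl_batch` helper and injectable `fetcher` parameter are my own, added so the logic is testable without a network):

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

def fetch(url: str) -> bytes:
    """Download one page; a timeout stops a dead host from hanging a thread."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()

def crawl_batch(urls, fetcher=fetch, workers=8):
    """Download many pages concurrently and return {url: body}.

    With a thread pool, one slow host no longer blocks the whole batch.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(urls, pool.map(fetcher, urls)))
```

The asynchronous-socket approach mentioned in point 2 would use `asyncio` instead of threads, trading thread overhead for an event loop.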
Data Compression and Caching
robots.txt and sitemap.xml are two very important files every website should include in its root directory.
1. robots.txt contains rules for crawlers.
2. sitemap.xml provides the crawler with the structure of the website's directory.
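Python's standard library can already interpret robots.txt rules; here is a small sketch (the rules text, `example.com`, and the `MyCrawler` agent name are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks every crawler from /private/.
rules = """
User-agent: *
Disallow: /private/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A polite crawler checks can_fetch() before every download.
allowed = parser.can_fetch("MyCrawler", "http://example.com/index.html")
blocked = parser.can_fetch("MyCrawler", "http://example.com/private/data.html")
```

In a real crawler you would call `parser.set_url(...)` and `parser.read()` to load the live robots.txt of each host instead of parsing a literal string.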
There are three policies a web crawler must follow:
1. Selection policy
2. Re-visit policy
3. Politeness policy
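The politeness policy, in particular, can be sketched as a per-host rate limiter (class name and delay value are illustrative, not from the course):

```python
import time
from urllib.parse import urlparse

class PolitenessGate:
    """Enforce a minimum delay between requests to the same host."""

    def __init__(self, delay_seconds: float = 1.0):
        self.delay = delay_seconds
        self.last_hit: dict[str, float] = {}  # host -> last request time

    def wait(self, url: str) -> float:
        """Sleep if the host was hit too recently; return the pause taken."""
        host = urlparse(url).netloc
        now = time.monotonic()
        # First visit to a host never waits; later visits wait out the delay.
        pause = max(0.0, self.last_hit.get(host, -self.delay) + self.delay - now)
        if pause:
            time.sleep(pause)
        self.last_hit[host] = time.monotonic()
        return pause
```

Requests to different hosts never delay each other, which is why threaded crawlers usually partition the URL frontier by host.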
The User-Agent field in the HTTP protocol is used by the crawler to introduce itself to the web server. Web crawler identification helps web servers make many important decisions.
Crawling the deep web
Spider traps are techniques by which a web crawler can be caught in a problem, such as an endless loop of generated pages. A good web crawler should prevent all kinds of spider traps. Every day, hackers find new spider trap techniques, and you should be intelligent enough to catch them and update your crawler code to escape the traps.
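A few common trap signals can be checked cheaply before fetching. This is a sketch with made-up thresholds and a function name of my own; real crawlers tune these limits per site:

```python
from urllib.parse import urlparse

MAX_URL_LENGTH = 512   # endless query strings grow without bound
MAX_PATH_DEPTH = 12    # calendar/session traps produce absurdly deep paths

def looks_like_trap(url: str, seen: set[str]) -> bool:
    """Heuristically reject URLs that match common spider-trap patterns."""
    if len(url) > MAX_URL_LENGTH:
        return True
    path = urlparse(url).path
    if path.count("/") > MAX_PATH_DEPTH:
        return True
    # Repeating path segments (e.g. /a/b/a/b/a/b/) often signal a loop.
    segments = [s for s in path.split("/") if s]
    if len(segments) - len(set(segments)) > 3:
        return True
    return url in seen
```

None of these checks is sufficient alone; together with URL canonicalization they stop the most common traps before any bandwidth is spent.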
Open source web crawlers
|Section 4: Parser|
The parser is the component of a search engine responsible for web scraping. A good parser should be able to parse many different types of documents, such as:
5. many more
It should also detect spam, such as:
1. invisible text
2. advertisement text
Parse only what your users want. For example, if you are creating an mp3 search engine, there is no need to download and parse PDF files; you only need to download .mp3 URLs. This decision is very important.
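This selection rule is a one-line filter in practice. A sketch, assuming the mp3-only engine from the example (the allowed-extension set and function name are illustrative):

```python
from urllib.parse import urlparse

# Hypothetical: an mp3-only search engine wants exactly one file type.
ALLOWED_EXTENSIONS = {".mp3"}

def should_download(url: str) -> bool:
    """Selection policy: fetch only URLs whose path ends in a wanted extension."""
    path = urlparse(url).path.lower()
    return any(path.endswith(ext) for ext in ALLOWED_EXTENSIONS)
```

Using the parsed path (rather than the raw URL) means query strings like `?ref=1` do not confuse the check.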
Open source parsers
|Section 5: Indexing|
An index is a data structure into which documents can be put quickly and from which they can be retrieved quickly. Index data structures are used in almost all types of applications. A PDF reader indexes the whole document and finds the page number when you search for a word in it. Similarly, a search engine also indexes documents.
Index design factors:
1. Merge factors.
2. Storage techniques.
3. Index size.
4. Lookup speed.
5. Fault tolerance.
The inverted index is the index data structure most widely used in search applications to find documents matching a text query. Understanding the inverted index is very important.
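A minimal sketch of an inverted index with AND-style queries (whitespace tokenization and the function names are simplifications of my own; real engines add stemming, stop words, and ranking):

```python
from collections import defaultdict

def build_inverted_index(documents: dict[int, str]) -> dict[str, set[int]]:
    """Map each term to the set of document ids that contain it."""
    index: dict[str, set[int]] = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index: dict[str, set[int]], query: str) -> set[int]:
    """Return documents containing every query term (AND semantics)."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results
```

The point of the structure is that query time depends on the length of the posting lists, not on the total number of documents, which is what makes web-scale search feasible.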
The forward index
Sharding is a widely used technique to split the inverted index across multiple computers for fast and efficient querying.
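One common way to shard by term can be sketched as a hash-based router (the shard count and function name are assumptions for illustration; sharding by document is equally common):

```python
import hashlib

NUM_SHARDS = 4  # assumed cluster size for this sketch

def shard_for_term(term: str, num_shards: int = NUM_SHARDS) -> int:
    """Pick a shard deterministically so every node routes a term the same way.

    An md5 digest (rather than Python's built-in hash()) keeps the mapping
    stable across processes and machine restarts.
    """
    digest = hashlib.md5(term.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_shards
```

With term sharding, a multi-word query must contact one shard per term and intersect the posting lists at the query processor, which is the main trade-off against document sharding.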
|Section 6: Text Processing|
Text Analysis and Query Processing
|Section 7: Getting deep into Search Engines|
|Section 8: Search Engine Storage|
Parallel Computing vs Distributed Computing
Google Big Table
Google File System and MapReduce
|Section 9: Search Engine Optimization|
Introduction to SEO
White hat versus black hat techniques
On-Page SEO, TIPS and TRICKS
Off-Page SEO, TIPS and TRICKS
|Section 10: Apache Solr|
How does solr work?
Configuring and launching Solr
Solr Cloud and Multiple schema.xml
This document covers everything about Apache Solr in detail. If you have any problem understanding a topic, please let us know and we will make a video for that specific topic and explain it to you.
|Section 11: Bye, Bye Lesson|
If you need any other tutorials on this topic, please post them in the questions section and I will create and upload the videos as soon as I can.
Don't forget to give a review.
QScutter is an India-based company that offers an ever-growing range of high-quality eLearning solutions, teaching with studio-quality narrated videos backed up by practical hands-on examples. The emphasis is on teaching real-life skills that are essential in today's commercial environment. We provide tutorials for almost all IT topics.
30 day money back guarantee
Available on desktop, iOS and Android
Certificate of completion
Hours of video content