
Apache Nutch

  1. Web scraping
  2. Architecture
  3. Proxy
Apache Nutch is an open-source web search engine software project that enables users to search and index large collections of web content. It is a highly extensible and scalable web crawler framework that allows users to build custom web search engines or vertical search applications.

Apache Nutch is built using Java and is based on the Apache Hadoop ecosystem. It provides a number of features including web crawling, indexing, search and text analysis. It can be easily extended using plugins, which makes it highly customizable and flexible.

Apache Nutch is used by many organizations to build search engines for various applications such as e-commerce sites, news portals, and social media platforms. Its modular architecture, easy integration with other tools and technologies, and ability to handle large volumes of data make it a popular choice for building web search engines.
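In practice, a Nutch 1.x batch crawl is driven from the command line as a repeating cycle: inject seed URLs, generate a fetch list, fetch, parse, and update the CrawlDb. A minimal sketch, assuming a local Nutch 1.x installation with `bin/nutch` available and illustrative directory names:

```shell
# Seed the crawl with one URL (all paths are illustrative)
mkdir -p urls
echo "https://example.com/" > urls/seed.txt

bin/nutch inject crawl/crawldb urls                # load seeds into the CrawlDb
bin/nutch generate crawl/crawldb crawl/segments    # create a fetch list segment
segment=$(ls -d crawl/segments/* | tail -1)        # pick the newest segment
bin/nutch fetch "$segment"                         # download the pages
bin/nutch parse "$segment"                         # extract text and outlinks
bin/nutch updatedb crawl/crawldb "$segment"        # fold results back into the CrawlDb
```

Repeating the generate/fetch/parse/updatedb steps deepens the crawl one hop at a time.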

Web scraping

Apache Nutch is primarily designed for web crawling and indexing, which is different from web scraping. Web crawling involves traversing the web and collecting information from web pages, whereas web scraping involves extracting specific data from web pages.

While Apache Nutch can be used for web scraping, it may not be the most appropriate solution depending on the specific requirements of the scraping task. Apache Nutch is optimized for crawling large volumes of web pages and storing them in an index, rather than extracting specific data from those pages. Additionally, Apache Nutch's crawling process can be complex and resource-intensive, which may not be necessary or desirable for some web scraping use cases.

For web scraping, there are many other tools and libraries better suited to the task, such as Scrapy, Crawlee, and StormCrawler. These tools provide features tailored to extracting specific data from web pages.


Architecture

The Apache Nutch architecture is a distributed, modular, and scalable system that consists of several components:
  • Web Crawler: The web crawler is responsible for fetching web pages from the internet. It uses a pluggable protocol framework that supports different protocols such as HTTP, HTTPS, FTP, and file. The web crawler is also responsible for parsing the fetched pages and extracting relevant data.

  • URL Frontier: The URL Frontier is a priority queue that contains a list of URLs to be crawled. The URLs are prioritized based on factors such as the relevance of the page, the importance of the page, and the freshness of the content.

  • Indexer: The Indexer is responsible for indexing the crawled web pages into a search index. Apache Nutch can index into Apache Solr or Elasticsearch.

  • Plugins: Apache Nutch provides a plugin architecture that allows users to extend its functionality. There are several plugins available that enable features such as URL filtering, content extraction, and language identification.

  • Hadoop Distributed File System (HDFS): Apache Nutch uses Hadoop Distributed File System (HDFS) to store the crawled data. HDFS is a distributed file system that provides high-throughput access to large datasets.

  • Apache Hadoop: Apache Nutch is built on top of Apache Hadoop, which provides a distributed computing environment for processing large datasets. Apache Nutch uses Hadoop's MapReduce framework for parallel processing of data.
This modular, distributed design enables users to crawl and index large volumes of web content efficiently.
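The URL Frontier's priority ordering can be illustrated with a toy sketch in Java. This is not Nutch's actual Generator/CrawlDb code, just a self-contained priority queue keyed by a per-URL score, with illustrative names:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Toy URL frontier: hands out the highest-scoring URL first.
// Class and field names are illustrative, not Nutch internals.
public class UrlFrontier {
    static final class QueuedUrl {
        final String url;
        final double score;
        QueuedUrl(String url, double score) { this.url = url; this.score = score; }
    }

    // Highest score first, hence the reversed natural double ordering.
    private final PriorityQueue<QueuedUrl> queue =
        new PriorityQueue<>(Comparator.comparingDouble((QueuedUrl q) -> q.score).reversed());

    public void add(String url, double score) {
        queue.add(new QueuedUrl(url, score));
    }

    // Returns the next URL to crawl, or null when the frontier is empty.
    public String next() {
        QueuedUrl top = queue.poll();
        return top == null ? null : top.url;
    }
}
```

In Nutch itself, the per-URL scores that drive this ordering come from scoring plugins (such as the OPIC scoring plugin) during the generate step.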


Proxy

Apache Nutch supports the use of proxy servers during web crawling. A proxy can hide the crawler's IP address and help work around IP-based blocking or throttling by the target website.

Additionally, Apache Nutch provides a plugin called protocol-httpclient that supports proxy authentication using username and password.
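Proxy settings live in conf/nutch-site.xml. A minimal sketch, assuming the stock http.proxy.* properties from nutch-default.xml; the host, port, and credential values below are placeholders:

```xml
<!-- conf/nutch-site.xml: illustrative values only -->
<property>
  <name>http.proxy.host</name>
  <value>proxy.example.com</value>
</property>
<property>
  <name>http.proxy.port</name>
  <value>8080</value>
</property>
<!-- Credentials are honored by the protocol-httpclient plugin -->
<property>
  <name>http.proxy.username</name>
  <value>user</value>
</property>
<property>
  <name>http.proxy.password</name>
  <value>secret</value>
</property>
```

Note that protocol-httpclient must be enabled in the plugin.includes property (in place of the default protocol-http) for the proxy credentials to take effect.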

It is possible to rotate proxies while crawling with Apache Nutch by configuring a list of proxy servers and rotating through them during the crawling process.

One way to achieve this is the ProxyPool plugin, which is included with Apache Nutch and provides a mechanism for rotating through a list of proxies during web crawling. The plugin cycles through the proxy servers specified in its configuration property as pages are fetched; if a connection to a proxy server fails, it retries the connection according to the http.proxy.retry property.

By using the ProxyPool plugin, you can rotate through a list of proxy servers while crawling with Apache Nutch, which can help to avoid being detected and blocked by target websites.
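Independent of any particular plugin, the round-robin rotation itself is simple. A self-contained Java sketch, with placeholder proxy addresses and names that are not Nutch's own:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative round-robin proxy rotation: each call to next()
// returns the following proxy in the configured list, wrapping around.
public class ProxyRotator {
    private final List<String> proxies;
    private final AtomicInteger cursor = new AtomicInteger(0);

    public ProxyRotator(List<String> proxies) {
        if (proxies.isEmpty()) {
            throw new IllegalArgumentException("need at least one proxy");
        }
        this.proxies = List.copyOf(proxies);  // defensive immutable copy
    }

    public String next() {
        // floorMod keeps the index valid even after integer overflow
        int i = Math.floorMod(cursor.getAndIncrement(), proxies.size());
        return proxies.get(i);
    }
}
```

A real crawler would plug the returned address into its HTTP client's proxy setting before each fetch, and skip or demote proxies that repeatedly fail.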