
  1. Architecture
  2. Advantages
  3. Disadvantages
  4. Proxy

Ayakashi is a modern web scraping and automation framework for Node.js. It provides an easy-to-use and powerful interface for scraping and extracting data from websites and web applications. Ayakashi supports various scraping techniques, including static, dynamic, and hybrid scraping, and allows you to build complex scraping workflows using a simple and intuitive API.

Additionally, Ayakashi includes built-in support for various web technologies, such as React, Angular, and Vue.js, which makes it easy to scrape dynamic web pages and Single Page Applications (SPAs).

With Ayakashi, you can also automate web interactions, such as form submissions, clicks, and scrolling, using a headless browser powered by Puppeteer or Playwright. Ayakashi is a powerful and flexible web scraping solution for developers and data scientists who need to extract data from the web.

Architecture

Ayakashi follows a modular and extensible architecture that consists of several key components, including:
  • Ayakashi Core: This is the core engine of Ayakashi, responsible for managing the scraping and automation process. It includes a high-level API for defining scraping workflows, as well as lower-level APIs for interacting with the DOM and making HTTP requests.

  • Ayakashi Browser: This is a headless browser based on either Puppeteer or Playwright that is used for automating web interactions, such as form submissions and button clicks. It supports multiple tabs and can be used to scrape dynamic web pages and SPAs.

  • Ayakashi Selectors: These are a set of powerful and flexible CSS-like selectors that allow you to extract data from web pages with ease. Ayakashi Selectors support various types of selectors, including element selectors, attribute selectors, pseudo-selectors, and combinators.

  • Ayakashi Entities: These are user-defined data models that represent the data you want to extract from a web page. Ayakashi Entities can be defined using a simple and intuitive API and can be used to extract structured data, such as product information or contact details.

  • Ayakashi Plugins: These are optional modules that extend the functionality of Ayakashi. They can be used to integrate with third-party libraries and services, add custom selectors or entities, or implement custom data pipelines.

Ayakashi's architecture is designed to be modular and flexible, allowing developers to build complex scraping workflows that can handle a wide range of scraping scenarios and use cases.
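The components above come together in a single scraper written as an exported async function. The sketch below follows the general shape of Ayakashi's documented scraper style, but the method names (`goTo`, `selectOne`, `where`, `extractFirst`) are assumptions here and should be verified against the current Ayakashi API reference:

```javascript
// A minimal scraper sketch in Ayakashi's function-per-file style.
// The framework injects the `ayakashi` object (core engine, browser,
// and selector APIs) at run time; in a real project this function is
// exported with module.exports and run by the Ayakashi CLI.
async function productScraper(ayakashi) {
  // Navigate the headless browser to the target page.
  await ayakashi.goTo("https://example.com");
  // Define a named selector ("prop") with a where() query...
  ayakashi.selectOne("title").where({tagName: {eq: "H1"}});
  // ...then extract its text into a plain object (an "entity").
  return {title: await ayakashi.extractFirst("title")};
}
```

Because the framework injects `ayakashi`, the function itself stays small and testable: you can call it with a stub object in unit tests and only run the real browser in integration runs.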

Advantages

Ayakashi has several advantages as a web scraping and automation framework, including:
  • Ease of Use: Ayakashi provides a simple and intuitive API for defining scraping workflows and extracting data from web pages. Its powerful and flexible selector engine allows you to extract data with ease, even from complex and dynamic web pages.

  • Modularity: Ayakashi's architecture is designed to be modular and extensible, allowing you to easily integrate it with other libraries and services or add custom functionality through plugins.

  • Scalability: Ayakashi can handle large-scale scraping tasks, thanks to its support for parallelization and distributed scraping. You can configure Ayakashi to run multiple instances in parallel or across multiple machines to speed up scraping tasks.

  • Robustness: Ayakashi is designed to be robust and fault-tolerant, with built-in error handling and retry mechanisms that allow scraping tasks to continue even in the face of errors or network interruptions.

  • Headless Browser Support: Ayakashi supports headless browsers like Puppeteer and Playwright, which allows it to scrape dynamic web pages and Single Page Applications (SPAs) that are not easily scrapeable with traditional techniques.

  • Data Extraction: Ayakashi allows you to extract data in a structured way using its entity system.

Overall, Ayakashi is a powerful and flexible web scraping framework that provides developers with the tools and features they need to scrape and extract data from the web efficiently and effectively.
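The scalability and robustness points can be made concrete with plain Node.js. The helpers below are a sketch of the general techniques (retry with backoff, bounded concurrency), not Ayakashi's internal implementation:

```javascript
// Retry an async task with exponential backoff: the kind of fault
// tolerance a scraper needs for transient network errors.
async function withRetry(task, retries = 3, delayMs = 200) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await task();
    } catch (err) {
      if (attempt >= retries) throw err;
      // Back off: delayMs, 2*delayMs, 4*delayMs, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** attempt));
    }
  }
}

// Run `task` over `items` with at most `limit` tasks in flight at once,
// mimicking parallel scraping of a large URL list without overwhelming
// the target site or the local machine.
async function mapWithConcurrency(items, limit, task) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // safe: JS is single-threaded between awaits
      results[i] = await task(items[i]);
    }
  }
  const workers = Array.from({length: Math.min(limit, items.length)}, worker);
  await Promise.all(workers);
  return results;
}
```

A scraper would typically combine the two, e.g. `mapWithConcurrency(urls, 4, (url) => withRetry(() => scrape(url)))`, so each page fetch gets its own retry budget while total parallelism stays bounded.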


Disadvantages

While Ayakashi is a powerful and flexible web scraping framework, it also has some limitations and potential drawbacks, including:
  • Node.js Dependency: Ayakashi is built on top of Node.js, which means it may not be the best choice for developers who prefer other programming languages or platforms.

  • Limited Community Support: Ayakashi is a relatively new framework, which means it has a smaller community and less third-party support than more established tools like Scrapy or Beautiful Soup.

  • Limited Documentation: Although Ayakashi has comprehensive API documentation, it may be challenging for new users to get started with the framework, as there are limited tutorials and guides available online.

  • Limited Proxy Support: Ayakashi has limited built-in support for proxies and may require additional configuration to work with certain types of proxies.

  • Costs: While Ayakashi has a free version available, some advanced features, such as the ability to run multiple concurrent scrapers, require a paid license. The cost of the license may be a consideration for some users.

Ayakashi may not be the best choice for every scraping scenario and use case. Developers should carefully evaluate its features and limitations before deciding whether it is the right choice for their project.

Proxy

Ayakashi supports proxy servers for making HTTP requests. You can configure the framework to use a proxy by passing the proxy configuration options to the launch method of the Ayakashi browser instance. However, Ayakashi has some limitations with certain types of proxies, such as rotating proxies or proxies that require authentication using a token or IP whitelist. If you're using one of these types of proxies, you may need to use a custom HTTP library or proxy manager to handle the requests.
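Under the hood, proxy settings for a Chromium-based headless browser are typically passed as launch arguments; `--proxy-server` is a standard Chromium flag. The exact option name in Ayakashi's own configuration should be checked against its docs, but building the launch options looks roughly like this (a sketch, with the function name invented here for illustration):

```javascript
// Build headless-browser launch options that route traffic through a
// proxy. `--proxy-server` is a standard Chromium command-line flag; how
// these options are handed to Ayakashi (or directly to Puppeteer or
// Playwright) depends on that tool's configuration API.
function proxyLaunchArgs(proxyUrl) {
  return {
    headless: true,
    args: [`--proxy-server=${proxyUrl}`],
  };
}

// Example: route all browser traffic through a local proxy.
const options = proxyLaunchArgs("http://127.0.0.1:8080");
```

Note that Chromium does not accept username/password credentials inside the `--proxy-server` URL; with Puppeteer, for example, authenticated proxies are handled by calling `page.authenticate({username, password})` after opening a page, which is one reason token- or credential-based proxies need extra handling.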
