This project is a command line tool and Python library that uses the Wappalyzer browser extension (and its fingerprints) to detect technologies. Other projects that emerged after the discontinuation of the official open source project rely on outdated fingerprints and lack accuracy on dynamic web apps; this project bypasses those limitations.
Before installing wappalyzer, you will need to install Firefox and geckodriver. Detailed steps for setting up geckodriver are given below; you may also search Google/YouTube for help.
Setting up geckodriver
- Visit the official GeckoDriver releases page on GitHub: https://github.com/mozilla/geckodriver/releases
- Download the version compatible with your system:
  - For Windows: geckodriver-vX.XX.X-win64.zip
  - For macOS: geckodriver-vX.XX.X-macos.tar.gz
  - For Linux: geckodriver-vX.XX.X-linux64.tar.gz
- Extract the downloaded file to a folder of your choice.
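If you prefer to find the right release programmatically, the sketch below (an illustration only, assuming Python 3 and internet access) queries GitHub's public releases API for the latest geckodriver version and picks the asset whose name matches your OS, using the same suffixes as the list above.

```python
import json
import platform
import urllib.request

# Ask GitHub's API for the newest geckodriver release.
api = "https://api.github.com/repos/mozilla/geckodriver/releases/latest"
with urllib.request.urlopen(api) as resp:
    release = json.load(resp)

# Match the current OS to the asset-name suffixes listed above.
suffixes = {
    "Windows": "win64.zip",
    "Darwin": "macos.tar.gz",
    "Linux": "linux64.tar.gz",
}
suffix = suffixes[platform.system()]
asset_url = next(
    a["browser_download_url"]
    for a in release["assets"]
    if a["name"].endswith(suffix)
)
print(release["tag_name"], asset_url)
```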
To ensure Selenium can locate the GeckoDriver executable:
- Windows:
  - Move geckodriver.exe to a directory of your choice (e.g., C:\WebDrivers\).
  - Add this directory to the system's PATH:
    - Open Environment Variables.
    - Under System Variables, find and select the Path variable, then click Edit.
    - Click New and enter the directory path where geckodriver.exe is stored.
    - Click OK to save.
- macOS/Linux:
  - Move the geckodriver file to /usr/local/bin/ or another directory in your PATH. Use the following command in the terminal:
    sudo mv geckodriver /usr/local/bin/
  - Ensure /usr/local/bin/ is in your PATH.
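To confirm the setup works end to end, the sketch below (assuming the selenium Python package is installed) checks that geckodriver is discoverable on PATH and that Selenium can start a headless Firefox session, which is what the 'full' scan relies on.

```python
import shutil

from selenium import webdriver
from selenium.webdriver.firefox.options import Options

# geckodriver must be discoverable on PATH for Selenium to find it.
print("geckodriver:", shutil.which("geckodriver"))

# Launch a headless Firefox session; success means the driver/browser pair works.
options = Options()
options.add_argument("-headless")
driver = webdriver.Firefox(options=options)
try:
    driver.get("https://example.com")
    print("Loaded page title:", driver.title)
finally:
    driver.quit()
```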
Once geckodriver is set up, install the command line tool with pipx:

pipx install wappalyzer
To use it as a library, install it with pip inside an isolated environment, e.g. a venv or docker. You may also pass --break-system-packages to do a 'regular' install, but it is not recommended.
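For the pip-in-a-venv route, here is a minimal sketch using only the standard library; the .venv path and the POSIX/Windows pip locations are common conventions, not something mandated by this project.

```python
import os
import subprocess
import venv

# Create an isolated virtual environment with pip available.
venv.EnvBuilder(with_pip=True).create(".venv")

# pip lives under Scripts\ on Windows and bin/ elsewhere.
pip = os.path.join(".venv", "Scripts" if os.name == "nt" else "bin", "pip")
subprocess.run([pip, "install", "wappalyzer"], check=True)
```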
Steps for running with Docker
- Clone the repository:
git clone https://github.com/s0md3v/wappalyzer-next.git
cd wappalyzer-next
- Build and run with Docker Compose:
docker compose up -d
- To scan URLs using the Docker container:
- Scan a single URL:
docker compose run --rm wappalyzer -i https://example.com
- Scan a URL and export the results to JSON:
docker compose run --rm wappalyzer -i https://example.com -oJ output.json
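If you want to drive the Docker workflow from Python, the sketch below is one possible approach; it assumes the compose service mounts your working directory so that output.json ends up on the host, which may not match your compose file.

```python
import json
import subprocess

# Run the compose service for a single URL and write a JSON report.
subprocess.run(
    ["docker", "compose", "run", "--rm", "wappalyzer",
     "-i", "https://example.com", "-oJ", "output.json"],
    check=True,
)

# Read the report back (only works if output.json is written to the host).
with open("output.json") as f:
    report = json.load(f)
print(report)
```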
Some common usage examples are given below; refer to the list of all options for more information.
- Scan a single URL:
wappalyzer -i https://example.com
- Scan multiple URLs from a file:
wappalyzer -i urls.txt -t 10
- Scan with authentication:
wappalyzer -i https://example.com -c "sessionid=abc123; token=xyz789"
- Export results to JSON:
wappalyzer -i https://example.com -oJ results.json
Note: For accuracy, use the 'full' scan type (default); 'fast' and 'balanced' do not use browser emulation.
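As a sketch of how the CLI can be scripted (assuming the wappalyzer command is on PATH), the example below writes a URL list, scans it with the flags listed below, and summarizes the JSON report; the file names are arbitrary.

```python
import json
import subprocess
from pathlib import Path

# Build a URL list (one per line, as expected by -i).
Path("urls.txt").write_text("https://example.com\nhttps://example.org\n")

# Scan with 10 threads and write a JSON report.
subprocess.run(
    ["wappalyzer", "-i", "urls.txt", "-t", "10", "-oJ", "results.json"],
    check=True,
)

# Summarize detections per URL.
results = json.loads(Path("results.json").read_text())
for url, technologies in results.items():
    print(url, "->", ", ".join(technologies) or "nothing detected")
```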
- -i: Input URL or file containing URLs (one per line)
- --scan-type: Scan type (default: 'full')
  - fast: Quick HTTP-based scan (sends 1 request)
  - balanced: HTTP-based scan with more requests
  - full: Complete scan using the Wappalyzer extension
- -t, --threads: Number of concurrent threads (default: 5)
- -oJ: JSON output file path
- -oC: CSV output file path
- -oH: HTML output file path
- -c, --cookie: Cookie header string for authenticated scans
The Python library is available on PyPI as wappalyzer and can be imported under the same name. The main function you'll interact with is analyze():
from wappalyzer import analyze

# Basic usage
results = analyze('https://example.com')

# With options
results = analyze(
    url='https://example.com',
    scan_type='full',  # 'fast', 'balanced', or 'full'
    threads=3,
    cookie='sessionid=abc123'
)
- url (str): The URL to analyze
- scan_type (str, optional): Type of scan to perform
  - 'fast': Quick HTTP-based scan
  - 'balanced': HTTP-based scan with more requests
  - 'full': Complete scan including JavaScript execution (default)
- threads (int, optional): Number of threads for parallel processing (default: 3)
- cookie (str, optional): Cookie header string for authenticated scans
Returns a dictionary with each URL as a key and its detected technologies as the value:
{
    "https://github.com": {
        "Amazon S3": {"version": "", "confidence": 100, "categories": ["CDN"], "groups": ["Servers"]},
        "lit-html": {"version": "1.1.2", "confidence": 100, "categories": ["JavaScript libraries"], "groups": ["Web development"]},
        "React Router": {"version": "6", "confidence": 100, "categories": ["JavaScript frameworks"], "groups": ["Web development"]}
    },
    "https://google.com": {},
    "https://example.com": {}
}
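Since the return value is a plain dictionary keyed by URL, post-processing is straightforward; the snippet below is a small illustration that prints each detected technology with its version and categories.

```python
from wappalyzer import analyze

results = analyze('https://example.com')

# results maps each URL to a {technology: details} dictionary.
for url, technologies in results.items():
    for name, details in technologies.items():
        version = details["version"] or "unknown version"
        categories = ", ".join(details["categories"])
        print(f"{url}: {name} ({version}) [{categories}]")
```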
Firefox extensions are .xpi files, which are essentially zip files. This makes it easy to extract data from the extension and slightly modify it to make this tool work.
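As a quick illustration of that point, Python's standard zipfile module can open an .xpi directly; the file name below is hypothetical, so point it at whatever copy of the extension you have on disk.

```python
import zipfile

# An .xpi is a zip archive, so zipfile can list and extract its contents.
with zipfile.ZipFile("wappalyzer_extension.xpi") as xpi:
    for name in xpi.namelist()[:10]:
        print(name)
```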
- fast: Sends a single HTTP request to the URL. Doesn't use the extension.
- balanced: Sends additional HTTP requests to .js files and /robots.txt, and does DNS queries. Doesn't use the extension.
- full: Uses the official Wappalyzer extension to scan the URL in a headless browser.
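To see the trade-off in practice, a rough sketch like the one below runs all three scan types against the same URL through the library and compares how many technologies each detects; exact numbers will vary by site.

```python
from wappalyzer import analyze

url = "https://example.com"

# 'full' drives a headless browser, so it is slower but usually detects more
# on JavaScript-heavy sites; 'fast' and 'balanced' stay purely HTTP-based.
for scan_type in ("fast", "balanced", "full"):
    results = analyze(url, scan_type=scan_type)
    detected = results.get(url, {})
    print(f"{scan_type}: {len(detected)} technologies detected")
```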