Web Scraping
Objectives
After completing this reading, you will be able to:
Explain key concepts related to HTML structure and HTML tag composition.
Explore the concept of HTML document trees.
Familiarize yourself with HTML tables.
Gain insight into the basics of web scraping using Python and BeautifulSoup.
HTTP request
The process typically begins with an HTTP request. A web scraper sends an HTTP request to a specific URL, similar to how a web browser would when you visit a website. The request is usually an HTTP GET request, which retrieves the web page's content.
The web server hosting the website responds to the request by returning the requested web page's HTML content. This content includes the visible text and media elements, as well as the underlying HTML structure that defines the page's layout.
HTML parsing
Once the HTML content is received, you need to parse it. Parsing involves breaking down the HTML structure into its components, such as tags, attributes, and text content. In Python, you can use BeautifulSoup for this: it creates a structured representation of the HTML content that can be easily navigated and manipulated.
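To see what parsing produces, the sketch below feeds a small, made-up HTML snippet to BeautifulSoup and inspects the components mentioned above: the tag's name, its attributes, and its text content.

```python
from bs4 import BeautifulSoup

# A small, hypothetical HTML snippet standing in for a downloaded page
html = '<html><body><a href="https://example.com" id="link1">Example</a></body></html>'

# Parse the snippet into a navigable tree
soup = BeautifulSoup(html, 'html.parser')

tag = soup.find('a')
print(tag.name)    # the tag's name: 'a'
print(tag.attrs)   # its attributes as a dict
print(tag.string)  # its text content: 'Example'
```
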
Data extraction
With the HTML content parsed, web scrapers can now identify and extract the specific data they need. This data can include text, links, images, tables, product
prices, news articles, and more. Scrapers locate the data by searching for relevant HTML tags, attributes, and patterns in the HTML structure.
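As a sketch of this step, the example below locates data in a hypothetical product listing by searching for tag names and class attributes; the HTML, class names, and URL are all invented for illustration.

```python
from bs4 import BeautifulSoup

# Hypothetical product listing used only for illustration
html = """
<div class="product">
  <h2 class="name">Widget</h2>
  <span class="price">$19.99</span>
  <a href="/products/widget">Details</a>
</div>
"""
soup = BeautifulSoup(html, 'html.parser')

# Locate elements by tag name and attribute patterns
name = soup.find('h2', class_='name').text
price = soup.find('span', class_='price').text
link = soup.find('a')['href']

print(name, price, link)
```
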
Data transformation
Extracted data may need further processing and transformation. For instance, you can remove HTML tags from text, convert data formats, or clean up messy data.
This step ensures the data is ready for analysis or other use cases.
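A minimal sketch of such a transformation, assuming a raw price fragment like the one below: the HTML tags are stripped with get_text, and the remaining text is cleaned into a numeric value.

```python
from bs4 import BeautifulSoup

# Raw extracted fragment with markup and inconsistent whitespace (invented example)
raw = '<span class="price">  $1,299.00 </span>'

# Strip the HTML tags, then normalize the text into a number
text = BeautifulSoup(raw, 'html.parser').get_text(strip=True)  # '$1,299.00'
price = float(text.replace('$', '').replace(',', ''))

print(price)  # 1299.0
```
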
about:blank 1/4
14/03/2024, 18:36 about:blank
Storage
After extraction and transformation, you can store the scraped data in various formats, such as databases, spreadsheets, JSON, or CSV files. The choice of storage
format depends on the specific project's requirements.
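Using only the standard library, storing the same scraped records as both CSV and JSON might look like this (the records and filenames are hypothetical):

```python
import csv
import json

# Hypothetical scraped records
rows = [
    {'title': 'Widget', 'price': 19.99},
    {'title': 'Gadget', 'price': 24.50},
]

# Store as CSV
with open('products.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['title', 'price'])
    writer.writeheader()
    writer.writerows(rows)

# Store as JSON
with open('products.json', 'w') as f:
    json.dump(rows, f, indent=2)
```
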
Automation
In many cases, scripts or programs automate web scraping. These automation tools allow recurring data extraction from multiple web pages or websites. Automated
scraping is especially useful for collecting data from dynamic websites that regularly update their content.
HTML structure
Hypertext markup language (HTML) serves as the foundation of web pages. Understanding its structure is crucial for web scraping.
An HTML tag consists of an opening (start) tag and a closing (end) tag.
Tags have names (for example, <a> for an anchor tag).
Tags may contain attributes with an attribute name and value, providing additional information to the tag.
Tags can contain strings and other tags, making them the tag's children.
Tags within the same parent tag are considered siblings.
For example, the <html> tag contains both the <head> and <body> tags, making them descendants of <html>; because they are nested directly inside it, they are also its children. <head> and <body> are siblings of each other.
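These parent, child, and sibling relationships can be verified directly with BeautifulSoup's navigation attributes, as in this small sketch:

```python
from bs4 import BeautifulSoup

# A minimal document tree: <head> and <body> are children of <html>
html = '<html><head><title>Demo</title></head><body><p>Hi</p></body></html>'
soup = BeautifulSoup(html, 'html.parser')

head = soup.head
body = soup.body

# Both tags share <html> as their parent ...
print(head.parent.name)  # 'html'
print(body.parent.name)  # 'html'

# ... and are siblings of each other
print(head.find_next_sibling().name)  # 'body'
```
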
HTML tables
HTML tables are essential for presenting structured data.
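An HTML table is built from rows (<tr>) that contain header cells (<th>) and data cells (<td>). The sketch below, using an invented table, walks those tags with BeautifulSoup to rebuild the table as Python lists:

```python
from bs4 import BeautifulSoup

# A minimal, invented HTML table: rows (<tr>) hold header (<th>) and data (<td>) cells
html = """
<table>
  <tr><th>Country</th><th>Capital</th></tr>
  <tr><td>France</td><td>Paris</td></tr>
  <tr><td>Japan</td><td>Tokyo</td></tr>
</table>
"""
soup = BeautifulSoup(html, 'html.parser')

# Walk the rows and cells to rebuild the table as lists of strings
rows = []
for tr in soup.find_all('tr'):
    cells = [cell.get_text(strip=True) for cell in tr.find_all(['th', 'td'])]
    rows.append(cells)

print(rows)
```
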
Web scraping
Web scraping involves extracting information from web pages using Python. It can save time and automate data collection.
Required tools
Web scraping requires Python code and two essential modules: Requests and Beautiful Soup. Ensure you have both modules installed in your Python environment.
pip install requests
pip install beautifulsoup4
To start web scraping, you need to fetch the HTML content of a webpage and parse it using Beautiful Soup. Here's a step-by-step example:
import requests
from bs4 import BeautifulSoup

# Specify the URL of the webpage you want to scrape
url = 'https://en.wikipedia.org/wiki/IBM'

# Send an HTTP GET request to the webpage
response = requests.get(url)

# Store the HTML content in a variable
html_content = response.text

# Create a BeautifulSoup object to parse the HTML
soup = BeautifulSoup(html_content, 'html.parser')

# Display a snippet of the HTML content
print(html_content[:500])
BeautifulSoup represents HTML content as a tree-like structure, allowing for easy navigation. You can use methods like find_all to filter and extract specific HTML elements. For example, to find all anchor tags (<a>) and print their text:
# Find all anchor tags in the parsed HTML
anchor_tags = soup.find_all('a')

# Print the text of each anchor tag
for tag in anchor_tags:
    print(tag.text)
Web scraping allows you to navigate the HTML structure and extract specific information based on your requirements. This process may involve finding specific
tags, attributes, or text content within the HTML document.
Beautful Soup is a powerful tool for navigating a web page and extracting specific parts of it. It allows you to find elements based on their tags, attributes, or text, making it easier to extract the information you're interested in.
Pandas, a Python library, provides a function called read_html, which can automatically extract data from websites' tables and present it in a format suitable for
analysis. It’s similar to taking a table from a webpage and importing it into a spreadsheet for further analysis.
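A minimal sketch of read_html, reusing the invented country/capital table from earlier; read_html returns a list of DataFrames, one per <table> it finds (it relies on an HTML parser such as lxml being installed):

```python
import io

import pandas as pd

# An invented HTML table handed to pandas.read_html
html = """
<table>
  <tr><th>Country</th><th>Capital</th></tr>
  <tr><td>France</td><td>Paris</td></tr>
</table>
"""

# read_html returns one DataFrame per <table> found in the HTML
tables = pd.read_html(io.StringIO(html))
df = tables[0]
print(df)
```
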
Conclusion
In this reading, you learned about web scraping with BeautifulSoup and Pandas with emphasis on extracting elements and tables. BeautifulSoup facilitates HTML
parsing, while Pandas' read_html streamlines table extraction. The reading also highlighted responsible web scraping, ensuring adherence to website terms. Armed
with this knowledge, you can confidently engage in precise data extraction.
Author
Akansha Yadav