
Web Scraping


Twitter web scraping using Python:

Libraries:
1. Twint
2. Beautiful Soup
3. Snscrape
4. Selenium

1. Twint:
Twint is an advanced Twitter scraping tool written in Python that allows for scraping
Tweets from Twitter profiles without using Twitter's API.
Twint utilizes Twitter's search operators to let you scrape Tweets from specific users,
scrape Tweets relating to certain topics, hashtags & trends, or sort out sensitive
information from Tweets like e-mail and phone numbers. I find this very useful, and
you can get really creative with it too.
Twint also makes special queries to Twitter allowing you to also scrape a Twitter
user's followers, Tweets a user has liked, and who they follow without any
authentication, API, Selenium, or browser emulation.

tl;dr Benefits
Some of the benefits of using Twint vs Twitter API:

- Can fetch almost all Tweets (the Twitter API is limited to the last 3,200 Tweets);
- Fast initial setup;
- Can be used anonymously and without signing up for Twitter;
- No rate limits.
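The snippet below pulls tweets from a single profile into a pandas DataFrame; nest_asyncio is only needed in notebook environments such as Jupyter, where an event loop is already running.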

import twint
import nest_asyncio
nest_asyncio.apply()  # patch the event loop so Twint can run inside Jupyter

c = twint.Config()
c.Limit = 1              # cap on tweets to pull (Twint scrolls in batches of 20)
c.Username = 'iamsrk'    # profile to scrape
c.Min_likes = 30000      # only tweets with at least 30,000 likes
c.Pandas = True          # store the results in a pandas DataFrame
c.Since = '2021-06-05'   # only tweets posted on or after this date
c.Media = False          # do not restrict results to tweets with media

twint.run.Search(c)

Tweets_df = twint.storage.panda.Tweets_df
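With c.Pandas enabled, the results land in twint.storage.panda.Tweets_df, a regular pandas DataFrame. A quick way to inspect it (column names such as date, username, tweet and nlikes follow Twint's pandas storage; check Tweets_df.columns if your version differs):

# peek at the scraped tweets; column names per Twint's pandas storage
print(Tweets_df[['date', 'username', 'tweet', 'nlikes']].head())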

Links: https://github.com/twintproject/twint
https://pythonsimplified.com/how-to-scrape-tweets-without-twitters-api-using-twint/

2. Beautiful Soup
Beautiful Soup is a Python library for pulling data out of HTML and XML files. It
works with your favorite parser to provide idiomatic ways of navigating, searching,
and modifying the parse tree. It commonly saves programmers hours or days of
work.
https://pypi.org/project/beautifulsoup4/
https://medium.com/search?q=twitter+web+scrapping+python
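Before the Twitter-specific scripts below, here is a self-contained sketch of the basic Beautiful Soup workflow the description refers to (the HTML snippet is invented for illustration): parse a document, then search the tree and pull text out of the matching tags.

from bs4 import BeautifulSoup

# invented HTML snippet, just to show the parse/search/extract steps
html = '<ul><li class="tweet">first tweet</li><li class="tweet">second tweet</li></ul>'
soup = BeautifulSoup(html, 'html.parser')

# find_all returns every matching tag; .text extracts the inner text
for li in soup.find_all('li', {'class': 'tweet'}):
    print(li.text)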

Part 1
# script to scrape tweets from a Twitter user.
# Author - ThePythonDjango.Com
# dependencies - BeautifulSoup, requests

from bs4 import BeautifulSoup
import requests
import sys
import json

def usage():
    msg = """
    Please use the below command to use the script.
    python script_name.py twitter_username
    """
    print(msg)
    sys.exit(1)

def get_tweet_text(tweet):
    tweet_text_box = tweet.find("p", {"class": "TweetTextSize TweetTextSize--normal js-tweet-text tweet-text"})
    images_in_tweet_tag = tweet_text_box.find_all("a", {"class": "twitter-timeline-link u-hidden"})
    tweet_text = tweet_text_box.text
    for image_in_tweet_tag in images_in_tweet_tag:
        tweet_text = tweet_text.replace(image_in_tweet_tag.text, '')

    return tweet_text

def get_this_page_tweets(soup):
    tweets_list = list()
    tweets = soup.find_all("li", {"data-item-type": "tweet"})
    for tweet in tweets:
        tweet_data = None
        try:
            tweet_data = get_tweet_text(tweet)
        except Exception:
            # ignore if there is any loading or tweet error
            continue

        if tweet_data:
            tweets_list.append(tweet_data)
            print(".", end="")
            sys.stdout.flush()
    return tweets_list

def get_tweets_data(username, soup):
    tweets_list = list()
    tweets_list.extend(get_this_page_tweets(soup))

    next_pointer = soup.find("div", {"class": "stream-container"})["data-min-position"]

    while True:
        next_url = "https://twitter.com/i/profiles/show/" + username + \
                   "/timeline/tweets?include_available_features=1&" \
                   "include_entities=1&max_position=" + next_pointer + "&reset_error_state=false"

        next_response = None
        try:
            next_response = requests.get(next_url)
        except Exception as e:
            # in case there is some issue with request. None encountered so far.
            print(e)
            return tweets_list

        tweets_data = next_response.text
        tweets_obj = json.loads(tweets_data)
        if not tweets_obj["has_more_items"] and not tweets_obj["min_position"]:
            # two checks here because in one case has_more_items was false but there were more items
            print("\nNo more tweets returned")
            break
        next_pointer = tweets_obj["min_position"]
        html = tweets_obj["items_html"]
        soup = BeautifulSoup(html, 'lxml')
        tweets_list.extend(get_this_page_tweets(soup))

    return tweets_list

# dump final result in a JSON file
def dump_data(username, tweets):
    filename = username+"_twitter.json"
    print("\nDumping data in file " + filename)
    data = dict()
    data["tweets"] = tweets
    with open(filename, 'w') as fh:
        fh.write(json.dumps(data))

    return filename

def get_username():
    # if username is not passed
    if len(sys.argv) < 2:
        usage()
    username = sys.argv[1].strip().lower()
    if not username:
        usage()

    return username

def start(username=None):
    if username is None:
        username = get_username()
    url = "https://twitter.com/" + username
    print("\n\nDownloading tweets for " + username)
    response = None
    try:
        response = requests.get(url)
    except Exception as e:
        print(repr(e))
        sys.exit(1)
    
    if response.status_code != 200:
        print("Non success status code returned "+str(response.status_code))
        sys.exit(1)

    soup = BeautifulSoup(response.text, 'lxml')

    if soup.find("div", {"class": "errorpage-topbar"}):


        print("\n\n Error: Invalid username.")
        sys.exit(1)

    tweets = get_tweets_data(username, soup)

    # dump data in a JSON file
    dump_data(username, tweets)
    print(str(len(tweets)) + " tweets dumped.")

start()
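To run Part 1, save it as script_name.py and pass the username as an argument, e.g. python script_name.py iamsrk (per the usage() message); the scraped tweets are written to <username>_twitter.json by dump_data().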

Part 2
from bs4 import BeautifulSoup
import requests
handle = input('Input your account name on Twitter: ')
ctr = int(input('Input number of tweets to scrape: '))
res=requests.get('https://twitter.com/'+ handle)
bs=BeautifulSoup(res.content,'lxml')
all_tweets = bs.find_all('div',{'class':'tweet'})
if all_tweets:
  for tweet in all_tweets[:ctr]:
    context = tweet.find('div',{'class':'context'}).text.replace("\n"," ").strip()
    content = tweet.find('div',{'class':'content'})
    header = content.find('div',{'class':'stream-item-header'})
    user = header.find('a',{'class':'account-group js-account-group js-action-profile js-user-profile-link js-nav'}).text.replace("\n"," ").strip()
    time = header.find('a',{'class':'tweet-timestamp js-permalink js-nav js-tooltip'}).find('span').text.replace("\n"," ").strip()
    message = content.find('div',{'class':'js-tweet-text-container'}).text.replace("\n"," ").strip()
    footer = content.find('div',{'class':'stream-item-footer'})
    stat = footer.find('div',{'class':'ProfileTweet-actionCountList u-hiddenVisually'}).text.replace("\n"," ").strip()
    if context:
      print(context)
    print(user,time)
    print(message)
    print(stat)
    print()
else:
    print("List is empty/account name not found.")

3. Snscrape
https://medium.com/swlh/how-to-scrape-tweets-by-location-in-python-using-snscrape-8c870fa6ec25
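The linked article scrapes by location; the following is only a rough sketch of snscrape's Python module (module and attribute names follow the snscrape README and may differ between versions, e.g. newer releases rename content to rawContent):

import snscrape.modules.twitter as sntwitter

# same profile and date as the Twint example above
query = 'from:iamsrk since:2021-06-05'

tweets = []
# get_items() is a generator, so cap the number of tweets ourselves
for i, tweet in enumerate(sntwitter.TwitterSearchScraper(query).get_items()):
    if i >= 100:
        break
    tweets.append((tweet.date, tweet.content))

print(len(tweets), 'tweets collected')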

More results:
https://medium.com/towards-data-science/search?q=twitter+web+scrapping+python

4. Selenium:

https://dev.to/petercour/twitter-scraping-with-python-1nmo

https://medium.com/@wyfok/web-scrape-twitter-by-python-selenium-part-1-b3e2db29051d
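Neither post's code is reproduced here; the following is a minimal sketch of the Selenium approach (assumptions: Chrome with ChromeDriver on PATH, and that each tweet is wrapped in an <article> element, which can break as Twitter's markup changes):

from selenium import webdriver
from selenium.webdriver.common.by import By
import time

driver = webdriver.Chrome()                # requires ChromeDriver on PATH
driver.get('https://twitter.com/iamsrk')   # same profile as earlier examples
time.sleep(5)   # crude wait for the timeline to render; WebDriverWait is more robust

# assumption: tweets are wrapped in <article> elements in the current web UI
for article in driver.find_elements(By.TAG_NAME, 'article'):
    print(article.text, '\n---')

driver.quit()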

5. Using all 4 libraries:


https://medium.com/towards-data-science/web-scraping-for-beginners-beautifulsoup-scrapy-selenium-twitter-api-f5a6d0589ea6
