Akshay Kulkarni and Adarsha Shivananda
The publisher, the authors and the editors are safe to assume that the
advice and information in this book are believed to be true and accurate
at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the
material contained herein or for any errors or omissions that may have
been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
About the Authors
Adarsha Shivananda
is a lead data scientist on Indegene Inc.'s product and technology team,
where he leads a group of analysts who build predictive analytics and
AI features into healthcare software products. This work mainly involves
multichannel activities for pharma products and solving real-time
problems encountered by pharma sales reps. Adarsha aims to build a
pool of exceptional data scientists within the organization to solve
greater health care problems through
brilliant training programs. He always
wants to stay ahead of the curve.
His core expertise involves machine
learning, deep learning,
recommendation systems, and statistics.
Adarsha has worked on various data
science projects across multiple domains
using different technologies and
methodologies. Previously, he worked
for Tredence Analytics and IQVIA.
He lives in Bangalore, India, and loves
to read, ride, and teach data science.
About the Technical Reviewer
Aakash Kag
is a data scientist at AlixPartners and is a
co-founder of the Emeelan application.
He has six years of experience in big data
analytics and has a postgraduate degree
in computer science with a specialization
in big data analytics. Aakash is passionate about developing social
platforms and machine learning, and he often speaks at meetups.
© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2021
A. Kulkarni, A. Shivananda, Natural Language Processing Recipes
https://doi.org/10.1007/978-1-4842-7351-7_1
This chapter covers various sources of text data and the ways to extract it. Textual data can act as
information or insights for businesses. The following recipes are covered.
Recipe 1. Text data collection using APIs
Recipe 2. Reading a PDF file in Python
Recipe 3. Reading a Word document
Recipe 4. Reading a JSON object
Recipe 5. Reading an HTML page and HTML parsing
Recipe 6. Regular expressions
Recipe 7. String handling
Recipe 8. Web scraping
Introduction
Before getting into the details of the book, let’s look at generally available data sources. We need to identify
potential data sources that can help with solving data science use cases.
Client Data
For any problem statement, one of the sources is the data that is already present. The business decides
where it wants to store its data. Data storage depends on the type of business, the amount of data, and the
costs associated with the sources. The following are some examples.
SQL databases
HDFS
Cloud storage
Flat files
Free Sources
A large amount of data is freely available on the Internet. You just need to streamline the problem and start
exploring multiple free data sources.
Free APIs like Twitter
Wikipedia
Government data (e.g., http://data.gov)
Census data (e.g., www.census.gov/data.html)
Health care claim data (e.g., www.healthdata.gov)
Data science community websites (e.g., www.kaggle.com)
Google dataset search (e.g., https://datasetsearch.research.google.com)
Web Scraping
Web scraping extracts content/data from websites, blogs, forums, and retail sites (e.g., product reviews),
with permission from the respective sources, using web scraping packages in Python.
There are a lot of other sources, such as news data and economic data, that can be leveraged for analysis.
Problem
You want to collect text data using Twitter APIs.
Solution
Twitter has a gigantic amount of data with a lot of value in it. Social media marketers make their living from
it. There is an enormous number of tweets every day, and every tweet has some story to tell. When all of this
data is collected and analyzed, it gives a business tremendous insight into its company, products,
services, and so forth.
Let’s now look at how to pull data and then explore how to leverage it in the coming chapters.
How It Works
Step 1-1. Log in to the Twitter developer portal
Log in to the Twitter developer portal at https://developer.twitter.com.
Create your own app in the Twitter developer portal, and get the following keys. Once you have these
credentials, you can start pulling data.
consumer key: The key associated with the application (Twitter, Facebook, etc.)
consumer secret: The password used to authenticate with the authentication server (Twitter, Facebook,
etc.)
access token: The key given to the client after successful authentication of keys
access token secret: The password for the access key
# Install tweepy
!pip install tweepy
# Import the libraries
import numpy as np
import tweepy
import json
import pandas as pd
from tweepy import OAuthHandler
# credentials
consumer_key = "adjbiejfaaoeh"
consumer_secret = "had73haf78af"
access_token = "jnsfby5u4yuawhafjeh"
access_token_secret = "jhdfgay768476r"
# calling API
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
# Provide the query for which you want to pull data; for example, the mobile phone ABC
query = "ABC"
# Fetching tweets
Tweets = api.search(query, count=10, lang='en', exclude='retweets', tweet_mode='extended')
This query pulls the top ten tweets when product ABC is searched. The API pulls English tweets since the
language given is 'en'. It excludes retweets.
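As a quick check, here is a minimal sketch (assuming the api.search call above succeeded) that collects the
fetched tweets into a pandas DataFrame for later analysis; with tweet_mode='extended', the tweet text is
available in the full_text attribute.
# Collect the fetched tweets into a DataFrame (assumes the Tweets object from api.search above)
tweets_df = pd.DataFrame([{"created_at": t.created_at, "text": t.full_text} for t in Tweets])
print(tweets_df.head())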
Problem
You want to read a PDF file.
Solution
The simplest way to read a PDF file is by using the PyPDF2 library.
How It Works
Follow the steps in this section to extract data from PDF files.
Note You can download any PDF file from the web and place it in the location where you are running
this Jupyter notebook or Python script.
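The extraction code itself is not reproduced here, so the following is a minimal sketch using the classic
PyPDF2 (pre-2.0) API; "file.pdf" is a placeholder for the file you downloaded. Newer PyPDF2 releases renamed
these calls (PdfReader, reader.pages, page.extract_text()).
#Install PyPDF2
!pip install PyPDF2
#Import the library
import PyPDF2
#Open the pdf file in read-binary mode ("file.pdf" is a placeholder)
pdf = open("file.pdf", "rb")
#Create a pdf reader object
pdf_reader = PyPDF2.PdfFileReader(pdf)
#Check the number of pages
print(pdf_reader.numPages)
#Extract and print the text of the first page
page = pdf_reader.getPage(0)
print(page.extractText())
#Close the file
pdf.close()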
Please note that the function doesn’t work for scanned PDFs.
Problem
You want to read Word files.
Solution
The simplest way is to use the docx library.
How It Works
Follow the steps in this section to extract data from a Word file.
#Install python-docx (it provides the docx module imported below)
!pip install python-docx
#Import library
from docx import Document
Note You can download any Word file from the web and place it in the location where you are running a
Jupyter notebook or Python script.
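A minimal sketch of the extraction step; "file.docx" is a placeholder for the file you downloaded.
#Create a Word document object ("file.docx" is a placeholder filename)
doc = Document("file.docx")
#Extract and join the text of each paragraph
docu = ""
for para in doc.paragraphs:
    docu += para.text + "\n"
print(docu)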
Problem
You want to read a JSON file/object.
Solution
The simplest way is to use requests and the JSON library.
How It Works
Follow the steps in this section to extract data from JSON.
import requests
import json
Step 4-2. Extract text from a JSON file
Now let's extract the text.
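A minimal sketch of the extraction, shown on an inline JSON string so it runs without any external endpoint;
for a web API you would obtain the same kind of object with requests.get(url).json(). The sample JSON is made
up for illustration.
# Parse a JSON object from a string
sample = '{"quote": {"text": "Simplicity is the soul of efficiency", "author": "Austin Freeman"}}'
data = json.loads(sample)
# Extract the text fields
print(data["quote"]["text"])
print(data["quote"]["author"])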
Problem
You want to parse/read HTML pages.
Solution
The simplest way is to use the bs4 library.
How It Works
Follow the steps in this section to extract data from the web.
#Import the libraries (urllib2 here is an alias for Python 3's urllib.request)
import urllib.request as urllib2
from bs4 import BeautifulSoup
#Fetch the HTML page
response = urllib2.urlopen('https://en.wikipedia.org/wiki/Natural_language_processing')
html_doc = response.read()
#Parsing
soup = BeautifulSoup(html_doc, 'html.parser')
# Formatting the parsed html file
strhtm = soup.prettify()
# Print the first few characters
print(strhtm[:1000])
#output
<!DOCTYPE html>
<html class="client-nojs" dir="ltr" lang="en">
<head>
<meta charset="utf-8"/>
<title>
Natural language processing - Wikipedia
</title>
<script>
document.documentElement.className = document.documentElement.className.rep
</script>
<script>
(window.RLQ=window.RLQ||[]).push(function()
{mw.config.set({"wgCanonicalNamespace":"","wgCanonicalSpecialPageName":false,"
processing","wgCurRevisionId":860741853,"wgRevisionId":860741853,"wgArticleId"
["*"],"wgCategories":["Webarchive template wayback links","All accuracy disput
identifiers","Natural language processing","Computational linguistics","Speech
print(soup.title)
print(soup.title.string)
print(soup.a.string)
print(soup.b.string)
#output
<title>Natural language processing - Wikipedia</title>
Natural language processing - Wikipedia
None
Natural language processing
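Beyond individual tags, you can pull the text of every occurrence of a tag. A minimal sketch that prints the
text of the first few paragraphs of the page parsed above:
# Extract the text of the first three <p> tags in the parsed page
for para in soup.find_all('p')[:3]:
    print(para.get_text())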
Problem
You want to parse text data using regular expressions.
Solution
The best way is to use the re library in Python.
How It Works
Let’s look at some of the ways we can use regular expressions for our tasks.
The basic flags are I, L, M, S, U, X.
re.I ignores case.
re.L makes patterns like \w and \b locale-dependent.
re.M finds patterns across multiple lines (^ and $ match at each line).
re.S makes the dot (.) match any character, including a newline.
re.U works on Unicode data.
re.X lets you write regex in a more readable (verbose) format.
The following describes regular expressions' functionality.
Find a single occurrence of characters a and b: [ab]
Find characters except for a and b: [^ab]
Find the character range of a to z: [a-z]
Find a character range except a to z: [^a-z]
Find all the characters from both a to z and A to Z: [a-zA-Z]
Find any single character: .
Find any whitespace character: \s
Find any non-whitespace character: \S
Find any digit: \d
Find any non-digit: \D
Find any non-word character: \W
Find any word character: \w
Find either a or b: (a|b)
Match zero or one occurrence of a: a? (? matches zero or one occurrence)
Match zero or more occurrences of a: a* (* matches zero or more occurrences)
Match one or more occurrences of a: a+ (+ matches one or more occurrences)
Match exactly three consecutive occurrences of a: a{3}
Match three or more consecutive occurrences of a: a{3,}
Match three to six consecutive occurrences of a: a{3,6}
Start of a string: ^
End of a string: $
Match word boundary: \b
Non-word boundary: \B
The re.match() and re.search() functions find patterns, which are then processed according to
the requirements of the application.
Let’s look at the differences between re.match() and re.search().
re.match() checks for a match only at the beginning of the string. So, if it finds the pattern at the
beginning of the input string, it returns the matched object; otherwise, it returns None.
re.search() checks for a match anywhere in the string. It finds the first occurrence of the pattern in the
given input string.
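For instance, here is a minimal sketch contrasting the two.
import re
# re.match() only looks at the beginning of the string
print(re.match("data", "data science"))     # <re.Match object; span=(0, 4), match='data'>
print(re.match("science", "data science"))  # None
# re.search() looks anywhere in the string
print(re.search("science", "data science")) # <re.Match object; span=(5, 12), match='science'>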
Now let’s look at a few examples using these regular expressions.
Tokenizing
Tokenizing means splitting a sentence into words. One way to do this is to use re.split.
# Import library
import re
#Run the split query (a raw string avoids escape-sequence warnings)
re.split(r'\s+', 'I like this book.')
['I', 'like', 'this', 'book.']
Extracting Email IDs
The following regular expression pattern matches most email IDs.
([a-zA-Z0-9+._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9_-]+)
There are even more complex patterns that handle all the edge cases (e.g., ".co.in" email IDs). Please give it a
try.
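A minimal sketch of applying this pattern with re.findall; the sample sentence is made up for illustration.
# Find all email IDs in a piece of text
text = "Please reach out to abc@gmail.com or support@xyz.co for help."
emails = re.findall(r'([a-zA-Z0-9+._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9_-]+)', text)
print(emails)
#output
['abc@gmail.com', 'support@xyz.co']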
Extracting Data from an eBook and Performing Regex
# Import libraries
import re
import requests
#url of the ebook you want to extract
url = 'https://www.gutenberg.org/files/2638/2638-0.txt'
#function to extract the book text
def get_book(url):
    # Sends an http request to get the text from Project Gutenberg
    raw = requests.get(url).text
    # Discards the metadata from the beginning of the book
    start = re.search(r"\*\*\* START OF THIS PROJECT GUTENBERG EBOOK .* \*\*\*", raw).end()
    # Discards the metadata from the end of the book
    stop = re.search(r"II", raw).start()
    # Keeps the relevant text
    text = raw[start:stop]
    return text
# processing
def preprocess(sentence):
    return re.sub('[^A-Za-z0-9.]+', ' ', sentence).lower()
#calling the above functions
book = get_book(url)
processed_book = preprocess(book)
print(processed_book)
# Output
produced by martin adamson david widger with corrections by andrew sly
the idiot by fyodor dostoyevsky translated by eva martin part i i. towards
the end of november during a thaw at nine o clock one morning a train on
the warsaw and petersburg railway was approaching the latter city at full
speed. the morning was so damp and misty that it was only with great
difficulty that the day succeeded in breaking and it was impossible to
distinguish anything more than a few yards away from the carriage windows.
some of the passengers by this particular train were returning from abroad
but the third class carriages were the best filled chiefly with
insignificant persons of various occupations and degrees picked up at the
different stations nearer town. all of them seemed weary and most of them
had sleepy eyes and a shivering expression while their complexions
generally appeared to have taken on the colour of the fog outside. when da
2. Perform an exploratory data analysis on this data using regex.
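As one possible direction for the exploratory analysis, here is a minimal sketch that uses regex to count word
occurrences in processed_book.
# Count how many times the word "the" appears
print(len(re.findall(r'\bthe\b', processed_book)))
# Top ten most frequent words, extracted with a regex
from collections import Counter
words = re.findall(r'[a-z]+', processed_book)
print(Counter(words).most_common(10))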
Problem
You want to explore handling strings.
Solution
The simplest way is to use the following string functionality.
s.find(t) is an index of the first instance of string t inside s (–1 if not found)
s.rfind(t) is an index of the last instance of string t inside s (–1 if not found)
s.index(t) is like s.find(t) except it raises ValueError if not found
s.rindex(t) is like s.rfind(t) except it raises ValueError if not found
s.join(text) combines the words of the text into a string using s as the glue
s.split(t) splits s into a list wherever a t is found (whitespace by default)
s.splitlines() splits s into a list of strings, one per line
s.lower() is a lowercase version of the string s
s.upper() is an uppercase version of the string s
s.title() is a titlecased version of the string s
s.strip() is a copy of s without leading or trailing whitespace
s.replace(t, u) replaces instances of t with u inside s
How It Works
Now let’s look at a few of the examples.
Replacing Content
Create a string and replace the content. Creating strings is easy. It is done by enclosing the characters in
single or double quotes. And to replace, you can use the replace function.
1. Create two strings and concatenate them.
s1 = "nlp"
s2 = "machine learning"
s3 = s1+s2
print(s3)
#output
'nlpmachine learning'
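2. Replace content in a string. A minimal sketch using the replace function mentioned above; the sample
string is made up for illustration.
s = "I like NLP"
s4 = s.replace("like", "love")
print(s4)
#output
'I love NLP'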
Caution Before scraping any websites, blogs, or ecommerce sites, please make sure you read the site’s
terms and conditions on whether it gives permission for data scraping. Generally, robots.txt indicates which
parts of a site may be crawled (e.g., see www.alixpartners.com/robots.txt), and a sitemap lists the
site's URLs (e.g., see www.alixpartners.com/sitemap.xml).
Web scraping is also known as web harvesting and web data extraction. It is a technique to extract a large
amount of data from websites and save it in a database or locally. You can use this data to extract
information related to your customers, users, or products for the business’s benefit.
A basic understanding of HTML is a prerequisite.
Problem
You want to extract data from the web by scraping. Let’s use IMDB.com as an example of scraping top
movies.
Solution
The simplest way to do this is by using Python’s Beautiful Soup or Scrapy libraries. Let’s use Beautiful Soup
in this recipe.
How It Works
Follow the steps in this section to extract data from the web.
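Steps 8-1 through 8-3 (importing the libraries and setting the target URL) are not reproduced above; the
following is a minimal sketch of them, assuming the IMDb Top 250 chart as the target page.
# Import the libraries used in this recipe
import re
import requests
import pandas as pd
from pandas import Series
from bs4 import BeautifulSoup
from time import sleep
from ipywidgets import FloatProgress
from IPython.display import display
# Target URL (assumed): the IMDb Top 250 chart
url = 'https://www.imdb.com/chart/top'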
Step 8-4. Request the URL and download the content using Beautiful Soup
result = requests.get(url)
c = result.content
soup = BeautifulSoup(c,"lxml")
Step 8-5. Understand the website’s structure to extract the required information
Go to the website and right-click the page content to inspect the site’s HTML structure.
Identify the data and fields that you want to extract. For example, you want the movie name and IMDB
rating.
Check which div or class in the HTML contains the movie names and parse the Beautiful Soup
accordingly. In this example, you can parse the soup through <table class ="chart full-width">
and <td class="titleColumn"> to extract the movie name.
Similarly, you can fetch other data; refer to the code in step 8-6.
Step 8-6. Use Beautiful Soup to extract and parse the data from HTML tags
summary = soup.find('div',{'class':'article'})
# Create empty lists to append the extracted data.
moviename = []
cast = []
description = []
rating = []
ratingoutof = []
year = []
genre = []
movielength = []
rot_audscore = []
rot_avgrating = []
rot_users = []
# Extracting the required data from the html soup.
rgx = re.compile('[%s]' % '()')   # regex to strip parentheses from the year
f = FloatProgress(min=0, max=250)
display(f)
for row,i in zip(summary.find('table').findAll('tr'),
                 range(len(summary.find('table').findAll('tr')))):
    for sitem in row.findAll('span',{'class':'secondaryInfo'}):
        s = sitem.find(text=True)
        year.append(rgx.sub('', s))
    for ritem in row.findAll('td',{'class':'ratingColumn imdbRating'}):
        for iget in ritem.findAll('strong'):
            rating.append(iget.find(text=True))
            ratingoutof.append(iget.get('title').split(' ', 4)[3])
    for item in row.findAll('td',{'class':'titleColumn'}):
        for href in item.findAll('a',href=True):
            moviename.append(href.find(text=True))
            rurl = 'https://www.rottentomatoes.com/m/' + href.find(text=True)
            try:
                rresult = requests.get(rurl)
            except requests.exceptions.ConnectionError:
                status_code = "Connection refused"
            rc = rresult.content
            rsoup = BeautifulSoup(rc)
            try:
                rot_audscore.append(rsoup.find('div',{'class':'meter-value'}).find('span',{'class':'superPageFontColor'}).text)
                rot_avgrating.append(rsoup.find('div',{'class':'audience-info superPageFontColor'}).find('div').contents[2].strip())
                rot_users.append(rsoup.find('div',{'class':'audience-info hidden-xs superPageFontColor'}).contents[3].contents[2].strip())
            except AttributeError:
                rot_audscore.append("")
                rot_avgrating.append("")
                rot_users.append("")
            cast.append(href.get('title'))
            imdb = "http://www.imdb.com" + href.get('href')
            try:
                iresult = requests.get(imdb)
                ic = iresult.content
                isoup = BeautifulSoup(ic)
                description.append(isoup.find('div',{'class':'summary_text'}).find(text=True).strip())
                genre.append(isoup.find('span',{'class':'itemprop'}).find(text=True))
                movielength.append(isoup.find('time',{'itemprop':'duration'}).find(text=True).strip())
            except requests.exceptions.ConnectionError:
                description.append("")
                genre.append("")
                movielength.append("")
    sleep(.1)
    f.value = i
Note that there is a high chance that you might encounter an error while executing this script because of
the following reasons.
Your request to the URL fails. If so, try again after some time. This is common in web scraping.
The webpages are dynamic, which means the HTML tags keep changing. Study the tags and make small
changes in the code in accordance with HTML, and you should be good to go.
Step 8-7. Convert lists to a data frame and perform an analysis that meets
business requirements
# List to pandas series
moviename = Series(moviename)
cast = Series(cast)
description = Series(description)
rating = Series(rating)
ratingoutof = Series(ratingoutof)
year = Series(year)
genre = Series(genre)
movielength = Series(movielength)
rot_audscore = Series(rot_audscore)
rot_avgrating = Series(rot_avgrating)
rot_users = Series(rot_users)
# creating dataframe and doing analysis
imdb_df = pd.concat([moviename, year, description, genre, movielength, cast, rating,
                     ratingoutof, rot_audscore, rot_avgrating, rot_users], axis=1)
# Column names after 'imdb_rating' are reconstructed here to match the series above
imdb_df.columns = ['moviename', 'year', 'description', 'genre', 'movielength', 'cast', 'imdb_rating',
                   'imdb_rating_basedon', 'rotten_audscore', 'rotten_avgrating', 'rotten_users']
imdb_df['rank'] = imdb_df.index + 1
imdb_df.head(1)
This chapter implemented most of the techniques to extract text data from sources. In the coming
chapters, you look at how to explore, process, and clean data. You also learn about feature engineering and
building NLP applications.
© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2021
A. Kulkarni, A. Shivananda, Natural Language Processing Recipes
https://doi.org/10.1007/978-1-4842-7351-7_2
This chapter discusses various methods and techniques for preprocessing textual data, along with
exploratory data analysis. It covers the following recipes.
Recipe 1. Lowercasing
Recipe 2. Punctuation removal
Recipe 3. Stop words removal
Recipe 4. Text standardization
Recipe 5. Spelling correction
Recipe 6. Tokenization
Recipe 7. Stemming
Recipe 8. Lemmatization
Recipe 9. Exploratory data analysis
Recipe 10. Dealing with emojis and emoticons
Recipe 11. End-to-end processing pipeline
Before directly jumping into the recipes, let’s first understand the need for preprocessing
the text data. As you know, about 90% of the world’s data is unstructured and may be present
in the form of an image, text, audio, and video. Text can come in various forms, from a list of
individual words to sentences to multiple paragraphs with special characters (like tweets and
other punctuations). It also may be present in the form of web, HTML, documents, and so on.
This data is never clean and contains a lot of noise. It needs to be treated, and a few
preprocessing functions need to be applied, to make sure you have the right input data for
feature engineering and model building. If you don't preprocess the data, any algorithms built
on top of such data do not add any value to a business. This reminds us of a very popular
phrase in data science: “Garbage in, garbage out.”
Preprocessing involves transforming raw text data into an understandable format. Real-
world data is often incomplete, inconsistent, and filled with a lot of noise, and is likely to
contain many errors. Preprocessing is a proven method of resolving such issues. Data
preprocessing prepares raw text data for further processing.
Problem
You want to lowercase the text data.
Solution
The simplest way is to use the default lower() function in Python.
The lower() method converts all uppercase characters in a string to lowercase characters
and returns them.
How It Works
Follow the steps in this section to lowercase a given text or document. Here, Python is used.
x = 'Testing'
x2 = x.lower()
print(x2)
#output
'testing'
When you want to perform lowercasing on a data frame, use the apply function as follows.
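A minimal sketch, assuming an illustrative data frame with a text column named tweet; the sample rows are
made up.
import pandas as pd
df = pd.DataFrame({'tweet': ['This IS a Sample TWEET', 'Text PREPROCESSING is Important']})
# Apply lowercasing to every value in the tweet column
df['tweet'] = df['tweet'].apply(lambda x: " ".join(word.lower() for word in x.split()))
print(df)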