A PROJECT REPORT
ON
VOICE ASSISTANT USING PYTHON
Bachelor of Technology
(Fifth Semester)
In
COMPUTER SCIENCE & ENGINEERING
Session 2023-2024
Prescribed By
DBATU University, Lonere
Guided By: PROF. RAKHI SHENDE
Submitted By: SANGHDIP SANJAY UDRAKE
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
RAJIV GANDHI COLLEGE OF ENGINEERING
RESEARCH & TECHNOLOGY,
CHANDRAPUR.
Session 2023-2024
CERTIFICATE
This is to certify that Mr. Sanghdip Udrake, studying in the Fifth Semester, Department of
COMPUTER SCIENCE & ENGINEERING,
Institute Vision
Institute Mission
M3. To motivate students to meet the dynamic needs of society with novelty and creativity.
M4. To promote research and continuing education to keep the country ahead.
Department Vision
To be a centre of excellence in Computer Science & Engineering by imparting
knowledge, professional skills and human values.
Department Mission
M1. To create an encouraging learning environment by adopting innovative, student-centric learning methods that promote quality education and research.
M2. To make students competent professionals and entrepreneurs by imparting career skills and ethics.
M3. To impart quality, industry-oriented education through industrial internships, industrial projects and partnering with industries to make students corporate ready.
Rajiv Gandhi College of Engineering Research & Technology,
Chandrapur
Department of Computer Science & Engineering
PO/PSO   Attainment (Level: 1 = Low, 2 = Moderate, 3 = High)   Description
PO1      3      In this project work, engineering knowledge is applied at the highest level.
PO2      3      In this project work, engineering knowledge is applied at the highest level.
PO3      3      In this project work, engineering knowledge is applied at the highest level.
PO4      3      In this project work, engineering knowledge is applied at the highest level.
PO5      3      In this project work, engineering knowledge is applied at the highest level.
PO6      3      In this project work, engineering knowledge is applied at the highest level.
PO7      1      In this project work, the engineer and society concept is applied at the lowest level.
PO8      1      In this project work, the engineer and society concept is applied at the lowest level.
PO9      3      In this project work, engineering knowledge is applied at the highest level.
PO10     3      In this project work, engineering knowledge is applied at the highest level.
PO11     1      In this project work, the engineer and society concept is applied at the lowest level.
PO12     3      In this project work, engineering knowledge is applied at the highest level.
PSO1     3      In this project work, engineering knowledge is applied at the highest level.
PSO2     3      In this project work, engineering knowledge is applied at the highest level.
PSO3     3      In this project work, engineering knowledge is applied at the highest level.
ACKNOWLEDGEMENT
I could never have completed this work without the support and assistance of many people.
First and foremost, I would like to express my deepest gratitude to my project in-charge,
Prof. Madhavi Sadu, and my project guide, Prof. Rakhi Shende, Department of Computer Science
and Engineering, for their excellent guidance, valuable suggestions and kind encouragement
in academics. With their help, I learned how to design solutions, improve them and,
ultimately, implement them.
I would like to extend my grateful thanks to Dr. Nitin Janwe, Head of Department, Computer
Science & Engineering, and Dr. Pravin Potduke, Principal, RCERT, Chandrapur. Their support
is greatly acknowledged. Finally, it is impossible to put into words my feelings of love and
gratitude towards my family and friends.
INDEX
Sr. No.   Topic                          Page No.
1         Abstract                       1
2         Introduction                   2
3         System Requirements            3
4         Architecture of Project        4
5         Modules Developed              5
9         Advantages & Disadvantages     11
10        Future Scope                   12
11        Conclusion                     13
12        Bibliography                   14
ABSTRACT
The "Voice Assistant Using Python" project presents the development and implementation
of a versatile voice-controlled assistant leveraging Python programming language and
associated libraries. The objective of this project was to create an interactive and intuitive
system capable of performing various tasks through voice commands, aiming to enhance
user convenience and efficiency.
The project utilized Python libraries for speech recognition, natural language
processing, and text-to-speech conversion to enable seamless interaction between users and
the voice assistant. The assistant was designed to interpret spoken commands, process
natural language, and execute corresponding actions, including retrieving information,
managing tasks, controlling applications, and performing basic tasks based on user
instructions.
The report details the methodology, tools, and techniques employed in the development of
the voice assistant, along with insights gained during the implementation phase. Results
include the successful creation of a functional voice-controlled assistant capable of
executing a range of predefined tasks based on voice commands, showcasing the potential
for further enhancements and future applications in the domain of voice-based user
interfaces.
This project not only demonstrates the capabilities of Python for creating interactive voice-
controlled systems but also underscores the possibilities of integrating such technology into
daily tasks, thereby contributing to the evolution of user-friendly, voice-enabled
applications.
INTRODUCTION
In recent years, advancements in natural language processing (NLP) and speech recognition
technologies have led to the proliferation of voice-controlled systems, revolutionizing
human-computer interactions. The "Voice Assistant Using Python" project represents an
exploration into the creation and implementation of a sophisticated voice-controlled assistant
using the versatile capabilities of the Python programming language.
The primary goal of this project was to develop an intuitive and responsive voice assistant
capable of understanding natural language commands and executing various tasks, thereby
providing users with a seamless and interactive experience. Harnessing the power of Python
and leveraging its libraries, this project sought to bridge the gap between human speech and
machine actions, aiming to simplify daily tasks and enhance user convenience.
The proliferation of voice-enabled devices and the increasing demand for hands-free
interactions in various domains have underscored the importance of developing intelligent
voice-controlled systems. This project aligns with this trend by focusing on the integration of
speech recognition, natural language understanding, and text-to-speech capabilities to create
an efficient and adaptable voice assistant.
The report aims to provide a comprehensive overview of the methodologies, tools, and
processes employed in the development of the voice assistant. It outlines the challenges
encountered, the strategies implemented, and the outcomes achieved during the course of
designing, coding, and refining the system.
Moreover, the project report delves into the significance of utilizing Python for such
applications, highlighting its flexibility, extensive libraries, and ease of implementation in
building sophisticated voice-controlled systems. Additionally, it explores the potential
applications and implications of voice assistants in various fields, emphasizing the
significance of human-computer interaction through natural language.
SYSTEM REQUIREMENTS
HARDWARE REQUIREMENTS:
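At a minimum, the source code implies a Windows PC (the SAPI5 speech engine and the win32 modules are Windows-specific), a working microphone for voice input, speakers or headphones for the spoken responses, and an internet connection for Google speech recognition and Wikipedia lookups.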
SOFTWARE REQUIREMENTS:
PyCharm
Jupyter Notebook
Visual Studio Code
Text-To-Speech:
pip install pyttsx3
Natural Language Processing (NLP) Libraries
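The source code section further imports several third-party packages; a representative set of pip commands is given below. The package names are the usual PyPI names for these modules and should be verified against the project environment (PyAudio is additionally assumed here for microphone input with the SpeechRecognition library).
pip install pyttsx3
pip install SpeechRecognition
pip install PyAudio
pip install wikipedia
pip install pyjokes
pip install wolframalpha
pip install requests
pip install beautifulsoup4
pip install feedparser
pip install twilio
pip install pywin32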
ARCHITECTURE OF PROJECT
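At a high level, the assistant captures speech from the microphone, converts it to text with the speech recognition library, matches the text against known commands, executes the matching task, and speaks the result back with the text-to-speech engine. As a rough outline of how these pieces fit together, the sketch below shows a top-level control flow built on the speak(), wishMe(), username() and takeCommand() helpers defined in the source code section; the 'exit' phrase and the loop structure are assumptions for illustration, not the project's exact implementation.

# Sketch of the top-level control flow (assumes the helper functions
# defined in the source code section; the 'exit' phrase is an assumption).
if __name__ == '__main__':
    wishMe()                            # greet the user by time of day
    username()                          # ask for and display the user's name
    while True:
        query = takeCommand().lower()   # speech -> text
        if 'exit' in query:
            speak("Thanks for giving me your time")
            break
        # ...otherwise dispatch the query to the matching task handler...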
MODULES DEVELOPED
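As an illustration of the kind of task modules the assistant supports, the handlers below cover two of the commands visible in the OUTPUT section (a Wikipedia lookup and "lock window"), using the wikipedia and ctypes modules imported in the source code. The function names and spoken responses are assumed for illustration and are not the project's original identifiers.

# Illustrative task handlers (names and responses are assumptions).
def search_wikipedia(query):
    # Read out a short Wikipedia summary for the spoken topic.
    speak("Searching Wikipedia...")
    topic = query.replace("wikipedia", "")
    results = wikipedia.summary(topic, sentences=3)
    speak("According to Wikipedia")
    print(results)
    speak(results)

def lock_window():
    # Lock the Windows workstation (the 'lock window' command in the output).
    speak("locking the device")
    ctypes.windll.user32.LockWorkStation()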
GRAPHICAL USER INTERFACE & SOURCE CODE
import subprocess
import wolframalpha
import pyttsx3
import tkinter
import json
import random
import operator
import speech_recognition as sr
import datetime
import wikipedia
import webbrowser
import os
import winshell
import pyjokes
import feedparser
import smtplib
import ctypes
import time
import requests
import shutil
from twilio.rest import Client
from clint.textui import progress
from ecapture import ecapture as ec
from bs4 import BeautifulSoup
import win32com.client as wincl
from urllib.request import urlopen

# Initialise the text-to-speech engine (SAPI5 is the Windows speech API).
engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)


def speak(audio):
    # Convert the given text to speech.
    engine.say(audio)
    engine.runAndWait()


def wishMe():
    # Greet the user according to the current time of day.
    hour = int(datetime.datetime.now().hour)
    if hour >= 0 and hour < 12:
        speak("Good Morning Sir !")
    elif hour >= 12 and hour < 18:
        speak("Good Afternoon Sir !")
    else:
        speak("Good Evening Sir !")


def username():
    # Ask the user for a name and print a welcome banner.
    speak("What should i call you sir")
    uname = takeCommand()
    speak("Welcome Mister")
    speak(uname)
    columns = shutil.get_terminal_size().columns
    print("#####################".center(columns))
    print("Welcome Mr.", uname.center(columns))
    print("#####################".center(columns))


def takeCommand():
    # Listen on the microphone and return the recognised text.
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Listening...")
        r.pause_threshold = 1
        audio = r.listen(source)
    try:
        print("Recognizing...")
        query = r.recognize_google(audio, language='en-in')
        print(f"User said: {query}\n")
    except Exception as e:
        print(e)
        print("Unable to Recognize your voice.")
        return "None"
    return query


def sendEmail(to, content):
    # Open an SMTP connection to Gmail and switch to TLS.
    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.ehlo()
    server.starttls()
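The section heading above mentions a graphical user interface, and tkinter is imported in the listing, but no window code appears in the excerpt. The snippet below is only an assumed, minimal sketch of what such a front end could look like; the Speak button, the start_listening wrapper and the label text are illustrative names, not the project's original interface.

# Assumed minimal tkinter front end (illustrative only, not the original GUI).
def start_listening():
    # Reuse the speech-to-text helper defined above and show the result.
    query = takeCommand()
    output_label.config(text="You said: " + query)

root = tkinter.Tk()
root.title("Voice Assistant Using Python")
tkinter.Button(root, text="Speak", command=start_listening).pack(padx=20, pady=10)
output_label = tkinter.Label(root, text="Press Speak and say a command")
output_label.pack(padx=20, pady=10)
root.mainloop()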
OUTPUT
Listening...
Recognizing...
User said: Sameer
#####################
Welcome Mr. Sameer
#####################
Listening...
Recognizing...
Unable to Recognize your voice.
Listening...
Recognizing...
User said: Himalaya in Wikipedia
The Himalayas, or Himalaya (; Sanskrit: [ɦɪmaːlɐjɐ]; from Sanskrit himá 'snow, frost',
and ā-laya 'dwelling, abode'), is a mountain range in Asia, separating the plains of the
Indian subcontinent from the Tibetan Plateau. The range has some of the Earth's
highest peaks, including the highest, Mount Everest; more than 100 peaks exceeding
elevations of 7,200 m (23,600 ft) above sea level lie in the Himalayas.
The Himalayas abut or cross five countries: Nepal, China, Pakistan, Bhutan and India.
#####################
Listening...
Recognizing...
User said: lock window
ADVANTAGES
DISADVANTAGES
1. Privacy Concerns:
Voice assistants often collect and store user data, raising privacy concerns about the security
of personal information and potential misuse of data.
2. Accuracy and Reliability:
Voice recognition technology may not always accurately interpret commands, leading to
misunderstandings or incorrect actions. Additionally, reliability issues can arise due to
connectivity or technical glitches.
3. Lack of Contextual Understanding:
Voice assistants may struggle with understanding context or following complex instructions,
limiting their ability to perform sophisticated tasks accurately.
4. Limited Functionality:
While voice assistants can perform various tasks, their capabilities might be limited
compared to manual interactions or graphical interfaces, especially for complex or
specialized tasks.
5. Dependency and Overreliance:
Overreliance on voice assistants may lead to reduced critical thinking or problem-solving
skills, as users become accustomed to immediate answers and solutions.
6. Misinterpretation and Miscommunication:
Misinterpretation of commands or accidental activations may lead to unintended actions or
miscommunication, causing frustration for users.
7. Security Risks:
Vulnerabilities exist in voice assistant systems, potentially allowing unauthorized access or
exploitation by malicious actors, posing security risks.
8. Compatibility Issues:
Voice assistants might not be compatible with all devices or platforms, limiting their
integration or usage in certain environments.
9. Language and Accent Limitations:
Some voice assistants may struggle with understanding different accents, languages, or
dialects, hindering accessibility for diverse user groups.
10. Power Consumption:
Continuous listening or active voice recognition draws power constantly, reducing the
device's battery life.
FUTURE SCOPE
1. Advanced Natural Language Processing (NLP):
Further development in NLP will enhance voice assistants' contextual understanding,
enabling more natural and human-like interactions.
2. Personalization and Context Awareness:
Voice assistants will evolve to become more personalized, adapting to individual user
preferences, behaviors, and contexts.
3. Multilingual Support and Accents:
Improved multilingual support and better understanding of diverse accents will enhance
accessibility and usability for a global audience.
4. Emotional Intelligence and Sentiment Analysis:
Integrating emotional intelligence into voice assistants to recognize emotions or sentiment in
speech could enable more empathetic and responsive interactions.
5. Integration with IoT and Smart Devices:
Deeper integration with Internet of Things (IoT) devices will allow voice assistants to control
and manage an even broader range of smart devices and systems.
6. Improved Security and Privacy Measures:
Future voice assistants will prioritize enhanced security features and robust privacy measures
to address growing concerns regarding data privacy and security.
7. Cross-Platform Integration:
Voice assistants will integrate across multiple devices and platforms, offering seamless
continuity of interactions regardless of the device being used.
8. AI and Machine Learning Advancements:
Continued advancements in AI and machine learning will enable voice assistants to learn and
adapt dynamically, improving accuracy and performance.
9. Customization and Skill Development:
Empowering users to create custom skills or functionalities for their voice assistants,
allowing for a more personalized experience.
10. Business and Industry Applications:
Expansion of voice assistants into various industries, including healthcare, education, retail,
and more, to streamline processes and enhance user experiences.
11. Ethical and Responsible AI Development:
Emphasis on ethical AI development, addressing biases, ensuring fairness, and maintaining
transparency in voice assistant systems.
12. Augmented Reality (AR) and Virtual Reality (VR) Integration:
Integration with AR/VR technologies could extend voice assistants' capabilities, providing
more immersive and interactive experiences.
CONCLUSION
The development and implementation of the "Voice Assistant Using Python" project mark a
significant stride in leveraging technology to redefine human-computer interactions.
Throughout this project, we have explored the capabilities of Python and its associated
libraries in creating a versatile and intuitive voice-controlled assistant, aiming to simplify
tasks and enhance user experiences.
The project's journey involved integrating speech recognition, natural language processing,
and text-to-speech functionalities to bridge the gap between spoken commands and
actionable tasks. Despite challenges such as accuracy limitations and privacy concerns, the
voice assistant showcases immense potential for revolutionizing how individuals interact with
technology.
In conclusion, the "Voice Assistant Using Python" project serves as a testament to the
capabilities of Python in creating intelligent and interactive systems. This project opens doors
to a future where voice assistants play a pivotal role in enhancing productivity, accessibility,
and convenience across various domains, paving the way for a more connected and efficient
technological landscape.
BIBLIOGRAPHY