

VISVESVARAYA TECHNOLOGICAL UNIVERSITY


BELGAUM-590018

An Internship Report on

“Artificial Intelligence And Machine Learning”

Submitted in partial fulfillment of the requirements for the award of the degree of

Bachelor of Engineering in
Computer Science & Engineering
Submitted By

IRAGOUDA PATIL 2MM20CS019

Under the Guidance of
Prof. Padiyappa K

MARATHA MANDAL ENGINEERING COLLEGE, BELAGAVI

COMPUTER SCIENCE ENGINEERING

R.S. No. 104, Halbhavi, Siddhaganga Oil Mills P.O., New Vantmuri, Via Kakti, Belagavi-591113


Department of Computer Science & Engineering

CERTIFICATE
This is to certify that the Internship Project work entitled "Eye Gesture Control
System" has been carried out by Iragouda Patil (2MM20CS013), a bonafide
student of Maratha Mandal Engineering College, Belagavi, in partial fulfillment
for the award of Bachelor of Engineering in Computer Science and Engineering
of Visvesvaraya Technological University, Belgaum, during the year 2023-2024.
It is certified that all corrections/suggestions indicated for Internal Assessment have
been incorporated in the report deposited in the departmental library. This
Internship Report has been approved as it satisfies the academic requirements in
respect of project work prescribed for the said degree.

----------------------- ----------------------
Signature of Guide Signature of HOD

Prof Swati Patil Prof Padiyappa


Dept. of Computer Science Dept. of Computer Science

External Viva

Name of the examiners Signature with date


CERTIFICATE OF INTERNSHIP


ACKNOWLEDGEMENT

The internship opportunity I had with Inventeron Technologies and Business
Solutions LLP was a great chance for learning and professional development. I
therefore consider myself very lucky to have been given the opportunity to be a
part of this esteemed firm.

I use this opportunity to express my gratitude and special thanks to the company,
which guided me, kept me on the correct path, and allowed me to carry out the
project at their esteemed organization.

Though the internship was short, it was effective. The guidance I received from the
professionals was beneficial, and I take this opportunity as a big milestone in my
career development. I will strive to use the skills and knowledge gained in the best
possible way.

IRAGOUDA PATIL


EXECUTIVE SUMMARY
INVENTERON TECHNOLOGIES AND BUSINESS SOLUTIONS LLP is a private company
registered in the year 2013 in Bangalore. Its efforts are directed at creating technology that
meets and comforts the requirements of its clients. The company's mission is to help leading
corporations and individuals create and enhance efficient IT landscapes using various
technologies that improve processes and productivity. Its process aims at supporting people
to maximize their benefits and reduce the risk of failures.

An internship at INVENTERON TECHNOLOGIES AND BUSINESS SOLUTIONS LLP
provides exposure to real-world scenarios and industry experience that is essential for a
student. The internship enables an individual to gain first-hand experience of working on
real-world problems. It also allows students to apply the skills and theoretical knowledge
learned at university. One can acquire an endless amount of education in one's life; however,
knowledge does not always translate to working life. The advantages of an internship are
getting to know industry trends and gaining knowledge and hands-on experience in a
particular field. The internship experience also adds an advantage to a student's resume.

I worked on a project named "Drowsiness Detection", where I tried to develop a machine
learning model. During the internship, I learned the importance of the different types of
technologies used and also gained knowledge about the domain. To sum up, the internship at
INVENTERON TECHNOLOGIES AND BUSINESS SOLUTIONS LLP has been a success:
I was able to gain practical skills as well as work in a fantastic environment.


TABLE OF CONTENTS

SL NO   CONTENT                        PAGE NO
1       Introduction                   1-2
2       About the Company              3-4
3       Internship Objectives          5
4       Tasks/Activities Performed     6-10
5       About the Project              11-12
6       Snapshots                      13-14
7       Reflection Notes               15
8       Conclusions                    16
9       References                     17


CHAPTER 1

INTRODUCTION
An internship helps you train under experienced professionals and explore what your chosen career
path would be like, and an internship with a company in your field can help you develop the skills
you require to thrive within a professional setting. At the end of the training period, the company
may ask you to review your time with them and write a report based on your experience. In this
report, we explain what an internship report is, how to write one, and provide a few format samples
to help you create your own. An internship report is a document that summarizes your internship
experience with a company and any ongoing considerations you might wish to recommend. After
you have completed an internship project or an internship program, the company is likely to ask
you to submit an internship training report detailing your work experience.

In a variety of fields, including computer science, fake news has become a key research topic.
Today's troublesome issue is that the press, particularly social media, has become a place for false
information that impacts the integrity of the entire news environment. This problem is rising because
anyone can register on social media as a news source without any cost (for example, anyone can
create a Facebook page pretending to be a news media organization). There are increasing concerns
about fake news outlets publishing "true" news stories and often adding "fake" followers widely to
those stories. Since the widespread dissemination of fake news can have a significant adverse impact
on individuals and society, the lack of robust fact-checking techniques is particularly worrying. We
provide a detailed account of fake news detection as a text classification problem, to be solved using
natural language processing (NLP) tools, and our tests show that fake news articles are detectable,
particularly given enough training information.


CHAPTER 2
ABOUT THE COMPANY
INVENTRON TECHNOLOGIES Products and Services LLP, Bengaluru, is a Limited Liability
Partnership firm incorporated on 28 June 2020; its founder and CEO is Sayed Asad. AiRobsoft is a
cutting-edge IT firm headquartered in Bangalore, India. Specializing in advanced technologies such
as data science, robotics, electronics engineering, and machine learning, INVENTRON
TECHNOLOGIES boasts a team of expert professionals dedicated to pushing the boundaries of
innovation. Collaborating seamlessly, they work on developing futuristic technologies that promise
to revolutionize various industries. With a focus on pushing the boundaries of what is possible,
INVENTRON TECHNOLOGIES is at the forefront of shaping the future of technology.
INVENTRON TECHNOLOGIES Products and Services is responsible for providing solutions and
services related to software, such as building cyber security networks, web development, mobile app
development, and cloud consulting, to name a few.


CHAPTER 3

INTERNSHIP OBJECTIVES

For an internship at INVENTRON TECHNOLOGIES focusing on the AIML domain, potential
objectives could include:
• Understanding AIML Fundamentals: Gain a solid understanding of AIML syntax, structure,
and its applications in building conversational agents and chatbots.
• Developing AIML Skills: Learn how to create and optimize AIML scripts to enhance
conversational capabilities and improve user experience.
• Integration with NLP Techniques: Explore how AIML can be integrated with natural
language processing techniques to improve language understanding and response generation.
• Building Conversational Agents: Work on projects to develop AIML-based chatbots or
virtual assistants for specific use cases, such as customer support, information retrieval, or
entertainment.
• Testing and Evaluation: Learn how to test and evaluate AIML-based systems, including
techniques for measuring performance, identifying issues, and optimizing conversational flows.
• Enhancing AI Capabilities: Gain exposure to other AI techniques and technologies that
complement AIML, such as machine learning algorithms for intent detection or sentiment analysis.


CHAPTER 4

ACTIVITIES PERFORMED
Week 1: Importing Libraries and Dataset
NumPy: NumPy is a powerful library for numerical computing in Python. It provides support
for multidimensional arrays, along with a collection of mathematical functions to operate on
these arrays efficiently.
Pandas: Pandas is a data manipulation and analysis library. It offers data structures like
DataFrame and Series, which are ideal for handling structured data. Pandas simplifies tasks
such as reading/writing data from/to various file formats, data cleaning, filtering, grouping, and
more.
scikit-learn: scikit-learn is a popular machine learning library that provides simple and
efficient tools for data mining and data analysis. It includes various algorithms for
classification, regression, clustering, dimensionality reduction, and more. Its user-friendly API
makes it suitable for both beginners and experts.
TensorFlow: TensorFlow is an open-source machine learning framework developed by
Google. It's widely used for building and training deep learning models, particularly neural
networks. TensorFlow offers flexibility, scalability, and high performance, making it suitable
for a wide range of applications, from computer vision to natural language processing.
OpenCV: OpenCV (Open Source Computer Vision Library) is a powerful library for computer
vision tasks. It provides a wide range of functionalities for image and video processing,
including object detection, face recognition, feature detection, image filtering, and geometric
transformations.
Reading the CSV File:
Next, we use the pd.read_csv() function to read the CSV file into a DataFrame, which is a two-
dimensional labeled data structure with columns of potentially different types. Here's the syntax
for reading a CSV file:
df = pd.read_csv('file_path.csv')
Exploring the Dataset:
Once the dataset is loaded into the DataFrame df, we can explore its structure and contents. Some
common operations include:
Displaying the first few rows of the dataset:
print(df.head())
Checking the dimensions of the dataset (number of rows and columns):
print(df.shape)
Summarizing the statistical properties of the dataset:


print(df.describe())
Checking for missing values:
print(df.isnull().sum())
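
Putting the Week 1 steps together, a minimal sketch might look like the following (the file name
drowsiness_data.csv is only an assumption used for illustration):

import pandas as pd

# load the dataset into a DataFrame (illustrative file name)
df = pd.read_csv('drowsiness_data.csv')

# quick look at structure and contents
print(df.head())          # first five rows
print(df.shape)           # (number of rows, number of columns)
print(df.describe())      # summary statistics of numeric columns
print(df.isnull().sum())  # count of missing values per column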

Week 2:
Data Processing: Data processing in Python, facilitated by the pandas library, involves several
key steps. Initially, data is imported from various sources using functions like read_csv() and
read_excel(), and then cleaned to handle missing values, duplicates, and inconsistencies.
Following this, data transformation techniques are applied, such as changing data types,
encoding categorical variables, and scaling numerical features. Feature engineering further
enhances the dataset by creating new features from existing ones to improve model
performance. Throughout this process, Python's extensive ecosystem of libraries like scikit-
learn supports additional techniques and functionalities. By leveraging these tools effectively,
analysts and data scientists can efficiently prepare data for analysis, enabling the derivation of
meaningful insights and the construction of accurate predictive models.
Data Visualization: Data visualization in Python is the process of creating graphical
representations of data to help understand, interpret, and communicate insights effectively.
Python offers various libraries such as matplotlib, Seaborn, and Plotly for visualization. These
libraries provide easy-to-use functions to create a wide range of plots including histograms,
scatter plots, line plots, and more. By visualizing data, patterns, trends, and relationships can
be quickly identified, facilitating better decision-making and storytelling. Python's
visualization capabilities, coupled with its flexibility and ease of use, make it a preferred choice
for data visualization tasks across different domains.
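
As a brief, hedged sketch of these two steps (the file name and the column names 'state' and 'ear'
are assumptions for illustration only):

import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler

# load the raw data (illustrative file name)
df = pd.read_csv('drowsiness_data.csv')

# cleaning: drop duplicate rows and fill missing numeric values with column means
df = df.drop_duplicates()
df = df.fillna(df.mean(numeric_only=True))

# transformation: encode an assumed categorical column and scale an assumed numeric feature
df['state'] = df['state'].map({'open': 0, 'closed': 1})
df['ear_scaled'] = StandardScaler().fit_transform(df[['ear']]).ravel()

# visualization: histogram of the scaled feature
plt.hist(df['ear_scaled'], bins=30)
plt.xlabel('eye aspect ratio (scaled)')
plt.ylabel('frequency')
plt.title('Distribution of EAR values')
plt.show()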

Week 3:
Importing machine learning (ML) and deep learning algorithms in Python is facilitated
by libraries such as scikit-learn, TensorFlow, and PyTorch. With scikit-learn, importing
algorithms is straightforward using its intuitive API. For example, importing a decision tree classifier:
from sklearn.tree import DecisionTreeClassifier
Similarly, with deep learning frameworks like TensorFlow and PyTorch, importing models involves
accessing pre-built architectures and functionalities.
For TensorFlow:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense


Once imported, these libraries allow users to instantiate and configure models, train them on
data, and make predictions efficiently. Their comprehensive documentation and active
communities provide resources for users to explore and leverage a wide range of algorithms
and techniques for various ML and deep learning tasks.
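
For instance, a minimal sketch of instantiating, training, and using a small Keras model (the layer
sizes and the randomly generated data below are assumptions for illustration only):

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# dummy training data: 100 samples, 4 features, binary labels (for illustration)
X_train = np.random.rand(100, 4)
y_train = np.random.randint(0, 2, size=(100,))

# instantiate and configure a small feed-forward network
model = Sequential([
    Dense(16, activation='relu', input_shape=(4,)),
    Dense(8, activation='relu'),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# train the model and make predictions on a few samples
model.fit(X_train, y_train, epochs=5, batch_size=16, verbose=0)
print(model.predict(X_train[:3]))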
Training data into algorithms: Training data is the foundational dataset used to teach
machine learning and deep learning algorithms to make predictions or classifications. It
comprises input features and corresponding target outputs, often organized in a structured
format such as a DataFrame or a NumPy array. In Python, training data is typically fed into
algorithms using libraries such as scikit-learn, TensorFlow, or PyTorch. The process involves
splitting the data into features (X) and labels/targets (y), then passing them to the algorithm's
fit() or train() method to initiate the training process. During training, the algorithm learns
the underlying patterns and relationships in the data, adjusting its internal parameters
iteratively to minimize the difference between predicted and actual outputs. Once trained, the
algorithm can generalize its knowledge to make predictions or classifications on new, unseen
data. This iterative learning process forms the backbone of supervised machine learning and
deep learning models, enabling them to perform various tasks, from image recognition to
natural language processing, with high accuracy and efficiency.
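
A minimal sketch of this workflow with scikit-learn follows; the file name and the column names
'ear', 'blink_rate', and 'drowsy' are assumptions used only to illustrate the fit/predict pattern:

import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv('drowsiness_data.csv')  # illustrative file name

# split the frame into input features (X) and target labels (y)
X = df[['ear', 'blink_rate']]
y = df['drowsy']

# instantiate the algorithm and fit it to the training data
clf = DecisionTreeClassifier(max_depth=5, random_state=42)
clf.fit(X, y)

# the trained model can now predict labels for samples
print(clf.predict(X.head()))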
Week 4: Testing
Testing algorithms for accuracy is a critical step in evaluating their performance and ensuring
their reliability for real-world applications. In Python, this process typically involves splitting
the dataset into training and testing sets using techniques like cross-validation or a simple train-
test split. Once the data is partitioned, the trained model is used to make predictions on the
testing set. These predictions are then compared to the actual labels or outcomes in the testing
set. The accuracy of the model is calculated by determining the percentage of correct
predictions out of the total number of predictions made. In classification tasks, accuracy can be
further evaluated using metrics like precision, recall, and F1-score to provide a more
comprehensive understanding of the model's performance across different classes. By testing
algorithms for accuracy, developers and data scientists can assess their effectiveness, identify
potential areas for improvement, and make informed decisions about deploying them in real-
world scenarios.
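
A hedged sketch of this evaluation workflow with scikit-learn, reusing the features X and labels y
prepared in the previous sketch, might look like this:

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report

# hold out 20% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

clf = DecisionTreeClassifier(random_state=42)
clf.fit(X_train, y_train)

# predict on the held-out test set and compare with the true labels
y_pred = clf.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
# precision, recall, and F1-score per class
print(classification_report(y_test, y_pred))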


CHAPTER 5

ABOUT THE PROJECT


The Eye Gesture Control System aims to revolutionize the way users interact with their
environment by eliminating the need for physical switches or controllers.

OBJECTIVES OF THE PROJECT:

• Utilize advanced image processing techniques for face detection, eye tracking, and eye state identification.
• The primary objective is to develop a reliable and user-friendly system that enables individuals to control lights, fans, and other home appliances effortlessly using eye gestures.
• The system aims to enhance convenience, accessibility, and energy efficiency in the home environment.
• Propose an efficient method that breaks away from traditional approaches for effective home automation.
1. tkinter: This module is a standard GUI (Graphical User Interface) toolkit in Python. It is used for creating windows and widgets in the application, such as buttons, labels, and entry fields, allowing for user interaction.
2. cv2 (OpenCV): OpenCV (Open Source Computer Vision Library) is a library primarily aimed at real-time computer vision. In this context, it is utilized for capturing images from the webcam, performing face detection using Haar cascades, and face recognition tasks.
3. os: The os module provides functions for interacting with the operating system. In this application, it is used for creating directories to store captured images and for other file system operations.
4. csv: This module provides functionality for working with CSV (Comma Separated Values) files. In the code, it is used for storing student details and attendance records in CSV format.
5. numpy: NumPy is a powerful library for numerical computing in Python. It is used here for various numerical operations, particularly for working with arrays of image data.
6. PIL (Python Imaging Library): PIL is a library for image processing tasks in Python. In this application, it is used for converting images to grayscale format, which is often a preprocessing step in face recognition.
7. pandas: Pandas is a library for data manipulation and analysis in Python. It is used in this code for managing student details stored in CSV files.
8. datetime: The datetime module provides functions for working with dates and times. It is used here for obtaining current date and time stamps during the attendance marking process.
9. time: The time module provides functions for working with time-related tasks. It is used for obtaining timestamps during the attendance marking process.
10. messagebox (from tkinter): This is a sub-module of the tkinter library used for displaying various types of message boxes and dialogs in the GUI. In this code, it is used for showing warning messages and information dialogs to the user.
11. cv2.data.haarcascades: This is a predefined path in OpenCV that contains pre-trained Haar cascade classifiers for detecting various objects, including faces. It is used for loading the face detection cascade classifier (a short usage sketch follows this list).
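
For illustration, a minimal sketch of how the Haar cascade face detector shipped with OpenCV is
typically loaded and applied to a single webcam frame (this is not the project's full code, which is
listed under "Code:" below):

import cv2

# load the pre-trained frontal face Haar cascade bundled with OpenCV
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(cascade_path)

# grab one frame from the default webcam and convert it to grayscale
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detect faces and draw a rectangle around each one
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imshow('Faces', frame)
cv2.waitKey(0)
cap.release()
cv2.destroyAllWindows()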
METHODOLOGY:
The process entails configuring a hardware setup with cameras or eye tracking sensors that can
precisely record eye movements. With this configuration, eye movement data is collected to
guarantee accurate capture of different kinds of movements. Defined actions correspond to certain
motions, like blinking to turn on the lights or staring to change the speed of the fan. After that,
algorithms are created to process the collected eye movement data and use pattern recognition
techniques to identify predetermined movements. The home automation devices are seamlessly
linked with the gesture recognition system, enabling command execution and communication. To
facilitate user interaction with the system, a user-friendly interface is also included, which offers
feedback and customization choices for gestures.
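
For reference, the blink detection in the code below relies on the eye aspect ratio (EAR). With the
six dlib eye landmarks numbered p1 to p6 (eye[0] to eye[5] in the code), the quantity computed by
eye_aspect_ratio() is

EAR = \frac{\lVert p_2 - p_6 \rVert + \lVert p_3 - p_5 \rVert}{2 \, \lVert p_1 - p_4 \rVert}

The EAR stays roughly constant while the eye is open and drops sharply during a blink, which is
why frames with an EAR below EYE_AR_THRESH are counted toward a blink.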
Code:
from scipy.spatial import distance as dist
# from imutils.video import FileVideoStream
from imutils.video import VideoStream
from imutils import face_utils
from eye_detect import Myclass
from datetime import datetime, timedelta
import argparse
import imutils
import time
import dlib
import cv2

now1 = 0
d = ""


def eye_aspect_ratio(eye):
    # compute the euclidean distances between the two sets of
    # vertical eye landmarks (x, y)-coordinates
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])
    # compute the euclidean distance between the horizontal
    # eye landmark (x, y)-coordinates
    C = dist.euclidean(eye[0], eye[3])
    # compute and return the eye aspect ratio
    f = (A + B) / (2.0 * C)
    return f


# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
                help="path to facial landmark predictor")
ap.add_argument("-v", "--video", type=str, default="",
                help="path to input video file")
args = vars(ap.parse_args())

# define two constants, one for the eye aspect ratio to indicate a
# blink and a second constant for the number of consecutive
# frames the eye must be below the threshold
EYE_AR_THRESH = 0.25
EYE_AR_CONSEC_FRAMES = 3

# initialize the frame counter and the total number of blinks
COUNTER = 0
TOTAL = 0

# initialize dlib's face detector (HOG-based) and then create
# the facial landmark predictor
print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])

# grab the indexes of the facial landmarks for the left and
# right eye, respectively
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]

# start the video stream thread
print("[INFO] starting video stream thread...")
# vs = FileVideoStream(args["video"]).start()
# fileStream = True
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()
fileStream = False
time.sleep(1.0)

# loop over frames from the video stream
while True:
    # start (or restart) an 8-second window over which blinks are counted
    if now1 == 0:
        now1 = datetime.now()
        d = now1 + timedelta(seconds=8)
        print(d)
    now = datetime.now()

    # if this is a file video stream, check whether there are any
    # more frames left in the buffer to process
    if fileStream and not vs.more():
        break

    # grab the frame from the threaded video stream, resize it,
    # and convert it to grayscale
    frame = vs.read()
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # detect faces in the grayscale frame
    rects = detector(gray, 0)

    # when the 8-second window elapses, pass the blink count to the
    # appliance-control logic and reset the counters
    if d <= now:
        print(now)
        print(TOTAL)
        Myclass.Blink(TOTAL)
        now1 = 0
        TOTAL = 0

    # loop over the face detections
    for rect in rects:
        # determine the facial landmarks for the face region, then
        # convert the facial landmark (x, y)-coordinates to a NumPy array
        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)

        # extract the left and right eye coordinates, then use the
        # coordinates to compute the eye aspect ratio for both eyes
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)

        # average the eye aspect ratio together for both eyes
        ear = (leftEAR + rightEAR) / 2.0

        # compute the convex hull for the left and right eye, then
        # visualize each of the eyes
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)

        # check to see if the eye aspect ratio is below the blink
        # threshold, and if so, increment the blink frame counter
        if ear < EYE_AR_THRESH:
            COUNTER += 1
        # otherwise, the eye aspect ratio is not below the blink threshold
        else:
            # if the eyes were closed for a sufficient number of frames,
            # then increment the total number of blinks
            if COUNTER >= EYE_AR_CONSEC_FRAMES:
                TOTAL += 1
                if TOTAL == 7:
                    TOTAL = 0
            # reset the eye frame counter
            COUNTER = 0

        # draw the total number of blinks on the frame along with
        # the computed eye aspect ratio for the frame
        cv2.putText(frame, "Blinks: {}".format(TOTAL), (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
        cv2.putText(frame, "EAR: {:.2f}".format(ear), (300, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

    # show the frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()


CHAPTER 6

SNAPSHOTS

Fig 6.1 Fan On

Fig 6.2 Fan Off


Fig 6.3 Light On

Fig 6.4 Bed Lamp Off


CHAPTER 7

REFLECTION NOTES
Abstract:

The main aim of the work is to develop an economically effective and performance-wise
efficient virtual assistant using Raspberry Pi for home automation, based on the concepts of
the Internet of Things, Speech Recognition, Natural Language Processing, and Artificial
Intelligence.

Technical Outcomes of Internship:

An internship offers a rich array of technical outcomes centered around artificial intelligence
and machine learning (AIML). Interns can expect to develop a comprehensive understanding
of AIML fundamentals, including supervised and unsupervised learning, reinforcement
learning, and neural networks. Hands-on experience with industry-standard AIML tools and
frameworks like TensorFlow, scikit-learn, and Keras provides practical skills in model
development, training, and evaluation.
Non-Technical Outcomes of Internship:

• Teamwork: In a professional environment, everyone in a team needs to work together to
finish the tasks.
• Problem Solving Skills: An internship introduces you to real-life work problems and
hence develops your problem-solving skills.
• Communication Skill: Communicating well is a gem of a skill which you can learn
during your internship experiences.
Benefits of Internship:

• Gain valuable work experience: The hands-on work experience interns receive is
invaluable and cannot be obtained in a classroom setting, making this one of the most important
benefits of internships.
• Explore Career Path: Internships are a great way for students to acquaint themselves
with the field they are interested in.
• Develop and refine skills: You can learn a lot about your strengths and weaknesses
during an internship. Internships allow for feedback from supervisors.


• Gain confidence: Internships allow you to test out specific techniques learned in the
classroom.


CHAPTER 8

CONCLUSION
This internship at Inventeron Technologies provided a valuable opportunity to gain practical
experience in data analysis using Python, with a strong emphasis on NumPy and Pandas.
My internship provided me with valuable hands-on experience in the field of data science. I
feel more prepared and confident in pursuing a career in data science, data analysis, or related
fields. My skills in NumPy and Pandas are assets that I can leverage to add value to future
projects and organizations.
My internship experience focusing on NumPy and Pandas has been both educational and
rewarding. I am grateful for the opportunity to work with these powerful tools and for the
guidance and mentorship I received during my time as an intern. I look forward to continuing
my journey in the world of data analysis and contributing my skills to future data-driven
endeavors.


REFERENCES
[1] C. A. Perez, A. Palma, C. A. Holzmann, and C. Pena, "Face and eye tracking algorithm based
on digital image processing," IEEE International Conference on Systems, Man, and Cybernetics,
vol. 2, 7-10 Oct. 2001, pp. 1178-1183.

[2] Tianjian Liu and Shanan Zhu, "Eyes detection and tracking based on entropy in particle filter,"
International Conference on Control and Automation (ICCA '05), vol. 2, 26-29 June 2005,
pp. 1002-1007.

[3] C. A. Perez, V. A. Lazcano, P. A. Estevez, and C. M. Estevez, "Real-time iris detection on faces
with coronal axis rotation," IEEE International Conference on Systems, Man and Cybernetics,
vol. 7, 10-13 Oct. 2004, pp. 6389-6394.

[4] Paul Viola and Michael Jones, "Rapid object detection using a boosted cascade of simple
features," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2001.

[5] J. Huang and H. Wechsler, "Visual routines for eye location using learning and evolution,"
IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, April 2000, pp. 73-82.

[6] D. Comaniciu and P. Meer, "A robust approach toward feature space analysis," IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, May 2002, pp. 603-619.

[7] Wen-Bing Horng, Chih-Yuan Chen, Yi Chang, and Chun-Hai Fan, "Driver fatigue detection
based on eye tracking and dynamic template matching," IEEE International Conference on
Networking, Sensing and Control, 2004.

[8] P. Jaiswal, M. Dhakite, C. Dhule, N. Mungale, S. Wazalwar, and A. Deshmukh, "Smart AI based
Eye Gesture Control System," 2023, pp. 1873-1876, doi: 10.1109/ICICCS56967.2023.10142921.

