SUBMITTED BY
JIVESH SHARMA
BACHELOR OF TECHNOLOGY
JUNE-JULY, 2023
CANDIDATE'S DECLARATION
I, "JIVESH SHARMA", hereby declare that I have undertaken one month of training at "INFOSYS" during June-July, 2023, and that the work presented in this report is an authentic record of that training.
ABSTRACT

This report provides a concise overview of the essential concepts in Artificial Intelligence (AI) and Cybersecurity, offering readers a foundational understanding of these critical domains. The document explores the symbiotic relationship between AI and cybersecurity, highlighting key principles and their implications for modern digital landscapes.

The AI primer section introduces fundamental concepts, including machine learning, neural networks, and natural language processing. It outlines the basic workings of AI systems, their applications across industries, and their potential impact on society. The report emphasizes the need for responsible AI practices. The Cybersecurity Fundamentals section delves into the core principles of securing digital environments. It covers key topics such as encryption, access control, and network security. The report provides insights into common cyber threats and attack vectors, empowering readers to comprehend the challenges posed by evolving cyber threats.

This report serves as a succinct guide to the core principles of both fields. By bridging these two domains, it equips readers with the knowledge necessary to navigate the complexities of the digital landscape and contribute to the ongoing discourse on responsible AI development and secure digital practices.
ACKNOWLEDGEMENT

It gives me immense pleasure to find an opportunity to express my deep gratitude to Dr. Sehijpal Singh (Principal) and Dr. Narwant Singh Grewal, Head, Electronics and Communication Engineering Department, Guru Nanak Dev Engineering College, Ludhiana, for their enthusiastic encouragement and useful critiques of the training. I hereby acknowledge my sincere thanks for their valuable guidance. I would like to thank the faculty members of the ECE Department for their suggestions and information relating to my training. I am greatly indebted to all those writers and organizations whose books, articles, and reports I have used as references in preparing this training file.
JIVESH SHARMA
CONTENTS

Certificate by Institute
Candidate's Declaration
Abstract
Acknowledgement
CHAPTER 1 INTRODUCTION
1.1 Artificial Intelligence Primer
1.1.1 Introduction to Data Science
1.1.2 Introduction to Natural Language Processing
1.1.3 Introduction to Artificial Intelligence
1.1.4 Introduction to Deep Learning
1.1.5 Computer Vision 101
1.1.6 Introduction to Robotic Process Automation
1.2 Foundations of Cyber Security
1.2.1 Fundamentals of Information Security
1.2.2 Fundamentals of Cryptography
1.2.3 Introduction to Cyber Security
1.2.4 ITIL 2011 Foundation
1.2.5 Network Fundamentals
CHAPTER 2 TRAINING WORK UNDERTAKEN
2.1 Curriculum Development
2.2 Training Delivery
2.3 Assessment and Certification
CHAPTER 3 RESULTS AND DISCUSSION
CHAPTER 4 CONCLUSION AND FUTURE SCOPE
4.1 Conclusion
4.2 Future Scope
REFERENCES
CHAPTER 1 INTRODUCTION
1.1 Artificial Intelligence Primer

Artificial Intelligence (AI) is a branch of computer science that focuses on creating systems or machines capable of performing tasks that would typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, language understanding, and even speech recognition. The goal of AI is to develop systems that can simulate human intelligence and, in some cases, exceed human capabilities. AI is becoming increasingly integrated into various aspects of our daily lives, shaping the future of technology and redefining how we interact with machines and information. It is important to consider the ethical implications and potential societal impacts as AI continues to evolve.
1.1.1 Introduction to Data Science

Data Science is an interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data. A typical data science workflow includes the following stages:
Data Collection:
Sources of Data: Data can be collected from various sources, such as sensors, databases, surveys, web scraping, and logs.
Preprocessing: Transforming raw data into a format suitable for analysis. This may include cleaning the data, handling missing values, and normalizing features.
Descriptive Statistics: Summarizing and describing the main aspects of the data.
Model Building: Applying machine learning algorithms (supervised and unsupervised) to build models that can make predictions or discover patterns.
Model Training and Evaluation: Splitting the data into training and testing sets, training the model on the training set, and evaluating its performance on the testing set (a brief sketch follows this list).
Deployment and Maintenance: Putting the model into production, monitoring its performance, and updating it as needed.
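To make the train-and-evaluate step concrete, here is a minimal sketch using the scikit-learn library (an assumed toolkit, as the text does not prescribe one) on a built-in sample dataset:

    # Minimal sketch: split data, train a model, evaluate on held-out data.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)  # built-in sample dataset standing in for real data

    # Split into training and testing sets (80/20).
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = LogisticRegression(max_iter=200)  # any supervised learner could be used here
    model.fit(X_train, y_train)               # train on the training set only

    # Evaluate on the unseen testing set.
    print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))

Holding out a testing set in this way is what allows the evaluation score to estimate how the model will behave on genuinely new data.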
Beyond this workflow, data science draws on several supporting skills:
Statistics and Probability: The foundation for drawing reliable inferences about a population based on a sample of data.
Big Data Technologies: Hadoop, Spark, and NoSQL databases for dealing with large volumes of data that exceed the capacity of traditional tools.
Domain Expertise: Understanding the field in which data science is applied, including awareness of applicable data protection regulations.
Communication: Presenting findings clearly to both technical and non-technical stakeholders.
Data science plays a crucial role in data-driven decision-making across various industries.
1.1.2 Introduction to Natural Language Processing
Natural Language Processing (NLP) is a subfield of artificial intelligence (AI) that focuses on the interaction between computers and human language. The primary goal of NLP is to enable machines to understand, interpret, and generate human language in a way that is both meaningful and contextually relevant. NLP involves a range of tasks and challenges, from simple tasks like language translation to more complex ones such as question answering. Key tasks include:
Tokenization: The process of breaking down a text into smaller units, such as words or sub-words. Tokenization is a foundational step for most NLP tasks.
Part-of-Speech (POS) Tagging: Assigning a grammatical category (e.g., noun, verb, adjective) to each word in a sentence. POS tagging helps in understanding the syntactic structure of text.
Semantic Analysis: Going beyond syntax to understand the meaning of words and sentences. This includes tasks like word sense disambiguation and semantic role labeling.
Coreference Resolution: Identifying when different expressions in a text refer to the same entity, such as determining that "he" refers to a specific person mentioned earlier in the text.
Sentiment Analysis: Determining the attitude expressed in a text, with uses in brand monitoring, customer feedback analysis, and opinion mining.
Question Answering: Building systems that can automatically answer questions posed in natural language. This involves both understanding the question and locating or generating the answer.
Common applications of NLP include machine translation, search engines, voice assistants, email filtering, sentiment analysis in social media, and content summarization.
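To illustrate the tokenization task listed above, the following minimal sketch uses a simple regular expression; real pipelines would typically use a library such as NLTK or spaCy, which this example deliberately avoids so that it stays self-contained:

    import re

    def tokenize(text):
        # Split text into word tokens and standalone punctuation marks.
        return re.findall(r"\w+|[^\w\s]", text)

    print(tokenize("NLP breaks text into tokens, the basic units of analysis."))
    # ['NLP', 'breaks', 'text', 'into', 'tokens', ',', 'the', 'basic',
    #  'units', 'of', 'analysis', '.']

Every downstream task listed above, from POS tagging to sentiment analysis, operates on token sequences like this one.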
NLP is a dynamic field with ongoing research and development, and its applications continue to expand across industries.

1.1.3 Introduction to Artificial Intelligence

Artificial Intelligence is the branch of computer science devoted to building machines that exhibit human-like intelligence. The goal is to develop systems that can simulate human reasoning, learning, and perception. AI encompasses a wide range of techniques and approaches, and it has both historical roots and modern applications. Key subfields include:
Machine Learning (ML): A subset of AI in which systems learn patterns from data and improve with experience, without being explicitly programmed.
Deep Learning: A specialized form of ML, deep learning involves neural networks with multiple layers (deep neural networks). It has shown remarkable success in areas such as image recognition, speech recognition, and language understanding.
Computer Vision: This field focuses on enabling machines to interpret and make decisions based on visual data. Applications include image and video analysis, object detection, and facial recognition.
Expert Systems: Programs designed to mimic the decision-making abilities of a human expert in a specific domain. They use knowledge bases and inference rules to solve specialized problems.
AI is applied across many sectors, from healthcare and predictive analytics to transportation and finance.
Data Quality and Bias: AI systems heavily depend on data, and biases in the training data can lead to biased predictions. Ensuring high-quality, diverse datasets is crucial.
AI continues to advance rapidly, and its impact is felt across various domains. As researchers and engineers work on addressing challenges and improving AI systems, the scope of what machines can do will keep expanding.
1.1.4 Introduction to Deep Learning

Deep Learning is a subfield of machine learning that uses artificial neural networks with multiple layers (deep neural networks) to model and solve complex problems. The term "deep" refers to the depth of the neural networks, which have multiple layers of interconnected nodes or artificial neurons. Deep Learning has gained prominence through breakthroughs in areas such as image and speech recognition, natural language processing, and computer vision. Key concepts include:
Neural Networks: Neural networks are the foundation of deep learning. They are computational models loosely inspired by the biological neuron. Neural networks can have an input layer, one or more hidden layers, and an output layer.
Deep Neural Networks: Deep neural networks refer to neural networks with multiple hidden layers. The depth of these networks allows them to learn increasingly abstract representations of the data.
Backpropagation: The central training algorithm for neural networks. It involves iteratively adjusting the weights of the network based on the difference between predicted and actual outcomes, minimizing the error (a minimal sketch appears after this list).
Training Data: Deep learning models require large amounts of labeled training data to learn patterns and make accurate predictions. The availability of big data has fueled progress in deep learning.
Convolutional Neural Networks (CNNs): CNNs are a type of deep neural network
designed for image recognition and computer vision tasks. They use convolutional
layers to automatically learn spatial hierarchies of features.
Recurrent Neural Networks (RNNs): RNNs are designed for sequential data and tasks such as natural language processing. They have loops that allow information to persist from one step of a sequence to the next.
Long Short-Term Memory (LSTM) Networks: LSTMs are a type of RNN that is better at capturing long-term dependencies in sequences.
Transfer Learning: Reusing models trained on one task and adapting them for another task. This can significantly reduce training time and data requirements.
Generative Adversarial Networks (GANs): GANs are a type of model that consists of a generator and a discriminator. They are used for generating new data instances, such as images or text, by learning the underlying data distribution.
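As a concrete illustration of the backpropagation idea referenced above, here is a minimal NumPy sketch: a two-layer network trained by gradient descent to learn the XOR function. It is illustrative only; practical systems rely on frameworks that compute these gradients automatically.

    import numpy as np

    # Toy dataset: XOR, a classic problem a single-layer network cannot solve.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input layer -> hidden layer
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden layer -> output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(10000):
        # Forward pass through both layers.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: propagate the prediction error layer by layer.
        d_out = (out - y) * out * (1 - out)  # error signal at the output layer
        d_h = (d_out @ W2.T) * h * (1 - h)   # error pushed back to the hidden layer

        # Gradient descent: adjust weights and biases to reduce the error.
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(3))  # predictions approach [[0], [1], [1], [0]]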
Deep learning is applied across many domains, from medical imaging and personalized medicine to recommendation systems and industrial optimization. It has enabled progress on problems that were difficult to address with traditional machine learning techniques. As computational power and data availability continue to grow, deep learning is expected to drive further advances.
1.1.5 Computer Vision 101

Computer Vision is a field of artificial intelligence that enables machines to interpret and understand visual information from the world. It involves the development of algorithms that process images and videos, similar to how humans perceive and interpret visual data. Core techniques include:
Image Processing: Operations like filtering, edge detection, and image smoothing are commonly used to prepare images for analysis.
Feature Extraction: Identifying and extracting relevant features from images, such as edges, corners, or texture patterns. Features are crucial for subsequent analysis.
Applications range from medical imaging and surveillance to autonomous vehicles. Approaches fall broadly into two camps:
Classical Algorithms: Hand-crafted image-analysis algorithms. This includes techniques like edge detection, corner detection, and Hough transforms (see the sketch below).
Machine Learning Approaches: Models trained on labeled training data. Support Vector Machines (SVMs), Random Forests, and k-Nearest Neighbors are common classical choices, while deep learning, particularly CNNs, has proven highly effective for tasks like image classification and object detection.
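As a small example of the classical edge-detection technique mentioned above, the sketch below uses the OpenCV library (an assumed dependency, installed as opencv-python) and a placeholder image file:

    import cv2  # OpenCV; assumed installed via the opencv-python package

    # Load a placeholder image in grayscale ("sample.jpg" is a hypothetical filename).
    img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)

    # Smooth the image first to suppress noise that would create spurious edges.
    blurred = cv2.GaussianBlur(img, (5, 5), 0)

    # Canny edge detection; the two thresholds control edge sensitivity.
    edges = cv2.Canny(blurred, 100, 200)

    cv2.imwrite("edges.jpg", edges)  # save the resulting edge map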
Key challenges include:
Data Quality and Quantity: Availability of large, diverse datasets is crucial for training reliable models.
Robustness: Ensuring that computer vision models perform well across different lighting conditions, viewpoints, and environments.
Ongoing research continues to produce breakthroughs in how machines perceive and interact with the visual world.
1.1.6 Introduction to Robotic Process Automation

Robotic Process Automation (RPA) is a technology that uses software robots ("bots") to automate repetitive, rule-based tasks within business processes. RPA aims to mimic human interactions with digital systems to perform tasks such as data entry, transaction processing, and communication across multiple applications. Key components of RPA include:
Robots/Bots: Software robots that execute predefined tasks based on rules and instructions. These bots interact with applications, manipulate data, and perform actions just as a human user would.
Process Designer: A tool or platform that allows users to design, create, and manage automated workflows. Process designers often use a visual interface for easy configuration.
Bot Orchestrator: An orchestration tool that manages and controls the execution of multiple bots. It schedules, monitors, and logs bot activities, ensuring smooth operation.
Development Studio: An environment where developers define workflows and rules, and create automation scripts. It may include features for debugging, testing, and deployment.
Control Room: A central hub that provides visibility into bot activities, performance analytics, and overall system monitoring. The control room allows administrators to oversee and manage the automation estate.
Characteristics of RPA:
Rule-Based Automation: RPA is well-suited for tasks with clear rules and structured inputs.
Non-Intrusive Integration: RPA can work with existing systems without requiring changes to underlying code, because bots interact with the user-interface layer of applications.
Speed and Efficiency: Bots operate at a faster pace than human counterparts,
leading to increased efficiency and reduced processing time for repetitive tasks.
Benefits of RPA:
Accuracy and Error Reduction: Bots perform tasks with a high degree of
accuracy, minimizing errors often associated with manual data entry and processing.
Data Entry and Migration: Automating the input of data into systems and the migration of data between applications, a common first use case when RPA is introduced into digital ecosystems.
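The data-entry pattern above can be sketched in a few lines. The example below is a toy "bot" that reads rows from a hypothetical invoices.csv file, applies one business rule, and hands each valid record to a placeholder submit function standing in for the real target system:

    import csv

    def submit_record(record):
        # Placeholder for the real integration step (e.g., filling a form or calling an API).
        print("Entered into target system:", record)

    # Rule-based bot: read each row, validate it, then enter it automatically.
    with open("invoices.csv", newline="") as f:  # hypothetical input file
        for row in csv.DictReader(f):
            if row.get("amount") and float(row["amount"]) > 0:  # simple business rule
                submit_record(row)
            else:
                print("Flagged for human review:", row)  # exceptions stay with people

The clear validation rule is what makes the task a good RPA candidate; anything the rule cannot decide is escalated to a human.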
1.2 Foundations of Cyber Security

Cybersecurity encompasses the practices, technologies, and processes designed to safeguard digital assets, information, and systems from a myriad of threats and vulnerabilities. Its guiding principles are confidentiality, integrity, and availability: confidentiality keeps sensitive data private, integrity preserves its accuracy, and availability guarantees uninterrupted access to critical resources. This triad forms the bedrock of protective measures implemented across various layers, including people, processes, and technology. The human element involves educating users through awareness training. Robust processes and procedures, such as access control policies and incident response plans, give security efforts structure, while technologies such as firewalls, antivirus software, and encryption are deployed to fortify the digital perimeter. Access control mechanisms ensure that only authenticated and authorized individuals can access sensitive resources, and physical measures protect data centers and facilities. Compliance with regulatory standards and continuous monitoring round out a mature security posture.
1.2.1 Fundamentals of Information Security

Information security is the framework designed to protect digital assets and sensitive data from unauthorized access, disclosure, or compromise. At the heart of information security lie the three core principles of confidentiality, integrity, and availability, which ensure that data remains private, stays accurate, and that information and systems are reliably accessible to authorized users when needed. To enforce these principles, organizations apply access controls that verify identities and grant appropriate access rights, while regular training programs and security awareness initiatives educate users. Physical security measures are implemented to secure data centers and facilities, and incident response plans help organizations detect, respond to, and recover from security incidents, minimizing potential damage. Encryption converts readable data into unreadable formats that can only be deciphered with the proper keys. Together, these measures serve as the cornerstone for building a resilient defense against a constantly evolving threat landscape.
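To make the role of encryption tangible, here is a minimal sketch of symmetric encryption using the Fernet recipe from the third-party cryptography package (an assumed dependency; any sound encryption scheme illustrates the same point):

    from cryptography.fernet import Fernet  # assumed third-party package: cryptography

    key = Fernet.generate_key()  # the secret key; whoever holds it can decrypt
    cipher = Fernet(key)

    token = cipher.encrypt(b"confidential payload")  # unreadable without the key
    print(token)                                     # ciphertext bytes

    print(cipher.decrypt(token))  # only the proper key recovers b'confidential payload'

The security of the data now rests entirely on keeping the key secret, which is why key management is a core information security discipline.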
1.2.3 Introduction to Cyber Security

Cybersecurity is the practice of defending computer systems, networks, and data from malicious attacks, unauthorized access, and data breaches. The rapid evolution of technology has ushered in unprecedented convenience, but it has also exposed vulnerabilities that threat actors are quick to exploit. The digital landscape is rife with potential risks, including malware, phishing attacks, and ransomware. Cybersecurity defends against these threats, employing a multifaceted approach to fortify the security posture of individuals and organizations. Its key domains include:
Network Security: Protecting the communication infrastructure, guarding against unauthorized access, and utilizing firewalls and intrusion detection systems.
Endpoint Security: Securing the individual devices, such as laptops, servers, and mobile phones, that connect to a network.
Incident Response: Involves developing and implementing plans to detect, respond to, and recover from security incidents.
Encryption: Converting sensitive data into unreadable ciphertext to preserve its confidentiality.
Security Awareness: Training programs that empower individuals with the knowledge to identify and mitigate cybersecurity risks.
As the digital landscape continually evolves, so too do cyber threats. The field of cybersecurity not only protects sensitive information but also ensures the integrity and stability of the interconnected world in which we live. As we navigate the complexities of the digital age, a robust and adaptive security posture is indispensable.
1.2.4 ITIL 2011 Foundation

ITIL (Information Technology Infrastructure Library) 2011 is a widely adopted framework for aligning IT services with the needs of the business. The Foundation level introduces key concepts and principles that lay the groundwork for effective service delivery and continual improvement.

The ITIL service lifecycle comprises five stages: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. Each stage delineates specific processes and activities aimed at delivering value to the business. The ITIL Foundation not only imparts a common language and understanding within IT teams; by emphasizing the importance of delivering value to customers and aligning IT services with business goals, ITIL fosters a service-oriented culture within IT organizations.

Organizations adopting ITIL 2011 Foundation principles benefit from improved service quality and more predictable delivery. As a globally recognized standard, ITIL 2011 Foundation certification validates the foundational knowledge and skills of professionals working in information technology.
1.2.5 Network Fundamentals

Network fundamentals form the backbone of our interconnected world, facilitating the seamless flow of data across devices, systems, and continents. At its core, networking is about creating pathways for communication. Conceptualized as seven layers, from the physical transmission of data to the application layer, the OSI model provides a structured way to reason about network functions. The TCP/IP suite is its practical counterpart in everyday use; together with protocols such as HTTP, DNS, and others, it governs how data is transmitted, routed, and received across networks and between systems.

IP addressing and subnetting allow large networks to be divided into smaller, manageable networks. This not only optimizes network performance but also enhances security by isolating traffic. Other essential building blocks include routers, switches, hubs, and protocols like DHCP (Dynamic Host Configuration Protocol) and DNS (Domain Name System). The dynamic nature of networking, driven by wireless, cloud, and software-defined technologies, demands continual awareness of evolving trends.
Network fundamentals find application in everyday scenarios, from the data centers that
power cloud services to the seamless communication between devices in the Internet of
Things (IoT). Whether ensuring secure data transmission, troubleshooting connectivity issues, or designing new services, a grasp of these fundamentals is indispensable.
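For instance, two of these everyday operations, resolving a name through DNS and opening a TCP connection, can be sketched with Python's standard socket module (example.com serves as a stand-in host):

    import socket

    host = "example.com"  # stand-in hostname for illustration

    # DNS: translate the human-readable name into an IP address.
    ip = socket.gethostbyname(host)
    print(f"{host} resolves to {ip}")

    # TCP: open a reliable, connection-oriented channel to port 80 (HTTP).
    with socket.create_connection((host, 80), timeout=5) as sock:
        print("Connected:", sock.getsockname(), "->", sock.getpeername())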
In conclusion, network fundamentals serve as the blueprint for the digital connective
fabric that underpins our modern world. As technology evolves, the ability to understand and apply these principles empowers individuals and organizations to harness the full potential of our interconnected future.
CHAPTER 2 TRAINING WORK UNDERTAKEN
2.1 Curriculum Development

Introduction to AI: The curriculum opened with a historical overview of the field, from its inception to its current state. This historical context allowed participants to appreciate the evolution and breakthroughs in AI, which set the stage for the contemporary landscape.
Machine Learning: Participants studied supervised learning, where models are trained on labelled data, and unsupervised learning, where patterns are discovered in unlabelled data. They learned about the critical step of feature engineering, where they prepared and selected relevant features for model training. Model evaluation techniques were also addressed.
Deep Learning: The deep learning module introduced participants to the fascinating world of neural networks, covering convolutional neural networks (CNNs) for image analysis, recurrent neural networks (RNNs) for sequential data, and generative adversarial networks (GANs) for image generation. Participants gained practical experience in building and training these deep learning models.
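The sketch below shows the kind of compact CNN participants could have built, written here in PyTorch (one of several frameworks that might have been used; the layer sizes assume 28x28 grayscale inputs):

    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            # Convolutional layers learn spatial features; pooling halves the maps.
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            # A fully connected head maps the learned features to class scores.
            self.classifier = nn.Linear(32 * 7 * 7, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = SmallCNN()
    dummy = torch.randn(1, 1, 28, 28)  # one fake 28x28 grayscale image
    print(model(dummy).shape)          # torch.Size([1, 10])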
Natural Language Processing (NLP): The NLP section immersed participants in the
field of language understanding. They learned the intricacies of text analysis, sentiment
analysis, and chatbot development. With the aid of NLP frameworks and libraries,
participants could work with text data, analyse sentiment in user reviews, and even
create chatbots capable of understanding and generating human language. This module
emphasized the importance of NLP in tasks like chatbots, virtual assistants, and content
analysis.
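The idea behind the sentiment-analysis exercises can be shown with a deliberately simple lexicon-based scorer (the word lists are illustrative; the actual exercises used proper NLP libraries):

    POSITIVE = {"good", "great", "excellent", "love", "helpful"}
    NEGATIVE = {"bad", "poor", "terrible", "hate", "broken"}

    def sentiment(review):
        # Count positive and negative cue words; the sign of the score is the verdict.
        words = review.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(sentiment("Great product and really helpful support"))  # positive
    print(sentiment("Terrible quality and poor packaging"))       # negative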
Computer Vision: The curriculum delved into the exciting domain of computer vision. Participants studied applications like image recognition and object detection. They also explored the tools and libraries that enabled them to work with image data, develop image classifiers, and understand the underlying techniques.
Ethics in AI: In recognition of the ethical considerations in AI, the program dedicated time to the ethical dimensions of AI, particularly focusing on fairness, bias mitigation, and transparency. Participants learned to recognize potential biases and to explore strategies for making AI more equitable and trustworthy.
Cybersecurity: Participants learned that cybersecurity is the practice of protecting systems, networks, and programs from digital attacks. These cyberattacks are usually aimed at accessing, changing, or destroying sensitive information, or at causing disruption or destruction. The goal is to ensure the safety and privacy of critical data and systems.
Cryptography: Participants were introduced to cryptography, the practice of securing information so that only the person a message was intended for can read it. The art of cryptography has been used to code messages for thousands of years and continues to be used in bank cards, computer passwords, and e-commerce.
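To connect that historical note to code, here is the classic Caesar cipher, one of the oldest message-coding schemes (illustrative only; modern systems use algorithms such as AES):

    def caesar(text, shift):
        # Shift each letter by a fixed amount, wrapping around the alphabet.
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord("A") if ch.isupper() else ord("a")
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)  # leave spaces and punctuation unchanged
        return "".join(out)

    secret = caesar("attack at dawn", 3)
    print(secret)              # dwwdfn dw gdzq
    print(caesar(secret, -3))  # attack at dawn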
Networking: This module covered the essential components that define the structure and functionality of computer networks. In the realm of computer networking, a network is essentially a collection of interconnected devices that communicate and enable resource-sharing. Networks can take various forms, including Local Area Networks (LANs) confined to a limited geographic area and Wide Area Networks (WANs) spanning larger regions. Protocols, such as TCP/IP, HTTP, and FTP, govern data exchange, while the OSI model provides a conceptual framework with seven layers for network functions.
2.2 Training Delivery

The training was delivered through a series of online courses, which were carefully designed to facilitate self-paced learning. These courses included both conceptual material and hands-on practice. Interactive sessions allowed participants to ask questions, seek clarifications, and delve deeper into complex topics. The interactive Q&A format enhanced the learning experience and allowed for a deeper understanding of AI concepts.
Regular Quizzes: Quizzes were administered after the completion of every topic or module, and participants were encouraged to take these quizzes to test their understanding, reinforcing the knowledge and skills gained throughout the module. These assessments ensured that participants were consistently engaged and retained the information, contributing to their overall learning experience.
2.3 Assessment and Certification

To track participants' progress throughout the program, a robust assessment system was implemented. Regular quizzes tested their grasp of individual topics, while broader assessments applied that knowledge to real-world scenarios. These assessments provided feedback and identified areas for improvement. On completing the program, participants earned certification, which not only recognized their dedication to learning but also demonstrated their readiness for AI-related roles in the job market. The certification validated their capabilities and knowledge, bolstering their professional credibility.
CHAPTER 3 RESULTS AND DISCUSSION

The integration of artificial intelligence into cybersecurity has yielded significant results in enhancing the ability to detect, prevent, and respond to cyber threats. AI technologies, such as machine learning and neural networks, enable systems to analyze vast amounts of data and identify patterns indicative of malicious activities. This has proven invaluable in real-time threat detection. AI also automates many routine security tasks, enabling cybersecurity professionals to focus on more complex and strategic aspects of threat mitigation. This has the potential to alleviate the shortage of skilled cybersecurity experts and to strengthen overall response capacity.
The deployment of AI in cybersecurity, however, raises concerns about the potential for
adversarial attacks that manipulate AI algorithms. Ongoing research and development are
crucial to creating robust and resilient AI systems that can withstand sophisticated cyber
threats.
The evolving nature of cyber threats requires continuous adaptation of AI models to new attack vectors. Additionally, the ethical considerations surrounding AI, such as privacy issues and the potential for misuse, must be carefully managed.
Despite these challenges, the synergy between AI and cybersecurity offers immense benefits. AI-driven defenses contribute to a proactive security posture, identifying and neutralizing threats before they can cause significant harm.
CHAPTER 4 CONCLUSION AND FUTURE SCOPE

4.1 Conclusion

Artificial Intelligence (AI) has evolved from a concept to a powerful and transformative force, influencing nearly every aspect of our lives. Its applications range from virtual assistants and recommendation systems to autonomous vehicles. The ability of AI systems to analyze vast datasets, learn from experience, and make decisions brings opportunities as well as challenges, including issues related to bias in algorithms, privacy implications, and the potential for misuse, all of which call for responsible and ethical practices. Cybersecurity stands at the forefront of our digital age, where the need for protecting systems and ensuring the confidentiality of digital assets has never been more pronounced. As we embrace an increasingly connected world, user awareness, compliance with regulations, and the continuous evolution of cybersecurity technologies are essential to a secure digital future.
4.2 Future Scope

The future scope of Artificial Intelligence (AI) holds immense promise, paving the way for transformative advancements across sectors. AI models will keep improving in their capability to decipher complex data patterns and make accurate predictions, while explainable AI will make decision-making processes more transparent. In healthcare, AI is poised to revolutionize personalized medicine, drug discovery, and predictive analytics, contributing to more effective and tailored healthcare solutions. Autonomous systems, including self-driving cars and drones, will see continued growth, improving the safety and efficiency of operations. Natural Language Processing (NLP) will progress, fostering more natural and intuitive human-machine interaction, and applications of AI for social good will expand, addressing global challenges such as climate change and public health.
Artificial Intelligence (AI) and machine learning are set to revolutionize cybersecurity, enabling more proactive threat detection and response. As the Internet of Things grows, securing interconnected devices and ecosystems will be a critical focus. The deployment of 5G networks and the rise of edge computing demand enhanced security measures to protect the increased data flow at the edge. The Zero Trust security model, relying on continuous verification of every user and device, will gain prominence against evolving threat vectors. Biometric authentication and behavioral analysis will play a significant role in user identification and access control, and securing cloud services and distributed workloads will remain a priority in the years ahead.