Summer Internship Report: DSA Using C++
Submitted by
Sidharth Sehrawat
B-TECH CSE
June-July, 2023
DECLARATION
I, Sidharth Sehrawat, hereby declare that the work presented herein is genuine
work done originally by me and has not been published or submitted elsewhere
for the requirement of a degree programme. Any literature, data or works done
by others and cited within this dissertation has been given due
acknowledgement and listed in the reference section.
Sidharth Sehrawat
Reg. No.: 12112863
Date: 26th Aug 2023
CERTIFICATION
ACKNOWLEDGEMENT
First of all, I would like to take this opportunity to thank LOVELY
PROFESSIONAL UNIVERSITY for including summer training as a part of the
B.Tech CSE degree. The accomplishment of this project would otherwise have
been a painstaking endeavour without the staunch and sincere support of the
Mittal School of Business, LPU. The incessant and undeterred help extended by
the members of the department facilitated the work to a great extent, and it
would be selfish of me to let that go unnoticed and unacknowledged.
Many people have influenced the shape and content of this project, and many
supported me throughout. I express my sincere gratitude to Anurag Mishra,
founder of Cipher Schools, who was available to help whenever I required it;
his guidance, gentle persuasion, and active support made it possible to
complete this project.
In the end, I can say only this much: “ALL ARE NOT MENTIONED, BUT
NONE IS FORGOTTEN.”

Last but not least, I would like to thank GOD, who continues to look after me
despite all my flaws.
TRAINING DESCRIPTION
Introduction
This report describes my journey of learning and growth in data structures and
algorithms, tracing the stages of understanding and practical application that
have shaped my knowledge of these core concepts.
Data structures and algorithms are the foundation of software engineering. They
are used to create complex data storage systems and efficient algorithms for
processing large datasets. With this in mind, I set out to deepen my
understanding of these concepts, driven by a desire to master the art of creating
elegant solutions to complex problems.
In addition to the technical details, this journey also showed me how these
concepts are connected to real-world situations.
As you read this report, I hope you will see the growth of a learner who has
gained knowledge and experience through practice.
One of the most valuable lessons I learned on this journey was the importance
of collaboration. By working with others on coding projects, I was able to learn
from their experiences and insights. This helped me to improve my own skills
and deepen my understanding of data structures and algorithms.
Another key takeaway from this journey was the importance of practice. By
repeatedly applying what I had learned through coding exercises and practical
applications, I was able to hone my skills and develop a deeper understanding of
these concepts. This allowed me to tackle increasingly complex problems with
confidence.
Basic Concepts and Foundation
At the heart of C++ lies the concept of arrays, an elemental data structure that
taught me about contiguous memory allocation and the importance of indexing
for accessing individual elements. I discovered that arrays provided a
foundational understanding of how memory is managed and data is stored in a
linear fashion. The concept of pointers, a powerful C++ feature, added a new
layer of understanding as I realized the potential to manipulate memory directly.
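To make this concrete, here is a minimal sketch of my own (the array name and values are arbitrary) showing contiguous storage, indexed access, and the equivalent pointer arithmetic in C++:

    #include <iostream>

    int main() {
        // Five ints stored contiguously; indexing computes an offset from the base address.
        int scores[5] = {10, 20, 30, 40, 50};

        int* p = scores;                  // a pointer to the first element
        std::cout << scores[2] << '\n';   // indexed access -> 30
        std::cout << *(p + 2) << '\n';    // equivalent pointer arithmetic -> 30

        *(p + 4) = 99;                    // writing through the pointer modifies the array
        std::cout << scores[4] << '\n';   // -> 99
        return 0;
    }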
Trees revealed a world of hierarchical structures beyond linear arrays and lists.
Binary trees presented a concept of branching that laid the foundation for
diverse structures like binary search trees and balanced trees. Learning about
tree traversal methods deepened my understanding of the structural organization
of data.
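The following is a small illustrative sketch of my own (not taken from any particular course exercise) of a binary search tree with an in-order traversal, which visits the keys in sorted order:

    #include <iostream>

    // A minimal binary search tree node; left holds smaller keys, right holds larger ones.
    struct Node {
        int key;
        Node* left = nullptr;
        Node* right = nullptr;
        explicit Node(int k) : key(k) {}
    };

    Node* insert(Node* root, int key) {
        if (root == nullptr) return new Node(key);
        if (key < root->key) root->left = insert(root->left, key);
        else                 root->right = insert(root->right, key);
        return root;
    }

    // In-order traversal visits keys in ascending order.
    void inorder(const Node* root) {
        if (root == nullptr) return;
        inorder(root->left);
        std::cout << root->key << ' ';
        inorder(root->right);
    }

    int main() {
        Node* root = nullptr;
        for (int k : {8, 3, 10, 1, 6}) root = insert(root, k);
        inorder(root);                    // prints 1 3 6 8 10
        std::cout << '\n';
        // (Nodes are not freed here; a production version would release them.)
        return 0;
    }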
Recursion: Unraveling Complex Problems
Throughout this foundational phase, C++ played a vital role in bridging theory
and practice. Translating abstract concepts into C++ code solidified my
understanding of data structures and algorithms. From writing class definitions
for ADTs to implementing complex algorithms, C++ served as the canvas upon
which I applied my newfound knowledge.
Crafting a Strong Foundation
Algorithmic Techniques
Divide and Conquer: Break the problem into smaller, more manageable
subproblems, solve them recursively, and combine their results.
1. Identify a base case: the smallest input for which the solution is trivial.
2. Divide the problem into subproblems: reduce the problem size while
preserving its structure.
3. Conquer and combine: solve each subproblem recursively and merge the
partial solutions into the overall answer.
Sorting Algorithms:
1. Merge Sort: Divides the array into two halves, recursively sorts them, and
then merges them (a short sketch follows this list).
2. Quick Sort: Chooses a pivot element, partitions the array around the
pivot, and recursively sorts the subarrays.
3. IntroSort: A hybrid sorting algorithm combining Quick Sort, Heap Sort,
and Insertion Sort.
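The merge sort sketch referred to above, written as my own minimal example of the divide-and-conquer pattern (split, sort each half recursively, merge):

    #include <iostream>
    #include <vector>

    // Merge the two sorted halves a[lo..mid] and a[mid+1..hi] back into a[lo..hi].
    void merge(std::vector<int>& a, int lo, int mid, int hi) {
        std::vector<int> tmp;
        tmp.reserve(hi - lo + 1);
        int i = lo, j = mid + 1;
        while (i <= mid && j <= hi) tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
        while (i <= mid) tmp.push_back(a[i++]);
        while (j <= hi)  tmp.push_back(a[j++]);
        for (int k = 0; k < static_cast<int>(tmp.size()); ++k) a[lo + k] = tmp[k];
    }

    // Divide: split the range in half. Conquer: sort each half. Combine: merge them.
    void mergeSort(std::vector<int>& a, int lo, int hi) {
        if (lo >= hi) return;                  // base case: one element is already sorted
        int mid = lo + (hi - lo) / 2;
        mergeSort(a, lo, mid);
        mergeSort(a, mid + 1, hi);
        merge(a, lo, mid, hi);
    }

    int main() {
        std::vector<int> a = {5, 2, 9, 1, 7, 3};
        mergeSort(a, 0, static_cast<int>(a.size()) - 1);
        for (int x : a) std::cout << x << ' ';   // 1 2 3 5 7 9
        std::cout << '\n';
    }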
Searching Algorithms:
1. Binary Search: Divides a sorted array in half and eliminates the half that
cannot contain the target element (sketched after this list).
2. Closest Pair of Points: Divide the points into smaller subsets and find the
closest pair in each subset, then merge the solutions.
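The binary search sketch referred to above; a minimal version of my own that assumes the input vector is already sorted:

    #include <iostream>
    #include <vector>

    // Returns the index of target in the sorted vector a, or -1 if it is absent.
    int binarySearch(const std::vector<int>& a, int target) {
        int lo = 0, hi = static_cast<int>(a.size()) - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1;   // target can only be in the right half
            else                 hi = mid - 1;   // target can only be in the left half
        }
        return -1;
    }

    int main() {
        std::vector<int> a = {2, 5, 8, 12, 16, 23};
        std::cout << binarySearch(a, 16) << '\n';   // 4
        std::cout << binarySearch(a, 7)  << '\n';   // -1
    }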
Numerical Algorithms:
1. Fast Fourier Transform (FFT): Divide the polynomial multiplication
problem into smaller subproblems using roots of unity.
2. Karatsuba Multiplication: Divide large numbers into smaller parts and
recursively multiply them.
Optimization Problems:
1. Maximum Subarray Sum: Divide the array into halves, find the maximum
subarray sum in each half, then combine with the best sum that crosses the
midpoint (sketched after this list).
2. Closest Pair of Points: Find the closest pair of points in a plane using the
divide and conquer approach.
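The maximum subarray sketch referred to above; my own minimal divide-and-conquer version, using the classic example array as the test input:

    #include <algorithm>
    #include <climits>
    #include <iostream>
    #include <vector>

    // Best sum of a subarray that crosses the midpoint of a[lo..hi].
    int maxCrossingSum(const std::vector<int>& a, int lo, int mid, int hi) {
        int sum = 0, leftBest = INT_MIN;
        for (int i = mid; i >= lo; --i) { sum += a[i]; leftBest = std::max(leftBest, sum); }
        sum = 0;
        int rightBest = INT_MIN;
        for (int i = mid + 1; i <= hi; ++i) { sum += a[i]; rightBest = std::max(rightBest, sum); }
        return leftBest + rightBest;
    }

    // Divide and conquer: the answer lies entirely in one half or crosses the middle.
    int maxSubarraySum(const std::vector<int>& a, int lo, int hi) {
        if (lo == hi) return a[lo];                     // base case: single element
        int mid = lo + (hi - lo) / 2;
        return std::max({maxSubarraySum(a, lo, mid),
                         maxSubarraySum(a, mid + 1, hi),
                         maxCrossingSum(a, lo, mid, hi)});
    }

    int main() {
        std::vector<int> a = {-2, 1, -3, 4, -1, 2, 1, -5, 4};
        std::cout << maxSubarraySum(a, 0, static_cast<int>(a.size()) - 1) << '\n';   // 6
    }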
Parallel Algorithms:
1. Divide and conquer algorithms can be parallelized by running
subproblem solutions concurrently.
Graph Algorithms:
1. Graph Traversal: Divide graphs into smaller connected components and
apply traversal algorithms.
2. Minimum Spanning Tree: Divide the graph into smaller subgraphs and
merge their MSTs.
C. Knapsack Problem:
1. Problem statement (0/1 Knapsack) and its subproblems.
2. Solving Knapsack using both memoization and tabulation.
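As a small illustration of the tabulation approach, here is a sketch of my own for the 0/1 knapsack; the weights, values, and capacity are made-up example data:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Bottom-up (tabulation) 0/1 knapsack:
    // dp[w] = best value achievable with capacity w using the items seen so far.
    int knapsack(const std::vector<int>& weight, const std::vector<int>& value, int capacity) {
        std::vector<int> dp(capacity + 1, 0);
        for (int i = 0; i < static_cast<int>(weight.size()); ++i)
            for (int w = capacity; w >= weight[i]; --w)   // downwards so each item is used at most once
                dp[w] = std::max(dp[w], dp[w - weight[i]] + value[i]);
        return dp[capacity];
    }

    int main() {
        std::vector<int> w = {1, 3, 4, 5};
        std::vector<int> v = {1, 4, 5, 7};
        std::cout << knapsack(w, v, 7) << '\n';   // 9 (items with weights 3 and 4)
    }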
I. Introduction to Backtracking:
Explanation of backtracking as a systematic approach to problem-solving.
Key idea: Systematically trying different choices and undoing them when
they don't lead to a solution.
B. Sudoku Solver:
1. Problem statement and rules of Sudoku.
2. Developing a backtracking algorithm to solve Sudoku puzzles.
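A compact sketch of my own of the classic backtracking approach; it assumes the puzzle arrives as a 9x9 grid with 0 marking empty cells, and it fills the grid in place:

    #include <vector>

    using Grid = std::vector<std::vector<int>>;   // 9x9, 0 means an empty cell

    // Can digit d be placed at (row, col) without violating Sudoku rules?
    bool canPlace(const Grid& g, int row, int col, int d) {
        for (int i = 0; i < 9; ++i)
            if (g[row][i] == d || g[i][col] == d) return false;
        int br = 3 * (row / 3), bc = 3 * (col / 3);
        for (int r = br; r < br + 3; ++r)
            for (int c = bc; c < bc + 3; ++c)
                if (g[r][c] == d) return false;
        return true;
    }

    // Classic backtracking: try a digit, recurse, and undo the choice if it leads nowhere.
    bool solve(Grid& g, int cell = 0) {
        if (cell == 81) return true;               // every cell filled
        int row = cell / 9, col = cell % 9;
        if (g[row][col] != 0) return solve(g, cell + 1);
        for (int d = 1; d <= 9; ++d) {
            if (canPlace(g, row, col, d)) {
                g[row][col] = d;                   // choose
                if (solve(g, cell + 1)) return true;
                g[row][col] = 0;                   // undo (backtrack)
            }
        }
        return false;                              // no digit works: backtrack further up
    }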
V. Applications of Backtracking:
Discuss other problem domains where backtracking is commonly used
(e.g., permutation generation, subset generation).
1. Social Networking
Imagine you're building a social networking platform like Facebook. Each user
is a node, and their connections form edges in a graph. You can use graph
algorithms to find shortest paths between users, suggest friends of friends, and
detect communities within the network. Graph traversal algorithms like
Breadth-First Search (BFS) and Depth-First Search (DFS) are essential for
exploring these connections efficiently.
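To illustrate, here is a minimal BFS sketch of my own over an adjacency list; the tiny example network and the "friends of friends at distance two" rule are assumptions for demonstration only:

    #include <iostream>
    #include <queue>
    #include <vector>

    // Breadth-First Search: returns the distance (in hops) from the start user
    // to every other user, or -1 if a user is unreachable.
    std::vector<int> bfs(const std::vector<std::vector<int>>& adj, int start) {
        std::vector<int> dist(adj.size(), -1);
        std::queue<int> q;
        dist[start] = 0;
        q.push(start);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u])
                if (dist[v] == -1) {          // not visited yet
                    dist[v] = dist[u] + 1;
                    q.push(v);
                }
        }
        return dist;
    }

    int main() {
        // Hypothetical network: user 0 knows 1 and 2; user 1 knows 3; user 2 knows 3 and 4.
        std::vector<std::vector<int>> adj = {{1, 2}, {0, 3}, {0, 3, 4}, {1, 2}, {2}};
        std::vector<int> dist = bfs(adj, 0);
        for (std::size_t u = 1; u < dist.size(); ++u)
            if (dist[u] == 2) std::cout << "suggest user " << u << '\n';   // friends of friends
    }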
2. GPS Navigation
When you use a GPS app to find the shortest route between two locations,
you're leveraging Dijkstra's algorithm. This algorithm finds the shortest path in
weighted graphs, where edges have distances associated with them. By
representing roads as nodes and distances as edges, you can calculate the
optimal route to your destination.
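A minimal sketch of my own of Dijkstra's algorithm with a min-heap; the four-node "map" in main is invented purely to show the call:

    #include <iostream>
    #include <queue>
    #include <utility>
    #include <vector>

    const long long INF = 1e18;

    // Dijkstra's algorithm; edges carry non-negative distances (node, weight).
    std::vector<long long> dijkstra(const std::vector<std::vector<std::pair<int,int>>>& adj, int src) {
        std::vector<long long> dist(adj.size(), INF);
        using State = std::pair<long long, int>;                 // (distance so far, node)
        std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
        dist[src] = 0;
        pq.push({0, src});
        while (!pq.empty()) {
            auto [d, u] = pq.top(); pq.pop();
            if (d != dist[u]) continue;                          // stale heap entry
            for (auto [v, w] : adj[u])
                if (d + w < dist[v]) {
                    dist[v] = d + w;
                    pq.push({dist[v], v});
                }
        }
        return dist;
    }

    int main() {
        // Hypothetical map: 4 intersections, edges are (neighbour, distance).
        std::vector<std::vector<std::pair<int,int>>> adj = {
            {{1, 4}, {2, 1}}, {{3, 1}}, {{1, 2}, {3, 5}}, {}
        };
        std::cout << dijkstra(adj, 0)[3] << '\n';   // shortest 0 -> 3 is 0-2-1-3 = 4
    }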
3. E-Commerce Inventory Lookup
Online stores need efficient inventory management. Hash tables can store
product information based on unique identifiers. Given a product ID, you can
quickly retrieve details like price, availability, and specifications. Hashing
ensures constant-time access to this information, making the shopping
experience seamless.
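As a small illustration, a sketch of my own using std::unordered_map as the hash table; the Product fields and SKU strings are hypothetical:

    #include <iostream>
    #include <string>
    #include <unordered_map>

    // Hypothetical product record; the field names are illustrative only.
    struct Product {
        std::string name;
        double price;
        int stock;
    };

    int main() {
        // Hash table keyed by product ID: average-case O(1) lookup.
        std::unordered_map<std::string, Product> catalogue;
        catalogue["SKU-1001"] = {"USB-C cable", 9.99, 250};
        catalogue["SKU-2040"] = {"Mechanical keyboard", 74.50, 12};

        auto it = catalogue.find("SKU-2040");
        if (it != catalogue.end())
            std::cout << it->second.name << " costs " << it->second.price
                      << " and " << it->second.stock << " are in stock\n";
    }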
4. File System Organization
Your computer's file system is structured like a tree. Directories (folders) are
nodes, and files are leaves. Binary search trees can optimize file searches and
retrievals. By maintaining the hierarchy in a balanced tree, you can efficiently
locate files without searching through the entire file system.
5. Text Auto-Completion
When you type in a search bar, the system often suggests completions. Tries are
used to implement this feature efficiently. Each node in the trie represents a
character, and by traversing the trie, the system generates word suggestions
based on the characters entered.
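A minimal trie sketch of my own; the stored words are arbitrary, and a real system would load its dictionary from a corpus:

    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    // A minimal trie: each node maps a character to a child, and marks word ends.
    struct TrieNode {
        std::map<char, std::unique_ptr<TrieNode>> children;
        bool isWord = false;
    };

    void insert(TrieNode& root, const std::string& word) {
        TrieNode* node = &root;
        for (char c : word) {
            auto& child = node->children[c];
            if (!child) child = std::make_unique<TrieNode>();
            node = child.get();
        }
        node->isWord = true;
    }

    // Collect every word stored below `node`, prefixed with `prefix`.
    void collect(const TrieNode* node, std::string prefix, std::vector<std::string>& out) {
        if (node->isWord) out.push_back(prefix);
        for (const auto& [c, child] : node->children)
            collect(child.get(), prefix + c, out);
    }

    std::vector<std::string> suggest(const TrieNode& root, const std::string& prefix) {
        const TrieNode* node = &root;
        for (char c : prefix) {                      // walk down the typed characters
            auto it = node->children.find(c);
            if (it == node->children.end()) return {};
            node = it->second.get();
        }
        std::vector<std::string> out;
        collect(node, prefix, out);
        return out;
    }

    int main() {
        TrieNode root;
        for (const std::string& w : {"data", "date", "day", "dog"}) insert(root, w);
        for (const std::string& s : suggest(root, "da")) std::cout << s << '\n';   // data date day
    }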
6. Music Playlist
7. Online Gaming
In online games, players often have different priorities for actions like healing,
attacking, or defending. A priority queue (implemented as a heap) can manage
these actions efficiently. The highest priority action is executed first, enhancing
gameplay dynamics.
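A small sketch of my own using std::priority_queue with a custom comparator; the Action type and the priorities are invented for illustration:

    #include <iostream>
    #include <queue>
    #include <string>
    #include <vector>

    // Hypothetical game action; a larger priority means "handle sooner".
    struct Action {
        int priority;
        std::string name;
    };

    // Order the heap so the highest-priority action sits on top.
    struct ByPriority {
        bool operator()(const Action& a, const Action& b) const { return a.priority < b.priority; }
    };

    int main() {
        std::priority_queue<Action, std::vector<Action>, ByPriority> actions;
        actions.push({2, "attack"});
        actions.push({5, "heal"});
        actions.push({1, "defend"});

        while (!actions.empty()) {
            std::cout << actions.top().name << '\n';   // heal, attack, defend
            actions.pop();
        }
    }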
8. Spell Check
When a spell check suggests corrections, it calculates the edit distance between
the input word and words in its dictionary. Dynamic programming algorithms
can efficiently compute this distance, helping users find the most likely correct
word.
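To show the idea, here is a minimal bottom-up edit-distance (Levenshtein) sketch of my own:

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // Minimum number of insertions, deletions, and substitutions turning `a` into `b`.
    int editDistance(const std::string& a, const std::string& b) {
        std::vector<std::vector<int>> dp(a.size() + 1, std::vector<int>(b.size() + 1));
        for (std::size_t i = 0; i <= a.size(); ++i) dp[i][0] = static_cast<int>(i);
        for (std::size_t j = 0; j <= b.size(); ++j) dp[0][j] = static_cast<int>(j);
        for (std::size_t i = 1; i <= a.size(); ++i)
            for (std::size_t j = 1; j <= b.size(); ++j)
                dp[i][j] = (a[i - 1] == b[j - 1])
                               ? dp[i - 1][j - 1]                     // characters match: no cost
                               : 1 + std::min({dp[i - 1][j],          // delete
                                               dp[i][j - 1],          // insert
                                               dp[i - 1][j - 1]});    // substitute
        return dp[a.size()][b.size()];
    }

    int main() {
        std::cout << editDistance("recieve", "receive") << '\n';   // 2
    }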
9. Image Processing
10. Banking Systems
In banking applications, balanced trees like AVL trees can store account
information. These trees ensure that account balances remain organized, and
operations like deposits, withdrawals, and transfers can be performed
efficiently.
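std::map is typically implemented as a red-black tree rather than an AVL tree, but it is still a self-balancing binary search tree, so the following sketch of mine (with made-up account IDs and balances) illustrates the same logarithmic-time behaviour:

    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        // A self-balancing BST keyed by account ID: lookups and updates stay O(log n)
        // and the keys remain in sorted order.
        std::map<std::string, double> balances;
        balances["ACC-104"] = 1200.00;
        balances["ACC-031"] = 540.75;

        balances["ACC-031"] += 100.00;          // deposit
        balances["ACC-104"] -= 300.00;          // withdrawal

        for (const auto& [account, balance] : balances)
            std::cout << account << ": " << balance << '\n';   // printed in key order
    }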
11. Search Engine Indexing
Search engines use algorithms to crawl the web and index websites. Depth-First
Search can be employed to traverse websites efficiently, ensuring that all pages
are visited while avoiding revisiting the same pages repeatedly.
C. Hybrid Approaches:
1. Combining collaborative filtering, graph algorithms, and other
techniques.
2. Developing hybrid recommendation systems for enhanced accuracy.
B. Algorithm Execution:
1. How Google Maps uses Dijkstra's algorithm to find initial routes.
2. Incorporating real-time traffic data for dynamic adjustments.
C. Visual Representation:
1. Displaying recommended routes on the map interface.
2. Color-coded traffic conditions (green, yellow, red) for easy interpretation.
V. Challenges and Future Enhancements:
A. Data Accuracy and Quality:
1. Addressing inaccuracies in real-time traffic data.
2. Strategies to improve data quality through user feedback.
B. Predictive Analysis:
1. Using historical data to predict future traffic patterns.
2. Enhancing route recommendations with predictive models.
C. Multi-Modal Navigation:
1. Incorporating various transportation modes (public transit, walking,
biking).
2. Providing seamless navigation across different modes.
B. Optimization Objectives:
1. Defining optimization goals (e.g., minimize/maximize cost, time,
accuracy).
2. Balancing between different objectives based on the problem context.
V. Real-World Examples:
A. Traveling Salesman Problem:
1. Illustrating the challenge of finding the optimal route through all cities.
2. Discussing algorithms like brute force, dynamic programming, and
heuristics.
B. Sorting Algorithms:
1. Analyzing trade-offs between different sorting algorithms.
2. Comparing time complexity and adaptability to input data.
B. Pointer Arithmetic:
1. Explanation of pointer arithmetic for navigating memory addresses.
2. Incrementing and decrementing pointers, pointer offsets.
C. Dereferencing Pointers:
1. Using the dereference operator (*) to access the value pointed to by a
pointer.
2. How dereferencing relates to reading and modifying variables through
pointers.
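A brief sketch of my own tying these two points together; the array contents are arbitrary:

    #include <iostream>

    int main() {
        int data[4] = {2, 4, 6, 8};
        int* p = data;                      // points at data[0]

        std::cout << *p << '\n';            // dereference: reads 2
        ++p;                                // incrementing moves forward by one int, not one byte
        std::cout << *p << '\n';            // now reads 4
        std::cout << *(p + 2) << '\n';      // offset of 2 from data[1]: reads 8
        *p = 40;                            // writing through the pointer modifies data[1]
        std::cout << data[1] << '\n';       // 40
        --p;                                // decrementing moves back to data[0]
        std::cout << (p == data) << '\n';   // 1: back at the start
    }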
B. new Operator:
1. Using the new operator to allocate memory for objects.
2. Advantages of new over malloc, automatic constructor calls.
C. delete Operator:
1. Using the delete operator to release memory allocated with new.
2. Handling memory deallocation to prevent memory leaks.
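To make these points concrete, here is a small sketch of my own (the Account class and values are illustrative only): new allocates storage and runs the constructor, and pairing it with delete avoids the leak and dangling-pointer problems discussed in the next section.

    #include <iostream>
    #include <string>

    struct Account {
        std::string owner;
        double balance;
        Account(std::string o, double b) : owner(std::move(o)), balance(b) {
            std::cout << "constructor runs for " << owner << '\n';   // new calls this automatically
        }
        ~Account() { std::cout << "destructor runs for " << owner << '\n'; }
    };

    int main() {
        Account* acc = new Account("Sidharth", 1000.0);   // allocation + constructor call
        acc->balance += 250.0;
        delete acc;        // destructor + deallocation; forgetting this line would leak the object
        acc = nullptr;     // avoid leaving a dangling pointer behind
        return 0;
    }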
IV. Memory Deallocation: Avoiding Memory Leaks
A. Memory Leaks:
1. Explanation of memory leaks and their consequences.
2. Instances when allocated memory is not properly deallocated.
B. Dangling Pointers:
1. Understanding dangling pointers after memory deallocation.
2. Techniques to avoid accessing memory through dangling pointers.
B. std::unique_ptr:
1. Unique ownership semantics of std::unique_ptr.
2. Automatic deallocation when the pointer goes out of scope.
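A short sketch of my own showing unique ownership and automatic release; the Sensor type is hypothetical:

    #include <iostream>
    #include <memory>

    struct Sensor {
        int id;
        explicit Sensor(int i) : id(i) {}
        ~Sensor() { std::cout << "sensor " << id << " released\n"; }
    };

    int main() {
        {
            // Unique ownership: exactly one smart pointer owns the Sensor at a time.
            std::unique_ptr<Sensor> s = std::make_unique<Sensor>(7);
            std::cout << "using sensor " << s->id << '\n';
            // std::unique_ptr<Sensor> copy = s;            // would not compile: copying is forbidden
            std::unique_ptr<Sensor> moved = std::move(s);   // ownership can be transferred
        }   // `moved` goes out of scope here, so the Sensor is deleted automatically
        std::cout << "after the block\n";
    }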
B. Hash Tables:
1. Overview of hash tables and constant-time average case lookups.
2. Increased memory usage due to hash table structure.
Project Showcases
B. Functional Requirements:
1. Identifying the main features and functionalities of the tool.
2. Specifying user interactions, data input/output, and analysis capabilities.
A. Data Sources:
1. Identifying potential data sources (social media APIs, CSV files,
databases).
2. Choosing data sources based on the tool's objectives.
B. Data Preprocessing:
1. Cleaning and transforming raw data into a usable format.
2. Handling missing values, data normalization, and data enrichment.
A. Node Metrics:
1. Degree centrality, betweenness centrality, closeness centrality.
2. Identifying influential nodes and their positions in the network.
B. Edge Metrics:
1. Weighted relationships, edge betweenness.
2. Analyzing the strength and importance of connections.
VI. Visualization:
B. Community Detection:
1. Applying community detection algorithms to group nodes with similar
characteristics.
2. Visualizing communities and their interconnections.
A. Centrality Measures:
1. Eigenvector centrality, Katz centrality, PageRank.
2. Analyzing importance and influence in a network.
A. User-Friendly Interface:
1. Designing an intuitive and user-friendly interface.
2. Allowing users to upload data, view visualizations, and perform analyses.
B. Interactivity and Filters:
1. Enabling users to filter data, zoom in/out, and interact with visualizations.
2. Providing options for customizing the analysis and visualization
parameters.
A. Exporting Results:
1. Allowing users to export visualizations, graphs, and analysis reports.
2. Supporting formats like images, PDFs, and CSV files.
B. Sharing Capabilities:
1. Enabling users to share their analyses and visualizations with others.
2. Integrating social media sharing or collaboration features.
B. User Testing:
1. Inviting users to test the tool and provide feedback.
2. Making improvements based on user suggestions.
A. User Documentation:
1. Creating user guides and tutorials for using the tool.
2. Providing step-by-step instructions and examples.
B. Functional Requirements:
1. Identifying the core features of the auto-completion system.
2. Defining user interactions, input methods, and real-time suggestions.
A. Corpus Collection:
1. Gathering a dataset of text documents for training the auto-completion
model.
2. Choosing diverse and relevant sources to cover a wide range of
vocabulary.
B. Text Preprocessing:
1. Cleaning and processing the raw text data.
2. Tokenization, removing punctuation, lowercase conversion, and
stemming.
A. Introduction to N-Grams:
1. Explanation of N-grams as contiguous sequences of N items (usually
words).
2. Using N-grams to model the probabilities of word sequences.
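As a rough sketch of the idea, here is a toy bigram (N = 2) model of my own in C++; the training sentence is made up, and the real tool would count N-grams over the preprocessed corpus instead:

    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    int main() {
        // Toy corpus; in the actual tool this would come from the preprocessed documents.
        std::string corpus = "data structures and algorithms and data structures in practice";

        // Count bigrams: how often each word follows each other word.
        std::map<std::string, std::map<std::string, int>> bigram;
        std::istringstream in(corpus);
        std::string prev, word;
        while (in >> word) {
            if (!prev.empty()) ++bigram[prev][word];
            prev = word;
        }

        // Suggest the most frequent continuation of the last typed word.
        std::string typed = "data";
        const auto& next = bigram[typed];
        std::string best;
        int bestCount = 0;
        for (const auto& [w, count] : next)
            if (count > bestCount) { best = w; bestCount = count; }
        std::cout << "after \"" << typed << "\" suggest \"" << best << "\"\n";   // "structures"
    }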
A. User-Friendly Interface:
1. Designing an intuitive interface for the auto-completion feature.
2. Implementing a text input area with real-time suggestion dropdown.
B. Real-Time Interaction:
1. Implementing event listeners to capture user input.
2. Triggering suggestion generation as the user types.
A. Unit Testing:
1. Testing individual components for correctness and accuracy.
2. Ensuring that N-gram models, suggestion ranking, and UI interactions
work as expected.
B. User Testing:
1. Inviting users to interact with the auto-completion feature.
2. Collecting feedback and making improvements based on user experience.
IX. Documentation and Support:
A. User Documentation:
1. Creating user guides and instructions for using the auto-completion tool.
2. Detailing how to enable/disable the feature, adjust settings, etc.
B. Functional Requirements:
1. Identifying the core features and functionalities of the system.
2. Defining user roles, access levels, and data input/output.
A. Product Information:
1. Storing essential product details (name, description, SKU, price).
2. Handling product categories, attributes, and variations.
B. Stock Tracking:
1. Defining data structures to track stock levels and availability.
2. Incorporating data on quantities, locations, and incoming/outgoing stock.
IV. User Roles and Access Control:
C. Removing Products:
1. Providing a method to mark products as discontinued or out of stock.
2. Handling data archiving or deletion for obsolete products.
A. Stock Adjustments:
1. Enabling staff to adjust stock levels due to various reasons (e.g., damage,
returns).
2. Recording adjustment reasons and maintaining an audit trail.
B. Stock Transfer:
1. Facilitating the movement of stock between different locations or
warehouses.
2. Updating stock levels at both source and destination.
C. Order Fulfillment:
1. Integrating with order processing to deduct sold items from the inventory.
2. Automating stock level updates based on order statuses.
VII. Reporting and Analytics:
A. Dashboard:
1. Designing a dashboard for quick access to important inventory
information.
2. Displaying key metrics, alerts, and pending tasks.
A. Unit Testing:
1. Testing individual components and functionalities for correctness.
2. Ensuring accurate calculations, data storage, and interactions.
A. User Documentation:
1. Creating user guides and manuals for using the inventory management
system.
2. Detailing how to perform various operations, access reports, etc.
Advanced Algorithms:
With a strong foundation in data structures and algorithms, you'll be ready to
explore more advanced and intricate algorithms.
Topics like graph algorithms, dynamic programming, and advanced searching
and sorting techniques will deepen your understanding and problem-solving
skills.
Competitive Programming:
If you're inclined towards competitive programming, your journey will equip
you to participate in coding competitions and hackathons.
The skills you've developed will give you an edge in solving time-critical and
demanding problems.
Specialization:
You might find yourself gravitating toward specific areas of interest within
computer science.
Whether it's artificial intelligence, data science, game development, or software
engineering, your knowledge of data structures and algorithms will serve as a
strong foundation for specialization.
Interview Readiness:
If you're considering a career in tech, your expertise in data structures and
algorithms will be crucial for technical interviews.
You'll be well-prepared to tackle coding challenges presented during interviews
for internships, jobs, or research positions.
Teaching and Mentoring:
As you continue to learn, you'll also have the opportunity to share your
knowledge with others.
Mentoring aspiring programmers or teaching data structures and algorithms
workshops can be immensely fulfilling.
Career Growth:
Mastery of data structures and algorithms is highly regarded in the tech
industry.
It can open doors to diverse roles such as software engineer, data scientist,
research scientist, algorithm engineer, and more.