Python Ques

1) The difference between shallow and deep copying in Python is that shallow copying creates a new object and then inserts references to the original object's elements, while deep copying creates new objects and recursively inserts copies of the original's elements. 2) *args allows a function to accept a variable number of non-keyword arguments, treating them as a tuple. **kwargs allows a function to accept a variable number of keyword arguments, treating them as a dictionary. 3) While tuples are immutable, they can contain mutable elements because tuples only store references to objects rather than the objects themselves, so changes to mutable elements in a tuple are reflected in the original objects.

1) Difference between shallow copy and deep copy in Python, with example

Python provides a module, "copy", whose built-in functions allow deep copying or shallow copying of mutable objects.
Assignment statements in Python do not copy objects, they create bindings between a target and an object. For
collections that are mutable or contain mutable items, a copy is sometimes needed so one can change one copy
without changing the other.
Deep copy

A deep copy constructs a new object and recursively copies the objects found in the original. Any changes made to the copy therefore do not reflect in the original object.
In Python, this is implemented using the "deepcopy()" function.
# Python code to demonstrate deep copy

# importing "copy" for copy operations
import copy

# initializing list 1
li1 = [1, 2, [3, 5], 4]

# using deepcopy to deep copy
li2 = copy.deepcopy(li1)

# original elements of list
print("The original elements before deep copying")
for i in range(0, len(li1)):
    print(li1[i], end=" ")
print()

# changing an element in the new list
li2[2][0] = 7

# the change is visible in li2
print("The new list of elements after deep copying")
for i in range(0, len(li2)):
    print(li2[i], end=" ")
print()

# the change is NOT reflected in the original list,
# as it is a deep copy
print("The original elements after deep copying")
for i in range(0, len(li1)):
    print(li1[i], end=" ")
Output:
The original elements before deep copying
1 2 [3, 5] 4
The new list of elements after deep copying
1 2 [7, 5] 4
The original elements after deep copying
1 2 [3, 5] 4
In the above example, the change made through the new list did not affect the original list, indicating that the list was deep copied.

Shallow copy

A shallow copy constructs a new object, but inserts references to the objects found in the original. Any changes made to mutable elements of the copy therefore do reflect in the original object.
In Python, this is implemented using the "copy()" function.
# Python code to demonstrate shallow copy

# importing "copy" for copy operations
import copy

# initializing list 1
li1 = [1, 2, [3, 5], 4]

# using copy to shallow copy
li2 = copy.copy(li1)

# original elements of list
print("The original elements before shallow copying")
for i in range(0, len(li1)):
    print(li1[i], end=" ")
print()

# changing an element in the new list
li2[2][0] = 7

# checking if the change is reflected
print("The original elements after shallow copying")
for i in range(0, len(li1)):
    print(li1[i], end=" ")
Output:
The original elements before shallow copying
1 2 [3, 5] 4
The original elements after shallow copying
1 2 [7, 5] 4
In the above example, the change made through the new list did affect the original list, indicating that the list was shallow copied.
Important Points:
The difference between shallow and deep copying is only relevant for compound objects (objects that contain other
objects, like lists or class instances):
● A shallow copy constructs a new compound object and then (to the extent possible) inserts references into it to
the objects found in the original.
● A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects
found in the original.

2) Tuples are faster than lists:


Tuples are stored in a single block of memory. Tuples are immutable, so they don't require extra space to store new objects. Lists are allocated in two blocks: a fixed one with all the Python object information and a variable-sized block for the data. This is the reason creating a tuple is faster than creating a list.

Note:
>>> y = tuple([1, 2, 3])
>>> y
(1, 2, 3)
>>> y[0] = 5  # Not allowed!
Traceback (most recent call last):
  File "<pyshell#20>", line 1, in <module>
    y[0] = 5
TypeError: 'tuple' object does not support item assignment

>>> a = [1, 2, 3]
>>> b = [4, 5, 6]
>>> t = (a, b)
>>> t
([1, 2, 3], [4, 5, 6])
You are allowed to modify the internal lists as
>>> t[0][0] = 5
>>> t
([5, 2, 3], [4, 5, 6])

3) If a tuple is immutable then why can it contain mutable items?


The key insight is that tuples have no way of knowing whether the objects inside them are mutable. The only thing
that makes an object mutable is to have a method that alters its data. In general, there is no way to detect this.

Another insight is that Python's containers don't actually contain anything. Instead, they keep references to other
objects. Likewise, Python's variables aren't like variables in compiled languages; instead the variable names are just
keys in a namespace dictionary where they are associated with a corresponding object. Ned Batchelder explains
this nicely in his blog post. Either way, objects only know their reference count; they don't know what those
references are (variables, containers, or the Python internals).
Together, these two insights explain your mystery (why an immutable tuple "containing" a list seems to change when
the underlying list changes). In fact, the tuple did not change (it still has the same references to other objects that it
did before). The tuple could not change (because it did not have mutating methods). When the list changed, the tuple
didn't get notified of the change (the list doesn't know whether it is referred to by a variable, a tuple, or another list).

While we're on the topic, here are a few other thoughts to help complete your mental model of what tuples are, how
they work, and their intended use:

1. Tuples are characterized less by their immutability and more by their intended purpose.
Tuples are Python's way of collecting heterogeneous pieces of information under one roof. For example, s = ('www.python.org', 80) brings together a string and a number so that the host/port pair can be passed around as a single composite object (a socket address). Viewed in that light, it is perfectly reasonable to have mutable components.
2. Immutability goes hand-in-hand with another property, hashability. But hashability isn't an absolute property. If
one of the tuple's components isn't hashable, then the overall tuple isn't hashable either. For example, t = ('red',
[10, 20, 30]) isn't hashable.
The last example shows a 2-tuple that contains a string and a list. The tuple itself isn't mutable (i.e. it doesn't have any methods for changing its contents). Likewise, the string is immutable because strings don't have any
mutating methods. The list object does have mutating methods, so it can be changed. This shows that mutability is a
property of an object type -- some objects have mutating methods and some don't. This doesn't change just because
the objects are nested.

Remember two things. First, immutability is not magic -- it is merely the absence of mutating methods. Second,
objects don't know what variables or containers refer to them -- they only know the reference count.
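Both points can be checked directly; a minimal sketch:

```python
# A tuple holding a mutable list: the tuple's references never change,
# but the list they point to can.
t = ('red', [10, 20, 30])

t[1].append(40)                              # mutate the list through the tuple
assert t == ('red', [10, 20, 30, 40])        # the list changed; the tuple's references did not

# A tuple is hashable only if all of its elements are.
try:
    hash(t)
except TypeError:
    print("unhashable: this tuple contains a list")

# With only immutable elements, hashing works fine.
hash(('red', (10, 20, 30)))
```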

4) Difference between *args and **kwargs in Python, with example

*args
The special syntax *args in function definitions in python is used to pass a variable number of arguments to a
function. It is used to pass a non-keyworded, variable-length argument list.
● The syntax is to use the symbol * to take in a variable number of arguments; by convention, it is often used
with the word args.
● What *args allows you to do is take in more arguments than the number of formal arguments that you
previously defined. With *args, any number of extra arguments can be tacked on to your current formal
parameters (including zero extra arguments).
● For example: if we want to make a multiply function that takes any number of arguments and multiplies them all together, it can be done using *args.
● Using the *, the variable that we associate with the * becomes an iterable meaning you can do things like
iterate over it, run some higher order functions such as map and filter, etc.
● Example for usage of *arg:
# Python program to illustrate
# *args for a variable number of arguments
def myFun(*argv):
    for arg in argv:
        print(arg)

myFun('Hello', 'Welcome', 'to', 'GeeksforGeeks')


Output:
Hello
Welcome
to
GeeksforGeeks

# Python program to illustrate
# *args with first extra argument
def myFun(arg1, *argv):
    print("First argument :", arg1)
    for arg in argv:
        print("Next argument through *argv :", arg)

myFun('Hello', 'Welcome', 'to', 'GeeksforGeeks')


Output:
First argument : Hello
Next argument through *argv : Welcome
Next argument through *argv : to
Next argument through *argv : GeeksforGeeks

**kwargs

The special syntax **kwargs in function definitions in Python is used to pass a keyworded, variable-length argument list. We use the name kwargs with the double star because the double star allows us to pass through keyword arguments (and any number of them).
● A keyword argument is where you provide a name to the variable as you pass it into the function.

● One can think of kwargs as a dictionary that maps each keyword to the value that we pass alongside it. In Python 3.7 and later, keyword arguments preserve the order in which they were passed; in older versions the order was arbitrary.
● Example for usage of **kwargs:
# Python program to illustrate
# **kwargs for a variable number of keyword arguments

def myFun(**kwargs):
    for key, value in kwargs.items():
        print("%s == %s" % (key, value))

# Driver code
myFun(first='Geeks', mid='for', last='Geeks')
Output:
first == Geeks
mid == for
last == Geeks

# Python program to illustrate **kwargs for a
# variable number of keyword arguments with
# one extra argument.

def myFun(arg1, **kwargs):
    for key, value in kwargs.items():
        print("%s == %s" % (key, value))

# Driver code
myFun("Hi", first='Geeks', mid='for', last='Geeks')
Output:
first == Geeks
mid == for
last == Geeks

Using *args and **kwargs to call a function


Examples:
def myFun(arg1, arg2, arg3):
    print("arg1:", arg1)
    print("arg2:", arg2)
    print("arg3:", arg3)

# Now we can use *args or **kwargs to
# pass arguments to this function:
args = ("Geeks", "for", "Geeks")
myFun(*args)

kwargs = {"arg1": "Geeks", "arg2": "for", "arg3": "Geeks"}
myFun(**kwargs)
Output:
arg1: Geeks
arg2: for
arg3: Geeks
arg1: Geeks
arg2: for
arg3: Geeks

3) What is a lambda function? How to use it? Can we use a recursive function inside a lambda function?

Python allows you to create anonymous functions, i.e. functions having no name, using a facility called the lambda function.

Lambda functions are small functions, usually not more than a line. A lambda can have any number of arguments, just like a normal function, but its body is a single expression. The result of that expression is the value returned when the lambda is applied to an argument, so there is no need for a return statement in a lambda function.

Let’s take an example:

Consider a function multiply()

def multiply(x, y):
    return x * y

This function is too small, so let’s convert it into a lambda function.

To create a lambda function, first write the keyword lambda followed by one or more arguments separated by commas, followed by a colon ( : ), followed by a single-line expression.

r = lambda x, y: x * y
r(12, 3)  # call the lambda function

Expected Output:

36

Here we are using two arguments, x and y; the expression after the colon is the body of the lambda function. As you can see, the lambda function has no name and is called through the variable it is assigned to.

Recursive function inside lambda function:

fact = lambda x: 1 if x == 0 else x * fact(x-1)


4) what is map function? how to use map function with lambda with example(syntax)

Basic syntax

map(function_object, iterable1, iterable2,...)

The map function expects a function object and any number of iterables, like lists or dictionaries. It executes the function_object for each element in the sequence and produces the elements modified by the function object (as a list in Python 2, as a lazy map object in Python 3).

Example:
def multiply2(x):
    return x * 2

list(map(multiply2, [1, 2, 3, 4]))  # Output: [2, 4, 6, 8]

In the above example, map executes the multiply2 function for each element in the list, i.e. 1, 2, 3, 4, producing 2, 4, 6, 8 (wrapped in list() here because, in Python 3, map returns a lazy iterator).

Let’s see how we can write the above code using map and lambda.

list(map(lambda x: x * 2, [1, 2, 3, 4]))  # Output: [2, 4, 6, 8]

Just one line of code and that’s it.

Iterating over a dictionary using map and lambda

dict_a = [{'name': 'python', 'points': 10}, {'name': 'java', 'points': 8}]

list(map(lambda x: x['name'], dict_a))  # Output: ['python', 'java']

list(map(lambda x: x['points'] * 10, dict_a))  # Output: [100, 80]

list(map(lambda x: x['name'] == "python", dict_a))  # Output: [True, False]

In the above example, each dict of dict_a will be passed as parameter to the lambda function. Result
of lambda function expression for each dict will be given as output.

Multiple iterables to the map function


We can pass multiple sequences to the map functions as shown below:
list_a = [1, 2, 3]
list_b = [10, 20, 30]

list(map(lambda x, y: x + y, list_a, list_b))  # Output: [11, 22, 33]

Here, each i-th element of list_a and list_b will be passed as an argument to the lambda function.

In Python 3, the map function returns an iterator (a map object) which gets lazily evaluated, just as the zip function is lazily evaluated. Lazy evaluation is explained in more detail in the zip function article.

We can neither access the elements of a map object by index nor use len() to find its length.

We can force-convert the map output (i.e. the map object) to a list, as shown below:
map_output = map(lambda x: x*2, [1, 2, 3, 4])
print(map_output) # Output: map object: <map object at 0x04D6BAB0>

list_map_output = list(map_output)

print(list_map_output) # Output: [2, 4, 6, 8]

5) Are methods and functions different? If yes, how?

A function is a piece of code that is called by name. It can be passed data to operate on (i.e., the parameters) and
can optionally return data (the return value). All data that is passed to a function is explicitly passed.

def sum(num1, num2):
    return num1 + num2

A method is a piece of code that is called by name that is associated with an object. In most respects it is identical to
a function except for two key differences.

1. It is implicitly passed the object on which it was called.
2. It is able to operate on data that is contained within the class (remembering that an object is an instance of a class - the class is the definition, the object is an instance of that data).

class Dog:
    def my_method(self):
        print("I am a Dog")

dog = Dog()
dog.my_method()  # Prints "I am a Dog"

6) what is ternary operator? how we can implement it in python with example


Ternary operators, also known as conditional expressions, are operators that evaluate something based on a condition being true or false. They were added to Python in version 2.5.
It simply allows us to test a condition in a single line, replacing the multiline if-else and making the code compact.
Syntax :
[on_true] if [expression] else [on_false]
Direct Method by using tuples, Dictionary and lambda
# Python program to demonstrate ternary operator
a, b = 10, 20

# Use a tuple for selecting an item
print((b, a)[a < b])

# Use a dictionary for selecting an item
print({True: a, False: b}[a < b])

# lambda is more efficient than the above two methods,
# because with lambda we are assured that only one
# expression will be evaluated, unlike with the tuple
# and the dictionary
print((lambda: b, lambda: a)[a < b]())
Output:
10
10
10
Important Points:
● First the given condition is evaluated (a < b), then either a or b is returned based on the Boolean value returned by the condition

● Order of the arguments in the operator is different from other languages like C/C++.

● Conditional expressions have the lowest priority amongst all Python operations.

7) Which one is faster: set or list


It depends on what you are intending to do with it.

Sets are significantly faster when it comes to determining if an object is present in the set (as in x in s), but are
slower than lists when it comes to iterating over their contents.
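A quick, machine-dependent sketch of the difference using timeit; the exact numbers will vary, but the membership test on the set (average O(1)) should be far cheaper than the worst-case O(n) scan of the list:

```python
import timeit

n = 10_000
data_list = list(range(n))
data_set = set(data_list)

# Look up the last element repeatedly: the list must scan all n items,
# while the set does a hash lookup.
t_list = timeit.timeit(lambda: n - 1 in data_list, number=1000)
t_set = timeit.timeit(lambda: n - 1 in data_set, number=1000)
print(f"list lookup: {t_list:.4f}s, set lookup: {t_set:.4f}s")
```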

8) Program: Given a string: "ab123$zc2", output: "cz123$ba2" (Note: without using direct swapping. The code should run for any given string.)
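One possible approach (a sketch, not the only solution): reverse only the alphabetic characters while leaving digits and symbols in their original positions, replaying the letters from a reversed iterator instead of swapping in place. The helper name reverse_letters is ours:

```python
def reverse_letters(s):
    # Collect the alphabetic characters in reverse order, then replay them
    # one by one, keeping every non-letter character where it was.
    letters = iter([c for c in reversed(s) if c.isalpha()])
    return ''.join(next(letters) if c.isalpha() else c for c in s)

print(reverse_letters("ab123$zc2"))  # cz123$ba2
```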

9) Uses of Pandas

pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with
“relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing
practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and
flexible open source data analysis / manipulation tool available in any language. It is already well on its way toward
this goal.

pandas is well suited for many different kinds of data:

● Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet

● Ordered and unordered (not necessarily fixed-frequency) time series data.

● Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels
● Any other form of observational / statistical data sets. The data actually need not be labeled at all to
be placed into a pandas data structure

The two primary data structures of pandas, Series (1-dimensional) and DataFrame (2-dimensional), handle the vast
majority of typical use cases in finance, statistics, social science, and many areas of engineering. For R
users, DataFrame provides everything that R’s data.frame provides and much more. pandas is built on top
of NumPy and is intended to integrate well within a scientific computing environment with many other 3rd party
libraries.

Here are just a few of the things that pandas does well:

● Easy handling of missing data (represented as NaN) in floating point as well as non-floating point
data
● Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional
objects
● Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the
user can simply ignore the labels and let Series, DataFrame, etc. automatically align the data for
you in computations
● Powerful, flexible group by functionality to perform split-apply-combine operations on data sets, for
both aggregating and transforming data
● Make it easy to convert ragged, differently-indexed data in other Python and NumPy data
structures into DataFrame objects
● Intelligent label-based slicing, fancy indexing, and subsetting of large data sets

● Intuitive merging and joining data sets

● Flexible reshaping and pivoting of data sets

● Hierarchical labeling of axes (possible to have multiple labels per tick)

● Robust IO tools for loading data from flat files (CSV and delimited), Excel files, databases, and
saving / loading data from the ultrafast HDF5 format
● Time series-specific functionality: date range generation and frequency conversion, moving window
statistics, moving window linear regressions, date shifting and lagging, etc.

Many of these principles are here to address the shortcomings frequently experienced using other languages /
scientific research environments. For data scientists, working with data is typically divided into multiple stages:
munging and cleaning data, analyzing / modeling it, then organizing the results of the analysis into a form suitable for
plotting or tabular display. pandas is the ideal tool for all of these tasks.

Some other notes

● pandas is fast. Many of the low-level algorithmic bits have been extensively tweaked in Cython code. However, as with anything else, generalization usually sacrifices performance. So if you focus on one feature for your application you may be able to create a faster specialized tool.
● pandas is a dependency of statsmodels, making it an important part of the statistical computing
ecosystem in Python.
● pandas has been used extensively in production in financial applications.
10) Bubble sort in python
def bubbleSort(alist):
    for passnum in range(len(alist) - 1, 0, -1):
        for i in range(passnum):
            if alist[i] > alist[i + 1]:
                temp = alist[i]
                alist[i] = alist[i + 1]
                alist[i + 1] = temp

alist = [54,26,93,17,77,31,44,55,20]
bubbleSort(alist)
print(alist)
15) Program:
1
232
34543
4567654
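One way to print the pattern above (a sketch; the helper name pattern is ours): row i ascends from i up to 2i-1 and then mirrors back down to i:

```python
def pattern(rows):
    lines = []
    for i in range(1, rows + 1):
        up = [str(n) for n in range(i, 2 * i)]   # i .. 2i-1
        # ascend, then mirror everything except the peak back down
        lines.append(''.join(up + up[:-1][::-1]))
    return '\n'.join(lines)

print(pattern(4))
```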
16) Dictionary:
A dictionary is a collection which is changeable and indexed by keys (as of Python 3.7, dictionaries also preserve insertion order; earlier versions treated them as unordered). In Python, dictionaries are written with curly brackets, and they have keys and values.
thisdict = {
    "brand": "Ford",
    "model": "Mustang",
    "year": 1964
}
print(thisdict)

Get the value of the "model" key:

x = thisdict["model"]

17) diff between array and dictionary:


Containers are used to store different types of data in Python. They fall into two categories.

1. Immutable: we cannot change the values of these objects (e.g. tuple, string).
2. Mutable: we can change the values of these objects.

Both list and dictionary are mutable; a list is a sequence, while a dictionary is, strictly speaking, a mapping rather than a sequence.

List:

List is the most versatile Sequence available in Python, which can be written as a list of comma-separated values
between square brackets.
● Elements present in List maintain their order.

● The elements present in list can be of any type (int, float, string, tuple etc.).

● Elements are accessed through their index values.


When to use List?

● If you have a collection of data that does not need random access, use List.

● Where you have to deal with values which can be changed, use List.
Dictionary:

Dictionary is an unordered collection of key-value pairs. Dictionaries are used to handle large amount of data.

● Every element is having a key-value pair.

● Elements are accessed by using their key.


Where to use Dictionary?

● When you are dealing with unique keys and you are mapping values to the keys, use Dictionary.

21) Why you use postgres?


1. Postgres uses the client-server model to enable multiple connections to the database.
2. Using the popular psycopg2 library, we can use Python to connect to Postgres.
3. Postgres is type sensitive so we have to declare types on each of our columns.
4. Postgres uses SQL transactions to save the state of the database.

22) Difference between postgresql and mysql

23) Memory management in python:

Memory management in Python involves a private heap containing all Python objects and data structures. The
management of this private heap is ensured internally by the Python memory manager. The Python memory
manager has different components which deal with various dynamic storage management aspects, like sharing,
segmentation, preallocation or caching.

At the lowest level, a raw memory allocator ensures that there is enough room in the private heap for storing all
Python-related data by interacting with the memory manager of the operating system. On top of the raw memory
allocator, several object-specific allocators operate on the same heap and implement distinct memory management
policies adapted to the peculiarities of every object type. For example, integer objects are managed differently within
the heap than strings, tuples or dictionaries because integers imply different storage requirements and speed/space
tradeoffs. The Python memory manager thus delegates some of the work to the object-specific allocators, but
ensures that the latter operate within the bounds of the private heap.
It is important to understand that the management of the Python heap is performed by the interpreter itself and that
the user has no control over it, even if they regularly manipulate object pointers to memory blocks inside that heap.
The allocation of heap space for Python objects and other internal buffers is performed on demand by the Python
memory manager through the Python/C API functions listed in this document.

To avoid memory corruption, extension writers should never try to operate on Python objects with the functions
exported by the C library: malloc(), calloc(), realloc() and free(). This will result in mixed calls between the C
allocator and the Python memory manager with fatal consequences, because they implement different algorithms and
operate on different heaps. However, one may safely allocate and release memory blocks with the C library allocator
for individual purposes, as shown in the following example:

PyObject *res;
char *buf = (char *) malloc(BUFSIZ); /* for I/O */

if (buf == NULL)
    return PyErr_NoMemory();
/* ...Do some I/O operation involving buf... */
res = PyString_FromString(buf);
free(buf); /* malloc'ed */
return res;

In this example, the memory request for the I/O buffer is handled by the C library allocator. The Python memory
manager is involved only in the allocation of the string object returned as a result.

In most situations, however, it is recommended to allocate memory from the Python heap specifically because the
latter is under control of the Python memory manager. For example, this is required when the interpreter is extended
with new object types written in C. Another reason for using the Python heap is the desire to inform the Python
memory manager about the memory needs of the extension module. Even when the requested memory is used
exclusively for internal, highly-specific purposes, delegating all memory requests to the Python memory manager
causes the interpreter to have a more accurate image of its memory footprint as a whole. Consequently, under
certain circumstances, the Python memory manager may or may not trigger appropriate actions, like garbage
collection, memory compaction or other preventive procedures. Note that by using the C library allocator as shown in
the previous example, the allocated memory for the I/O buffer escapes completely the Python memory manager.

How is memory managed in Python?

1. Memory management in python is managed by Python private heap space. All Python objects and data
structures are located in a private heap. The programmer does not have access to this private heap. The
python interpreter takes care of this instead.
2. The allocation of heap space for Python objects is done by Python’s memory manager. The core API gives
access to some tools for the programmer to code.
3. Python also has an inbuilt garbage collector, which recycles all the unused memory so that it can be made available to the heap space.

How is Multithreading achieved in Python?

1. Python has a multi-threading package but if you want to multi-thread to speed your code up, then it’s usually
not a good idea to use it.
2. Python has a construct called the Global Interpreter Lock (GIL). The GIL makes sure that only one of your
‘threads’ can execute at any one time. A thread acquires the GIL, does a little work, then passes the GIL
onto the next thread.
3. This happens very quickly so to the human eye it may seem like your threads are executing in parallel, but
they are really just taking turns using the same CPU core.
4. All this GIL passing adds overhead to execution. This means that if you want to make your code run faster
then using the threading package often isn’t a good idea.
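A rough illustration of the point, using the threading package on a CPU-bound loop. The exact timings are machine-dependent, and on standard (GIL-enabled) CPython the threaded version is typically no faster than running the work sequentially, because the two threads take turns holding the GIL:

```python
import threading
import time

def count(n):
    # Pure-Python CPU-bound loop; under the GIL only one
    # thread can execute this bytecode at a time.
    while n > 0:
        n -= 1

N = 2_000_000

start = time.perf_counter()
count(N)
count(N)
sequential = time.perf_counter() - start

start = time.perf_counter()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
```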

24) Is memory management one of Python's best features?

25) Python Garbage Collection


The Python interpreter keeps reference counts to objects being used. When an object is not referred to anymore, the garbage collector is free to release the object and get back the allocated memory. For example, if you are using regular Python (that is, CPython, not Jython), this is when Python's garbage collector will call free().

Resource
‘resource’ module for finding the current (Resident) memory consumption of your program
[Resident memory is the actual RAM your program is using]

>>> import resource
>>> resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
4332

objgraph
‘objgraph’ is a useful module that shows you the objects that are currently in memory
[objgraph documentation and examples are available at: https://mg.pov.lt/objgraph/]

Let's look at this simple usage of objgraph:

import objgraph
import random

class Foo(object):
    def __init__(self):
        self.val = None

    def __str__(self):
        return "foo - val: {0}".format(self.val)

def f():
    l = []
    for i in range(3):
        foo = Foo()
        l.append(foo)
    return l

def main():
    d = {}
    l = f()
    d['k'] = l
    print("list l has {0} objects of type Foo()".format(len(l)))
    objgraph.show_most_common_types()
    objgraph.show_backrefs(random.choice(objgraph.by_type('Foo')),
                           filename="foo_refs.png")
    objgraph.show_refs(d, filename='sample-graph.png')

if __name__ == "__main__":
    main()

26) Pickling Unpickling :


Pickling in python refers to the process of serializing objects into binary streams, while unpickling is the inverse of
that. It's called that because of the pickle module in Python which implements the methods to do this.

It is used for serializing and de-serializing a Python object structure. Any object in python can be pickled so that it can
be saved on disk. What pickle does is that it “serializes” the object first before writing it to file. Pickling is a way to
convert a python object (list, dict, etc.) into a character stream.
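A minimal sketch using the pickle module's in-memory API (dumps/loads; dump/load work the same way against a file object):

```python
import pickle

data = {'name': 'python', 'points': [10, 8], 'stable': True}

# Pickling: serialize the object into a byte stream
blob = pickle.dumps(data)
assert isinstance(blob, bytes)

# Unpickling: reconstruct an equivalent object from the bytes
restored = pickle.loads(blob)
assert restored == data and restored is not data
```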

Whenever Python exits, why isn’t all the memory de-allocated?

1. When Python exits, modules that have circular references to other objects, or objects that are referenced from the global namespace, are not always de-allocated or freed.
2. It is impossible to de-allocate those portions of memory that are reserved by the C library.
3. On exit, because of having its own efficient clean up mechanism, Python would try to de-allocate/destroy
every other object.

27) Generators are iterators, but you can only iterate over them once. This is because they do not store all the values in memory; they generate the values on the fly. You use them by iterating over them, either with a 'for' loop or by passing them to any function or construct that iterates. Most of the time generators are implemented as functions. However, they do not return a value; they yield it. Here is a simple example of a generator function:

def generator_function():
    for i in range(10):
        yield i

for item in generator_function():
    print(item)

# Output: 0
# 1
# 2
# 3
# 4
# 5
# 6
# 7
# 8
# 9
It is not really useful in this case. Generators are best for calculating large sets of results (particularly calculations
involving loops themselves) where you don’t want to allocate the memory for all results at the same time. Many
Standard Library functions that return lists in Python 2 have been modified to return generators in Python 3
because generators require fewer resources.

Generator is a function that returns an object (iterator) which we can iterate over (one value at a time).
Why generators are used in Python?

There are several reasons which make generators an attractive implementation to go for.

1. Easy to Implement

Generators can be implemented in a clear and concise way as compared to their iterator-class counterpart. Following is an example implementing a sequence of powers of 2 using an iterator class.

class PowTwo:
    def __init__(self, max=0):
        self.max = max

    def __iter__(self):
        self.n = 0
        return self

    def __next__(self):
        if self.n > self.max:
            raise StopIteration
        result = 2 ** self.n
        self.n += 1
        return result

This was lengthy. Now let's do the same using a generator function.

def PowTwoGen(max=0):
    n = 0
    while n < max:
        yield 2 ** n
        n += 1
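Driving the generator version (the function is redefined here so the snippet is self-contained):

```python
def PowTwoGen(max=0):
    # Yield powers of two: 2**0 .. 2**(max-1)
    n = 0
    while n < max:
        yield 2 ** n
        n += 1

print(list(PowTwoGen(5)))  # [1, 2, 4, 8, 16]
```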

Since generators keep track of these details automatically, the implementation was concise and much cleaner.
2. Memory Efficient

A normal function to return a sequence will create the entire sequence in memory before returning the result. This is
an overkill if the number of items in the sequence is very large.

Generator implementation of such sequence is memory friendly and is preferred since it only produces one item at a
time.

3. Represent Infinite Stream

Generators are an excellent medium to represent an infinite stream of data. Infinite streams cannot be stored in memory, and since generators produce only one item at a time, they can represent an infinite stream of data.

The following example can generate all the even numbers (at least in theory).

def all_even():
    n = 0
    while True:
        yield n
        n += 2
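Since the stream never ends, consume it lazily, e.g. with itertools.islice (the generator is redefined here so the snippet is self-contained):

```python
from itertools import islice

def all_even():
    # Infinite stream of even numbers: 0, 2, 4, ...
    n = 0
    while True:
        yield n
        n += 2

# Take only the first five values; the generator itself is never exhausted.
print(list(islice(all_even(), 5)))  # [0, 2, 4, 6, 8]
```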

4. Pipelining Generators

Generators can be used to pipeline a series of operations. This is best illustrated using an example.

Suppose we have a log file from a famous fast food chain. The log file has a column (the 4th column) that keeps track of the number of pizzas sold every hour, and we want to sum it to find the total pizzas sold in 5 years.

Assume everything is a string, and numbers that are not available are marked as 'N/A'. A generator implementation of this could be as follows (note that each line must be split into columns first).

with open('sells.log') as file:
    pizza_col = (line.split()[3] for line in file)
    per_hour = (int(x) for x in pizza_col if x != 'N/A')
    print("Total pizzas sold =", sum(per_hour))

This pipelining is efficient and easy to read (and yes, a lot cooler!).

Extra :-

1) What is daemon thread in Python?

A thread can be flagged as a "daemon thread". The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread.

When daemon threads are useful:

In a big project, some threads exist to perform background tasks such as sending data, periodic garbage collection, etc. These can be done by non-daemon threads, but then the main thread has to keep track of them manually. With a daemon thread, the main thread can completely forget about the task: it will either complete or be killed when the main thread exits.
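A small sketch of the behaviour (the worker name and sleep interval are illustrative):

```python
import threading
import time

def heartbeat():
    # Background task that would run forever in a non-daemon thread.
    while True:
        time.sleep(0.1)

# daemon=True: the interpreter will not wait for this thread at exit.
t = threading.Thread(target=heartbeat, daemon=True)
t.start()

print("main thread exits; the daemon thread is killed automatically")
```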

2) What are threads in python?


Threading in Python is used to run multiple threads (tasks, function calls) concurrently.

3) Is Python single threaded?


Not exactly. Python supports multiple threads, but in CPython the GIL allows only one thread to execute Python bytecode at a time, so threads do not run Python code in parallel.

4) What is multithreading in Python?

There are several implementations of Python, for example, CPython, IronPython, RPython, etc.

Some of them have a GIL, some don't. For example, CPython has the GIL:

Applications written in programming languages with a GIL can be designed to use separate
processes to achieve full parallelism, as each process has its own interpreter and in turn has its
own GIL.

Benefits of the GIL


● Increased speed of single-threaded programs.

● Easy integration of C libraries that usually are not thread-safe.


Why Python (CPython and others) uses the GIL
● From http://wiki.python.org/moin/GlobalInterpreterLock
In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from
executing Python bytecodes at once. This lock is necessary mainly because CPython's memory
management is not thread-safe.

The GIL is controversial because it prevents multithreaded CPython programs from taking full
advantage of multiprocessor systems in certain situations. Note that potentially blocking or long-
running operations, such as I/O, image processing, and NumPy number crunching, happen outside
the GIL. Therefore it is only in multithreaded programs that spend a lot of time inside the GIL,
interpreting CPython bytecode, that the GIL becomes a bottleneck.

● From http://www.grouplens.org/node/244
Python has a GIL as opposed to fine-grained locking for several reasons:

● It is faster in the single-threaded case.

● It is faster in the multi-threaded case for I/O-bound programs.

● It is faster in the multi-threaded case for CPU-bound programs that do their compute-intensive
work in C libraries.
● It makes C extensions easier to write: there will be no switch of Python threads except where
you allow it to happen (i.e. between the Py_BEGIN_ALLOW_THREADS and
Py_END_ALLOW_THREADS macros).
● It makes wrapping C libraries easier. You don't have to worry about thread-safety. If the
library is not thread-safe, you simply keep the GIL locked while you call it.

5) What is thread join in Python?

● When the join() method is invoked, the calling thread is blocked till the thread object on which it was
called terminates.

● For example, when join() is invoked from the main thread, the main thread
waits till the child thread on which join was invoked exits.
The significance of join() is that if it is not invoked, the main thread may
exit before the child thread, which will result in undefined behaviour of programs and affect
program invariants and the integrity of the data on which the program operates.

● The join() method can also be given a timeout value.

● For example, a thread that makes network connections to servers is expected to
complete the connection establishment within a stipulated number of seconds.
When the timeout value elapses, the calling thread returns from the blocked state and could
try connecting to a set of backup servers from a config file.

● In such circumstances the main thread may send a signal through an event object and ask the child thread to stop.

● The join() method can be called several times on a thread object.

● Exceptions:

Calling join() on the current thread would result in a deadlock,
hence a RuntimeError is raised when a thread attempts to join itself.

● Calling join() on a thread which has not yet been started also raises a RuntimeError.
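A minimal sketch of join() with and without a timeout (the worker function is illustrative):

```python
import threading
import time

results = []

def worker():
    time.sleep(0.2)
    results.append("done")

t = threading.Thread(target=worker)
t.start()

t.join(timeout=0.05)   # times out; the worker is still sleeping
print(t.is_alive())    # True

t.join()               # block until the worker finishes
print(t.is_alive())    # False
print(results)         # ['done']
```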

6) Diff between pyc and py file?

● Python compiles the .py files and saves them as .pyc files so it can reference them in subsequent invocations.
The .pyc files contain the compiled bytecode of Python source files, which is what the Python interpreter compiles
the source to. This code is then executed by Python's virtual machine. There's no harm in deleting them (.pyc),
but they will save compilation time if you're doing lots of processing.

● Python is an interpreted language, as opposed to a compiled one, though the distinction can be blurry because
of the presence of the bytecode compiler. Compiling usually means converting to machine code, which is what
runs the fastest. But interpreters take human-readable text and execute it. They may do this with an intermediate
stage.

● For example, when you run the source file myprog.py, the Python interpreter first looks to see if a 'myprog.pyc'
(the byte-code compiled version of 'myprog.py') exists, and if it is more recent than 'myprog.py'. If so, the
interpreter runs it. If it does not exist, or 'myprog.py' is more recent (meaning you have changed the
source file), the interpreter first compiles 'myprog.py' to 'myprog.pyc'.
● There is one exception to the above: if you put '#!/usr/bin/env python' on the first line of 'myprog.py',
make it executable, and then run 'myprog.py' by itself, no .pyc file is saved for the script that is run directly.

7) Decorator –
A decorator is a function that takes another function and extends the behavior of the latter function
without explicitly modifying it. Decorators add functionality to existing code.

This is also called metaprogramming, as one part of the program modifies another part of the
program at function-definition time.

8) The purpose of having a wrapper function is that a function decorator receives a function object to decorate,
and it must return the decorated function.
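A minimal decorator sketch showing the wrapper function being returned (the names log_calls and add are illustrative):

```python
import functools

def log_calls(func):
    # The decorator receives a function object to decorate...
    @functools.wraps(func)  # preserve func's name and docstring
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    # ...and must return the decorated (wrapped) function.
    return wrapper

@log_calls
def add(a, b):
    return a + b

print(add(2, 3))  # prints "calling add", then 5
```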
9) Generator is a function that returns an object (iterator) which we can iterate over (one value at a time).
Python generators are a simple way of creating iterators. All the overhead we mentioned above are
automatically handled by generators in Python.
10) Uses of pickling:
storing Python objects in a database;
converting an arbitrary Python object to a string so that it can be used as a dictionary key (e.g. for caching &
memoization);
sending Python data over a TCP connection in a multi-core or distributed system (marshalling).
11) List Comprehension –

The loop version:

new_list = []
for i in old_list:
    if filter(i):
        new_list.append(expressions(i))

The equivalent list comprehension:

new_list = [expressions(i) for i in old_list if filter(i)]

General form: [ expression for item in list if conditional ]

List comprehensions provide a concise way to create lists.

18) The purpose of zip() is to map the same index of multiple containers so that they can be used as a
single entity.
Syntax :
zip(*iterators)
Parameters :
Python iterables or containers ( list, string etc )
Return Value :
Returns a single iterator object, having mapped values from all the
containers.

# Python code to demonstrate the working of zip()

# initializing lists
name = ["Manjeet", "Nikhil", "Shambhavi", "Astha"]
roll_no = [4, 1, 3, 2]
marks = [40, 50, 60, 70]

# using zip() to map values
mapped = zip(name, roll_no, marks)

# converting values to print as set
mapped = set(mapped)

# printing resultant values
print("The zipped result is : ", end="")
print(mapped)
Output:
The zipped result is : {('Shambhavi', 3, 60), ('Astha', 2, 70),
('Manjeet', 4, 40), ('Nikhil', 1, 50)}
How to unzip?
Unzipping means converting the zipped values back to the individual self as they were. This is done with the help of
“*” operator.

# Python code to demonstrate the working of unzip

# initializing lists
name = ["Manjeet", "Nikhil", "Shambhavi", "Astha"]
roll_no = [4, 1, 3, 2]
marks = [40, 50, 60, 70]

# using zip() to map values
mapped = zip(name, roll_no, marks)

# converting values to print as list
mapped = list(mapped)

# printing resultant values
print("The zipped result is : ", end="")
print(mapped)

print("\n")

# unzipping values
namz, roll_noz, marksz = zip(*mapped)

print("The unzipped result: \n", end="")

# printing initial lists
print("The name list is : ", end="")
print(namz)
print("The roll_no list is : ", end="")
print(roll_noz)
print("The marks list is : ", end="")
print(marksz)
Output:
The zipped result is : [('Manjeet', 4, 40), ('Nikhil', 1, 50),
('Shambhavi', 3, 60), ('Astha', 2, 70)]

The unzipped result:


The name list is : ('Manjeet', 'Nikhil', 'Shambhavi', 'Astha')
The roll_no list is : (4, 1, 3, 2)
The marks list is : (40, 50, 60, 70)

19) Tuples are stored in a single block of memory. Tuples are immutable, so they don't require extra space to store
new objects. Lists are allocated in two blocks: a fixed one with all the Python object information and a variable-sized
block for the data. This is why creating a tuple is faster than creating a list.
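A quick way to see the size difference (exact byte counts vary by Python version and platform, so only the comparison is shown):

```python
import sys

t = (1, 2, 3)
l = [1, 2, 3]

# A tuple carries less bookkeeping overhead than a list of the same items.
print(sys.getsizeof(t), sys.getsizeof(l))
assert sys.getsizeof(t) < sys.getsizeof(l)
```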

20) __init__ is a special method in Python classes, it is the constructor method for a class.

21) Object References - Once we have some data types, the next thing we need are variables in which to store them.
Python doesn't have variables as such, but instead has object references. When it comes to immutable
objects like ints and strs, there is no discernible difference between a variable and an object reference.
As for mutable objects, there is a difference, but it rarely matters in practice. We will use the terms variable and object
reference interchangeably. Let's look at a few tiny examples, and then discuss some of the details.

x = "blue"
y = "green"
z = x

The syntax is simply objectReference = value. There is no need for predeclaration and no need to specify the value's
type. When Python executes the first statement it creates a str object with the text "blue", and creates an object
reference called x that refers to the str object. For all practical purposes we can say that "variable x has been
assigned the 'blue' string". The second statement is similar. The third statement creates a new object reference called
z and sets it to refer to the same object that the x object reference refers to (in this case the str containing the text
"blue"). The = operator is not the same as the variable assignment operator in some other languages: it
binds an object reference to an object in memory. If the object reference already exists, it is simply re-bound to refer to
the object on the right of the = operator; if the object reference does not exist, it is created by the = operator.

1) Can Python run multiple threads?

2) Why does Python have a Gil?

3) What is the difference between multithreading and multiprocessing

4) What is a Python lock?


5) Does Python have polymorphism?

6) Late binding in python

7)
def extendList(val, list=[]):
    list.append(val)
    return list

list1 = extendList(10)
list2 = extendList(123, [])
list3 = extendList('a')

print("list1 = %s" % list1)
print("list2 = %s" % list2)
print("list3 = %s" % list3)

OUTPUT -
list1 = [10, 'a']
list2 = [123]
list3 = [10, 'a']

def multipliers():
    return [lambda x: i * x for i in range(4)]

print([m(2) for m in multipliers()])

OUTPUT - The output of the above code will be [6, 6, 6, 6] (not [0, 2, 4, 6]).
The reason for this is that Python's closures are late binding. This means that the values of variables used in closures
are looked up at the time the inner function is called. So, as a result, when any of the functions returned
by multipliers() is called, the value of i is looked up in the surrounding scope at that time. By then, regardless of
which of the returned functions is called, the for loop has completed and i is left with its final value of 3. Therefore,
every returned function multiplies the value it is passed by 3, so since a value of 2 is passed in the above code, they
all return a value of 6 (i.e., 3 x 2).

MAGIC FUNCTION - Dunder or magic methods in Python are the methods having two prefix and
suffix underscores in the method name. Dunder here means "Double Under (Underscores)".
A special magic method (__call__) in Python allows instances of your classes to behave as if they
were functions, so that you can "call" them, pass them to functions that take functions as
arguments, and so on.
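A sketch of the callable-instance idea via __call__ (the class name Multiplier is illustrative):

```python
class Multiplier:
    """Instances behave like functions thanks to __call__."""
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, x):
        return self.factor * x

double = Multiplier(2)
print(double(21))                    # 42: the instance is "called" like a function
print(list(map(double, [1, 2, 3])))  # [2, 4, 6]: and passed where a function is expected
```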
https://www.toptal.com/python/interview-questions

how to check the email id is valid or not in python -
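One common approach is a regular-expression sketch; the pattern below is a deliberate simplification, not a full RFC 5322 validator:

```python
import re

# Simplified pattern: local part, '@', domain with a dot-separated TLD.
EMAIL_RE = re.compile(r'[\w.+-]+@[\w-]+(\.[\w-]+)+')

def is_valid_email(addr):
    return bool(EMAIL_RE.fullmatch(addr))

print(is_valid_email("user.name@example.com"))  # True
print(is_valid_email("not-an-email"))           # False
```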

Deloitte:

1) Inheritance , types of it
2) To print the reverse of natural num if the number is odd , else the whole range
3) What is faster – left join or inner join

QUEZX -

1) To specify the columns in excel using pandas

Create, write to and save a workbook:

>>> df1 = pd.DataFrame([['a', 'b'], ['c', 'd']],

... index=['row 1', 'row 2'],

... columns=['col 1', 'col 2'])

>>> df1.to_excel("output.xlsx") # doctest: +SKIP

EXTRA INFO -

To specify the sheet name:

>>> df1.to_excel("output.xlsx",

... sheet_name='Sheet_name_1') # doctest: +SKIP

If you wish to write to more than one sheet in the workbook, it is necessary to specify an ExcelWriter object:

>>> df2 = df1.copy()

>>> with pd.ExcelWriter('output.xlsx') as writer: # doctest: +SKIP

... df1.to_excel(writer, sheet_name='Sheet_name_1')


... df2.to_excel(writer, sheet_name='Sheet_name_2')

To set the library that is used to write the Excel file, you can pass the engine keyword (the default engine is
automatically chosen depending on the file extension):

>>> df1.to_excel('output1.xlsx', engine='xlsxwriter') # doctest: +SKIP

2) To find the middle index of linked list


3) To find the cycle in linked list
4) To reset the index in pandas
5) To reset the index in pandas
6) Factorial number in python
7) To add different length of columns in excel sheet using pandas
8) Candidate Key Vs Super Key
9) Primary Key Vs Unique key
10) How to determine which will be candidate key, unique key and primary key
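Questions 2 and 3 above can both be sketched with the classic fast/slow two-pointer technique (the Node class and list values are illustrative):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

def middle(head):
    # The fast pointer moves two steps per slow step; when fast reaches
    # the end, slow is at the middle node.
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow

def has_cycle(head):
    # Floyd's cycle detection: if there is a cycle, fast eventually meets slow.
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False

# Build 1 -> 2 -> 3 -> 4 -> 5
nodes = [Node(i) for i in range(1, 6)]
for a, b in zip(nodes, nodes[1:]):
    a.next = b

print(middle(nodes[0]).val)   # 3
print(has_cycle(nodes[0]))    # False
nodes[-1].next = nodes[1]     # introduce a cycle
print(has_cycle(nodes[0]))    # True
```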

Dictionary Key – Should be hashable and immutable. Therefore a tuple, number or string can be used as a key.
Similarly, an object (instance of a class) can be used as a key.

The following code works because, by default, class instances are hashable:

class Foo(object):
    def __init__(self):
        pass

myinstance = Foo()
mydict = {myinstance: 'Hello world'}

print(mydict[myinstance])
Output : Hello world

TOPICS –
1) Inheritance in Python - https://www.geeksforgeeks.org/inheritance-in-python/
2) Magic Function – In Sheet
3) Monkey Patching - In Python, the term monkey patch refers to dynamic (or run-time) modifications
of a class or module. In Python, we can actually change the behavior of code at run-time.

# monk.py
class A:
    def func(self):
        print("func() is being called")
We use the above module (monk) in the code below and change the behavior of func() at run-time by assigning a
different value.

import monk

def monkey_f(self):
    print("monkey_f() is being called")

# replacing "func" with "monkey_f"
monk.A.func = monkey_f
obj = monk.A()

# calling "func", which has been replaced
# by "monkey_f()"
obj.func()

4) Operation Overloading
5) OS module - The OS module in Python provides a way of using operating system dependent
functionality.
import os

Execute a shell command:
os.system(command)

The user's environment (a mapping, not a function):
os.environ

Return the current working directory:
os.getcwd()

Return the real group id of the current process:
os.getgid()

Return the current process's user id:
os.getuid()

Return the real process ID of the current process:
os.getpid()

Set the current numeric umask and return the previous umask:
os.umask(mask)

Return information identifying the current operating system:
os.uname()

Change the root directory of the current process to path:
os.chroot(path)

Return a list of the entries in the directory given by path:
os.listdir(path)

Create a directory named path with numeric mode mode:
os.mkdir(path)

Recursive directory creation function:
os.makedirs(path)

Remove (delete) the file path:
os.remove(path)

Remove directories recursively:
os.removedirs(path)

Rename the file or directory src to dst:
os.rename(src, dst)

Remove (delete) the directory path:
os.rmdir(path)

6) Database connection
7) Accumulate
8) Zip vs iZip
9) Numpy
10) Pandas
11) SciPy
12) MRO
13) Namespaces
14) Scope

Notes –

A list containing a list can't be converted into a set; it throws an "unhashable type: 'list'" error.

l1 = [1, 2, 2, 3]
l2 = set(l1)
print(l2)      # {1, 2, 3}

l1 = [1, 2, 3, [1, 2]]
l2 = set(l1)   # TypeError: unhashable type: 'list'

Namespaces and Scope in Python

What is namespace:

A namespace is a system to have a unique name for each and every object in Python. An object might be a variable
or a method. Python itself maintains a namespace in the form of a Python dictionary. Consider, as an analogy, a
directory-file system structure: one can have multiple directories each containing a file with the same name, but one
can still get to the file one wishes just by specifying its absolute path.

A real-world analogy: the role of a namespace is like a surname. One might not find a single "Alice" in the class; there
might be multiple "Alice"s, but when you particularly ask for "Alice Lee" or "Alice Clark" (with a surname), there will be
only one.

On similar lines, the Python interpreter understands what exact method or variable one is trying to point to in the
code, depending upon the namespace. The division of the word itself gives a little more information: Name
(a unique identifier) + Space (something related to scope). Here, a name might be of
any Python method or variable, and space depends upon the location from which one is trying to access that variable
or method.

Types of namespaces :

When the Python interpreter runs without any user-defined modules, methods, classes, etc., some functions like
print() and id() are always present; these live in the built-in namespace. When a user creates a module, a global namespace
gets created; later, creation of local functions creates local namespaces. The built-in namespace encompasses the
global namespace, and the global namespace encompasses the local namespace.
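A quick sketch of the three levels, using the introspection helpers globals() and locals() and the builtins module (variable names are illustrative):

```python
import builtins

x = 1                      # lives in the module's global namespace

def f():
    y = 2                  # lives in f's local namespace
    return 'y' in locals()

assert 'x' in globals()            # global namespace
assert f()                         # local namespace
assert hasattr(builtins, 'print')  # built-in namespace
print("all three namespace levels inspected")
```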

Lifetime of a namespace :

The lifetime of a namespace depends upon the scope of its objects; if the scope of an object ends, the lifetime of that
namespace comes to an end. Hence, it is not possible to access an inner namespace's objects from an outer
namespace.

Example:

# var1 is in the global namespace
var1 = 5

def some_func():
    # var2 is in the local namespace
    var2 = 6

    def some_inner_func():
        # var3 is in the nested local namespace
        var3 = 7

The same object name can be present in multiple namespaces, as isolation between the
same names is maintained by their namespaces.

But in some cases one might be interested in updating or processing the global variable only; as shown in the following
example, one should mark it explicitly as global and then update or process it.


# Python program processing a global variable

count = 5

def some_method():
    global count
    count = count + 1
    print(count)

some_method()

Output:

6

Scope of Objects in Python :


Scope refers to the coding region from which a particular Python object is accessible. Hence one cannot access any
particular object from just anywhere in the code; the access has to be allowed by the scope of the object.

Example 1:


# Python program showing a scope of object

def some_func():
    print("You are welcome to some_func")
    print(var)

some_func()

Output:

You are welcome to some_func

Traceback (most recent call last):

File "/home/ab866b415abb0279f2e93037ea5d6de5.py", line 4, in

some_func()

File "/home/ab866b415abb0279f2e93037ea5d6de5.py", line 3, in some_func

print(var)

NameError: name 'var' is not defined

As can be seen in the above output, the function some_func() is in the scope of main, but var is not available in the
scope of main. Similarly, in the case of inner functions, outer functions don't have access to inner local variables,
which are local to inner functions and out of scope for outer functions. Let's take an example for a detailed
understanding of the same:

Example 2:


# Python program showing a scope of object

def some_func():
    print("Inside some_func")

    def some_inner_func():
        var = 10
        print("Inside inner function, value of var:", var)

    some_inner_func()
    print("Try printing var from outer function: ", var)

some_func()

Output:

Inside some_func

Inside inner function, value of var: 10

Traceback (most recent call last):

File "/home/1eb47bb3eac2fa36d6bfe5d349dfcb84.py", line 8, in

some_func()

File "/home/1eb47bb3eac2fa36d6bfe5d349dfcb84.py", line 7, in some_func

print("Try printing var from outer function: ",var)

NameError: name 'var' is not defined

argparse

JSON

Bank Of America –

1) Namespaces
2) Encapsulation and Abstraction
3) Encoding – sha1 and sha2
4) Property
5) Inheritance

Johnson –

1) Lambda significance
2) Callback
3) Utility Function
4) Pandas
5) Networking

How to make a module private?

Private function – by convention, prefix the name with a single underscore (e.g. _helper). Python does not enforce
privacy; such names are simply skipped by 'from module import *' unless listed in __all__.
