Python Ques
Python provides a built-in module, “copy“, whose functions let you make deep or shallow copies of mutable objects.
Assignment statements in Python do not copy objects, they create bindings between a target and an object. For
collections that are mutable or contain mutable items, a copy is sometimes needed so one can change one copy
without changing the other.
Deep copy
In case of deep copy, the object and everything it contains are recursively copied into the other object. This means that any changes made to the copy do not reflect in the original object.
In Python, this is implemented using the copy.deepcopy() function.
import copy

# Python code to demonstrate deep copy
li1 = [1, 2, [3, 5], 4]
li2 = copy.deepcopy(li1)

# the change made to li2's nested list is NOT reflected in li1
li2[2][0] = 7

print("The new list of elements after deep copying")
for i in range(0, len(li2)):
    print(li2[i], end=" ")
print()
print("The original list is unchanged:", li1)
Shallow copy
In case of shallow copy, references to the items of the object are copied into the other object. This means that any changes made to the mutable items of the copy do reflect in the original object.
In Python, this is implemented using the copy.copy() function.
import copy

# Python code to demonstrate shallow copy
li1 = [1, 2, [3, 5], 4]
li2 = copy.copy(li1)

# the change made through li2's nested list IS reflected in li1
li2[2][0] = 7
print("The original list after shallow copying:", li1)
Note:
>>> y = tuple([1, 2, 3])
>>> y
(1, 2, 3)
>>> y[0] = 5 # Not allowed!
TypeError: 'tuple' object does not support item assignment
>>> a = [1, 2, 3]
>>> b = [4, 5, 6]
>>> t = (a, b)
>>> t
([1, 2, 3], [4, 5, 6])
You are allowed to modify the internal lists:
>>> t[0][0] = 5
>>> t
([5, 2, 3], [4, 5, 6])
Another insight is that Python's containers don't actually contain anything. Instead, they keep references to other
objects. Likewise, Python's variables aren't like variables in compiled languages; instead the variable names are just
keys in a namespace dictionary where they are associated with a corresponding object. Ned Batchelder explains
this nicely in his blog post. Either way, objects only know their reference count; they don't know what those
references are (variables, containers, or the Python internals).
Together, these two insights explain your mystery (why an immutable tuple "containing" a list seems to change when
the underlying list changes). In fact, the tuple did not change (it still has the same references to other objects that it
did before). The tuple could not change (because it did not have mutating methods). When the list changed, the tuple
didn't get notified of the change (the list doesn't know whether it is referred to by a variable, a tuple, or another list).
While we're on the topic, here are a few other thoughts to help complete your mental model of what tuples are, how
they work, and their intended use:
1. Tuples are characterized less by their immutability and more by their intended purpose.
Tuples are Python's way of collecting heterogeneous pieces of information under one roof. For example, s =
('www.python.org', 80) brings together a string and a number so that the host/port pair can be passed around as
a socket, a composite object. Viewed in that light, it is perfectly reasonable to have mutable components.
2. Immutability goes hand-in-hand with another property, hashability. But hashability isn't an absolute property. If
one of the tuple's components isn't hashable, then the overall tuple isn't hashable either. For example, t = ('red',
[10, 20, 30]) isn't hashable.
The last example shows a 2-tuple that contains a string and a list. The tuple itself isn't mutable (i.e. it doesn't have
any methods for changing its contents). Likewise, the string is immutable because strings don't have any
mutating methods. The list object does have mutating methods, so it can be changed. This shows that mutability is a
property of an object type -- some objects have mutating methods and some don't. This doesn't change just because
the objects are nested.
Remember two things. First, immutability is not magic -- it is merely the absence of mutating methods. Second,
objects don't know what variables or containers refer to them -- they only know the reference count.
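The hashability point above can be checked directly; a tuple is hashable only if everything it contains is hashable (a minimal sketch):

```python
t1 = ('red', (10, 20, 30))  # every component hashable -> the tuple is hashable
t2 = ('red', [10, 20, 30])  # contains a list -> the tuple is NOT hashable

print(hash(t1) == hash(t1))  # → True (a stable hash exists)

try:
    hash(t2)
except TypeError as exc:
    print("unhashable:", exc)
```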
*args
The special syntax *args in function definitions in python is used to pass a variable number of arguments to a
function. It is used to pass a non-keyworded, variable-length argument list.
● The syntax is to use the symbol * to take in a variable number of arguments; by convention, it is often used
with the word args.
● What *args allows you to do is take in more arguments than the number of formal arguments that you
previously defined. With *args, any number of extra arguments can be tacked on to your current formal
parameters (including zero extra arguments).
● For example: we want to make a multiply function that takes any number of arguments and is able to multiply them all together. It can be done using *args.
● Using the *, the variable that we associate with the * becomes an iterable meaning you can do things like
iterate over it, run some higher order functions such as map and filter, etc.
● Example for usage of *args:
# Python program to illustrate
# *args for a variable number of arguments
def myFun(*argv):
    for arg in argv:
        print(arg)

myFun('Hello', 'Welcome', 'to', 'GeeksforGeeks')
● Output:
Hello
Welcome
to
GeeksforGeeks
**kwargs
The special syntax **kwargs in function definitions in python is used to pass a keyworded, variable-length argument
list. We use the name kwargs with the double star. The reason is because the double star allows us to pass through
keyword arguments (and any number of them).
● A keyword argument is where you provide a name to the variable as you pass it into the function.
● One can think of the kwargs as being a dictionary that maps each keyword to the value that we pass alongside it. (Before Python 3.7, dictionaries did not preserve insertion order, which is why kwargs could print out in an apparently arbitrary order; since 3.7 they print in the order they were passed.)
● Example for usage of **kwargs:
# Python program to illustrate
# **kwargs for a variable number of keyword arguments
def myFun(**kwargs):
    for key, value in kwargs.items():
        print("%s == %s" % (key, value))
# Driver code
myFun(first ='Geeks', mid ='for', last='Geeks')
● Output (in Python 3.7+, in the order the keyword arguments were passed):
first == Geeks
mid == for
last == Geeks
# Driver code: a positional argument requires an extra formal parameter
def myFun(arg1, **kwargs):
    print(arg1)
    for key, value in kwargs.items():
        print("%s == %s" % (key, value))

myFun("Hi", first='Geeks', mid='for', last='Geeks')
● Output:
Hi
first == Geeks
mid == for
last == Geeks
(calling the earlier one-parameter myFun with a positional argument such as "Hi" would raise a TypeError)
3) What is lambda function? how to use it? Can we use recursive function inside lambda function?
Python allows you to create anonymous functions, i.e. functions having no name, using a facility called the lambda function. Lambda functions are small functions, usually not more than a line. A lambda can have any number of arguments, just like a normal function. The body of a lambda function is very small and consists of only one expression. The result of the expression is the value when the lambda is applied to an argument. Also, there is no need for any return statement in a lambda function.
To create a lambda function, first write the keyword lambda, followed by one or more arguments separated by commas, followed by a colon sign ( : ), followed by a single-line expression.
r = lambda x, y: x * y
r(12, 3) # call the lambda function
Expected Output:
36
Here we are using two arguments x and y , expression after colon is the body of the lambda function. As you can see
lambda function has no name and is called through the variable it is assigned to.
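As for the recursion part of the question: a lambda has no name of its own, so it can only recurse through the variable it is assigned to (a minimal sketch):

```python
# a lambda can call itself only via the name it is bound to
factorial = lambda n: 1 if n <= 1 else n * factorial(n - 1)
print(factorial(5))  # → 120
```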
map function - basic syntax
The map function expects a function object and any number of iterables, like list, dictionary, etc. It executes the function object for each element in the sequence and returns a list (an iterator in Python 3) of the elements modified by the function object.
Example:
def multiply2(x):
    return x * 2

map(multiply2, [1, 2, 3, 4])  # Output [2, 4, 6, 8]
In the above example, map executes the multiply2 function for each element in the list, i.e. 1, 2, 3, 4, and returns [2, 4, 6, 8].
Let’s see how we can write the above code using map and lambda.
In the dict example, each dict of dict_a is passed as a parameter to the lambda function, and the result of the lambda expression for each dict is given as output.
In the two-list example, each i-th element of list_a and list_b is passed as an argument to the lambda function.
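The code for the examples referred to above is missing from these notes; a sketch of what they might look like (the names dict_a, list_a and list_b are taken from the text):

```python
# map + lambda version of the multiply2 example
print(list(map(lambda x: x * 2, [1, 2, 3, 4])))       # → [2, 4, 6, 8]

# each dict of dict_a is passed to the lambda in turn
dict_a = [{'name': 'python', 'points': 10}, {'name': 'java', 'points': 8}]
print(list(map(lambda d: d['name'], dict_a)))         # → ['python', 'java']

# the i-th elements of list_a and list_b are passed together
list_a = [1, 2, 3]
list_b = [10, 20, 30]
print(list(map(lambda x, y: x + y, list_a, list_b)))  # → [11, 22, 33]
```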
In Python 3, the map function returns an iterator or map object which gets lazily evaluated, just like the zip function. Lazy evaluation is explained in more detail in the zip function article.
We can neither access the elements of the map object by index nor use len() to find the length of the map object.
We can force-convert the map output, i.e. the map object, to a list as shown below:
map_output = map(lambda x: x*2, [1, 2, 3, 4])
print(map_output)  # Output: <map object at 0x04D6BAB0>
list_map_output = list(map_output)
print(list_map_output)  # Output: [2, 4, 6, 8]
A function is a piece of code that is called by name. It can be passed data to operate on (i.e., the parameters) and
can optionally return data (the return value). All data that is passed to a function is explicitly passed.
A method is a piece of code that is called by name and is associated with an object. In most respects it is identical to a function except for two key differences: a method is implicitly passed the object on which it was invoked (self), and it can operate on data contained within the class.
Conditional (ternary) expression notes:
● The order of the arguments in the operator is different from other languages like C/C++ (it is value_if_true if condition else value_if_false).
● Conditional expressions have the lowest priority amongst all Python operations.
Sets are significantly faster when it comes to determining if an object is present in the set (as in x in s), but are
slower than lists when it comes to iterating over their contents.
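A quick way to see the membership-test difference (a minimal sketch; both containers support `in`, but the set uses a hash lookup while the list scans element by element):

```python
s = set(range(100000))   # O(1) average membership test
l = list(range(100000))  # O(n) membership test

print(99999 in s)  # → True (hash lookup, fast)
print(99999 in l)  # → True (scans the whole list)
```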
8) Program : Given a string: “ab123$zc2” Output: “cz123$ba2” (Note: without using direct swapping. Code should run
for any given string)
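One possible solution: collect the letters, then emit them in reverse order while leaving every non-letter character in place (no direct swapping involved):

```python
def reverse_letters(s):
    # gather the alphabetic characters in their original order
    letters = [c for c in s if c.isalpha()]
    out = []
    for c in s:
        if c.isalpha():
            out.append(letters.pop())  # pop() from the end == reversed order
        else:
            out.append(c)              # digits/symbols stay where they are
    return ''.join(out)

print(reverse_letters("ab123$zc2"))  # → cz123$ba2
```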
9) Uses of Pandas
pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with
“relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing
practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and
flexible open source data analysis / manipulation tool available in any language. It is already well on its way toward
this goal.
pandas is well suited for many different kinds of data:
● Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels
● Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure
The two primary data structures of pandas, Series (1-dimensional) and DataFrame (2-dimensional), handle the vast
majority of typical use cases in finance, statistics, social science, and many areas of engineering. For R
users, DataFrame provides everything that R’s data.frame provides and much more. pandas is built on top
of NumPy and is intended to integrate well within a scientific computing environment with many other 3rd party
libraries.
Here are just a few of the things that pandas does well:
● Easy handling of missing data (represented as NaN) in floating point as well as non-floating point
data
● Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional
objects
● Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the
user can simply ignore the labels and let Series, DataFrame, etc. automatically align the data for
you in computations
● Powerful, flexible group by functionality to perform split-apply-combine operations on data sets, for
both aggregating and transforming data
● Make it easy to convert ragged, differently-indexed data in other Python and NumPy data
structures into DataFrame objects
● Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
● Robust IO tools for loading data from flat files (CSV and delimited), Excel files, databases, and
saving / loading data from the ultrafast HDF5 format
● Time series-specific functionality: date range generation and frequency conversion, moving window
statistics, moving window linear regressions, date shifting and lagging, etc.
Many of these principles are here to address the shortcomings frequently experienced using other languages /
scientific research environments. For data scientists, working with data is typically divided into multiple stages:
munging and cleaning data, analyzing / modeling it, then organizing the results of the analysis into a form suitable for
plotting or tabular display. pandas is the ideal tool for all of these tasks.
● pandas is fast. Many of the low-level algorithmic bits have been extensively tweaked in Cython code. However, as with anything else, generalization usually sacrifices performance. So if you focus on one feature for your application you may be able to create a faster specialized tool.
● pandas is a dependency of statsmodels, making it an important part of the statistical computing
ecosystem in Python.
● pandas has been used extensively in production in financial applications.
10) Bubble sort in python
def bubbleSort(alist):
    for passnum in range(len(alist) - 1, 0, -1):
        for i in range(passnum):
            if alist[i] > alist[i + 1]:
                # swap adjacent out-of-order elements
                alist[i], alist[i + 1] = alist[i + 1], alist[i]

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
bubbleSort(alist)
print(alist)
15) Program:
1
232
34543
4567654
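One way to produce the pattern above: row i prints the numbers from i up to 2i-1 and then back down to i again (a sketch):

```python
def pattern_row(i):
    # ascending part: i, i+1, ..., 2i-1
    up = list(range(i, 2 * i))
    # mirror it back down, skipping the peak element
    row = up + up[-2::-1]
    return "".join(str(n) for n in row)

for i in range(1, 5):
    print(pattern_row(i))
# 1
# 232
# 34543
# 4567654
```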
16) Dictionary:
A dictionary is a collection which is changeable and indexed by keys (unordered before Python 3.7; since 3.7, dictionaries preserve insertion order). In Python, dictionaries are written with curly brackets, and they have keys and values.
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
print(thisdict)
x = thisdict["model"]
List:
List is the most versatile Sequence available in Python, which can be written as a list of comma-separated values
between square brackets.
● Elements present in List maintain their order.
● The elements present in list can be of any type (int, float, string, tuple etc.).
● If you have a collection of items that you access by position or iterate over in order, use List.
● Where you have to deal with values which can be changed, use List.
Dictionary:
Dictionary is a collection of key-value pairs (unordered before Python 3.7). Dictionaries are used to handle large amounts of data efficiently via key lookup.
● When you are dealing with unique keys and you are mapping values to the keys, use Dictionary.
Memory management in Python involves a private heap containing all Python objects and data structures. The
management of this private heap is ensured internally by the Python memory manager. The Python memory
manager has different components which deal with various dynamic storage management aspects, like sharing,
segmentation, preallocation or caching.
At the lowest level, a raw memory allocator ensures that there is enough room in the private heap for storing all
Python-related data by interacting with the memory manager of the operating system. On top of the raw memory
allocator, several object-specific allocators operate on the same heap and implement distinct memory management
policies adapted to the peculiarities of every object type. For example, integer objects are managed differently within
the heap than strings, tuples or dictionaries because integers imply different storage requirements and speed/space
tradeoffs. The Python memory manager thus delegates some of the work to the object-specific allocators, but
ensures that the latter operate within the bounds of the private heap.
It is important to understand that the management of the Python heap is performed by the interpreter itself and that
the user has no control over it, even if they regularly manipulate object pointers to memory blocks inside that heap.
The allocation of heap space for Python objects and other internal buffers is performed on demand by the Python
memory manager through the Python/C API functions listed in this document.
To avoid memory corruption, extension writers should never try to operate on Python objects with the functions
exported by the C library: malloc(), calloc(), realloc() and free(). This will result in mixed calls between the C
allocator and the Python memory manager with fatal consequences, because they implement different algorithms and
operate on different heaps. However, one may safely allocate and release memory blocks with the C library allocator
for individual purposes, as shown in the following example:
PyObject *res;
char *buf = (char *) malloc(BUFSIZ); /* for I/O */

if (buf == NULL)
    return PyErr_NoMemory();
/* ...Do some I/O operation involving buf... */
res = PyString_FromString(buf);
free(buf); /* malloc'ed */
return res;
In this example, the memory request for the I/O buffer is handled by the C library allocator. The Python memory
manager is involved only in the allocation of the string object returned as a result.
In most situations, however, it is recommended to allocate memory from the Python heap specifically because the
latter is under control of the Python memory manager. For example, this is required when the interpreter is extended
with new object types written in C. Another reason for using the Python heap is the desire to inform the Python
memory manager about the memory needs of the extension module. Even when the requested memory is used
exclusively for internal, highly-specific purposes, delegating all memory requests to the Python memory manager
causes the interpreter to have a more accurate image of its memory footprint as a whole. Consequently, under
certain circumstances, the Python memory manager may or may not trigger appropriate actions, like garbage
collection, memory compaction or other preventive procedures. Note that by using the C library allocator as shown in
the previous example, the allocated memory for the I/O buffer escapes completely the Python memory manager.
1. Memory management in python is managed by Python private heap space. All Python objects and data
structures are located in a private heap. The programmer does not have access to this private heap. The
python interpreter takes care of this instead.
2. The allocation of heap space for Python objects is done by Python’s memory manager. The core API gives
access to some tools for the programmer to code.
3. Python also has an inbuilt garbage collector, which recycles all the unused memory so that it can be made available to the heap space.
1. Python has a multi-threading package but if you want to multi-thread to speed your code up, then it’s usually
not a good idea to use it.
2. Python has a construct called the Global Interpreter Lock (GIL). The GIL makes sure that only one of your
‘threads’ can execute at any one time. A thread acquires the GIL, does a little work, then passes the GIL
onto the next thread.
3. This happens very quickly so to the human eye it may seem like your threads are executing in parallel, but
they are really just taking turns using the same CPU core.
4. All this GIL passing adds overhead to execution. This means that if you want to make your code run faster
then using the threading package often isn’t a good idea.
Resource
‘resource’ module for finding the current (Resident) memory consumption of your program
[Resident memory is the actual RAM your program is using]
objgraph
‘objgraph’ is a useful module that shows you the objects that are currently in memory
[objgraph documentation and examples are available at: https://mg.pov.lt/objgraph/]
import objgraph
import random

class Foo(object):
    def __init__(self):
        self.val = None

    def __str__(self):
        return "foo - val: {0}".format(self.val)

def f():
    l = []
    for i in range(3):
        foo = Foo()
        # print("id of foo: {0}".format(id(foo)))
        # print("foo is: {0}".format(foo))
        l.append(foo)
    return l

def main():
    d = {}
    l = f()
    d['k'] = l
    print("list l has {0} objects of type Foo()".format(len(l)))
    objgraph.show_most_common_types()
    objgraph.show_backrefs(random.choice(objgraph.by_type('Foo')),
                           filename="foo_refs.png")
    objgraph.show_refs(d, filename='sample-graph.png')

if __name__ == "__main__":
    main()
It is used for serializing and de-serializing a Python object structure. Any object in python can be pickled so that it can
be saved on disk. What pickle does is that it “serializes” the object first before writing it to file. Pickling is a way to
convert a python object (list, dict, etc.) into a character stream.
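A minimal round trip with the pickle module (serialising to bytes in memory rather than to a file):

```python
import pickle

data = {'numbers': [1, 2, 3], 'name': 'example'}

# serialize ("pickle") the object into a byte stream
blob = pickle.dumps(data)

# deserialize ("unpickle") it back into an equal object
restored = pickle.loads(blob)
print(restored == data)  # → True
```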
1. Whenever Python exits, those Python objects that have circular references to other objects, or objects that are referenced from the global namespaces, are not always de-allocated or freed.
2. It is impossible to de-allocate those portions of memory that are reserved by the C library.
3. On exit, because it has its own efficient clean-up mechanism, Python will try to de-allocate/destroy every other object.
27) Generators are iterators, but you can only iterate over them once. It’s because they do not store all the values in
memory, they generate the values on the fly. You use them by iterating over them, either with a ‘for’ loop or by
passing them to any function or construct that iterates. Most of the time generators are implemented as functions.
However, they do not return a value, they yield it. Here is a simple example of a generator function:
def generator_function():
    for i in range(10):
        yield i

for value in generator_function():
    print(value)
# Output: 0
# 1
# 2
# 3
# 4
# 5
# 6
# 7
# 8
# 9
It is not really useful in this case. Generators are best for calculating large sets of results (particularly calculations
involving loops themselves) where you don’t want to allocate the memory for all results at the same time. Many
Standard Library functions that return lists in Python 2 have been modified to return generators in Python 3
because generators require fewer resources.
Generator is a function that returns an object (iterator) which we can iterate over (one value at a time).
Why generators are used in Python?
There are several reasons which make generators an attractive implementation to go for.
1. Easy to Implement
Generators can be implemented in a clear and concise way as compared to their iterator-class counterpart. The following is an example of implementing a sequence of powers of 2 using an iterator class.
class PowTwo:
    def __init__(self, max=0):
        self.max = max

    def __iter__(self):
        self.n = 0
        return self

    def __next__(self):
        if self.n > self.max:
            raise StopIteration
        result = 2 ** self.n
        self.n += 1
        return result
This was lengthy. Now let's do the same using a generator function.
def pow_two_gen(max=0):
    n = 0
    while n <= max:
        yield 2 ** n
        n += 1
Since, generators keep track of details automatically, it was concise and much cleaner in implementation.
2. Memory Efficient
A normal function to return a sequence will create the entire sequence in memory before returning the result. This is
an overkill if the number of items in the sequence is very large.
Generator implementation of such sequence is memory friendly and is preferred since it only produces one item at a
time.
3. Represent Infinite Stream
Generators are an excellent medium to represent an infinite stream of data. Infinite streams cannot be stored in memory, and since generators produce only one item at a time, they can represent an infinite stream of data.
The following example can generate all the even numbers (at least in theory).
def all_even():
    n = 0
    while True:
        yield n
        n += 2
4. Pipelining Generators
Generators can be used to pipeline a series of operations. This is best illustrated using an example.
Suppose we have a log file from a famous fast food chain. The log file has a column (the 4th column) that keeps track of the number of pizzas sold every hour, and we want to sum it to find the total pizzas sold in 5 years.
Assume everything is in string form and numbers that are not available are marked as 'N/A'. A generator implementation of this could be as follows.
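The implementation itself is missing from these notes; a sketch under the stated assumptions (whitespace-separated columns, the 4th column holds the count, missing values marked 'N/A', and the log lines here standing in for the real file):

```python
log_lines = [
    "2016-01-01 10:00 store1 12",
    "2016-01-01 11:00 store1 N/A",
    "2016-01-01 12:00 store1 31",
]  # stand-in for the lines of the real log file

# each step is a generator, so no intermediate list is ever built
pizza_col = (line.split()[3] for line in log_lines)
per_hour = (int(count) for count in pizza_col if count != 'N/A')
print("Total pizzas sold =", sum(per_hour))  # → Total pizzas sold = 43
```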
This pipelining is efficient and easy to read (and yes, a lot cooler!).
Extra :-
A thread can be flagged as a "daemon thread". The significance of this flag is that the entire Python program
exits when only daemon threads are left. The initial value is inherited from the creating thread.
When Daemon Threads are useful
In a big project, some threads exist to do background tasks such as sending data, performing periodic garbage collection, etc. These tasks can be done by non-daemon threads, but then the main thread has to keep track of them manually. Using daemon threads, the main thread can completely forget about these tasks; they will either complete or be killed when the main thread exits.
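A minimal illustration of the flag (the background loop here is a stand-in for any periodic task):

```python
import threading
import time

def background_task():
    # runs forever; it will be killed when the main thread exits
    while True:
        time.sleep(0.1)

t = threading.Thread(target=background_task, daemon=True)
t.start()
print(t.daemon)  # → True
# the program can exit now even though background_task never returns
```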
There are several implementations of Python, for example, CPython, IronPython, RPython, etc.
Some of them have a GIL, some don't. For example, CPython has the GIL:
Applications written in programming languages with a GIL can be designed to use separate
processes to achieve full parallelism, as each process has its own interpreter and in turn has its
own GIL.
The GIL is controversial because it prevents multithreaded CPython programs from taking full
advantage of multiprocessor systems in certain situations. Note that potentially blocking or long-
running operations, such as I/O, image processing, and NumPy number crunching, happen outside
the GIL. Therefore it is only in multithreaded programs that spend a lot of time inside the GIL,
interpreting CPython bytecode, that the GIL becomes a bottleneck.
● From http://www.grouplens.org/node/244
Python has a GIL as opposed to fine-grained locking for several reasons:
● It is faster in the multi-threaded case for cpu-bound programs that do their compute-intensive
work in C libraries.
● It makes C extensions easier to write: there will be no switch of Python threads except where
you allow it to happen (i.e. between the Py_BEGIN_ALLOW_THREADS and
Py_END_ALLOW_THREADS macros).
● It makes wrapping C libraries easier. You don't have to worry about thread-safety. If the
library is not thread-safe, you simply keep the GIL locked while you call it.
● When the join method is invoked, the calling thread is blocked till the thread object on which it was called terminates.
● For example, when join() is invoked from a main thread, the main thread waits till the child thread on which join was invoked exits.
● The significance of the join() method is: if join() is not invoked, the main thread may exit before the child thread, which will result in undetermined behaviour of programs and affect program invariants and the integrity of the data on which the program operates.
● In such circumstances, the main thread may send a signal through an Event object and ask the child thread to stop.
● Exceptions:
● Calling join() on the current thread would result in a deadlock; hence a RuntimeError is raised when a thread invokes join() on itself.
● Calling join() on a thread which has not yet been started also causes a RuntimeError.
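The join()/Event interplay described above can be sketched as follows:

```python
import threading

stop_event = threading.Event()

def worker():
    # loop until the main thread signals us to stop
    while not stop_event.is_set():
        stop_event.wait(0.05)

t = threading.Thread(target=worker)
t.start()

stop_event.set()     # ask the worker to stop
t.join()             # block until the worker has exited
print(t.is_alive())  # → False
```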
● Python compiles the .py files and saves them as .pyc files, so it can reference them in subsequent invocations. The .pyc files contain the compiled bytecode of Python source files, which is what the Python interpreter compiles the source to. This code is then executed by Python's virtual machine. There's no harm in deleting them (.pyc), but they will save compilation time if you're doing lots of processing.
● Python is an interpreted language , as opposed to a compiled one, though the distinction can be blurry because
of the presence of the bytecode compiler. Compiling usually means converting to machine code which is what
runs the fastest. But interpreters take human readable text and execute it. They may do this with an intermediate
stage.
● For example, When you run myprog.py source file, the python interpreter first looks to see if any 'myprog.pyc'
(which is the byte-code compiled version of 'myprog.py') exists, and if it is more recent than 'myprog.py'. If so, the
interpreter runs it. If it does not exist, or 'myprog.py' is more recent than it (meaning you have changed the
source file), the interpreter first compiles 'myprog.py' to 'myprog.pyc'.
● There is one exception to the above example: a script that is run directly (e.g. by putting '#! /usr/bin/env python' on the first line of 'myprog.py', making it executable, and running './myprog.py' by itself) is compiled to bytecode in memory each time, and no .pyc file is saved for it; only imported modules are cached.
7) Decorator –
A decorator is a function that takes another function and extends the behavior of the latter function without explicitly modifying it. Decorators are used to add functionality to existing code.
This is also called metaprogramming, as one part of the program tries to modify another part of the program at definition time.
8) The purpose of having a wrapper function is that a function decorator receives a function object to decorate,
and it must return the decorated function.
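Points 7 and 8 together, as a minimal sketch (the names shout/greet are illustrative, not from the notes):

```python
import functools

def shout(func):
    # the decorator receives a function object to decorate ...
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # ... the wrapper extends its behaviour ...
        return func(*args, **kwargs).upper()
    return wrapper  # ... and must return the decorated function

@shout
def greet(name):
    return "hello, " + name

print(greet("world"))  # → HELLO, WORLD
```

functools.wraps keeps the decorated function's name and docstring intact, which is why it is conventional inside the wrapper pattern.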
9) Generator is a function that returns an object (iterator) which we can iterate over (one value at a time).
Python generators are a simple way of creating iterators. All the overhead we mentioned above are
automatically handled by generators in Python.
10) Uses of pickling :
storing python objects in a database.
converting an arbitrary python object to a string so that it can be used as a dictionary key (e.g. for caching &
memoization).
sending python data over a TCP connection in a multi-core or distributed system (marshalling)
11) List Comprehension –
The explicit loop:
new_list = []
for i in old_list:
    if filter(i):
        new_list.append(expressions(i))
is equivalent to the one-line list comprehension:
new_list = [expressions(i) for i in old_list if filter(i)]
18) The purpose of zip() is to map the similar index of multiple containers so that they can be used as a single entity.
Syntax :
zip(*iterators)
Parameters :
Python iterables or containers ( list, string etc )
Return Value :
Returns a single iterator object, having mapped values from all the
containers.
# initializing lists
name = ["Manjeet", "Nikhil", "Shambhavi", "Astha"]
roll_no = [4, 1, 3, 2]
marks = [40, 50, 60, 70]

# zipping the lists together
mapped = list(zip(name, roll_no, marks))
print(mapped)
print("\n")

# unzipping values
namz, roll_noz, marksz = zip(*mapped)
19) Tuples are stored in a single block of memory. Tuples are immutable, so no extra space is required to store new objects. Lists are allocated in two blocks: the fixed one with all the Python object information and a variable-sized block for the data. This is the reason creating a tuple is faster than creating a list.
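The size difference can be observed directly (the exact byte counts are CPython-specific):

```python
import sys

t = (1, 2, 3)
l = [1, 2, 3]

# the tuple's single fixed block makes it smaller than the
# equivalent list, which carries a separate resizable data block
print(sys.getsizeof(t))  # e.g. 64 on 64-bit CPython
print(sys.getsizeof(l))  # e.g. 88 on 64-bit CPython
```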
20) __init__ is a special method in Python classes; it is the constructor (initializer) method for a class, run automatically when a new instance is created.
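A minimal example:

```python
class Point:
    def __init__(self, x, y):
        # runs automatically when Point(...) is called
        self.x = x
        self.y = y

p = Point(3, 4)
print(p.x, p.y)  # → 3 4
```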
21) Object References - Once we have some data types, the next thing we need are variables in which to store them. Python doesn't have variables as such, but instead has object references. When it comes to immutable objects like ints and strs, there is no discernible difference between a variable and an object reference. As for mutable objects, there is a difference, but it rarely matters in practice. We will use the terms variable and object reference interchangeably. Let's look at a few tiny examples, and then discuss some of the details.
x = "blue"
y = "green"
z = x
The syntax is simply objectReference = value. There is no need for predeclaration and no need to specify the value's type. When Python executes the first statement it creates a str object with the text "blue", and creates an object reference called x that refers to the str object. For all practical purposes we can say that "variable x has been assigned the 'blue' string". The second statement is similar. The third statement creates a new object reference called z and sets it to refer to the same object that the x object reference refers to (in this case the str containing the text "blue"). The = operator is not the same as the variable assignment operator in some other languages. The = operator binds an object reference to an object in memory. If the object reference already exists, it is simply re-bound to refer to the object on the right of the = operator; if the object reference does not exist it is created by the = operator.
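The rebinding behaviour can be observed with the is operator (a minimal sketch):

```python
x = "blue"
y = "green"
z = x

# z refers to the very same object as x
print(z is x)  # → True

# rebinding z does not touch x
z = y
print(z is y, x)  # → True blue
```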
7)
def extendList(val, list=[]):
    list.append(val)
    return list

list1 = extendList(10)
list2 = extendList(123, [])
list3 = extendList('a')

OUTPUT -
list1 = [10, 'a']
list2 = [123]
list3 = [10, 'a']
This happens because the default list is created only once, when the function is defined, not on each call, so list1 and list3 append to the very same list object.
def multipliers():
    return [lambda x: i * x for i in range(4)]

print([m(2) for m in multipliers()])
OUTPUT - The output of the above code will be [6, 6, 6, 6] (not [0, 2, 4, 6]).
The reason for this is that Python’s closures are late binding. This means that the values of variables used in closures
are looked up at the time the inner function is called. So as a result, when any of the functions returned
by multipliers() are called, the value of i is looked up in the surrounding scope at that time. By then, regardless of
which of the returned functions is called, the forloop has completed and i is left with its final value of 3. Therefore,
every returned function multiplies the value it is passed by 3, so since a value of 2 is passed in the above code, they
all return a value of 6 (i.e., 3 x 2).
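One common workaround, sketched here, is to bind i at definition time by giving each lambda a default argument:

```python
def multipliers():
    # i=i captures the current value of i when each lambda is created
    return [lambda x, i=i: i * x for i in range(4)]

print([m(2) for m in multipliers()])  # [0, 2, 4, 6]
```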
MAGIC FUNCTION - Dunder or magic methods in Python are the methods having two prefix and
suffix underscores in the method name. Dunder here means “Double Under (Underscores)”.
The special magic method __call__ allows instances of your classes to behave as if they
were functions, so that you can "call" them, pass them to functions that take functions as
arguments, and so on.
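A minimal sketch of __call__ (the class name Adder is invented for illustration):

```python
class Adder:
    def __init__(self, n):
        self.n = n

    def __call__(self, x):
        # invoked when an instance is "called" like a function
        return self.n + x

add5 = Adder(5)
print(add5(3))         # 8
print(callable(add5))  # True
```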
https://www.toptal.com/python/interview-questions
Deloitte:
1) Inheritance, types of it
2) To print the natural numbers in reverse if the number is odd, else the whole range
3) What is faster – left join or inner join?
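One reading of question 2 above (an assumption, since the prompt is terse): given n, return the numbers 1..n in reverse if n is odd, otherwise in ascending order. A sketch:

```python
def natural_range(n):
    # reverse order when n is odd, ascending order otherwise
    if n % 2 == 1:
        return list(range(n, 0, -1))
    return list(range(1, n + 1))

print(natural_range(5))  # [5, 4, 3, 2, 1]
print(natural_range(4))  # [1, 2, 3, 4]
```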
QUEZX -
EXTRA INFO -
>>> df1.to_excel("output.xlsx")
If you wish to write to more than one sheet in the workbook, it is necessary to specify an ExcelWriter object:
To set the library that is used to write the Excel file, you can pass the engine keyword (the default engine is
automatically chosen depending on the file extension):
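A sketch of writing two sheets via an ExcelWriter, assuming pandas and the openpyxl engine are installed (df1 and df2 are stand-in DataFrames):

```python
import pandas as pd

df1 = pd.DataFrame({"a": [1, 2]})
df2 = pd.DataFrame({"b": [3, 4]})

# ExcelWriter lets multiple DataFrames go into one workbook;
# the engine keyword overrides the extension-based default
with pd.ExcelWriter("output.xlsx", engine="openpyxl") as writer:
    df1.to_excel(writer, sheet_name="Sheet1")
    df2.to_excel(writer, sheet_name="Sheet2")
```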
Dictionary Key – should be hashable (and therefore effectively immutable). Hence tuples, numbers and strings can be used as keys.
Similarly, an object (instance of a class) can be used as a key.
The following code works because, by default, your class objects are hashable:
class Foo(object):
    def __init__(self):
        pass

myinstance = Foo()
mydict = {myinstance: 'Hello world'}
print(mydict[myinstance])
Output : Hello world
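A related caveat worth knowing (class name Point is invented for illustration): in Python 3, defining __eq__ without also defining __hash__ sets __hash__ to None, making instances unhashable and thus unusable as dict keys.

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        # custom equality silently removes the default hash
        return (self.x, self.y) == (other.x, other.y)

p = Point(1, 2)
try:
    {p: "value"}
except TypeError as e:
    print(e)  # unhashable type: 'Point'
```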
TOPICS –
1) Inheritance in Python - https://www.geeksforgeeks.org/inheritance-in-python/
2) Magic Function – In Sheet
3) Monkey Patching - In Python, the term monkey patch refers to dynamic (or run-time) modifications
of a class or module. In Python, we can actually change the behavior of code at run-time.
# monk.py
class A:
    def func(self):
        print("func() is being called")
We use the above module (monk) in the code below and change the behaviour of func() at run-time by
assigning it a different function.
import monk

def monkey_f(self):
    print("monkey_f() is being called")

# replace the original method with monkey_f
monk.A.func = monkey_f
obj = monk.A()
obj.func()  # prints "monkey_f() is being called"
4) Operator Overloading
5) OS module - The OS module in Python provides a way of using operating system dependent
functionality.
import os
os.umask(mask)
Set the current numeric umask and return the previous umask.
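A small sketch of some everyday os-module calls:

```python
import os

print(os.getcwd())                        # current working directory
print(os.path.join("dir", "file.txt"))    # portable path building
print(os.environ.get("HOME", "not set"))  # read an environment variable safely
```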
6) Database connection
7) Accumulate
8) zip vs izip (itertools.izip is Python 2 only; in Python 3, zip itself is lazy)
9) Numpy
10) Pandas
11) SciPy
12) MRO
13) Namespaces
14) Scope
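Two of the topics above, itertools.accumulate and zip, can be sketched briefly:

```python
from itertools import accumulate
import operator

# accumulate yields running totals (sum by default, or any binary function)
print(list(accumulate([1, 2, 3, 4])))                # [1, 3, 6, 10]
print(list(accumulate([1, 2, 3, 4], operator.mul)))  # [1, 2, 6, 24]

# In Python 3, zip is already lazy, like Python 2's itertools.izip
pairs = zip("ab", [1, 2])
print(list(pairs))  # [('a', 1), ('b', 2)]
```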
Notes –
A list containing a list can’t be converted into a set: it raises TypeError: unhashable type: 'list'.
l1 = [1, 2, 2, 3]
l2 = set(l1)
print(l2)  # {1, 2, 3}

l1 = [1, 2, 3, [1, 2]]
l2 = set(l1)  # TypeError: unhashable type: 'list'
What is namespace:
A namespace is a system to have a unique name for each and every object in Python. An object might be a variable
or a method. Python itself maintains a namespace in the form of a Python dictionary. As an example, consider a
directory-file system structure on a computer: multiple directories can each contain a file with the same name, yet one
can still reach the exact file one wishes just by specifying the absolute path to it.
A real-world analogy is a surname. There might be multiple students named “Alice” in a class, but when you ask
specifically for “Alice Lee” or “Alice Clark” (with a surname), there will be only one (for now, ignore the case where
both first name and surname are the same for multiple students).
Along similar lines, the Python interpreter understands which exact method or variable one is pointing to in the
code, depending upon the namespace. The division of the word itself gives a little more information: Name
(a unique identifier) + Space (something related to scope). Here, a name might be that of
any Python method or variable, and the space depends upon the location from which one is trying to access that variable or
method.
Types of namespaces:
When the Python interpreter runs without any user-defined modules, methods, classes, etc., some functions like
print() and id() are still present; these live in the built-in namespace. When a user creates a module, a global namespace
gets created; later, creation of local functions creates local namespaces. The built-in namespace encompasses the
global namespace, and the global namespace encompasses the local namespace.
Lifetime of a namespace:
The lifetime of a namespace depends upon the scope of its objects: when the scope of an object ends, the lifetime of that
namespace comes to an end. Hence, it is not possible to access an inner namespace’s objects from an outer
namespace.
Example:
# var1 is in the global namespace
var1 = 5

def some_func():
    # var2 is in the local namespace
    var2 = 6
    def some_inner_func():
        # var3 is in the nested local namespace
        var3 = 7
As shown in the following figure, same object name can be present in multiple namespaces as isolation between the
same name is maintained by their namespace.
But in some cases one might be interested in updating or processing the global variable only; as shown in the following
example, one should mark it explicitly as global and then update or process it.
# global variable
count = 5

def some_method():
    global count
    count = count + 1
    print(count)

some_method()
Output:
6
Example 1:
# var is local to some_func
def some_func():
    var = 10
    print("Inside some_func")

some_func()
print(var)
Output:
Inside some_func
NameError: name 'var' is not defined
As can be seen in the above output, the function some_func() is in scope from main, but var is not available in the
scope of main. Similarly, in the case of inner functions, outer functions don’t have access to inner local variables,
which are local to the inner functions and out of scope for the outer functions. Let’s take an example for a more
detailed understanding:
Example 2:
# var is local to some_inner_func
def some_func():
    print("Inside some_func")
    def some_inner_func():
        var = 10
    some_inner_func()
    print(var)

some_func()
Output:
Inside some_func
NameError: name 'var' is not defined
argparse
json
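A small sketch of the two modules listed above: parsing a command-line flag with argparse and round-tripping data with json (the --name flag and payload are invented for illustration):

```python
import argparse
import json

parser = argparse.ArgumentParser(description="demo")
parser.add_argument("--name", default="world")
# pass an explicit argv list so the example is self-contained
args = parser.parse_args(["--name", "Alice"])

payload = json.dumps({"greeting": "hello", "name": args.name})
print(payload)
print(json.loads(payload)["name"])  # Alice
```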
Bank Of America –
1) Namespace
2) Encapsulation and Abstraction
3) Hashing – SHA-1 and SHA-2 (cryptographic hash functions, not encodings)
4) Property
5) Inheritance
Johnson –
1) Lambda significance
2) Callback
3) Utility Function
4) Pandas
5) Networking