diff --git a/.gitignore b/.gitignore
index c5a5ca5..79b590d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,5 @@
build
build/
build/*
+*~
+
diff --git a/README.md b/README.md
index 520f6a5..d2db74e 100644
--- a/README.md
+++ b/README.md
@@ -3,14 +3,25 @@ OpenCV2-Python-Guide
This repo contains tutorials on OpenCV-Python library using new cv2 interface
-Contents
-----------
-* source : contains the original source of docs in markup language
-* build : contains ready-to-use documentation in html format
+**IMP - This tutorial is meant for OpenCV 3.x, not OpenCV 2.x**
+=======================================================================
+
+Please try the examples with OpenCV 3.x before sending any bug reports
+
+Data files
+-----------
+
+The input data used in these tutorials can be found in the **data** folder
Online
---------
-An online version of this tutorials can be found at https://opencv-python-tutroals.readthedocs.org/en/latest/index.html
+
+* **For official tutorials, please visit: http://docs.opencv.org/trunk/doc/py_tutorials/py_tutorials.html**
+* https://opencv-python-tutroals.readthedocs.org/en/latest/index.html - for reference only; it may contain many errors, so please stick to the official tutorials.
Offline
---------
diff --git a/data/butterfly.jpg b/data/butterfly.jpg
new file mode 100644
index 0000000..67d60f0
Binary files /dev/null and b/data/butterfly.jpg differ
diff --git a/data/home.jpg b/data/home.jpg
new file mode 100644
index 0000000..7528a7d
Binary files /dev/null and b/data/home.jpg differ
diff --git a/data/left.jpg b/data/left.jpg
new file mode 100644
index 0000000..f4c2630
Binary files /dev/null and b/data/left.jpg differ
diff --git a/data/messi5.jpg b/data/messi5.jpg
new file mode 100644
index 0000000..cd43761
Binary files /dev/null and b/data/messi5.jpg differ
diff --git a/data/readme.txt b/data/readme.txt
new file mode 100644
index 0000000..82de6f0
--- /dev/null
+++ b/data/readme.txt
@@ -0,0 +1,19 @@
+This folder contains data files used in these tutorials.
+
+Not all files are available; many were lost when I had to format my system. Some files were taken from the internet and placed in this folder. Some files were my own, so there is no way to recover them.
+
+Some video files are also missing.
+
+Image
+-------
+lena.jpg
+butterfly.jpg
+home.jpg
+messi5.jpg
+left.jpg
+right.jpg
+
+Feature Matching - https://github.com/Itseez/opencv/blob/master/samples/c/box.png
+ - https://github.com/Itseez/opencv/blob/master/samples/c/box_in_scene.png
+
+Background subtraction - https://github.com/Itseez/opencv/blob/master/samples/gpu/768x576.avi
diff --git a/data/right.jpg b/data/right.jpg
new file mode 100644
index 0000000..4a4bc53
Binary files /dev/null and b/data/right.jpg differ
diff --git a/source/py_tutorials/py_bindings/py_bindings_basics/py_bindings_basics.rst b/source/py_tutorials/py_bindings/py_bindings_basics/py_bindings_basics.rst
new file mode 100644
index 0000000..7e48930
--- /dev/null
+++ b/source/py_tutorials/py_bindings/py_bindings_basics/py_bindings_basics.rst
@@ -0,0 +1,112 @@
+.. _Bindings_Basics:
+
+How Do OpenCV-Python Bindings Work?
+************************************
+
+Goal
+=====
+
+Learn:
+
+ * How OpenCV-Python bindings are generated
+ * How to extend new OpenCV modules to Python
+
+How are OpenCV-Python bindings generated?
+==========================================
+
+In OpenCV, all algorithms are implemented in C++. But these algorithms can be used from different languages like Python, Java, etc. This is made possible by the binding generators. These generators create a bridge between C++ and Python which enables users to call C++ functions from Python. To get a complete picture of what is happening in the background, a good knowledge of the Python/C API is required. A simple example of extending C++ functions to Python can be found in the official Python documentation. Extending all the functions in OpenCV to Python by writing their wrapper functions manually would be a time-consuming task, so OpenCV does it in a more intelligent way: it generates these wrapper functions automatically from the C++ headers using some Python scripts located in ``modules/python/src2``. We will look into what they do.
+
+First, ``modules/python/CMakeLists.txt`` is a CMake script which checks the modules to be extended to Python. It automatically checks all the modules to be extended and grabs their header files. These header files contain the list of all classes, functions, constants, etc. for that particular module.
+
+Second, these header files are passed to a Python script, ``modules/python/src2/gen2.py``. This is the Python bindings generator script. It calls another Python script, ``modules/python/src2/hdr_parser.py``: the header parser. The header parser splits the complete header file into small Python lists, which contain all the details about a particular function, class, etc. For example, a function will be parsed into a list containing the function name, return type, input arguments, argument types, etc. The final list contains the details of all the functions, structs, classes, etc. in that header file.
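To make the idea concrete, here is a toy sketch of what "parsing a declaration into a list" means. This is a hypothetical mini-parser for illustration only; the real ``hdr_parser.py`` handles far more C++ syntax and produces a richer output format.

```python
import re

def parse_decl(decl):
    """Toy illustration of the header-parser idea: split a simple C++
    declaration into [return type, function name, [(arg type, arg name), ...]].
    This is NOT the real hdr_parser.py output format."""
    m = re.match(r'\s*(\w+)\s+(\w+)\s*\((.*)\)\s*;?\s*$', decl)
    ret, name, args = m.group(1), m.group(2), m.group(3)
    arglist = []
    for a in args.split(','):
        a = a.strip()
        if a:
            atype, aname = a.rsplit(' ', 1)
            arglist.append((atype.strip(), aname))
    return [ret, name, arglist]

print(parse_decl("void equalizeHist( InputArray src, OutputArray dst );"))
# -> ['void', 'equalizeHist', [('InputArray', 'src'), ('OutputArray', 'dst')]]
```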
+
+But the header parser doesn't parse all the functions/classes in the header file. The developer has to specify which functions should be exported to Python. For that, certain macros are added at the beginning of these declarations, which enable the header parser to identify the functions to be parsed. These macros are added by the developer who programs the particular function. In short, the developer decides which functions should be extended to Python and which should not. Details of those macros will be given in the next section.
+
+So the header parser returns a final big list of parsed functions. Our generator script (gen2.py) creates wrapper functions for all the functions/classes/enums/structs parsed by the header parser (you can find these header files during compilation in the ``build/modules/python/`` folder as ``pyopencv_generated_*.h`` files). But some basic OpenCV datatypes like Mat, Vec4i and Size need to be extended manually: for example, a Mat should be extended to a Numpy array, a Size should be extended to a tuple of two integers, etc. Similarly, there may be some complex structs/classes/functions which need to be extended manually. All such manual wrapper functions are placed in ``modules/python/src2/pycv2.hpp``.
+
+Now the only thing left is the compilation of these wrapper files, which gives us the **cv2** module. So when you call a function, say ``res = equalizeHist(img1,img2)`` in Python, you pass two numpy arrays and you expect another numpy array as the output. These numpy arrays are converted to ``cv::Mat``, and then the ``equalizeHist()`` function in C++ is called. The final result ``res`` is converted back into a Numpy array. So, in short, almost all operations are done in C++, which gives us almost the same speed as C++.
+
+So this is the basic version of how OpenCV-Python bindings are generated.
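The numpy-in / numpy-out convention can be illustrated without the C++ layer: the sketch below reimplements what histogram equalization computes in pure numpy, purely to show the data flow (in practice ``cv2.equalizeHist`` does this work in C++; the function below is an illustrative stand-in, not the cv2 implementation).

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization of a uint8 grayscale image, sketched in
    pure numpy to show the numpy-in / numpy-out convention of the cv2
    wrappers. The real cv2.equalizeHist does this in C++."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)              # ignore unused bins
    cdf_scaled = (cdf_masked - cdf_masked.min()) * 255 \
                 / (cdf_masked.max() - cdf_masked.min())  # stretch to 0..255
    lut = np.ma.filled(cdf_scaled, 0).astype(np.uint8)
    return lut[img]                                       # apply lookup table

img = np.tile(np.arange(64, 128, dtype=np.uint8), (8, 1))  # low-contrast ramp
res = equalize_hist(img)
print(res.min(), res.max())   # contrast stretched to the full 0..255 range
```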
+
+
+How to extend new modules to Python?
+=====================================
+
+The header parser parses the header files based on wrapper macros added to the function declarations. Enumeration constants don't need any wrapper macros; they are automatically wrapped. But the remaining functions, classes, etc. need wrapper macros.
+
+Functions are extended using ``CV_EXPORTS_W`` macro. An example is shown below.
+
+.. code-block:: cpp
+
+ CV_EXPORTS_W void equalizeHist( InputArray src, OutputArray dst );
+
+Header parser can understand the input and output arguments from keywords like ``InputArray, OutputArray`` etc. But sometimes, we may need to hardcode inputs and outputs. For that, macros like ``CV_OUT, CV_IN_OUT`` etc. are used.
+
+.. code-block:: cpp
+
+ CV_EXPORTS_W void minEnclosingCircle( InputArray points,
+ CV_OUT Point2f& center, CV_OUT float& radius );
+
+For large classes, ``CV_EXPORTS_W`` is also used. To extend class methods, ``CV_WRAP`` is used. Similarly, ``CV_PROP`` is used for class fields.
+
+.. code-block:: cpp
+
+ class CV_EXPORTS_W CLAHE : public Algorithm
+ {
+ public:
+ CV_WRAP virtual void apply(InputArray src, OutputArray dst) = 0;
+
+ CV_WRAP virtual void setClipLimit(double clipLimit) = 0;
+ CV_WRAP virtual double getClipLimit() const = 0;
+ };
+
+Overloaded functions can be extended using ``CV_EXPORTS_AS``. But we need to pass a new name, so that each function will be called by that name in Python. Take the case of the ``integral`` function below: three overloads are available, so each one is named with a suffix in Python. Similarly, ``CV_WRAP_AS`` can be used to wrap overloaded methods.
+
+.. code-block:: cpp
+
+ //! computes the integral image
+ CV_EXPORTS_W void integral( InputArray src, OutputArray sum, int sdepth = -1 );
+
+ //! computes the integral image and integral for the squared image
+ CV_EXPORTS_AS(integral2) void integral( InputArray src, OutputArray sum,
+ OutputArray sqsum, int sdepth = -1, int sqdepth = -1 );
+
+ //! computes the integral image, integral for the squared image and the tilted integral image
+ CV_EXPORTS_AS(integral3) void integral( InputArray src, OutputArray sum,
+ OutputArray sqsum, OutputArray tilted,
+ int sdepth = -1, int sqdepth = -1 );
+
+Small classes/structs are extended using ``CV_EXPORTS_W_SIMPLE``. These structs are passed by value to C++ functions. Examples are ``KeyPoint``, ``DMatch``, etc. Their methods are extended by ``CV_WRAP`` and their fields are extended by ``CV_PROP_RW``.
+
+.. code-block:: cpp
+
+ class CV_EXPORTS_W_SIMPLE DMatch
+ {
+ public:
+ CV_WRAP DMatch();
+ CV_WRAP DMatch(int _queryIdx, int _trainIdx, float _distance);
+ CV_WRAP DMatch(int _queryIdx, int _trainIdx, int _imgIdx, float _distance);
+
+ CV_PROP_RW int queryIdx; // query descriptor index
+ CV_PROP_RW int trainIdx; // train descriptor index
+ CV_PROP_RW int imgIdx; // train image index
+
+ CV_PROP_RW float distance;
+ };
+
+Some other small classes/structs can be exported using ``CV_EXPORTS_W_MAP``, which maps them to a native Python dictionary. ``Moments()`` is an example of this.
+
+.. code-block:: cpp
+
+ class CV_EXPORTS_W_MAP Moments
+ {
+ public:
+ //! spatial moments
+ CV_PROP_RW double m00, m10, m01, m20, m11, m02, m30, m21, m12, m03;
+ //! central moments
+ CV_PROP_RW double mu20, mu11, mu02, mu30, mu21, mu12, mu03;
+ //! central normalized moments
+ CV_PROP_RW double nu20, nu11, nu02, nu30, nu21, nu12, nu03;
+ };
+
+So these are the major extension macros available in OpenCV. Typically, a developer has to put the proper macros in their appropriate positions; the rest is done by the generator scripts. Sometimes there may be exceptional cases where the generator scripts cannot create the wrappers. Such functions need to be handled manually. But most of the time, code written according to the OpenCV coding guidelines will be automatically wrapped by the generator scripts.
diff --git a/source/py_tutorials/py_bindings/py_table_of_contents_bindings/images/inpainticon.jpg b/source/py_tutorials/py_bindings/py_table_of_contents_bindings/images/inpainticon.jpg
new file mode 100644
index 0000000..dc22cf0
Binary files /dev/null and b/source/py_tutorials/py_bindings/py_table_of_contents_bindings/images/inpainticon.jpg differ
diff --git a/source/py_tutorials/py_bindings/py_table_of_contents_bindings/images/nlm_icon.jpg b/source/py_tutorials/py_bindings/py_table_of_contents_bindings/images/nlm_icon.jpg
new file mode 100644
index 0000000..0861964
Binary files /dev/null and b/source/py_tutorials/py_bindings/py_table_of_contents_bindings/images/nlm_icon.jpg differ
diff --git a/source/py_tutorials/py_bindings/py_table_of_contents_bindings/py_table_of_contents_bindings.rst b/source/py_tutorials/py_bindings/py_table_of_contents_bindings/py_table_of_contents_bindings.rst
new file mode 100644
index 0000000..70c40b5
--- /dev/null
+++ b/source/py_tutorials/py_bindings/py_table_of_contents_bindings/py_table_of_contents_bindings.rst
@@ -0,0 +1,36 @@
+.. _PY_Table-Of-Content-Bindings:
+
+
+OpenCV-Python Bindings
+--------------------------------
+
+Here, you will learn how OpenCV-Python bindings are generated.
+
+
+* :ref:`Bindings_Basics`
+
+ .. tabularcolumns:: m{100pt} m{300pt}
+ .. cssclass:: toctableopencv
+
+ =========== ======================================================
+ |bind1| Learn how OpenCV-Python bindings are generated.
+
+ =========== ======================================================
+
+ .. |bind1| image:: images/nlm_icon.jpg
+ :height: 90pt
+ :width: 90pt
+
+
+
+
+
+.. raw:: latex
+
+ \pagebreak
+
+.. We use a custom table of content format and as the table of content only informs Sphinx about the hierarchy of the files, no need to show it.
+.. toctree::
+ :hidden:
+
+ ../py_bindings_basics/py_bindings_basics
diff --git a/source/py_tutorials/py_calib3d/py_depthmap/py_depthmap.rst b/source/py_tutorials/py_calib3d/py_depthmap/py_depthmap.rst
index b848f0e..aac4dd0 100644
--- a/source/py_tutorials/py_calib3d/py_depthmap/py_depthmap.rst
+++ b/source/py_tutorials/py_calib3d/py_depthmap/py_depthmap.rst
@@ -48,7 +48,7 @@ Below code snippet shows a simple procedure to create disparity map.
plt.imshow(disparity,'gray')
plt.show()
-Below image contains the original image (left) and its disparity map (right). As you can see, result is contaminated with high degree of noise. By adjusting the values of numDisparities and blockSize, you can get more better result.
+Below image contains the original image (left) and its disparity map (right). As you can see, result is contaminated with high degree of noise. By adjusting the values of numDisparities and blockSize, you can get better results.
.. image:: images/disparity_map.jpg
:alt: Disparity Map
diff --git a/source/py_tutorials/py_core/py_basic_ops/py_basic_ops.rst b/source/py_tutorials/py_core/py_basic_ops/py_basic_ops.rst
index dbb6404..62f70ff 100644
--- a/source/py_tutorials/py_core/py_basic_ops/py_basic_ops.rst
+++ b/source/py_tutorials/py_core/py_basic_ops/py_basic_ops.rst
@@ -49,7 +49,7 @@ You can modify the pixel values the same way.
.. warning:: Numpy is a optimized library for fast array calculations. So simply accessing each and every pixel values and modifying it will be very slow and it is discouraged.
-.. note:: Above mentioned method is normally used for selecting a region of array, say first 5 rows and last 3 columns like that. For individual pixel access, Numpy array methods, ``array.item()`` and ``array.itemset()`` is considered to be more better. But it always returns a scalar. So if you want to access all B,G,R values, you need to call ``array.item()`` separately for all.
+.. note:: The above method is normally used for selecting a region of an array, say the first 5 rows and last 3 columns. For individual pixel access, the Numpy array methods ``array.item()`` and ``array.itemset()`` are considered better. They always return a scalar, however, so if you want to access all the B,G,R values, you need to call ``array.item()`` separately for each.
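A minimal numpy sketch of the two access styles described above (region slicing vs. scalar ``item()`` access); the image here is a synthetic array, not a loaded file:

```python
import numpy as np

img = np.zeros((4, 4, 3), dtype=np.uint8)   # stand-in for a loaded BGR image

# Region access: slicing selects many pixels at once.
patch = img[0:2, -3:]            # first 2 rows, last 3 columns

# Individual pixel access: item() returns one scalar per call,
# so each channel has to be read separately.
# (Note: array.itemset() is deprecated in recent NumPy releases;
# plain indexing, as below, works everywhere.)
img[1, 1] = (10, 20, 30)
b = img.item(1, 1, 0)
g = img.item(1, 1, 1)
r = img.item(1, 1, 2)
print(b, g, r)   # 10 20 30
```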
Better pixel accessing and editing method :
@@ -94,7 +94,7 @@ Image datatype is obtained by ``img.dtype``:
Image ROI
===========
-Sometimes, you will have to play with certain region of images. For eye detection in images, first face detection is done all over the image and when face is obtained, we select the face region alone and search for eyes inside it instead of searching whole image. It improves accuracy (because eyes are always on faces :D ) and performance (because we search for a small area)
+Sometimes you will have to play with certain regions of an image. For eye detection, first perform face detection over the whole image; when a face is found, select the face region alone and search for eyes inside it instead of searching the whole image. This approach improves accuracy (because eyes are always on faces :D ) and performance (because we search a small area).
ROI is again obtained using Numpy indexing. Here I am selecting the ball and copying it to another region in the image:
::
@@ -111,7 +111,7 @@ Check the results below:
Splitting and Merging Image Channels
======================================
-Sometimes you will need to work separately on B,G,R channels of image. Then you need to split the BGR images to single planes. Or another time, you may need to join these individual channels to BGR image. You can do it simply by:
+The B,G,R channels of an image can be split into their individual planes when needed. Then, the individual channels can be merged back together to form a BGR image again. This can be performed by:
::
>>> b,g,r = cv2.split(img)
@@ -121,15 +121,16 @@ Or
>>> b = img[:,:,0]
-Suppose, you want to make all the red pixels to zero, you need not split like this and put it equal to zero. You can simply use Numpy indexing, and that is more faster.
+Suppose you want to set all the red pixels to zero. You do not need to split the channels first; Numpy indexing is faster:
::
>>> img[:,:,2] = 0
-.. warning:: ``cv2.split()`` is a costly operation (in terms of time). So do it only if you need it. Otherwise go for Numpy indexing.
+.. warning:: ``cv2.split()`` is a costly operation (in terms of time), so only use it if necessary. Numpy indexing is much more efficient and should be used if possible.
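The split/merge/zero-out operations above can be sketched entirely in numpy (the 2x2 image below is synthetic, chosen so the channel values are easy to follow):

```python
import numpy as np

# A tiny 2x2 BGR image: blue=10, green=20, red=30 everywhere.
img = np.dstack([np.full((2, 2), v, dtype=np.uint8) for v in (10, 20, 30)])

# Split into single-channel planes (numpy equivalent of cv2.split; views, no copy)
b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]

# Merge the planes back into one BGR image (numpy equivalent of cv2.merge)
merged = np.dstack((b, g, r))

# Zero out the red channel in place -- no split needed
img[:, :, 2] = 0
```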
Making Borders for Images (Padding)
====================================
+
If you want to create a border around the image, something like a photo frame, you can use **cv2.copyMakeBorder()** function. But it has more applications for convolution operation, zero padding etc. This function takes following arguments:
* **src** - input image
diff --git a/source/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.rst b/source/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.rst
index 17d9432..15f86a9 100644
--- a/source/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.rst
+++ b/source/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.rst
@@ -58,7 +58,7 @@ Below is the code which are commented in detail :
upper_blue = np.array([130,255,255])
# Threshold the HSV image to get only blue colors
- mask = cv2.inRange(hsv, lower_green, upper_green)
+ mask = cv2.inRange(hsv, lower_blue, upper_blue)
# Bitwise-AND mask and original image
res = cv2.bitwise_and(frame,frame, mask= mask)
diff --git a/source/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.rst b/source/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.rst
index ae28199..5c1b769 100644
--- a/source/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.rst
+++ b/source/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.rst
@@ -43,7 +43,7 @@ To draw the contours, ``cv2.drawContours`` function is used. It can also be used
To draw all the contours in an image:
::
- img = cv2.drawContour(img, contours, -1, (0,255,0), 3)
+ img = cv2.drawContours(img, contours, -1, (0,255,0), 3)
To draw an individual contour, say 4th contour:
::
diff --git a/source/py_tutorials/py_imgproc/py_filtering/py_filtering.rst b/source/py_tutorials/py_imgproc/py_filtering/py_filtering.rst
index 80b191b..d2d1082 100644
--- a/source/py_tutorials/py_imgproc/py_filtering/py_filtering.rst
+++ b/source/py_tutorials/py_imgproc/py_filtering/py_filtering.rst
@@ -7,21 +7,21 @@ Goals
=======
Learn to:
- * Blur the images with various low pass filters
+ * Blur images with various low pass filters
* Apply custom-made filters to images (2D convolution)
2D Convolution ( Image Filtering )
====================================
-As in one-dimensional signals, images also can be filtered with various low-pass filters(LPF), high-pass filters(HPF) etc. LPF helps in removing noises, blurring the images etc. HPF filters helps in finding edges in the images.
+As for one-dimensional signals, images can also be filtered with various low-pass filters (LPF), high-pass filters (HPF), etc. An LPF helps in removing noise, blurring an image, etc. An HPF helps in finding edges in an image.
-OpenCV provides a function **cv2.filter2D()** to convolve a kernel with an image. As an example, we will try an averaging filter on an image. A 5x5 averaging filter kernel will look like below:
+OpenCV provides a function, **cv2.filter2D()**, to convolve a kernel with an image. As an example, we will try an averaging filter on an image. A 5x5 averaging filter kernel can be defined as follows:
.. math::
K = \frac{1}{25} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix}
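A quick numpy check of this kernel, just to make its two defining properties concrete (this is an illustration, not a replacement for ``cv2.filter2D``):

```python
import numpy as np

# The 5x5 averaging kernel defined above.
K = np.ones((5, 5)) / 25

# Its coefficients sum to 1, so filtering a constant image leaves it unchanged.
print(K.sum())                      # ~1.0

# At one window position, filter2D computes the weighted sum of the window,
# which for this K is simply the mean of the 25 pixels.
patch = np.arange(25, dtype=np.float64).reshape(5, 5)
center_avg = (K * patch).sum()      # ~12.0, the mean of 0..24
```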
-Operation is like this: keep this kernel above a pixel, add all the 25 pixels below this kernel, take its average and replace the central pixel with the new average value. It continues this operation for all the pixels in the image. Try this code and check the result:
+Filtering with the above kernel works like this: a 5x5 window is centered on each pixel, all pixels falling within this window are summed up, and the result is divided by 25. This equates to computing the average of the pixel values inside that window. This operation is performed for every pixel in the image to produce the output filtered image. Try this code and check the result:
::
import cv2
@@ -48,20 +48,20 @@ Result:
Image Blurring (Image Smoothing)
==================================
-Image blurring is achieved by convolving the image with a low-pass filter kernel. It is useful for removing noises. It actually removes high frequency content (eg: noise, edges) from the image. So edges are blurred a little bit in this operation. (Well, there are blurring techniques which doesn't blur the edges too). OpenCV provides mainly four types of blurring techniques.
+Image blurring is achieved by convolving the image with a low-pass filter kernel. It is useful for removing noise. It actually removes high frequency content (e.g. noise, edges) from the image, resulting in edges being blurred when this filter is applied. (Well, there are blurring techniques which do not blur edges.) OpenCV provides mainly four types of blurring techniques.
1. Averaging
--------------
-This is done by convolving image with a normalized box filter. It simply takes the average of all the pixels under kernel area and replace the central element. This is done by the function **cv2.blur()** or **cv2.boxFilter()**. Check the docs for more details about the kernel. We should specify the width and height of kernel. A 3x3 normalized box filter would look like below:
+This is done by convolving the image with a normalized box filter. It simply takes the average of all the pixels under kernel area and replaces the central element with this average. This is done by the function **cv2.blur()** or **cv2.boxFilter()**. Check the docs for more details about the kernel. We should specify the width and height of kernel. A 3x3 normalized box filter would look like this:
.. math::
K = \frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}
-.. note:: If you don't want to use normalized box filter, use **cv2.boxFilter()**. Pass an argument ``normalize=False`` to the function.
+.. note:: If you don't want to use a normalized box filter, use **cv2.boxFilter()** and pass the argument ``normalize=False`` to the function.
-Check a sample demo below with a kernel of 5x5 size:
+Check the sample demo below with a kernel of 5x5 size:
::
import cv2
@@ -85,10 +85,10 @@ Result:
:align: center
-2. Gaussian Blurring
+2. Gaussian Filtering
----------------------
-In this, instead of box filter, gaussian kernel is used. It is done with the function, **cv2.GaussianBlur()**. We should specify the width and height of kernel which should be positive and odd. We also should specify the standard deviation in X and Y direction, sigmaX and sigmaY respectively. If only sigmaX is specified, sigmaY is taken as same as sigmaX. If both are given as zeros, they are calculated from kernel size. Gaussian blurring is highly effective in removing gaussian noise from the image.
+In this approach, instead of a box filter consisting of equal filter coefficients, a Gaussian kernel is used. It is done with the function, **cv2.GaussianBlur()**. We should specify the width and height of the kernel which should be positive and odd. We also should specify the standard deviation in the X and Y directions, sigmaX and sigmaY respectively. If only sigmaX is specified, sigmaY is taken as equal to sigmaX. If both are given as zeros, they are calculated from the kernel size. Gaussian filtering is highly effective in removing Gaussian noise from the image.
If you want, you can create a Gaussian kernel with the function, **cv2.getGaussianKernel()**.
@@ -105,12 +105,12 @@ Result:
:align: center
-3. Median Blurring
+3. Median Filtering
--------------------
-Here, the function **cv2.medianBlur()** takes median of all the pixels under kernel area and central element is replaced with this median value. This is highly effective against salt-and-pepper noise in the images. Interesting thing is that, in the above filters, central element is a newly calculated value which may be a pixel value in the image or a new value. But in median blurring, central element is always replaced by some pixel value in the image. It reduces the noise effectively. Its kernel size should be a positive odd integer.
+Here, the function **cv2.medianBlur()** computes the median of all the pixels under the kernel window and the central pixel is replaced with this median value. This is highly effective in removing salt-and-pepper noise. One interesting thing to note is that, in the Gaussian and box filters, the filtered value for the central element can be a value which may not exist in the original image. However this is not the case in median filtering, since the central element is always replaced by some pixel value in the image. This reduces the noise effectively. The kernel size must be a positive odd integer.
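Why the median resists salt-and-pepper noise can be seen on a single 3x3 window with pure numpy (a toy example, not a full filtering pass):

```python
import numpy as np

# A flat gray 3x3 window with one salt (255) and one pepper (0) outlier.
win = np.full((3, 3), 100, dtype=np.uint8)
win[0, 0], win[2, 2] = 255, 0

mean_val = int(win.mean())        # pulled away from 100 by the outliers
median_val = int(np.median(win))  # 100: a value actually present in the image

print(mean_val, median_val)   # 106 100
```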
-In this demo, I added a 50% noise to our original image and applied median blur. Check the result:
+In this demo, we add a 50% noise to our original image and use a median filter. Check the result:
::
median = cv2.medianBlur(img,5)
@@ -125,11 +125,11 @@ Result:
4. Bilateral Filtering
-----------------------
-**cv2.bilateralFilter()** is highly effective in noise removal while keeping edges sharp. But the operation is slower compared to other filters. We already saw that gaussian filter takes the a neighbourhood around the pixel and find its gaussian weighted average. This gaussian filter is a function of space alone, that is, nearby pixels are considered while filtering. It doesn't consider whether pixels have almost same intensity. It doesn't consider whether pixel is an edge pixel or not. So it blurs the edges also, which we don't want to do.
+As we noted, the filters we presented earlier tend to blur edges. This is not the case for the bilateral filter, **cv2.bilateralFilter()**, which is highly effective at noise removal while preserving edges. But the operation is slower compared to other filters. We already saw that a Gaussian filter takes a neighborhood around the pixel and finds its Gaussian weighted average. This Gaussian filter is a function of space alone, that is, nearby pixels are considered while filtering. It does not consider whether pixels have almost the same intensity value and does not consider whether the pixel lies on an edge or not. The resulting effect is that Gaussian filters tend to blur edges, which is undesirable.
-Bilateral filter also takes a gaussian filter in space, but one more gaussian filter which is a function of pixel difference. Gaussian function of space make sure only nearby pixels are considered for blurring while gaussian function of intensity difference make sure only those pixels with similar intensity to central pixel is considered for blurring. So it preserves the edges since pixels at edges will have large intensity variation.
+The bilateral filter also uses a Gaussian filter in the space domain, but it additionally uses a (multiplicative) Gaussian filter component which is a function of pixel intensity differences. The Gaussian function of space makes sure that only pixels that are 'spatial neighbors' are considered for filtering, while the Gaussian component applied in the intensity domain (a Gaussian function of intensity differences) ensures that only those pixels with intensities similar to that of the central pixel ('intensity neighbors') are included in the blurred intensity value. As a result, this method preserves edges, since for pixels lying near edges, neighboring pixels placed on the other side of the edge, and therefore exhibiting large intensity variations when compared to the central pixel, will not be included for blurring.
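The two-Gaussian idea can be sketched in numpy for a single pixel near a step edge. This toy function computes the bilateral response at one location only; ``cv2.bilateralFilter`` does this (far faster) for every pixel, and the parameter names here mirror its sigmaColor/sigmaSpace arguments:

```python
import numpy as np

def bilateral_at(img, y, x, d=5, sigma_color=75.0, sigma_space=75.0):
    """Bilateral-filtered value at one pixel, sketched with numpy."""
    r = d // 2
    win = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    space_w = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_space ** 2))   # spatial Gaussian
    color_w = np.exp(-(win - float(img[y, x])) ** 2
                     / (2 * sigma_color ** 2))                        # intensity Gaussian
    w = space_w * color_w
    return (w * win).sum() / w.sum()

# A vertical step edge: left half 0, right half 200.
img = np.zeros((9, 9), dtype=np.uint8)
img[:, 5:] = 200

val = bilateral_at(img, 4, 4)   # pixel just left of the edge
# val stays close to 0: pixels across the edge get tiny intensity weights,
# so the edge is preserved (a plain 5x5 average here would give 80).
```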
-Below samples shows use bilateral filter (For details on arguments, visit docs).
+The sample below demonstrates the use of bilateral filtering (For details on arguments, see the OpenCV docs).
::
blur = cv2.bilateralFilter(img,9,75,75)
@@ -140,12 +140,14 @@ Result:
:alt: Bilateral Filtering
:align: center
-See, the texture on the surface is gone, but edges are still preserved.
+Note that the texture on the surface is gone, but edges are still preserved.
Additional Resources
======================
-1. Details about the `bilateral filtering `_
+1. Details about the `bilateral filtering can be found at `_
Exercises
===========
+
+Take an image, add Gaussian noise and salt-and-pepper noise, and compare the effect of blurring via box, Gaussian, median and bilateral filters for both noisy images, as you change the level of noise.
diff --git a/source/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.rst b/source/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.rst
index db24865..b1a6613 100644
--- a/source/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.rst
+++ b/source/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.rst
@@ -21,7 +21,7 @@ Morphological transformations are some simple operations based on the image shap
1. Erosion
--------------
-The basic idea of erosion is just like soil erosion only, it erodes away the boundaries of foreground object (Always try to keep foreground in white). So what it does? The kernel slides through the image (as in 2D convolution). A pixel in the original image (either 1 or 0) will be considered 1 only if all the pixels under the kernel is 1, otherwise it is eroded (made to zero).
+The basic idea of erosion is just like soil erosion: it erodes away the boundaries of the foreground object (always try to keep the foreground in white). So what does it do? The kernel slides through the image (as in 2D convolution). A pixel in the original image (either 1 or 0) will be considered 1 only if all the pixels under the kernel are 1; otherwise it is eroded (made zero).
So what happends is that, all the pixels near boundary will be discarded depending upon the size of kernel. So the thickness or size of the foreground object decreases or simply white region decreases in the image. It is useful for removing small white noises (as we have seen in colorspace chapter), detach two connected objects etc.
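The all-pixels-under-the-kernel rule can be sketched in pure numpy (a slow, illustrative loop over kernel offsets; ``cv2.erode`` is the real, optimized implementation):

```python
import numpy as np

def erode(img, ksize=3):
    """Binary erosion sketch: the output pixel is 1 only if every
    pixel under the ksize x ksize kernel is 1 (cf. cv2.erode)."""
    r = ksize // 2
    padded = np.pad(img, r, mode='constant')   # zero padding outside the image
    out = np.ones_like(img)
    for dy in range(-r, r + 1):                # AND together all shifted copies
        for dx in range(-r, r + 1):
            out &= padded[r + dy:r + dy + img.shape[0],
                          r + dx:r + dx + img.shape[1]]
    return out

img = np.zeros((7, 7), dtype=np.uint8)
img[1:6, 1:6] = 1            # a 5x5 white square
print(erode(img).sum())      # 9 : the square is eroded down to 3x3
```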
diff --git a/source/py_tutorials/py_ml/py_svm/py_svm_basics/py_svm_basics.rst b/source/py_tutorials/py_ml/py_svm/py_svm_basics/py_svm_basics.rst
index 2bd7d6b..72b80b5 100644
--- a/source/py_tutorials/py_ml/py_svm/py_svm_basics/py_svm_basics.rst
+++ b/source/py_tutorials/py_ml/py_svm/py_svm_basics/py_svm_basics.rst
@@ -62,9 +62,9 @@ Let us define a kernel function :math:`K(p,q)` which does a dot product between
.. math::
- K(p,q) = \phi(p).\phi(q) &= \phi(p)^T \phi(q) \\
+ K(p,q) = \phi(p).\phi(q) &= \phi(p)^T \, \phi(q) \\
&= (p_{1}^2,p_{2}^2,\sqrt{2} p_1 p_2).(q_{1}^2,q_{2}^2,\sqrt{2} q_1 q_2) \\
- &= p_1 q_1 + p_2 q_2 + 2 p_1 q_1 p_2 q_2 \\
+ &= p_{1}^2 q_{1}^2 + p_{2}^2 q_{2}^2 + 2 p_1 q_1 p_2 q_2 \\
&= (p_1 q_1 + p_2 q_2)^2 \\
\phi(p).\phi(q) &= (p.q)^2
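The identity above is easy to verify numerically with numpy for a concrete pair of 2-D points (an illustrative check of the derivation, using the same phi mapping defined in the text):

```python
import numpy as np

def phi(v):
    # The explicit 2-D -> 3-D mapping used in the derivation above.
    return np.array([v[0] ** 2, v[1] ** 2, np.sqrt(2) * v[0] * v[1]])

p = np.array([1.0, 2.0])
q = np.array([3.0, 4.0])

lhs = phi(p) @ phi(q)   # dot product computed in the mapped 3-D space
rhs = (p @ q) ** 2      # kernel K(p,q) computed directly in 2-D
# Both equal (1*3 + 2*4)^2 = 121, so K(p,q) gives the high-dimensional
# dot product without ever forming phi(p) and phi(q) explicitly.
```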
diff --git a/source/py_tutorials/py_tutorials.rst b/source/py_tutorials/py_tutorials.rst
index e4818ec..c3e9a0e 100644
--- a/source/py_tutorials/py_tutorials.rst
+++ b/source/py_tutorials/py_tutorials.rst
@@ -159,6 +159,20 @@ OpenCV-Python Tutorials
:alt: OD Icon
+* :ref:`PY_Table-Of-Content-Bindings`
+
+ .. tabularcolumns:: m{100pt} m{300pt}
+ .. cssclass:: toctableopencv
+
+ =========== =====================================================================
+ |PyBin| In this section, we will see how OpenCV-Python bindings are generated
+
+ =========== =====================================================================
+
+ .. |PyBin| image:: images/obj_icon.jpg
+ :height: 80pt
+ :width: 80pt
+ :alt: OD Icon
.. raw:: latex
@@ -178,6 +192,4 @@ OpenCV-Python Tutorials
py_ml/py_table_of_contents_ml/py_table_of_contents_ml
py_photo/py_table_of_contents_photo/py_table_of_contents_photo
py_objdetect/py_table_of_contents_objdetect/py_table_of_contents_objdetect
-
-
-
+ py_bindings/py_table_of_contents_bindings/py_table_of_contents_bindings