1,226 changes: 1,226 additions & 0 deletions Assignment 2 & 3/Face_recognition.ipynb

Large diffs are not rendered by default.

39 changes: 38 additions & 1 deletion Assignment 2 & 3/Readme.md
Original file line number Diff line number Diff line change
@@ -1 +1,38 @@
Please add the assignment-2 & 3 details here
## **Introduction:**
</br> Face recognition has been a topic of research since 1961, and many algorithms have been developed to improve recognition rates. In earlier work, PCA was used for face classification. Feature extraction was transformed when N. Dalal and B. Triggs introduced Histograms of Oriented Gradients (HoG) for human detection in [6]. PCA uses eigenvectors as features, whereas HoG builds feature vectors from oriented gradients; the mathematical analysis is explained in detail in the article 'Improved Face Recognition Rate Using HOG Features and SVM Classifier' [5], and [2] gives a detailed, intuitive account of how an image matrix is converted into a HoG feature vector. After extracting features, we use a Support Vector Machine with different image sizes and a K-Nearest Neighbors model to predict the class of the face.
</br>
## **Approach:**
We follow a standard image classification approach with two classifiers and compare the results by the accuracy obtained and the time each model takes to train. The pipeline starts by partitioning the dataset (Dataset: fetch_lfw_people) into training and testing sets, then extracts feature vectors for each image in both the training and testing sets, and finishes by classifying the images [5].
</br>
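The pipeline above can be sketched as follows. As a lightweight, download-free stand-in for fetch_lfw_people, this sketch uses scikit-learn's bundled digits dataset, so the dataset and split sizes here are illustrative assumptions, not the exact settings from the notebook:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Partition the dataset into training and testing sets
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Train both classifiers on the same features and compare their accuracy
svm = SVC(kernel='rbf').fit(X_train, y_train)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

svm_acc = accuracy_score(y_test, svm.predict(X_test))
knn_acc = accuracy_score(y_test, knn.predict(X_test))
print(f"SVM accuracy: {svm_acc:.3f}  KNN accuracy: {knn_acc:.3f}")
```

In the notebook the raw pixels are replaced by HoG feature vectors before training, but the split/train/evaluate structure is the same.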
We now describe how the feature vectors are extracted using HoG. The HoG feature descriptor focuses on the shape and edges in the image and, unlike other edge detectors, also on the direction and orientation of those edges. The key idea is to divide the image into small regions and compute a gradient and orientation for each region. Within each region, the gradient at each pixel is obtained by subtracting the values of neighboring pixels vertically and horizontally. The magnitude is then computed from the gradients in the x and y directions using the formula <img src="https://latex.codecogs.com/svg.latex?magnitude=\sqrt{(G_x)^2+(G_y)^2}" title="\Large magnitude=\sqrt{(G_x)^2+(G_y)^2}" />
and the orientation using <img src="https://latex.codecogs.com/svg.latex?tan(\phi)=\frac{G_y}{G_x}" title="\Large tan(\phi)=\frac{G_y}{G_x}" />
[1], where Gx is the gradient in the horizontal direction and Gy is the gradient in the vertical direction [2].
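A minimal NumPy sketch of the per-pixel gradient computation described above, using np.gradient for the neighbor differences. The image here is a synthetic horizontal ramp, chosen so the expected gradients are known:

```python
import numpy as np

# Synthetic 8x8 region: intensity increases left to right (a horizontal ramp)
region = np.tile(np.arange(8, dtype=float), (8, 1))

# Neighbor differences: Gy is the vertical (row) gradient,
# Gx the horizontal (column) gradient
Gy, Gx = np.gradient(region)

# Magnitude and orientation per pixel, following the formulas above
magnitude = np.sqrt(Gx**2 + Gy**2)
orientation = np.degrees(np.arctan2(Gy, Gx))
```

For this ramp, Gx is 1 and Gy is 0 everywhere, so the magnitude is 1 and the orientation is 0 degrees at every pixel.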
</br> The magnitudes and orientations are then accumulated into a histogram with a bin width of 20 degrees, giving 9 elements in the feature vector of each 8x8 image region; these are returned in the bin variable when the inbuilt HoG function is used [2]. With these feature vectors we train our classifiers and evaluate them on the testing data. After testing, we compare the predicted values against the ground truth to obtain the accuracy of each model.
</br>
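The inbuilt HoG function mentioned above is available in scikit-image. A sketch on a random 64x64 image with 9 orientation bins and 8x8 cells follows; the 1x1 block-normalization setting here is a simplifying assumption to make the feature length easy to verify, not necessarily the setting used in the notebook:

```python
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # stand-in for a 64x64 grayscale face crop

# 9 orientation bins per 8x8 cell; with 1x1 blocks the feature length is
# (64/8) * (64/8) * 9 = 576
features, hog_image = hog(
    image,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(1, 1),
    visualize=True,
)
```

The returned `hog_image` is the visualization shown in the result figures below; `features` is the vector fed to the classifiers.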
## **Results:**
HoG implementation on two faces: </br>
Image 1 </br>
![Image 1](https://github.com/yashpatel301/Computer-Vision-Basics/blob/main/Face%20Recognition/Results/HOG_image11.png)

</br>HoG image 1 </br>
![HoG image 1](https://github.com/yashpatel301/Computer-Vision-Basics/blob/main/Face%20Recognition/Results/HOG_image_12.png)

</br> Image 2 </br>
![Image 2](https://github.com/yashpatel301/Computer-Vision-Basics/blob/main/Face%20Recognition/Results/HOG_image_21.png)

</br>HoG image 2 </br>
![HoG image 2](https://github.com/yashpatel301/Computer-Vision-Basics/blob/main/Face%20Recognition/Results/HOG_image_22.png)

</br> Results of the classifiers with different input image sizes (compared by model accuracy and training time):
![Accuracy of Classifiers](https://github.com/yashpatel301/Computer-Vision-Basics/blob/main/Face%20Recognition/Results/Accuracy_SVM_vs_KNN.png)

## **Installation guidelines and platform details:**
</br> This assignment was performed on Google Colab. All libraries are listed at the beginning of the code and import in Colab without any installation; for an Anaconda environment, the required libraries are listed in requirements.txt.

## **References:**
1. https://stackoverflow.com/questions/11256433/how-to-show-math-equations-in-general-githubs-markdownnot-githubs-blog
2. https://www.analyticsvidhya.com/blog/2019/09/feature-engineering-images-introduction-hog-feature-descriptor/
3. https://scikit-learn.org/stable/modules/svm.html
4. https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html
5. Dadi, H. and Pillutla, G., 2021. Improved Face Recognition Rate Using HOG Features and SVM Classifier. [online] www.iosrjournals.org. Available at: <https://www.researchgate.net/profile/Pg-Mohan/publication/305709603_Improved_Face_Recognition_Rate_Using_HOG_Features_and_SVM_Classifier/links/57aedbba08ae95f9d8f11b57/Improved-Face-Recognition-Rate-Using-HOG-Features-and-SVM-Classifier.pdf> [Accessed 17 April 2021].
6. Dalal, N. and Triggs, B., 2021. Histograms of oriented gradients for human detection. [online] Ieeexplore.ieee.org. Available at: <https://ieeexplore.ieee.org/abstract/document/1467360> [Accessed 17 April 2021].
35 changes: 35 additions & 0 deletions Assignment 2 & 3/requirements.txt
@@ -0,0 +1,35 @@
imageio==2.4.1
Keras==2.4.3
Keras-Preprocessing==1.1.2
keras-vis==0.4.1
matplotlib==3.2.2
matplotlib-venn==0.11.6
networkx==2.5.1
nltk==3.2.5
numpy==1.19.5
opencv-contrib-python==4.1.2.30
opencv-python==4.1.2.30
pandas==1.1.5
pandas-datareader==0.9.0
pandas-gbq==0.13.3
pandas-profiling==1.4.1
pickleshare==0.7.5
Pillow==7.1.2
pip-tools==4.5.1
pymc3==3.7
regex==2019.12.20
scikit-image==0.16.2
scikit-learn==0.22.2.post1
scipy==1.4.1
seaborn==0.11.1
sklearn==0.0
sklearn-pandas==1.8.0
tensorboard==2.4.1
tensorboard-plugin-wit==1.8.0
tensorflow==2.4.1
tensorflow-datasets==4.0.1
tensorflow-estimator==2.4.0
tensorflow-gcs-config==2.4.0
tensorflow-hub==0.12.0
tensorflow-metadata==0.29.0
tensorflow-probability==0.12.1
46 changes: 45 additions & 1 deletion Assignment 4 & 5/Readme.md
@@ -1 +1,45 @@
Please add the assignment-4 & 5 details here
## **Introduction:**
</br> Convolutional Neural Networks have advanced considerably when it comes to image classification. Among the many applications of image classification in computer vision, distinguishing disease-infected leaves from healthy ones is an important one: automatic and early diagnosis of a disease and its severity allows effective and timely treatment to be started in advance [4]. The proposed work compares two different Convolutional Neural Network (CNN) architectures for classifying diseases of the citrus leaf. We created our own CNN with 3 convolutional, 3 max-pooling, and 2 dense layers [3], and added a dropout layer to overcome the overfitting we observed. This work also comprises experiments with hyper-parameters such as the learning rate, the dropout percentage, and the activation function of the output layer, and compares their performance.
</br>

## **Approach:**
The approach consists of the CNN architecture used and the experiments we carried out with different hyper-parameter values.
#### ***Architecture***
Three convolutional and three max-pooling layers are stacked one after another. The resulting higher-level features are flattened and passed through dense layers, with one dropout layer placed between the two dense layers.
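A sketch of the architecture described above, in Keras. The filter counts, kernel sizes, input shape, number of classes, and dropout rate here are illustrative assumptions, not the exact values from the notebook:

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # assumption: e.g. healthy plus three citrus diseases

# 3 Conv + 3 MaxPooling layers, then Flatten, Dense, Dropout, Dense
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),  # dropout layer between the two dense layers
    layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Each Conv/MaxPooling pair halves the spatial resolution while increasing the number of feature maps, which is the standard pattern for extracting progressively higher-level features.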
#### ***Experiments with hyper-parameters***
ReLU activation is used for all layers except the output layer, where two activation functions were compared: sigmoid and softmax. The sigmoid function maps any value in (−∞, ∞) to the range 0-1 and gives a predictive score for classification. Softmax likewise maps large values into the range 0-1, but as a probabilistic function that distributes similarity across the classes as a probability distribution, which is why it is commonly used at the output layer. </br>
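The two output activations can be sketched in NumPy; note that sigmoid squashes each logit independently, while softmax normalizes the whole vector into a probability distribution:

```python
import numpy as np

def sigmoid(z):
    # Maps any real value into (0, 1), independently per element
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Maps a vector of logits to a probability distribution over classes
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, -1.0])
probs = softmax(logits)
```

For multi-class problems softmax is the natural choice, since its outputs sum to 1 across the classes; sigmoid treats each output unit as an independent yes/no score.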
We used a list of learning rates, starting at 10^-4 and increasing by factors of 10 up to 1, giving five different values. Mapping of alpha values to indices: [1 : 0.0001, 2 : 0.001, 3 : 0.01, 4 : 0.1, 5 : 1]. </br>
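The learning-rate sweep above can be sketched as follows; only the optimizer construction per alpha is shown, and the choice of Adam is an assumption (a common Keras default), not a statement about the notebook:

```python
from tensorflow.keras.optimizers import Adam

# Five learning rates, each step multiplying the previous one by 10,
# matching the index mapping [1 : 0.0001, ..., 5 : 1]
alphas = [0.0001, 0.001, 0.01, 0.1, 1]

# One optimizer per alpha; in the experiment the same model architecture
# is recompiled and retrained once per learning rate
optimizers = {alpha: Adam(learning_rate=alpha) for alpha in alphas}
```

The accuracy and validation-accuracy curves in the Results section are then plotted once per alpha, which is what makes too-large rates (divergence) and too-small rates (slow learning) visible.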
In the accuracy plots comparing training and validation accuracy, the two situations we want to avoid are overfitting and underfitting. In the first scenario, the training accuracy is noticeably higher than the validation accuracy and the validation accuracy improves only slowly, which suggests an overfitting problem.</br>

## **Results:**
Results of experimenting with the Sigmoid and Softmax activation functions.
</br> Sigmoid function </br>
![Sigmoid function](https://github.com/yashpatel301/Computer-Vision-Basics/blob/main/Citrus-leaves-Classification/Results/LOSS_SIGMOID.png)

</br> Softmax function </br>
![Softmax function](https://github.com/yashpatel301/Computer-Vision-Basics/blob/main/Citrus-leaves-Classification/Results/LOSS_SOFTMAX.png)

</br>Results of experimenting with different learning rates.
</br> Accuracy </br>
![Accuracy](https://github.com/yashpatel301/Computer-Vision-Basics/blob/main/Citrus-leaves-Classification/Results/LR_accuracy.png)

</br> Validation Accuracy </br>
![Validation Accuracy](https://github.com/yashpatel301/Computer-Vision-Basics/blob/main/Citrus-leaves-Classification/Results/LR_val_acc.png)

</br>Results of experimenting with Dropout percentage.
</br> Dropout 30% </br>
![Accuracy](https://github.com/yashpatel301/Computer-Vision-Basics/blob/main/Citrus-leaves-Classification/Results/Dropout_30.png)

</br> Dropout 50% </br>
![Validation Accuracy](https://github.com/yashpatel301/Computer-Vision-Basics/blob/main/Citrus-leaves-Classification/Results/Dropout_50.png)

## **Installation guidelines and platform details:**
This assignment was performed on Google Colab. All libraries are listed at the beginning of the code and import in Colab without any installation; for an Anaconda environment, the required libraries are listed in requirements.txt.

## **References:**
1. https://keras.io/api/preprocessing/image/
2. https://vijayabhaskar96.medium.com/tutorial-image-classification-with-keras-flow-from-directory-and-generators-95f75ebe5720
3. https://www.geeksforgeeks.org/python-image-classification-using-keras/
4. Singh, U., Chouhan, S., Jain, S. and Jain, S., 2021. Multilayer Convolution Neural Network for the Classification of Mango Leaves Infected by Anthracnose Disease. [online] Ieeexplore.ieee.org. Available at: <https://ieeexplore.ieee.org/document/8675730?denied=> [Accessed 17 April 2021].
5. Afifi, A., Alhumam, A. and Abdelwahab, A., 2021. Convolutional Neural Network for Automatic Identification of Plant Diseases with Limited Data. Plants, 10, 28. https://dx.doi.org/10.3390/plants10010028
6. https://www.sciencedirect.com/science/article/abs/pii/S0168169920302258
1,192 changes: 1,192 additions & 0 deletions Assignment 4 & 5/classification_of_citrus_leaves.ipynb

Large diffs are not rendered by default.

35 changes: 35 additions & 0 deletions Assignment 4 & 5/requirements.txt
@@ -0,0 +1,35 @@
imageio==2.4.1
Keras==2.4.3
Keras-Preprocessing==1.1.2
keras-vis==0.4.1
matplotlib==3.2.2
matplotlib-venn==0.11.6
networkx==2.5.1
nltk==3.2.5
numpy==1.19.5
opencv-contrib-python==4.1.2.30
opencv-python==4.1.2.30
pandas==1.1.5
pandas-datareader==0.9.0
pandas-gbq==0.13.3
pandas-profiling==1.4.1
pickleshare==0.7.5
Pillow==7.1.2
pip-tools==4.5.1
pymc3==3.7
regex==2019.12.20
scikit-image==0.16.2
scikit-learn==0.22.2.post1
scipy==1.4.1
seaborn==0.11.1
sklearn==0.0
sklearn-pandas==1.8.0
tensorboard==2.4.1
tensorboard-plugin-wit==1.8.0
tensorflow==2.4.1
tensorflow-datasets==4.0.1
tensorflow-estimator==2.4.0
tensorflow-gcs-config==2.4.0
tensorflow-hub==0.12.0
tensorflow-metadata==0.29.0
tensorflow-probability==0.12.1