Tuesday 26 September 2017

Motion Gesture Recognition within 200 lines using OpenCV + Python

Hello Readers. In this tutorial, I will be teaching you gesture recognition in OpenCV + Python using only image processing, with no machine learning or neural networks.

What is Gesture Recognition

Gesture recognition is the mathematical interpretation of a human motion by a computing device. There are many algorithms out on the Internet that give very good and accurate results on gesture recognition, but here we are not going to see them. I developed a very simple and naive algorithm to recognize gestures that are made up of straight lines. Let's see....

Give me the code already...

Ok. I got you. Here is the code https://github.com/EvilPort2/SimpleGestureRecognition.

Why no Machine Learning or Neural Net?

The answer to this question is very simple. I do not know much about them. Though I have some knowledge about machine learning, I have little to no knowledge about neural networks.

Requirements

  1. A computer with a good camera
  2. A yellow (though any other colour can be used) piece of paper to be worn on a finger (for image segmentation)
  3. OpenCV for Python 3
  4. PyAutoGui for Python 3
  5. Python 3
  6. Text Editor like Sublime Text 3 or Atom
  7. A little bit of knowledge in Maths

Steps we are taking

Since I am using only image processing for this project, I will be using only the direction of movement to determine the gesture.
  1. Take one frame at a time and convert it from RGB colour space to HSV colour space for better yellow colour segmentation.
  2. Use a mask for yellow colour.
  3. Blur and threshold the mask.
  4. If a yellow colour is found and it crosses a reasonable area threshold, we start to create a gesture.
  5. The direction of movement of the yellow cap is calculated by taking the difference between the old centre and the new centre of the yellow colour after every 5th iteration or frame.
  6. Take the directions and store them in a list until the yellow cap disappears from the frame.
  7. Process the created direction list; the processed direction list is used to take a certain action, like a keyboard shortcut.
 

Let's get our hands dirty...

gesture_action.py

Let us begin with all the important imports and a few global variables
import cv2
import numpy as np
from collections import deque
import pyautogui as gui
from gesture_api import do_gesture_action

cam = cv2.VideoCapture(0)  # Camera object
yellow_lower = np.array([7, 96, 85])  # HSV yellow lower
yellow_upper = np.array([255, 255, 255])  # HSV yellow upper
screen_width, screen_height = gui.size()
camx, camy = 360, 240  # Resize resolution
buff = 128
line_pts = deque(maxlen=buff)  # Deque that stores the recent centre points of the yellow patch

The gesture_api is a different file that I created; do_gesture_action is a function in that file. The yellow_lower and yellow_upper values can be determined by using this python program, so in your case these values might be different under different lighting conditions. The easiest way to use it is to put the yellow paper in front of the camera, then slowly increase the lower parameters (H_MIN, V_MIN, S_MIN) one by one, and then slowly decrease the upper parameters (H_MAX, V_MAX, S_MAX). When the adjusting is done you will find that only the yellow paper has a corresponding white patch and the rest of the image is dark. Now let's get into the main function and some of its local variables
def gesture_action():
    centerx, centery = 0, 0  # Present location of the centre of the yellow patch
    old_centerx, old_centery = 0, 0  # Previous location of the centre of the yellow patch
    area1 = 0  # Area of the yellow patch
    c = 0  # Frame counter; the centre is compared every 5th frame
    flag_do_gesture = 0  # Set to 1 once a completed gesture has been acted upon
    flag0 = True  # True while no yellow object has appeared in the frame
    created_gesture_hand1 = []  # Stores the directions of the movement

With that out of the way, we can now extract each frame and perform the required operations. These are the steps we will be doing
    1. Get a frame
    2. Flip and resize the image to 360*240 for faster processing
    3. Convert the frame from RGB colour space to HSV colour space
    4. Now we will be using the yellow colour mask to segment the yellow colour
    5. Every camera has some flaws that introduce noise into the frame, so we need to reduce that noise; the easiest way to do so is to heavily blur the frame.
    6. Now if we set the colour threshold to any colour above black, we can get the almost exact shape of the yellow patch.
    7. Take the contour of the thresholded frame.
    8. Repeat the above steps for every frame

while True:
    _, img = cam.read()
    # Resize for faster processing. Flip for better orientation
    img = cv2.flip(img, 1)
    img = cv2.resize(img, (camx, camy))
    # Convert to HSV for better colour segmentation
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Mask for yellow colour
    mask = cv2.inRange(imgHSV, yellow_lower, yellow_upper)
    # Blurring to reduce noise
    blur = cv2.medianBlur(mask, 15)
    blur = cv2.GaussianBlur(blur, (5, 5), 0)
    # Thresholding
    _, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imshow("Thresh", thresh)
    _, contours, _ = cv2.findContours(thresh.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
After getting the contours we can have 2 cases-
  1. If the number of contours is greater than zero, there are yellow colored objects in the frame.
  2. If the number of contours is zero, there are no yellow colored objects in the frame.

Case 1- Yellow colored objects in the frame

  1. Assign 0 to flag_do_gesture.
  2. Take the contour that has the maximum area. Let us call this max_contour.
  3. Find a minimum area rectangle that surrounds the max_contour.
  4. Take the width and height of the rectangle.
  5. Find the area of the rectangle by width*height.
  6. If the area crosses a reasonable threshold then start making a gesture. I found the threshold by experimenting with different values; in my case it was 450.
  7. If the area of the contour crosses the threshold then find the center of the yellow object.
  8. Draw a rectangular box around it.
  9. Draw a dot at the center.
  10. Append the center to the deque line_pts.
  11. Update the center after every 5th iteration or frame.
  12. At the 5th iteration take the difference between the old center (x1, y1) and new center (x2, y2). I have used diffx = (x2-x1) and diffy = (y2-y1).
  13. Hence values of diffx and diffy gives us the direction of movement.
  14. If the flag0 is False then append the direction to the created_gesture_hand1 list.
  15. Draw a line for all the points in line_pts
  16. Assign False to flag0.
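Steps 12-13 boil down to a small pure function. Here is only a sketch with a single dead-zone threshold, not the two-tier 15/25 thresholds used in the actual code; remember that in image coordinates y grows downward, so a positive diffy means South.

```python
def classify_direction(diffx, diffy, dead=10):
    """Map a centre displacement to a compass direction string."""
    if abs(diffx) <= dead and abs(diffy) <= dead:
        return "St"  # stationary
    vert = "S" if diffy > dead else "N" if diffy < -dead else ""
    horiz = "E" if diffx > dead else "W" if diffx < -dead else ""
    return vert + horiz

print(classify_direction(40, 0))   # E
print(classify_direction(0, -40))  # N
print(classify_direction(30, 30))  # SE
print(classify_direction(3, 2))    # St
```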

Case 2- No yellow colored objects in the frame

  1. Empty the deque line_pts.
  2. Process the created_gesture_hand1 by removing the 'St' and the consecutive directions. Let us call it processed_gesture_hand1.
  3. If flag_do_gesture is 0 and processed_gesture_hand1 is not empty, take the action corresponding to that particular gesture.
  4. Assign 1 to flag_do_gesture. This ensures the gesture action is run only once and not repeatedly.
  5. Empty created_gesture_hand1.
  6. Assign True to flag0.
Enough said..... In code it looks something like this
if len(contours) == 0:  # Completion of a gesture
    line_pts = deque(maxlen=buff)  # Empty the deque
    processed_gesture_hand1 = tuple(process_created_gesture(created_gesture_hand1))
    if flag_do_gesture == 0:  # Make sure the gesture runs only once and not repeatedly
        if processed_gesture_hand1 != ():
            do_gesture_action(processed_gesture_hand1)
        flag_do_gesture = 1
        print(processed_gesture_hand1)  # For debugging purposes
    created_gesture_hand1 = []
    flag0 = True
else:
    flag_do_gesture = 0
    max_contour = max(contours, key=cv2.contourArea)
    rect1 = cv2.minAreaRect(max_contour)
    (w, h) = rect1[1]
    area1 = w * h
    if area1 > 450:
        center1 = list(rect1[0])
        box = cv2.boxPoints(rect1)  # To draw a rectangle
        box = np.int0(box)
        cv2.drawContours(img, [box], 0, (0, 0, 255), 2)
        centerx = center1[0] = int(center1[0])  # Centre of the rectangle
        centery = center1[1] = int(center1[1])
        cv2.circle(img, (centerx, centery), 2, (0, 255, 0), 2)
        line_pts.appendleft(tuple(center1))
        if c == 0:
            old_centerx = centerx
            old_centery = centery
        c += 1
        diffx, diffy = 0, 0
        if c > 5:  # Check the new centre after every 5 iterations
            diffx = centerx - old_centerx
            diffy = centery - old_centery
            c = 0
        if flag0 == False:
            # The difference between the old centre and the new centre
            # determines the direction of the movement
            if abs(diffx) <= 10 and abs(diffy) <= 10:
                created_gesture_hand1.append("St")
            elif diffx > 15 and abs(diffy) <= 15:
                created_gesture_hand1.append("E")
            elif diffx < -15 and abs(diffy) <= 15:
                created_gesture_hand1.append("W")
            elif abs(diffx) <= 15 and diffy < -15:
                created_gesture_hand1.append("N")
            elif abs(diffx) <= 15 and diffy > 15:
                created_gesture_hand1.append("S")
            elif diffx > 25 and diffy > 25:
                created_gesture_hand1.append("SE")
            elif diffx < -25 and diffy > 25:
                created_gesture_hand1.append("SW")
            elif diffx > 25 and diffy < -25:
                created_gesture_hand1.append("NE")
            elif diffx < -25 and diffy < -25:
                created_gesture_hand1.append("NW")
        for i in range(1, len(line_pts)):
            if line_pts[i - 1] is None or line_pts[i] is None:
                continue
            cv2.line(img, line_pts[i - 1], line_pts[i], (0, 255, 0), 2)
        flag0 = False

The process_created_gesture function looks like this
def process_created_gesture(created_gesture):
    """Remove all the "St" directions and remove duplicate directions
    if they occur consecutively."""
    if created_gesture != []:
        for i in range(created_gesture.count("St")):
            created_gesture.remove("St")
        for j in range(len(created_gesture)):
            for i in range(len(created_gesture) - 1):
                if created_gesture[i] == created_gesture[i + 1]:
                    created_gesture.remove(created_gesture[i + 1])
                    break
    return created_gesture
So the whole file gesture_action.py looks like this.
import cv2
import numpy as np
import pyautogui as gui
from gesture_api import do_gesture_action
from collections import deque

cam = cv2.VideoCapture(0)
yellow_lower = np.array([7, 96, 85])  # HSV yellow lower
yellow_upper = np.array([255, 255, 255])  # HSV yellow upper
screen_width, screen_height = gui.size()
camx, camy = 480, 360
buff = 128
line_pts = deque(maxlen=buff)

def process_created_gesture(created_gesture):
    """Remove all the "St" directions and remove duplicate directions
    if they occur consecutively."""
    if created_gesture != []:
        for i in range(created_gesture.count("St")):
            created_gesture.remove("St")
        for j in range(len(created_gesture)):
            for i in range(len(created_gesture) - 1):
                if created_gesture[i] == created_gesture[i + 1]:
                    created_gesture.remove(created_gesture[i + 1])
                    break
    return created_gesture

def gesture_action():
    centerx, centery = 0, 0
    old_centerx, old_centery = 0, 0
    area1 = 0
    c = 0
    flag_do_gesture = 0
    flag0 = True
    created_gesture_hand1 = []
    while True:
        _, img = cam.read()
        # Resize for faster processing. Flip for better orientation
        img = cv2.flip(img, 1)
        img = cv2.resize(img, (camx, camy))
        # Convert to HSV for better colour segmentation
        imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        # Mask for yellow colour
        mask = cv2.inRange(imgHSV, yellow_lower, yellow_upper)
        # Blurring to reduce noise
        blur = cv2.medianBlur(mask, 15)
        blur = cv2.GaussianBlur(blur, (5, 5), 0)
        # Thresholding
        _, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        cv2.imshow("Thresh", thresh)
        _, contours, _ = cv2.findContours(thresh.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
        w, h = 0, 0
        if len(contours) == 0:  # Completion of a gesture
            line_pts = deque(maxlen=buff)  # Empty the deque
            processed_gesture_hand1 = tuple(process_created_gesture(created_gesture_hand1))
            if flag_do_gesture == 0:  # Make sure the gesture runs only once and not repeatedly
                if processed_gesture_hand1 != ():
                    do_gesture_action(processed_gesture_hand1)
                flag_do_gesture = 1
                print(processed_gesture_hand1)  # For debugging purposes
            created_gesture_hand1 = []
            flag0 = True
        else:
            flag_do_gesture = 0
            max_contour = max(contours, key=cv2.contourArea)
            rect1 = cv2.minAreaRect(max_contour)
            (w, h) = rect1[1]
            area1 = w * h
            if area1 > 450:
                center1 = list(rect1[0])
                box = cv2.boxPoints(rect1)  # To draw a rectangle
                box = np.int0(box)
                cv2.drawContours(img, [box], 0, (0, 0, 255), 2)
                centerx = center1[0] = int(center1[0])  # Centre of the rectangle
                centery = center1[1] = int(center1[1])
                cv2.circle(img, (centerx, centery), 2, (0, 255, 0), 2)
                line_pts.appendleft(tuple(center1))
                if c == 0:
                    old_centerx = centerx
                    old_centery = centery
                c += 1
                diffx, diffy = 0, 0
                if c > 5:  # Check the new centre after every 5 iterations
                    diffx = centerx - old_centerx
                    diffy = centery - old_centery
                    c = 0
                if flag0 == False:
                    # The difference between the old centre and the new centre
                    # determines the direction of the movement
                    if abs(diffx) <= 10 and abs(diffy) <= 10:
                        created_gesture_hand1.append("St")
                    elif diffx > 15 and abs(diffy) <= 15:
                        created_gesture_hand1.append("E")
                    elif diffx < -15 and abs(diffy) <= 15:
                        created_gesture_hand1.append("W")
                    elif abs(diffx) <= 15 and diffy < -15:
                        created_gesture_hand1.append("N")
                    elif abs(diffx) <= 15 and diffy > 15:
                        created_gesture_hand1.append("S")
                    elif diffx > 25 and diffy > 25:
                        created_gesture_hand1.append("SE")
                    elif diffx < -25 and diffy > 25:
                        created_gesture_hand1.append("SW")
                    elif diffx > 25 and diffy < -25:
                        created_gesture_hand1.append("NE")
                    elif diffx < -25 and diffy < -25:
                        created_gesture_hand1.append("NW")
                for i in range(1, len(line_pts)):
                    if line_pts[i - 1] is None or line_pts[i] is None:
                        continue
                    cv2.line(img, line_pts[i - 1], line_pts[i], (0, 255, 0), 2)
                flag0 = False
        cv2.imshow("IMG", img)
        if cv2.waitKey(1) == ord('q'):
            break
    cv2.destroyAllWindows()
    cam.release()

gesture_action()

gesture_api.py

This file contains nothing but the gesture directions and the keyboard shortcuts they need to emulate. So a square can be made using directions like (North, West, South, East). Now let's say that when a square is made we need to emulate the keyboard shortcut winkey (for Windows) or altleft+f1 (for KDE), and so on. We can have 2 cases for the keyboard shortcut emulation.
  • Only one key press needs to be emulated e.g winkey
  • More than one key press needs to be emulated e.g winkey + l, alt + f4 etc.
For the first case, we need to just press the key. For the second case, we need to hold down all the keys except the last one, press the last key, and then release the held keys. In code this can be accomplished by-
import pyautogui as gui
import os

GEST_START = ("N", "E", "S", "W")
GEST_CLOSE = ("SE", "N", "SW")
GEST_COPY = ("W", "S", "E")
GEST_PASTE = ("SE", "NE")
GEST_CUT = ("SW", "N", "SE")
GEST_ALT_TAB = ("SE", "SW")
GEST_ALT_SHIFT_TAB = ("SW", "SE")
GEST_MAXIMISE = ("N",)
GEST_MINIMISE = ("S",)
GEST_LOCK = ("S", "E")
GEST_TASK_MANAGER = ("E", "W", "S")
GEST_NEW_FILE = ("N", "SE", "N")
GEST_SELECT_ALL = ("NE", "SE", "NW", "W")

# Gesture set mapping the directions to the key press actions
GESTURES = {GEST_CUT: ('ctrlleft', 'x'),
            GEST_CLOSE: ('altleft', 'f4'),
            GEST_ALT_SHIFT_TAB: ('altleft', 'shiftleft', 'tab'),
            GEST_PASTE: ('ctrlleft', 'v'),
            GEST_ALT_TAB: ('altleft', 'tab'),
            GEST_COPY: ('ctrlleft', 'c'),
            GEST_NEW_FILE: ('ctrlleft', 'n'),
            GEST_SELECT_ALL: ('ctrlleft', 'a')}

# Windows PCs
if os.name == 'nt':
    GESTURES[GEST_START] = ('winleft',)
    GESTURES[GEST_LOCK] = ('winleft', 'l')
    GESTURES[GEST_TASK_MANAGER] = ('ctrlleft', 'shiftleft', 'esc')
# Linux using KDE
else:
    GESTURES[GEST_START] = ('altleft', 'f1')
    GESTURES[GEST_LOCK] = ('ctrlleft', 'altleft', 'l')
    GESTURES[GEST_TASK_MANAGER] = ('ctrlleft', 'esc')

def do_gesture_action(gesture):
    if gesture in GESTURES.keys():
        keys = list(GESTURES[gesture])
        last_key = keys.pop()  # Get the last key press
        if len(keys) >= 1:  # Case 2
            for key in keys:  # Hold all the keys except the last key
                gui.keyDown(key)
        gui.press(last_key)  # Press the last key; for case 1 this is the only key
        if len(keys) >= 1:
            keys.reverse()  # Release the keys in reverse order
            for key in keys:
                gui.keyUp(key)
 

Conclusion

Yes. And that's about it. Using only 2 files and only image processing, we have successfully implemented a very simple and naive gesture recognition system, and that too within only 200 lines of code. Get the full code here.
Bye.....

Sunday 24 September 2017

Spam Detection using Machine Learning in Python Part 3 - Training and Testing

Welcome back to Part 3 of the tutorial. In this part we will be creating our feature set and training and testing our models. If you have not read the previous part, look here. Part 2 is really important.

Let's jump straight into today's part.

Step 4:- Creating the feature data set

From the previous part we have seen that the top 2000 words are our features. But the features alone won't be enough; every feature needs to have some value for a particular sentence. Since our features are the top 2000 words from the bag of words, each feature can have one of 2 values. They are:-
  • True - If the feature is present in the sentence.
  • False - If the feature is not present in the sentence.
To do this specific task we create a separate function called find_feature that takes the features variable word_features and a message as input. The variable called feature is a dictionary where each word feature is a key and the presence or absence of that feature in the particular sentence is its value.

def find_feature(word_features, message):  # Find the features of a message
    feature = {}
    for word in word_features:
        feature[word] = word in message.lower()
    return feature
Let us call this function repeatedly for every message in the all_messages variable to create our feature set.

random.shuffle(all_messages)
random.shuffle(all_messages)
random.shuffle(all_messages)

print("\nCreating feature set....")
featureset = [(find_feature(word_features, message), category) for (message, category) in all_messages]
print("Feature set created.")

trainingset = featureset[:int(len(featureset)*3/4)]
testingset = featureset[int(len(featureset)*3/4):]

print("\nLength of feature set ", len(featureset))
print("Length of training set ", len(trainingset))
print("Length of testing set ", len(testingset))
What I did here is take all_messages and give it a good shuffle to remove any bias. Then I called the find_feature function for every message in all_messages. Then I split the featureset variable into 2 parts: the first 3/4th is used to train our models and the remaining 1/4th is used to test them. So what does our feature set look like? Something like this
Here S1, S2, ..., Sn are the messages or sentences; A1, A2, A3, ..., A2000 are the features; and Result is the classification of the sentence or message S as stored in all_messages.
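As a toy version of that table (three hypothetical word features instead of 2000), each row of the feature set is simply the dict that find_feature returns, paired with the message's label:

```python
def find_feature(word_features, message):  # same logic as above
    feature = {}
    for word in word_features:
        feature[word] = word in message.lower()
    return feature

word_features = ["free", "win", "meet"]  # stand-ins for A1..A2000
row = (find_feature(word_features, "WIN a FREE prize"), "spam")
print(row)  # ({'free': True, 'win': True, 'meet': False}, 'spam')
```

Note that the membership test is a substring check, so "win" would also fire on "winner"; that is the behaviour of the tutorial's code as well.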

Step 5:- Training and Testing

Now that we have our training and testing set we can now train our models. But presently our program looks something like this-

import nltk
import random
from nltk.corpus import stopwords
import string

def find_feature(word_features, message):  # Find the features of a message
    feature = {}
    for word in word_features:
        feature[word] = word in message.lower()
    return feature

with open('SMSSpamCollection') as f:
    messages = f.read().split('\n')

print("Creating bag of words....")
all_messages = []  # Stores all the messages along with their classification
all_words = []  # Bag of words
for message in messages:
    if message.split('\t')[0] == "spam":
        all_messages.append([message.split('\t')[1], "spam"])
    else:
        all_messages.append([message.split('\t')[1], "ham"])
    for s in string.punctuation:  # Remove punctuation
        if s in message:
            message = message.replace(s, " ")
    stop = stopwords.words('english')
    for word in message.split(" "):  # Remove stopwords
        if not word in stop:
            all_words.append(word.lower())
print("Bag of words created.")

random.shuffle(all_messages)
random.shuffle(all_messages)
random.shuffle(all_messages)

all_words = nltk.FreqDist(all_words)
word_features = list(all_words.keys())[:2000]  # Top 2000 words are our features

print("\nCreating feature set....")
featureset = [(find_feature(word_features, message), category) for (message, category) in all_messages]
print("Feature set created.")
trainingset = featureset[:int(len(featureset)*3/4)]
testingset = featureset[int(len(featureset)*3/4):]
print("\nLength of feature set ", len(featureset))
print("Length of training set ", len(trainingset))
print("Length of testing set ", len(testingset))
Now that looks dirty. Damn ugly. Let us put it into a function for better readability

import nltk
import random
from nltk.corpus import stopwords
import string

def find_feature(word_features, message):  # Find the features of a message
    feature = {}
    for word in word_features:
        feature[word] = word in message.lower()
    return feature

def create_training_testing():
    with open('SMSSpamCollection') as f:
        messages = f.read().split('\n')
    print("Creating bag of words....")
    all_messages = []  # Stores all the messages along with their classification
    all_words = []  # Bag of words
    for message in messages:
        if message.split('\t')[0] == "spam":
            all_messages.append([message.split('\t')[1], "spam"])
        else:
            all_messages.append([message.split('\t')[1], "ham"])
        for s in string.punctuation:  # Remove punctuation
            if s in message:
                message = message.replace(s, " ")
        stop = stopwords.words('english')
        for word in message.split(" "):  # Remove stopwords
            if not word in stop:
                all_words.append(word.lower())
    print("Bag of words created.")
    random.shuffle(all_messages)
    random.shuffle(all_messages)
    random.shuffle(all_messages)
    all_words = nltk.FreqDist(all_words)
    word_features = list(all_words.keys())[:2000]  # Top 2000 words are our features
    print("\nCreating feature set....")
    featureset = [(find_feature(word_features, message), category) for (message, category) in all_messages]
    print("Feature set created.")
    trainingset = featureset[:int(len(featureset)*3/4)]
    testingset = featureset[int(len(featureset)*3/4):]
    print("\nLength of feature set ", len(featureset))
    print("Length of training set ", len(trainingset))
    print("Length of testing set ", len(testingset))
    return word_features, featureset, trainingset, testingset
With that out of the way we can now create our models. In this project we will be using 5 different algorithms to train 5 different models. The algorithms are-
  • Naive Bayes
  • Multinomial Naive Bayes
  • Bernoulli Naive Bayes
  • Stochastic Gradient Descent
  • Logistic Regression

Oh no!!! Algorithms.... Maths..... I think I am done with this tutorial...... 

Do not worry about the algorithms. You do not have to write them from scratch on your own. Scikit-Learn provides us with a large number of algorithms for data science and data mining. So it is not necessary for you to know the algorithms, and it is very easy to use them; but having some knowledge of them is definitely helpful.
Enough said... Let us train the five models using our algorithm and check their accuracy against the testing set.
def create_mnb_classifier(trainingset, testingset):  # Multinomial Naive Bayes classifier
    print("\nMultinomial Naive Bayes classifier is being trained and created...")
    MNB_classifier = SklearnClassifier(MultinomialNB())
    MNB_classifier.train(trainingset)
    accuracy = nltk.classify.accuracy(MNB_classifier, testingset)*100
    print("MultinomialNB Classifier accuracy = " + str(accuracy))
    return MNB_classifier

def create_bnb_classifier(trainingset, testingset):  # Bernoulli Naive Bayes classifier
    print("\nBernoulli Naive Bayes classifier is being trained and created...")
    BNB_classifier = SklearnClassifier(BernoulliNB())
    BNB_classifier.train(trainingset)
    accuracy = nltk.classify.accuracy(BNB_classifier, testingset)*100
    print("BernoulliNB accuracy percent = " + str(accuracy))
    return BNB_classifier

def create_logistic_regression_classifier(trainingset, testingset):  # Logistic Regression classifier
    print("\nLogistic Regression classifier is being trained and created...")
    LogisticRegression_classifier = SklearnClassifier(LogisticRegression())
    LogisticRegression_classifier.train(trainingset)
    print("Logistic Regression classifier accuracy = " + str((nltk.classify.accuracy(LogisticRegression_classifier, testingset))*100))
    return LogisticRegression_classifier

def create_sgd_classifier(trainingset, testingset):  # Stochastic Gradient Descent classifier
    print("\nSGD classifier is being trained and created...")
    SGDClassifier_classifier = SklearnClassifier(SGDClassifier())
    SGDClassifier_classifier.train(trainingset)
    print("SGD Classifier classifier accuracy = " + str((nltk.classify.accuracy(SGDClassifier_classifier, testingset))*100))
    return SGDClassifier_classifier

def create_nb_classifier(trainingset, testingset):  # Naive Bayes classifier
    print("\nNaive Bayes classifier is being trained and created...")
    NB_classifier = nltk.NaiveBayesClassifier.train(trainingset)
    accuracy = nltk.classify.accuracy(NB_classifier, testingset)*100
    print("Naive Bayes Classifier accuracy = " + str(accuracy))
    NB_classifier.show_most_informative_features(20)
    return NB_classifier
See I told you it is that easy.

Now let us create and call a main function that integrates the above functions and calls them systematically. To do that-
def main():
    """This function shows how to use this program. The models can be
    pickled if wanted or needed. I have used 4 mails to check if my
    models are working correctly."""
    word_features, featureset, trainingset, testingset = create_training_testing()
    NB_classifier = create_nb_classifier(trainingset, testingset)
    MNB_classifier = create_mnb_classifier(trainingset, testingset)
    BNB_classifier = create_bnb_classifier(trainingset, testingset)
    LR_classifier = create_logistic_regression_classifier(trainingset, testingset)
    SGD_classifier = create_sgd_classifier(trainingset, testingset)
    mails = ["Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's",
             "Hello Ward, It has been almost 3 months since i have written you. Hope you are well.",
             "FREE FREE FREE Get a chance to win 10000 $ for free. Also get a chance to win a car and a house",
             "Hello my friend, How are you? It is has been 3 months since we talked. Hope you are well. Can we meet at my place?"]
    print("\n")
    print("Naive Bayes")
    print("-----------")
    for mail in mails:
        feature = find_feature(word_features, mail)
        print(NB_classifier.classify(feature))
    print("\n")
    print("Multinomial Naive Bayes")
    print("-----------")
    for mail in mails:
        feature = find_feature(word_features, mail)
        print(MNB_classifier.classify(feature))
    print("\n")
    print("Bernoulli Naive Bayes")
    print("-----------")
    for mail in mails:
        feature = find_feature(word_features, mail)
        print(BNB_classifier.classify(feature))
    print("\n")
    print("Logistic Regression")
    print("-----------")
    for mail in mails:
        feature = find_feature(word_features, mail)
        print(LR_classifier.classify(feature))
    print("\n")
    print("Stochastic Gradient Descent")
    print("-----------")
    for mail in mails:
        feature = find_feature(word_features, mail)
        print(SGD_classifier.classify(feature))

main()

What I am doing here is taking 4 mails/messages and checking which of them are spam and which are ham. To do that-

  1. Take each mail at a time.
  2. Find the feature set for it.
  3. Use the feature set with different classifiers to see what the message is i.e spam or ham.
So the whole program looks something like this now-

import nltk
import random
import os
from nltk.corpus import stopwords
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.naive_bayes import MultinomialNB, BernoulliNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
import string
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)

# For clearing the screen
if os.name == 'nt':
    clear_screen = "cls"
else:
    clear_screen = "clear"
os.system(clear_screen)

def find_feature(word_features, message):  # Find the features of a message
    feature = {}
    for word in word_features:
        feature[word] = word in message.lower()
    return feature

def create_mnb_classifier(trainingset, testingset):  # Multinomial Naive Bayes classifier
    print("\nMultinomial Naive Bayes classifier is being trained and created...")
    MNB_classifier = SklearnClassifier(MultinomialNB())
    MNB_classifier.train(trainingset)
    accuracy = nltk.classify.accuracy(MNB_classifier, testingset)*100
    print("MultinomialNB Classifier accuracy = " + str(accuracy))
    return MNB_classifier

def create_bnb_classifier(trainingset, testingset):  # Bernoulli Naive Bayes classifier
    print("\nBernoulli Naive Bayes classifier is being trained and created...")
    BNB_classifier = SklearnClassifier(BernoulliNB())
    BNB_classifier.train(trainingset)
    accuracy = nltk.classify.accuracy(BNB_classifier, testingset)*100
    print("BernoulliNB accuracy percent = " + str(accuracy))
    return BNB_classifier

def create_logistic_regression_classifier(trainingset, testingset):  # Logistic Regression classifier
    print("\nLogistic Regression classifier is being trained and created...")
    LogisticRegression_classifier = SklearnClassifier(LogisticRegression())
    LogisticRegression_classifier.train(trainingset)
    print("Logistic Regression classifier accuracy = " + str((nltk.classify.accuracy(LogisticRegression_classifier, testingset))*100))
    return LogisticRegression_classifier

def create_sgd_classifier(trainingset, testingset):  # Stochastic Gradient Descent classifier
    print("\nSGD classifier is being trained and created...")
    SGDClassifier_classifier = SklearnClassifier(SGDClassifier())
    SGDClassifier_classifier.train(trainingset)
    print("SGD Classifier classifier accuracy = " + str((nltk.classify.accuracy(SGDClassifier_classifier, testingset))*100))
    return SGDClassifier_classifier

def create_nb_classifier(trainingset, testingset):  # Naive Bayes classifier
    print("\nNaive Bayes classifier is being trained and created...")
    NB_classifier = nltk.NaiveBayesClassifier.train(trainingset)
    accuracy = nltk.classify.accuracy(NB_classifier, testingset)*100
    print("Naive Bayes Classifier accuracy = " + str(accuracy))
    NB_classifier.show_most_informative_features(20)
    return NB_classifier

def create_training_testing():
    """Function that creates the feature set, training set, and testing set."""
    with open('SMSSpamCollection') as f:
        messages = f.read().split('\n')
    print("Creating bag of words....")
    all_messages = []  # Stores all the messages along with their classification
    all_words = []  # Bag of words
    for message in messages:
        if message.split('\t')[0] == "spam":
            all_messages.append([message.split('\t')[1], "spam"])
        else:
            all_messages.append([message.split('\t')[1], "ham"])
        for s in string.punctuation:  # Remove punctuation
            if s in message:
                message = message.replace(s, " ")
        stop = stopwords.words('english')
        for word in message.split(" "):  # Remove stopwords
            if not word in stop:
                all_words.append(word.lower())
    print("Bag of words created.")
    random.shuffle(all_messages)
    random.shuffle(all_messages)
    random.shuffle(all_messages)
    all_words = nltk.FreqDist(all_words)
    word_features = list(all_words.keys())[:2000]  # Top 2000 words are our features
    print("\nCreating feature set....")
    featureset = [(find_feature(word_features, message), category) for (message, category) in all_messages]
    print("Feature set created.")
    trainingset = featureset[:int(len(featureset)*3/4)]
    testingset = featureset[int(len(featureset)*3/4):]
    print("\nLength of feature set ", len(featureset))
    print("Length of training set ", len(trainingset))
    print("Length of testing set ", len(testingset))
    return word_features, featureset, trainingset, testingset

def main():
    """This function shows how to use this program. The models can be
    pickled if wanted or needed. I have used 4 mails to check if my
    models are working correctly."""
    word_features, featureset, trainingset, testingset = create_training_testing()
    NB_classifier = create_nb_classifier(trainingset, testingset)
    MNB_classifier = create_mnb_classifier(trainingset, testingset)
    BNB_classifier = create_bnb_classifier(trainingset, testingset)
    LR_classifier = create_logistic_regression_classifier(trainingset, testingset)
    SGD_classifier = create_sgd_classifier(trainingset, testingset)
    mails = ["Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's",
             "Hello Ward, It has been almost 3 months since i have written you. Hope you are well.",
             "FREE FREE FREE Get a chance to win 10000 $ for free. Also get a chance to win a car and a house",
             "Hello my friend, How are you? It is has been 3 months since we talked. Hope you are well. Can we meet at my place?"]
    print("\n")
    print("Naive Bayes")
    print("-----------")
    for mail in mails:
        feature = find_feature(word_features, mail)
        print(NB_classifier.classify(feature))
    print("\n")
    print("Multinomial Naive Bayes")
    print("-----------")
    for mail in mails:
        feature = find_feature(word_features, mail)
        print(MNB_classifier.classify(feature))
    print("\n")
    print("Bernoulli Naive Bayes")
    print("-----------")
    for mail in mails:
        feature = find_feature(word_features, mail)
        print(BNB_classifier.classify(feature))
    print("\n")
    print("Logistic Regression")
    print("-----------")
    for mail in mails:
        feature = find_feature(word_features, mail)
        print(LR_classifier.classify(feature))
    print("\n")
    print("Stochastic Gradient Descent")
    print("-----------")
    for mail in mails:
        feature = find_feature(word_features, mail)
        print(SGD_classifier.classify(feature))

main()
Does your code look a little different for some reason? That is because I have added some extra lines to make the output clearer and to suppress the warnings from Sklearn.
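The code above relies on a find_feature helper defined earlier in the post. In case you skipped there, here is a minimal reconstruction of it (shown as an assumption, not the exact original): it maps each feature word to True or False depending on whether that word appears in the message.

```python
# Hypothetical reconstruction of the find_feature helper used above
def find_feature(word_features, message):
    # Lowercase the words of the message for a case-insensitive check
    words = set(word.lower() for word in message.split(" "))
    # One boolean per feature word: is it present in this message?
    return {w: (w in words) for w in word_features}

feature = find_feature(["free", "win", "hello"], "WIN a FREE prize")
print(feature)  # {'free': True, 'win': True, 'hello': False}
```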

That's it for this tutorial.

Source Code

Thursday 21 September 2017

Spam Detection using Machine Learning in Python Part 2 - Feature Extraction

Hope your computer is now fully set up. If not, see my previous post here. In this part we will go over the steps we will follow to create our spam detection system, what features are, and how to extract them from sentences. Along the way we might also learn something about machine learning.

Steps that will be followed

  1. Get a good dataset which contains a lot of "spam" and "ham" messages.
  2. Get each and every message and then create a bag of words
  3. Extract the features from the bag of words
  4. Fill up the feature set
  5. Train and test a model
  6. Store the model for later use (optional)
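Step 6 can be done with Python's built-in pickle module. A minimal sketch, with a toy dictionary standing in for the trained classifier (a real model object is pickled the exact same way):

```python
import pickle

# Toy stand-in for a trained classifier; any picklable object works the same way
model = {"spam_words": ["free", "win"], "threshold": 0.5}

# Store the model for later use
with open("spam_model.pickle", "wb") as f:
    pickle.dump(model, f)

# ...later, load it back without retraining
with open("spam_model.pickle", "rb") as f:
    loaded = pickle.load(f)

print(loaded == model)  # True
```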

Step 1:- Get a "spam" and "ham" dataset

Since in machine learning we need to teach our model which message is "spam" and which is "ham", we need a dataset that has exactly that. In my case I used the dataset provided here: https://archive.ics.uci.edu/ml/machine-learning-databases/00228/. Each message in it is classified as either spam or ham. Extract it to a folder and you will find a file called SMSSpamCollection. Each line of the file has this format-
<classification><tab><message>
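A quick sketch of parsing lines in that format (two made-up sample lines are inlined here instead of the real file, so the snippet runs standalone):

```python
# Two sample lines in the <classification><tab><message> format
# (toy messages, not taken from the real dataset)
sample = "ham\tHope you are well.\nspam\tFREE entry! Text WIN to 12345"

labels = []
for line in sample.split("\n"):
    label, text = line.split("\t", 1)  # split on the first tab only
    labels.append(label)
    print(label, "->", text)
```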

Step 2:- Creating a bag of words

The raw dataset cannot be fed to the algorithm which will train our model. Hence, we need to create a bag of words from which we will create our feature set. But let us first get our messages and the bag of words:-

import nltk
from nltk.corpus import stopwords
import string

with open('SMSSpamCollection') as f:
    messages = f.read().strip().split('\n')

print("Creating bag of words....")
all_messages = []  # stores all the messages along with their classification
all_words = []  # bag of words
for message in messages:
    if message.split('\t')[0] == "spam":
        all_messages.append([message.split('\t')[1], "spam"])
    else:
        all_messages.append([message.split('\t')[1], "ham"])

    for s in string.punctuation:  # Remove punctuations
        if s in message:
            message = message.replace(s, " ")

    stop = stopwords.words('english')
    for word in message.split(" "):  # Remove stopwords
        if not word in stop:
            all_words.append(word.lower())
print("Bag of words created.")

Ok. This might be a lot to take in all at once. Here's a breakdown:-
  1. Lines 1-3 are the necessary imports.
  2. Lines 5-6 read the SMSSpamCollection file and store each message in the messages list. Each message is in the format <classification><tab><message>.
  3. Lines 9-10 define two empty lists, all_messages and all_words, which will hold all the messages along with their classification and all the words except English stopwords, respectively.
  4. Lines 11-15 store each message in all_messages along with its classification, i.e. spam or ham.
  5. Lines 17-19 remove all the punctuation from each message.
  6. Lines 21-24 take each word of each message, convert it to lowercase, and append it to all_words if it is not a stopword.
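To see the punctuation and stopword removal in action on a single message, here is a standalone sketch that uses a tiny hardcoded stopword list so it runs without downloading the NLTK corpora:

```python
import string

message = "Hello, how are you? Win a FREE prize!"
stop = ["a", "are", "how", "you"]  # tiny stand-in for stopwords.words('english')

for s in string.punctuation:  # Remove punctuation
    if s in message:
        message = message.replace(s, " ")

# Keep only non-empty, non-stopword tokens, lowercased
words = [w.lower() for w in message.split(" ") if w and w not in stop]
print(words)  # ['hello', 'win', 'free', 'prize']
```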

Step 3:- Extracting features

Now that we have the bag of words, we can extract features from it. So what are these features? Features can be thought of as properties of a sentence, and here the features of a sentence are the words in it. But since every sentence contains different words, it is useless to take every word of every sentence: that would make our feature set unnecessarily complicated. Hence, the best way to choose our features is to take the most frequently used words.
Fortunately, NLTK provides exactly that, so we don't have to write our own function for finding the top words. Let's use it to find our features, which will be the top 2000 words.

all_words = nltk.FreqDist(all_words)
word_features = [word for word, count in all_words.most_common(2000)]  # top 2000 words are our features

all_words is now a frequency distribution of the bag of words, with the words ordered by descending frequency, and word_features contains the top 2000 of them.
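nltk.FreqDist behaves much like collections.Counter from the standard library; here is an NLTK-free illustration of picking the most frequent words as features:

```python
from collections import Counter

all_words = ["free", "win", "free", "hello", "free", "win"]
freq = Counter(all_words)  # word -> count
word_features = [word for word, count in freq.most_common(2)]  # top 2 words
print(word_features)  # ['free', 'win']
```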

End of Part 2....

Now that we have extracted our features successfully, we can proceed to the next part, where we fill up our feature set and train and test a few models. Goodbye for now......

Wednesday 20 September 2017

Spam Detection using Machine Learning in Python Part 1 - Setting up your computer

How Spam Detection works?

Spam, according to Google, is "irrelevant or unsolicited messages sent over the Internet, typically to a large number of users, for the purposes of advertising, phishing, spreading malware, etc.". We receive these messages in our mailboxes almost daily, and they do not stop there. They keep coming to our inbox until we either respond to them or move them to the spam box, from which Google Mail or Yahoo Mail or whatever you use learns, so that the next spam mail that reaches us goes straight to the spam box.
So how does the mail service provider know whether a mail is spam or not? The answer is machine learning. In machine learning we program a computer so that it can get into a self-learning mode: when the computer gets new data, it can learn, adapt and grow from it.

A newbie's guide to Machine Learning in Python

Though machine learning is a highly advanced topic, I will still try my level best to keep this as newbie friendly as possible. So how do you setup your computer? Follow these simple steps-
  1. Install Python in your PC. Look here for complete steps https://evilporthacker.blogspot.in/2017/09/installing-python-in-your-windows.html
  2. Install a good text editor like Sublime Text 3 https://www.sublimetext.com/3
  3. Open up command prompt.
  4. Type the following commands to install the important modules in Python-
    1. pip install nltk
    2. pip install numpy
    3. pip install scipy
    4. pip install sklearn
  5. A proper, real-life machine learning + natural language processing project requires more modules, but for this project these 4 are enough.

Brief explanation of the work of the libraries

  • NLTK (Natural Language Toolkit) is a library that has awesome tools for human language processing. It provides easy-to-use interfaces to over 50 corpora and lexical resources. In simple words, it is a Swiss army knife for Natural Language Processing.
  • Sklearn (Scikit Learn) is a library that has simple yet efficient tools for data analysis and mining. I consider it to be the greatest library for machine learning in Python.
  • Numpy provides great tools for numerical operations like matrix manipulation, Fourier transforms, linear algebra etc.
  • Scipy is a Python-based ecosystem of open-source software for mathematics, science, and engineering. (Could not find words to describe it, so I just copied that from the website 😅).

Conclusion

With your computer now fully set up, you can start this project by going to the second part, where I will show you how to get the training data for the models and everything else we will be training. I included the links to the libraries in case you want to learn more about them. Goodbye for now.......😙

Installing Python in your Windows computer

I will dive straight into the topic without writing much. Follow these steps : -

1.     Download the latest Python installer from the official site https://www.python.org/downloads/.
2.     Make sure the architecture of the installer matches that of your PC. If you are not sure about your system architecture, use the 32-bit version, which runs on both 32-bit and 64-bit Windows.
3.     After you have downloaded it double click the installer to execute it.
4.     Click on Next.
5.     Make sure the following are ticked-
    1.     Associate files with Python
    2.     Create shortcuts for installed applications
    3.     Add Python to environment variables
    4.     Precompile standard library
6.     Wait for installation to complete.

Writing your first Python program

After the installation is complete, write your first Python program to check that everything works.
Open a text file and write the following line-
print("Hello World!!")

Save the file as hello.py and run it from the command prompt using python hello.py. If Hello World!! is displayed, your Python installation is complete and working correctly.

Conclusion

Congrats!! Now you have installed Python in your PC. You are now one step closer to becoming a Python Guru.

Tuesday 19 September 2017

Introduction to Myself

Who am I?

My name is Dibakar Saha (you can call me EvilPort). You might know me from my YouTube channel "Evil Port", which was recently shut down by YouTube. "Why was it shut down?" you might ask. I will answer that later.
As of today, i.e. 20/09/2017, I am a final-year B. Tech student in Computer Science and Engineering at Bengal College of Engineering and Technology, Durgapur, India. I am not a great security expert, but I know a lot about programming and carry some knowledge of Maths. I have used a good number of programming languages and technologies: C, C++, Java, Android, VB.NET, VB6, Python, JavaScript, HTML (not a huge fan of web designing) and a little bit of PHP. Also, I have been programming for literally half of my life. 😎

What happened to my YouTube channel?

If you know me from my YouTube channel, then by now you already know that it has been shut down. My channel had 20 videos (all on hacking) and 812 subscribers. It was because my channel had so much awesome content that YouTube was unable to hold it. JK. 😅😆
Here is what happened. I had posted a video on hacking a Facebook account by reverse engineering the Facebook Android app. The video was a great hit among all my fans. There were a total of 3 videos on this topic. The first strike was imposed on the first video in July, the second strike on the second video in August, and the third strike on the third video in September (all in the same year, 2017). That's how my channel got terminated.

What can you expect from this blog?

Since I am a programmer, you can obviously expect a lot of programs in different languages and some hacking stuff. I might touch on some advanced topics like Machine Learning, AI, neural networks (still learning them now) etc. I really have little to no idea of what I will be posting in the future.
Also I might share some of the things that are going on in my life. Though I am not sure about it.

Contact me

If you want to contact me for any reason you can find me here-
Facebook -  https://facebook.com/dibakar.saha.750
Github - https://github.com/EvilPort2/ (check out my awesome projects)
Twitter - https://twitter.com/EvilPort2
Email - dibakarsaha1234@gmail.com
