Create spam classification tutorial #112

Merged · 48 commits · Jan 5, 2022

Commits
3bf2dd2  initial creation of spam tutorial and update of data download script (bkmgit, Aug 24, 2020)
aa5cec5  Update spam/tutorial.md (bkmgit, Aug 28, 2020)
0236200  Update spam/tutorial.md (bkmgit, Aug 28, 2020)
65d1df2  Update spam/tutorial.md (bkmgit, Aug 28, 2020)
9b04377  Update spam/tutorial.md (bkmgit, Aug 28, 2020)
943266c  Update spam/tutorial.md (bkmgit, Aug 28, 2020)
ea562e7  Update spam/tutorial.md (bkmgit, Aug 28, 2020)
1586e9b  Update spam/tutorial.md (bkmgit, Aug 28, 2020)
0c6ba9c  Update spam/tutorial.md (bkmgit, Sep 17, 2020)
11a07b7  Update spam/tutorial.md (bkmgit, Sep 17, 2020)
d7c4891  Update spam/tutorial.md (bkmgit, Sep 17, 2020)
1ed293c  Update spam/tutorial.md (bkmgit, Sep 17, 2020)
b4df918  Update spam/tutorial.md (bkmgit, Sep 17, 2020)
1e57f69  Update spam/tutorial.md (bkmgit, Sep 17, 2020)
da82a46  Update spam/tutorial.md (bkmgit, Sep 17, 2020)
7e5de02  Update tutorial.md (bkmgit, Sep 17, 2020)
3884cfc  add tutorial as a bash script (bkmgit, Sep 17, 2020)
a2ce18a  Merge branch 'mlpack:master' into master (bkmgit, May 23, 2021)
5c0ff28  update example lists in README (bkmgit, May 23, 2021)
4cd27f5  Merge branch 'master' of https://github.com/bkmgit/examples-1 (bkmgit, May 23, 2021)
486eab5  minor update of spam classification tutorial (bkmgit, May 23, 2021)
3dba2c7  update to download spam dataset (bkmgit, May 23, 2021)
9013136  Update spam/spam_classification.sh (bkmgit, Jul 16, 2021)
e7b27b2  Update spam/spam_classification.sh (bkmgit, Jul 16, 2021)
6f4e7f3  Update spam/spam_classification.sh (bkmgit, Jul 16, 2021)
5b1205d  Update spam/spam_classification.sh (bkmgit, Jul 16, 2021)
62e1044  Update spam/tutorial.md (bkmgit, Jul 16, 2021)
710c017  Update spam/spam_classification.sh (bkmgit, Jul 16, 2021)
19f3803  fix conflict (bkmgit, Jul 16, 2021)
6cc39bd  Merge branch 'mlpack-master' (bkmgit, Jul 16, 2021)
c495d4b  remove tutorial.sh (bkmgit, Jul 16, 2021)
fd0d1e4  Merge branch 'mlpack:master' into master (bkmgit, Jul 30, 2021)
a1219a1  Test whether SPAM example runs (bkmgit, Jul 30, 2021)
a38bce1  implement @zoqs' suggestion (bkmgit, Jul 30, 2021)
8132776  Improve comment formatting (bkmgit, Jul 30, 2021)
0677b0a  Update version of Ubuntu (bkmgit, Jul 30, 2021)
a58a6f4  check if build will work without build script (bkmgit, Jul 30, 2021)
9d9c401  update script permissions (bkmgit, Jul 30, 2021)
72460e6  remove spam pre-processing in CI (bkmgit, Aug 1, 2021)
745e953  fix error in ordering of commands (bkmgit, Aug 1, 2021)
fef1858  enable building of command line executables (bkmgit, Aug 1, 2021)
4285f0e  remove example builds due to time constraint (bkmgit, Aug 1, 2021)
a518cb2  temporarily disable dataset download, travis (bkmgit, Aug 1, 2021)
2db8885  update data files (bkmgit, Aug 1, 2021)
1ced319  Merge branch 'master' into master (bkmgit, Nov 17, 2021)
32fec35  remove file as it can be pre-processed (bkmgit, Nov 17, 2021)
cbd31c3  remove file as it can be pre-processed (bkmgit, Nov 17, 2021)
f483f4e  skip processing of spam (bkmgit, Nov 17, 2021)
271 changes: 271 additions & 0 deletions spam/tutorial.md
# Spam Classification with ML-Pack on the command line

## Introduction

In this tutorial, the ML-Pack command line interface will
be used to train a machine learning model to classify
SMS spam. It is assumed that ML-Pack has been
successfully installed on your machine. The tutorial has
been tested in a Linux environment.


## Example

As an example, we will train some machine learning models to classify spam SMS messages. We will use an example spam dataset in Indonesian provided by Yudi Wibisono.
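If you have not already downloaded it, the dataset can be fetched and unpacked as follows. This is a sketch: the URL is the one used by the tools/download_data_set.py helper in this pull request, and the archive is assumed to unpack to the dataset_sms_spam_v1.csv file used below.

```
# Download and unpack the Indonesian SMS spam dataset.
wget https://www.mlpack.org/datasets/dataset_sms_spam_bhs_indonesia_v1.tar.gz
tar -xzf dataset_sms_spam_bhs_indonesia_v1.tar.gz
```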
We will try to classify a message as spam or ham by the number of occurrences of each word in a message. We first change the file line endings, remove line 243, which is missing a label, and then remove the header from the dataset. Then, we split our data into two files, labels and messages. Since the labels are at the end of each message, the message is reversed, the label is removed and placed in one file, and the remaining message text is placed in another file.

```
# Normalise line endings, drop the unlabelled line 243 and the header row.
tr '\r' '\n' < dataset_sms_spam_v1.csv > dataset.txt
sed '243d' dataset.txt > dataset1.csv
sed '1d' dataset1.csv > dataset.csv
# The label is the last character of each line: split it into labels.txt
# and keep the message text in messages.txt.
rev dataset.csv | cut -c1 | rev > labels.txt
rev dataset.csv | cut -c2- | rev > messages.txt
rm dataset.csv
rm dataset1.csv
rm dataset.txt
```
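As a quick sanity check, the two files should have the same number of lines, one per message:

```
# Both line counts should match.
wc -l labels.txt messages.txt
```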
Member: It's nice to see other people who do data science with sed, awk, rev, tr, and grep too! 😄


Machine learning works on numeric data, so we will set the labels to 0 for ham and 1 for spam. The dataset contains three labels: 0, normal SMS (ham); 1, fraud (spam); and 2, promotion (spam). We will label all spam as 1, so both promotions
and fraud will be labelled as 1.

Member: Remove the extra line here.


```
# Map label 2 (promotion) to 1 so that all spam is labelled 1.
tr '2' '1' < labels.txt > labels.csv
rm labels.txt
```
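You can check the resulting label distribution, which should now contain only 0s and 1s:

```
# Count how many messages carry each label.
sort labels.csv | uniq -c
```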

The next step is to convert all text in the messages to lower case and, for simplicity, remove punctuation and any symbols that are not spaces, line endings or in the range a-z (one would need to expand this range of symbols for production use).
Member: Not sure I get the last sentence.

Contributor Author: Perhaps it can be reworded as: to enable easy comparison of the words which will be used as the features, only the letters a-z, line endings \n and spaces are kept. A larger feature set can be helpful, but for small datasets the occurrences of other symbols are not frequent enough to help in classification.


```
# Convert to lower case, then delete everything except a-z, spaces and newlines.
tr '[:upper:]' '[:lower:]' < messages.txt > messagesLower.txt
tr -Cd 'abcdefghijklmnopqrstuvwxyz \n' < messagesLower.txt > messagesLetters.txt
rm messagesLower.txt
```

We now obtain a sorted list of unique words used (this step may take a few minutes, so use nice to give it a low priority while you continue with other tasks on your computer).
Member: Hm, I would remove nice as the default behaviour; we could mention it on the side.

Contributor Author: On a low-end laptop, nice is quite useful to enable other work. On a more powerful machine the effect will not be too drastic, so in both cases the code works.


```
# Split the messages into one word per line, sort, and keep the unique words,
# running at low priority.
nice -20 xargs -n1 < messagesLetters.txt > temp.txt
sort temp.txt > temp2.txt
uniq temp2.txt > words.txt
rm temp.txt
rm temp2.txt
```

We then create a matrix in which, for each message, the frequency of each word's occurrences is counted (more on this on Wikipedia, [here](https://en.wikipedia.org/wiki/Tf–idf) and [here](https://en.wikipedia.org/wiki/Document-term_matrix)). This requires a few lines of code, so the full script, which should be saved as 'makematrix.sh', is below.

```
#!/bin/bash
# Build a word-frequency matrix: one row per message, with entries giving
# word frequencies and a final entry giving the message length.
declare -a words=()
declare -a letterstartind=()
declare -a letterstart=()
letter=" "
i=0
lettercount=0
# Read the labels (kept for reference; not used below).
while IFS= read -r line; do
  labels[$((i))]=$line
  let "i++"
done < labels.csv
i=0
# Read the sorted word list, recording where each initial letter starts so
# that lookups only need to scan words sharing the same first letter.
while IFS= read -r line; do
  words[$((i))]=$line
  firstletter="$( echo $line | head -c 1 )"
  if [ "$firstletter" != "$letter" ]
  then
    letterstartind[$((lettercount))]=$((i))
    letterstart[$((lettercount))]=$firstletter
    letter=$firstletter
    let "lettercount++"
  fi
  let "i++"
done < words.txt
letterstartind[$((lettercount))]=$((i))
echo "Created list of letters"

touch wordfrequency.txt
rm wordfrequency.txt
touch wordfrequency.txt
messagecount=0
messagenum=0
i=0
while IFS= read -r line; do
  let "messagenum++"
  declare -a wordcount=()
  declare -a wordarray=()
  read -r -a wordarray <<< "$line"
  let "messagecount++"
  # Use a separate name for the word count; assigning to 'words' here
  # would clobber the first entry of the word-list array above.
  numwords=${#wordarray[@]}
  for word in "${wordarray[@]}"; do
    startletter="$( echo $word | head -c 1 )"
    j=-1
    while [ $((j)) -lt $((lettercount)) ]; do
      let "j++"
      if [ "$startletter" == "${letterstart[$((j))]}" ]
      then
        mystart=$((j))
      fi
    done
    myend=$((mystart+1))
    j=${letterstartind[$((mystart))]}
    jend=${letterstartind[$((myend))]}
    while [ $((j)) -le $((jend)) ]; do
      wordcount[$((j))]=0
      if [ "$word" == "${words[$((j))]}" ]
      then
        # grep -o counts substring occurrences of the word in the message.
        wordcount[$((j))]="$( echo $line | grep -o $word | wc -l )"
      fi
      let "j++"
    done
  done
  # Normalise the counts by the number of words in the message.
  for j in "${!wordcount[@]}"; do
    wordcount[$((j))]=$(echo " scale=4; $((${wordcount[$((j))]})) / $((numwords))" | bc)
  done
  wordcount[$((numwords+1))]=$((numwords))
  echo "${wordcount[*]}" >> wordfrequency.txt
  echo "Processed message ""$messagenum"
  let "i++"
done < messagesLetters.txt
# Create csv file
tr ' ' ',' < wordfrequency.txt > data.csv
```

Since [Bash](https://www.gnu.org/software/bash/) is an interpreted language, this simple implementation can take up to 30 minutes to complete. If using the above Bash script on your primary workstation, run it as a task with low priority so that you can continue with other work while you wait:

```
nice -20 bash makematrix.sh
```
Member: Another option, if you like, would be to write a utility C++ program, and then have the users in this tutorial compile it. However, I suppose that we are not guaranteed that the user has a compiler available, since they are just using the command-line bindings. Let me know what you think.

(Also, we have some TF-IDF support coming into mlpack, so maybe the bash script above could be replaced in the future with that! It will be a lot faster too. 👍)
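In the meantime, a much faster alternative to makematrix.sh can be sketched in awk. This is only an illustration of the reviewer's point: it assumes GNU awk (for `delete cnt`) and counts whole-word rather than substring occurrences, so its output will differ slightly from the Bash script above.

```
# Sketch: build the normalised word-frequency matrix with awk.
# The first pass reads words.txt and assigns each word a column index;
# the second pass emits one comma-separated row per message, ending
# with the word count, as makematrix.sh does.
awk '
NR == FNR { col[$1] = ++n; next }
{
  delete cnt
  for (i = 1; i <= NF; i++) cnt[col[$i]]++
  row = ""
  for (j = 1; j <= n; j++)
    row = row (j > 1 ? "," : "") (cnt[j] ? cnt[j] / NF : 0)
  print row "," NF
}' words.txt messagesLetters.txt > data.csv
```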


Once the script has finished running, split the data into testing (30%) and training (70%) sets:

```
mlpack_preprocess_split \
--input_file data.csv \
--input_labels_file labels.csv \
--training_file train.data.csv \
--training_labels_file train.labels.csv \
--test_file test.data.csv \
--test_labels_file test.labels.csv \
--test_ratio 0.3 \
--verbose
```

Now train a [Logistic regression model](https://mlpack.org/doc/mlpack-3.3.1/cli_documentation.html#logistic_regression):

```
mlpack_logistic_regression \
  --training_file train.data.csv \
  --labels_file train.labels.csv \
  --lambda 0.1 \
  --output_model_file lr_model.bin
```

Finally, we test our model by producing predictions,

```
mlpack_logistic_regression \
  --input_model_file lr_model.bin \
  --test_file test.data.csv \
  --output_file lr_predictions.csv
```

and comparing the predictions with the true labels,

```
# Count the diff hunks (runs of mismatching lines) and print the
# percentage of matching predictions.
export incorrect=$(diff -U 0 lr_predictions.csv test.labels.csv | grep '^@@' | wc -l)
export tests=$(wc -l < lr_predictions.csv)
echo "scale=2; 100 * ( 1 - $((incorrect)) / $((tests)))" | bc
```
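Note that diff merges adjacent mismatches into a single hunk, so the count above can slightly underestimate the number of errors. A more direct check, sketched here with standard tools, is to paste the two files side by side and compare row by row:

```
# Compare predictions and labels line by line and print the percent correct.
paste -d, lr_predictions.csv test.labels.csv \
  | awk -F, '{ total++; if ($1 == $2) correct++ }
             END { printf "%.2f%%\n", 100 * correct / total }'
```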

This gives a validation accuracy of approximately 90%, similar to that obtained [here](https://towardsdatascience.com/spam-detection-with-logistic-regression-23e3709e522).

The dataset is composed of approximately 50% spam messages, so these validation rates are quite good without much parameter tuning.
In typical cases, datasets are unbalanced, with many more entries in some categories than in others; a good validation
rate can then be obtained simply by mispredicting the class with few entries.
To better evaluate such models, one can compare the number of misclassifications of spam with the number of misclassifications of ham.
Of particular importance in applications is the number of false positive spam results, as these messages are typically not transmitted. The script below produces a confusion matrix, which gives a better indication of misclassification.
Save it as 'confusion.sh'.

```
#!/bin/bash
# Build a confusion matrix from the true labels and the predicted labels.
declare -a labels
declare -a lr
i=0
while IFS= read -r line; do
  labels[i]=$line
  let "i++"
done < test.labels.csv
i=0
while IFS= read -r line; do
  lr[i]=$line
  let "i++"
done < lr_predictions.csv
TruePositiveLR=0
FalsePositiveLR=0
TrueZeroLR=0
FalseZeroLR=0
Positive=0
Zero=0
for i in "${!labels[@]}"; do
  if [ "${labels[$i]}" == "1" ]
  then
    let "Positive++"
    if [ "${lr[$i]}" == "1" ]
    then
      let "TruePositiveLR++"
    else
      let "FalseZeroLR++"
    fi
  fi
  if [ "${labels[$i]}" == "0" ]
  then
    let "Zero++"
    if [ "${lr[$i]}" == "0" ]
    then
      let "TrueZeroLR++"
    else
      let "FalsePositiveLR++"
    fi
  fi
done
echo "Logistic Regression"
echo "Total spam" $Positive
echo "Total ham" $Zero
echo "Confusion matrix"
echo "                 Predicted class"
echo "                  Ham | Spam "
echo "                ---------------"
echo " Actual| Ham  | " $TrueZeroLR "|" $FalsePositiveLR
echo " class | Spam | " $FalseZeroLR "|" $TruePositiveLR
echo ""
```

then run the script

```
bash confusion.sh
```

You should get output similar to:

```
Logistic Regression
Total spam 183
Total ham 159
Confusion matrix
                 Predicted class
                  Ham | Spam
                ---------------
 Actual| Ham  |  128 | 31
 class | Spam |  26 | 157
```

which indicates a reasonable level of classification.
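From the confusion matrix one can also compute precision and recall for the spam class; a quick sketch with bc, using the sample numbers above:

```
# precision = TP / (TP + FP); recall = TP / (TP + FN)
echo "scale=4; 157 / (157 + 31)" | bc   # precision, approximately 0.84
echo "scale=4; 157 / (157 + 26)" | bc   # recall, approximately 0.86
```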
Other methods you can try in ML-Pack for this problem include:
* [Naive Bayes](https://mlpack.org/doc/mlpack-3.3.1/cli_documentation.html#nbc)
* [Random forest](https://mlpack.org/doc/mlpack-3.3.1/cli_documentation.html#random_forest)
* [Decision tree](https://mlpack.org/doc/mlpack-3.3.1/cli_documentation.html#decision_tree)
* [AdaBoost](https://mlpack.org/doc/mlpack-3.3.1/cli_documentation.html#adaboost)
* [Perceptron](https://mlpack.org/doc/mlpack-3.3.1/cli_documentation.html#perceptron)
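For example, a random forest can be trained and tested on the same split in much the same way. The sketch below assumes the flag names of the mlpack 3.x command-line bindings; check `mlpack_random_forest --help` for your version:

```
# Train a random forest on the training split, then predict on the test set.
mlpack_random_forest --training_file train.data.csv \
                     --labels_file train.labels.csv \
                     --num_trees 100 \
                     --output_model_file rf_model.bin
mlpack_random_forest --input_model_file rf_model.bin \
                     --test_file test.data.csv \
                     --predictions_file rf_predictions.csv
```

The confusion-matrix script above can then be reused by pointing it at rf_predictions.csv.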

To improve the error rate, you can try other pre-processing methods on the initial dataset.
Neural networks can give up to 99.95% validation rates; see for example [here](https://thesai.org/Downloads/Volume11No1/Paper_67-The_Impact_of_Deep_Learning_Techniques.pdf), [here](https://www.kaggle.com/kredy10/simple-lstm-for-text-classification) and [here](https://www.kaggle.com/xiu0714/sms-spam-detection-bert-acc-0-993). However, using these techniques with ML-Pack is best covered in another tutorial.

This tutorial is an adaptation of one that first appeared in the [Fedora Magazine](https://fedoramagazine.org/spam-classification-with-ml-pack/).
11 changes: 10 additions & 1 deletion tools/download_data_set.py
A spam_dataset() download helper is added after iris_dataset():

```
    tar.extractall()
    tar.close()
    clean()


def spam_dataset():
    print("Downloading spam dataset...")
    spam = requests.get("https://www.mlpack.org/datasets/dataset_sms_spam_bhs_indonesia_v1.tar.gz")
    progress_bar("dataset_sms_spam_bhs_indonesia_v1.tar.gz", spam)
    tar = tarfile.open("dataset_sms_spam_bhs_indonesia_v1.tar.gz", "r:gz")
    tar.extractall()
    tar.close()
    clean()


def all_datasets():
    mnist_dataset()
    electricity_consumption_dataset()
```