
Introduction
Artificial intelligence has long been the domain of skilled programmers and language enthusiasts who love to write and experiment with code. With the advent of visual drag-and-drop IDEs, newcomers no longer need to write large amounts of code. Instead, these analytics platforms let a broad audience step into the world of machine learning without learning to program or wrestling with tedious syntax.
Widget Environment
Orange, available through Anaconda, is an open-source visual analytics platform where users can do machine learning with minimal coding experience. In this article, I walk through a spam email classifier built in Orange. The following figure shows the complete workflow for this classification problem. I have chosen the Naive Bayes, Random Forest, and SVM algorithms and evaluated their performance on this dataset.
Orange provides a drag-and-drop widget environment that requires no coding, as shown below.
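For readers who prefer scripting, the same three-classifier comparison can be approximated outside Orange. The sketch below uses scikit-learn (my choice, not part of the article's workflow) on a handful of made-up toy messages, purely to illustrate the shape of the pipeline:

```python
# Sketch: the Orange workflow approximated with scikit-learn.
# The toy messages and the 80/20 split mirror the article's setup;
# all data below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

texts = ["win a free prize now", "meeting at noon tomorrow",
         "claim your free reward", "lunch plans this week",
         "free cash offer inside", "project update attached"]
labels = ["spam", "ham", "spam", "ham", "spam", "ham"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0)

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("Random Forest", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC())]:
    # Bag-of-words features feeding each classifier, as the
    # Orange widgets do internally.
    model = make_pipeline(CountVectorizer(), clf)
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))
```

With such a tiny toy dataset the scores are meaningless; the point is only the structure: vectorize, fit, score, once per classifier.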

First, we load the CSV file with the File widget, ignoring any unwanted columns. For the text classifier in this article, we only need the text and the label for that text. The label column should be assigned the role 'target'.
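In plain Python, the same loading step amounts to reading the CSV and keeping two columns. A minimal standard-library sketch, where the column names "text" and "label" and the sample rows are assumptions for illustration:

```python
# Sketch: loading a labeled-text CSV and keeping only the two
# columns the workflow needs, dropping everything else.
import csv
import io

# Stand-in for the real CSV file; note the extra column we ignore.
raw = io.StringIO(
    "text,label,unused_id\n"
    "win a free prize now,spam,1\n"
    "meeting at noon tomorrow,ham,2\n"
)

rows = list(csv.DictReader(raw))
texts = [r["text"] for r in rows]    # the single feature column
labels = [r["label"] for r in rows]  # the target column
print(texts, labels)
```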

Feature and Target
Features are the base data on which the model is trained to correctly predict the labels in the target. In the Select Columns widget, I have distinguished the target column from the feature column. Since all the text sits in a single column, we have only one feature here.

The selected data can be inspected in the Data Table widget to make sure that no unwanted feature remains.

Next, all three classifier widgets are placed in the workspace and connected to the Data Table. I have kept the default settings in every classifier widget and adjusted the training and testing procedure in the Test and Score widget.
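The "random sampling, 80% training" setting used in Test and Score below is just a shuffled split. A standard-library sketch of that step, with invented placeholder data:

```python
# Sketch: an 80/20 random train/test split, as configured in
# the Test and Score widget. Data is a made-up placeholder.
import random

data = [("message %d" % i, "spam" if i % 2 else "ham") for i in range(10)]

rng = random.Random(42)           # fixed seed so the split is repeatable
shuffled = data[:]
rng.shuffle(shuffled)

split = int(0.8 * len(shuffled))  # 80% of rows go to training
train, test = shuffled[:split], shuffled[split:]
print(len(train), len(test))
```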
Test and Score

On the left, it is evident that random sampling was used, with 80% of the data for training. Interpreting the right panel is the important part: the table compares the performance of the three chosen algorithms. AUC stands for "Area Under the Curve" and measures a model's ability to differentiate between the classes. A higher AUC generally indicates better model performance, but other performance metrics should also be taken into consideration. CA is "Classification Accuracy". A refresher on precision and recall was given in the Part-1 article. The F1 score is another performance metric and takes both precision and recall into the calculation. Some models may yield high precision but low recall, and vice versa; therefore, some analysts are more interested in the F1 score, which is defined as

F1 = 2 × (Precision × Recall) / (Precision + Recall)
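These metrics can be reproduced by hand. A short pure-Python check on made-up predictions, treating "spam" as the positive class:

```python
# Precision, recall, and F1 computed from scratch on toy
# predictions (invented for illustration).
actual    = ["spam", "spam", "ham", "ham", "spam", "ham"]
predicted = ["spam", "ham",  "ham", "spam", "spam", "ham"]

tp = sum(a == "spam" and p == "spam" for a, p in zip(actual, predicted))
fp = sum(a == "ham" and p == "spam" for a, p in zip(actual, predicted))
fn = sum(a == "spam" and p == "ham" for a, p in zip(actual, predicted))

precision = tp / (tp + fp)              # of predicted spam, how much was spam
recall = tp / (tp + fn)                 # of actual spam, how much was caught
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```

Here two of three spam messages are caught and one ham message is flagged, so precision, recall, and F1 all come out to 2/3.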

Our table shows that SVM and Random Forest both achieve higher accuracy than Naive Bayes. All the scores here are averaged over the classes.
Finally, we check the confusion matrix for each classifier. The Confusion Matrix widget provides these matrices.
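A confusion matrix is simply a tally of (actual, predicted) pairs. A standard-library sketch, using the same toy predictions as the metric example:

```python
# Sketch: building the 2x2 table that the Confusion Matrix
# widget displays, from invented toy predictions.
from collections import Counter

actual    = ["spam", "spam", "ham", "ham", "spam", "ham"]
predicted = ["spam", "ham",  "ham", "spam", "spam", "ham"]

counts = Counter(zip(actual, predicted))
for a in ("spam", "ham"):
    # One row per actual class, one column per predicted class.
    row = [counts[(a, p)] for p in ("spam", "ham")]
    print(a, row)
```

The diagonal entries are the correct predictions; the off-diagonal entries are the misclassifications that precision and recall summarize.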

Conclusion
In this article, I have demonstrated a straightforward way to build simple text classifiers in Orange. This open-source analytics platform offers far more than what is discussed here, and its widget environment makes it easy to run ML classifiers efficiently without writing any code.