Confusion Matrix & Cyber Crime

  • For a classifier with 2 prediction classes, the matrix is a 2×2 table; for 3 classes, it is a 3×3 table, and so on.
  • The matrix has two dimensions, predicted values and actual values, along with the total number of predictions.
  • Predicted values are the values output by the model, and actual values are the true values of the given observations.
  • These four outcomes are summarized in the table below:

                        Actual: No               Actual: Yes
      Predicted: No     True Negative (TN)       False Negative (FN)
      Predicted: Yes    False Positive (FP)      True Positive (TP)
  • True Negative (TN): The model predicted No, and the actual value was also No.
  • True Positive (TP): The model predicted Yes, and the actual value was also Yes.
  • False Negative (FN): The model predicted No, but the actual value was Yes. This is also called a Type-II error.
  • False Positive (FP): The model predicted Yes, but the actual value was No. This is also called a Type-I error. These four counts are computed in the short example after this list.
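
As a quick illustration, the snippet below builds a 2×2 confusion matrix with scikit-learn's confusion_matrix; the y_actual and y_predicted labels are made-up values used only for demonstration.

```python
# A minimal sketch: building a 2x2 confusion matrix with scikit-learn.
# The labels below are illustrative only.
from sklearn.metrics import confusion_matrix

y_actual    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # actual (true) values: 1 = Yes, 0 = No
y_predicted = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]  # values predicted by the model

# With labels=[0, 1] the matrix is laid out as:
# [[TN, FP],
#  [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_actual, y_predicted, labels=[0, 1]).ravel()
print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")
```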

Calculations using Confusion Matrix:

  • Misclassification rate: Also termed the error rate, it defines how often the model gives wrong predictions. It is calculated as the number of incorrect predictions divided by the total number of predictions made by the classifier: Error Rate = (FP + FN) / (TP + TN + FP + FN)
  • Precision: Out of all the observations that the model predicted as positive, how many were actually positive. It is calculated as: Precision = TP / (TP + FP)
  • Recall: Out of all the actually positive observations, how many the model predicted correctly. The recall should be as high as possible. It is calculated as: Recall = TP / (TP + FN)
  • F-measure: If one model has low precision and high recall, or vice versa, it is difficult to compare it with another. For this purpose we can use the F-score, which evaluates recall and precision at the same time. The F-score is highest when recall equals precision. It is calculated as: F-measure = (2 × Precision × Recall) / (Precision + Recall). All four metrics are computed in the short example after this list.
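
The sketch below computes these four metrics directly from illustrative TP, TN, FP, and FN counts; with real data the counts would come from the confusion matrix built above.

```python
# A minimal sketch of the metrics above, computed from TP, TN, FP, FN.
# The counts are illustrative; in practice they come from the confusion matrix.
tp, tn, fp, fn = 40, 45, 5, 10

error_rate = (fp + fn) / (tp + tn + fp + fn)                # misclassification rate
precision  = tp / (tp + fp)                                 # correct positives among predicted positives
recall     = tp / (tp + fn)                                 # correct positives among actual positives
f_measure  = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall

print(f"Error rate: {error_rate:.2f}")  # 0.15
print(f"Precision:  {precision:.2f}")   # 0.89
print(f"Recall:     {recall:.2f}")      # 0.80
print(f"F-measure:  {f_measure:.2f}")   # 0.84
```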

Other important terms used in Confusion Matrix:

  • Null error rate: It defines how often our model would be wrong if it always predicted the majority class. As per the accuracy paradox, the best classifier for a given problem can sometimes have a higher error rate than the null error rate.
  • ROC curve: The ROC curve is a graph displaying a classifier’s performance across all possible classification thresholds. It plots the true positive rate (on the y-axis) against the false positive rate (on the x-axis); see the sketch after this list.
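
A minimal sketch of both ideas, assuming scikit-learn and matplotlib are available; the synthetic dataset and logistic-regression model are placeholders used only to produce prediction scores for the curve.

```python
# A minimal sketch: null error rate and ROC curve on an illustrative dataset.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

X, y = make_classification(n_samples=1000, random_state=0)
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]  # predicted probability of the positive class

# Null error rate: the error made by always predicting the majority class.
null_error_rate = 1 - max(np.bincount(y)) / len(y)
print(f"Null error rate: {null_error_rate:.2f}")

# ROC curve: true positive rate vs. false positive rate over all thresholds.
fpr, tpr, thresholds = roc_curve(y, scores)
plt.plot(fpr, tpr, label=f"AUC = {roc_auc_score(y, scores):.2f}")
plt.plot([0, 1], [0, 1], linestyle="--", label="random classifier")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```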
