Tuning the False Positive Rate / False Negative Rate with Phishing Detection Models
Sailee Dalvi1, Gilad Gressel2, Krishna Shree Achuthan3
1Sailee Dalvi, Department of Cybersecurity, Systems and Networks, Amrita Vishwa Vidyapeetham, Coimbatore (Tamil Nadu), India.
2Gilad Gressel, Georgia Institute of Technology, Atlanta, USA.
3Dr. Krishna Shree Achuthan, Department of Cybersecurity, Systems and Networks, Amrita Vishwa Vidyapeetham, Coimbatore (Tamil Nadu), India.
Manuscript received on 23 November 2019 | Revised Manuscript received on 17 December 2019 | Manuscript Published on 30 December 2019 | PP: 7-13 | Volume-9 Issue-1S5 December 2019 | Retrieval Number: A10021291S52019/19©BEIESP | DOI: 10.35940/ijeat.A1002.1291S519
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Phishing attacks have risen by 209% in the last 10 years, according to Anti-Phishing Working Group (APWG) statistics [19]. Machine learning is commonly used to detect phishing attacks. Researchers have traditionally judged phishing detection models by accuracy or F1-score; in this paper we argue that a single metric alone will never correlate with the successful deployment of a machine learning phishing detection model, because every model has an inherent trade-off between its False Positive Rate (FPR) and False Negative Rate (FNR). Tuning this trade-off is important, since a higher or lower FPR/FNR will affect the user acceptance of any deployed phishing detection model. A model with a high FPR tends to block users from accessing legitimate webpages, whereas a model with a high FNR allows users to inadvertently access phishing webpages. Either extreme may cause a user base to complain (due to blocked pages) or to fall victim to phishing attacks. Depending on the security needs of a deployment (a secure vs. a relaxed setting), phishing detection models should be tuned accordingly. In this paper, we demonstrate two effective techniques for tuning the trade-off between FPR and FNR: varying the class distribution of the training data and adjusting the probabilistic prediction threshold. We demonstrate both techniques on a data set of 50,000 phishing and 50,000 legitimate sites, using three common machine learning algorithms: Random Forest, Logistic Regression, and Neural Networks. Using these techniques, we are able to regulate a model's FPR/FNR. Among the three algorithms, Neural Networks performed best, achieving an F1-score of 0.98 with corresponding FPR and FNR values of 0.0003 and 0.0198, respectively.
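The second technique named in the abstract, adjusting the probabilistic prediction threshold, can be sketched as follows. This is an illustrative example only, not the authors' code: it uses a synthetic dataset and a scikit-learn Logistic Regression stand-in, and shows how raising the threshold for flagging a page as phishing lowers the FPR while raising the FNR, and vice versa.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for phishing (label 1) vs. legitimate (label 0) features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]  # estimated P(phishing) per page

def fpr_fnr(threshold):
    """FPR/FNR when pages with P(phishing) >= threshold are blocked."""
    pred = (proba >= threshold).astype(int)
    fp = np.sum((pred == 1) & (y_te == 0))  # legitimate pages blocked
    fn = np.sum((pred == 0) & (y_te == 1))  # phishing pages let through
    return fp / np.sum(y_te == 0), fn / np.sum(y_te == 1)

# A low threshold suits a secure setting (low FNR, more blocked pages);
# a high threshold suits a relaxed setting (low FPR, more missed phish).
for t in (0.2, 0.5, 0.8):
    fpr, fnr = fpr_fnr(t)
    print(f"threshold={t:.1f}  FPR={fpr:.3f}  FNR={fnr:.3f}")
```

The other technique, varying the class distribution of the training data, shifts the same trade-off at training time instead: over-representing phishing examples pushes the learned decision boundary toward fewer false negatives, at the cost of more false positives.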
Keywords: Machine Learning, Phishing Detection, Model Tuning, Cyber-security.
Scope of the Article: Probabilistic Models and Methods