Illicit Darkweb Classification via Natural-language Processing: Classifying Illicit Content of Webpages based on Textual Information

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-reviewed

1 Citation (Scopus)

Abstract

This work expands previous research on the classification of illegal activities through three steps. First, we created a heterogeneous dataset of 113,995 onion sites and dark marketplaces. Then, we compared pre-trained transferable models, i.e., ULMFiT (Universal Language Model Fine-tuning), BERT (Bidirectional Encoder Representations from Transformers), and RoBERTa (Robustly optimized BERT approach), with a traditional text classification approach based on LSTM (Long Short-Term Memory) neural networks. Finally, we developed two illegal-activity classification approaches: one for illicit content on the Dark Web in general and one for identifying specific types of drugs. Results show that BERT performed best, classifying the Dark Web's general content with 96.08% accuracy and the types of drugs with 91.98% accuracy.
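The LSTM baseline the paper compares against can be sketched as a standard embedding-plus-recurrent-layer classifier. The sketch below is a minimal, hedged illustration in PyTorch; the vocabulary size, embedding/hidden dimensions, and class count are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Minimal LSTM text classifier of the kind used as a traditional
    baseline; all dimensions here are illustrative assumptions."""

    def __init__(self, vocab_size=10000, embed_dim=128,
                 hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (1, batch, hidden_dim)
        return self.fc(hidden[-1])                # logits: (batch, num_classes)

model = LSTMClassifier()
# Four dummy "webpages" of 32 token ids each (random, for shape-checking only)
batch = torch.randint(0, 10000, (4, 32))
logits = model(batch)
print(tuple(logits.shape))  # (4, 2)
```

The transformer approaches (ULMFiT, BERT, RoBERTa) replace this randomly initialized encoder with a pre-trained language model that is fine-tuned on the labeled dataset, which is what the reported accuracy gap reflects.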

Original language: English
Title of host publication: SECRYPT 2022 - Proceedings of the 19th International Conference on Security and Cryptography
Editors: Sabrina De Capitani di Vimercati, Pierangela Samarati
Publisher: Science and Technology Publications, Lda
Pages: 620-626
Number of pages: 7
ISBN (Print): 9789897585906
DOIs
Publication status: Published - 2022
Event: 19th International Conference on Security and Cryptography, SECRYPT 2022 - Lisbon, Portugal
Duration: 11 Jul 2022 - 13 Jul 2022

Publication series

Name: Proceedings of the International Conference on Security and Cryptography
Volume: 1
ISSN (Print): 2184-7711

Conference

Conference: 19th International Conference on Security and Cryptography, SECRYPT 2022
Country/Territory: Portugal
City: Lisbon
Period: 11/07/22 - 13/07/22

Keywords

  • AI
  • BERT
  • DarkWeb
  • LSTM
  • Machine Learning
  • Natural-language Processing
  • RoBERTa
  • ULMFit

