Hackers and Cybercriminals Use Dark Web Data to Train DarkBert AI

A cybersecurity-focused language model called DarkBERT was trained exclusively on Dark Web data.

 


A team of South Korean researchers has released a paper describing how they developed a machine-learning model from a large dark web corpus collected by crawling the Tor network. Unsurprisingly, the data included many shady sites covering cryptocurrency, pornography, hacking, weapons, and other categories. Because of ethical concerns, the team decided not to use the data in its raw form.

DarkBERT was trained on a pre-training corpus that was polished through filtering before being fed to the model, so that sensitive data was excluded from training; otherwise, bad actors might be able to extract that data from the trained model.
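The paper describes removing sensitive identifiers from the corpus before training; the exact rules are the authors', so the patterns below are illustrative assumptions only. A minimal sketch of that kind of filtering step:

```python
import re

# Hypothetical pre-training filter: replace sensitive identifiers with
# placeholder tokens so they never enter the training corpus. These
# regexes are simplified assumptions, not the paper's actual rules.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "ONION_URL": re.compile(r"\b[\w-]+\.onion\b"),
}

def mask_sensitive(text: str) -> str:
    """Replace sensitive identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(mask_sensitive("Leaked creds for admin@example.com at 10.0.0.1 via abc123.onion"))
# → Leaked creds for <EMAIL> at <IP> via <ONION_URL>
```

Masking rather than deleting keeps the surrounding sentence intact, so the model still learns the linguistic context without memorizing the identifier itself.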

Some think that DarkBERT would sound like a nightmare, but the researchers say that it is a promising project that will do more than help combat cybercrime; it will also contribute to the advancement of technology in the field, which has grown a lot through natural language processing.

To gather data, the team crawled the dark web over the Tor network, which provides access to .onion sites without exposing the visitor's identity. In the process, they built a raw database of the pages they found.
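Crawling over Tor typically means routing HTTP requests through a local Tor SOCKS proxy (by default on port 9050). The paper does not publish the crawler, so this is only a minimal sketch of the general technique, and the .onion address shown is a placeholder:

```python
# Minimal sketch of fetching a page through a local Tor SOCKS proxy.
# Assumes a Tor daemon is listening on 127.0.0.1:9050 (the default);
# the "socks5h" scheme makes the proxy resolve .onion hostnames.

def tor_proxies(host: str = "127.0.0.1", port: int = 9050) -> dict:
    """Build a requests-style proxy mapping that routes traffic via Tor."""
    proxy = f"socks5h://{host}:{port}"
    return {"http": proxy, "https": proxy}

def fetch_onion(url: str, timeout: float = 60.0) -> str:
    """Fetch a page over Tor. Requires `requests` with SOCKS support
    (pip install "requests[socks]") and a running Tor daemon."""
    import requests  # imported lazily so the proxy helper has no dependency
    resp = requests.get(url, proxies=tor_proxies(), timeout=timeout)
    resp.raise_for_status()
    return resp.text

# Example (only works with Tor running and a real onion address):
# html = fetch_onion("http://exampleonionaddress.onion/")
```

Using `socks5h` (rather than `socks5`) matters here: it delegates DNS resolution to the proxy, which is the only way .onion hostnames can resolve at all.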

There has been a recent explosion of large language models in the marketplace, and more are appearing with each passing day. It is well known that most of these linguistic giants, such as OpenAI's ChatGPT and Google's Bard, are trained on text data from all over the internet: websites, articles, books, you name it. As a result, their training data, and therefore their outputs, overlap to a large degree.

The researchers published their findings in a paper titled "DarkBERT: A Language Model for the Dark Side of the Internet." Using the Tor network as a launching point for their model, they collected raw data and built a database from it.

As of yet, the paper has not been peer reviewed. DarkBERT takes its name from RoBERTa, the LLM architecture it is based on. Developed by Facebook researchers in 2019, RoBERTa is a transformer-based model.

Thanks to Facebook's optimized training method, RoBERTa produced state-of-the-art results on the General Language Understanding Evaluation (GLUE) benchmark, which tests the general language understanding capabilities of NLP systems.

Meta described RoBERTa as a "robustly optimized method for pretraining natural language processing (NLP) systems", an improvement upon BERT, which Google released in 2018 for NLP pretraining. Google made BERT open source, which allowed Meta to build on it and improve its performance.

The South Korean researchers behind DarkBERT have now demonstrated that RoBERTa can accomplish even more, because it was undertrained when released. Over roughly 15 days, they fed RoBERTa preprocessed data crawled from the dark web; the resulting model is DarkBERT. The research paper reveals that the study was conducted on a top-level machine with four NVIDIA A100 80GB GPUs and an Intel Xeon Gold 6348 CPU.
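Further pre-training RoBERTa on a new corpus uses the same masked-language-modeling objective as the original model: a fraction of tokens (15% in BERT/RoBERTa) is hidden, and the model learns to predict them from context. A self-contained sketch of that masking step, in plain Python rather than an actual training library:

```python
import random

MASK = "<mask>"  # RoBERTa's mask token

def mask_tokens(tokens, mask_prob=0.15, seed=None):
    """Randomly replace tokens with <mask> for masked language modeling.
    Returns (masked_tokens, labels), where labels holds the original
    token at masked positions and None elsewhere."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK)
            labels.append(tok)   # the model must predict this token
        else:
            masked.append(tok)
            labels.append(None)  # position ignored by the loss
    return masked, labels

tokens = "ransomware group leaks stolen data on onion site".split()
masked, labels = mask_tokens(tokens, seed=0)
```

This is a simplification: the real BERT/RoBERTa recipe also sometimes keeps the original token or substitutes a random one at a selected position, but the predict-the-hidden-token idea is the same.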

How does DarkBERT work?


While DarkBERT's name may imply the opposite, it is a system designed to support security and law enforcement, not one intended for malicious purposes.

Hackers and ransomware groups often upload stolen sensitive data to the dark web in hopes of selling it to other parties for profit. The research paper shows that DarkBERT can help security researchers automatically identify such websites. It can also crawl dark web forums and monitor them for exchanges of illegal information.
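The paper evaluates DarkBERT on tasks such as spotting ransomware leak sites by classifying page text. The model itself is not public, so the following is only a toy stand-in that scores pages by keyword overlap; the keyword list and threshold are invented for illustration:

```python
# Toy stand-in for leak-site detection: score a page by how many
# leak-related terms it contains. A real system would use a trained
# classifier (such as one built on DarkBERT embeddings); this keyword
# list and the 0.5 threshold are illustrative assumptions only.
LEAK_TERMS = {"leak", "ransom", "stolen", "database", "dump", "victims"}

def leak_score(page_text: str) -> float:
    """Fraction of leak-related terms present in the page text."""
    words = set(page_text.lower().split())
    return len(LEAK_TERMS & words) / len(LEAK_TERMS)

def looks_like_leak_site(page_text: str, threshold: float = 0.5) -> bool:
    """Flag a page for human review when its score crosses the threshold."""
    return leak_score(page_text) >= threshold
```

The advantage of a learned classifier over keyword matching is exactly what the paper argues for: dark web text uses domain-specific slang that generic models and fixed word lists miss.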

The public cannot access DarkBERT. Because it was trained on sensitive data, the model has not been released; the researchers say a release of the preprocessed version is planned, but they did not specify when that will happen.

DarkBERT may well represent an artificial intelligence future in which models are trained on targeted data so that they can be tailored to specific tasks. As opposed to ChatGPT and Google Bard, both of which can perform multiple functions, DarkBERT is a tool designed specifically for thwarting hackers.

Even though there are numerous artificial intelligence chatbots out there, you need to be careful when using them. Fake ChatGPT applications can infect you with malware, and careless use can expose sensitive data, as Samsung employees recently discovered.

When using these popular AI chatbots, make sure you are on the official website rather than an impostor. At the time of writing, OpenAI, Microsoft, and Google have yet to release official apps for their AI chatbots, which means any app claiming to be ChatGPT, Bing Chat, or Google Bard is unofficial.