
Rising Email Security Threats: Here’s All You Need to Know


A recent study highlights the heightened threat posed by spam and phishing emails, driven by the proliferation of generative artificial intelligence (AI) tools such as ChatGPT and the growing popularity of cloud services.

According to a new report from VIPRE Security Group, the surge in cloud usage has correlated with an uptick in hacker activity. In the most recent quarter, 58% of malicious emails delivered malware through links, while the remaining 42% relied on attachments.

Furthermore, cloud storage services have emerged as a prominent channel for delivering malicious spam (malspam), accounting for 67% of malspam delivery in the quarter, per VIPRE's findings. The remaining 33% relied on legitimate but compromised websites.

The integration of generative AI tools has made spam and phishing emails significantly harder to detect. Traditionally, grammatical errors, misspellings, and unusual formatting were red flags that tipped off potential victims to a phishing attempt, enabling them to avoid downloading attachments or clicking links.

With the advent of AI tools like ChatGPT, however, hackers can now craft well-structured, linguistically polished messages that are virtually indistinguishable from benign correspondence, forcing recipients to adopt additional precautions to thwart the threat.
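To make that limitation concrete, here is a minimal, illustrative Python sketch of the kind of misspelling heuristic described above. The tiny word list and sample messages are hypothetical, and real filters use far richer signals; the point is only that fluent, AI-polished text scores clean on this check.

```python
# A minimal sketch: counting misspellings as a phishing signal.
# The word list below is illustrative, not a real dictionary.
KNOWN_WORDS = {
    "your", "account", "has", "been", "suspended", "please", "verify",
    "click", "the", "link", "below", "to", "restore", "access",
}

def misspelling_ratio(text: str) -> float:
    """Fraction of words not found in the known-word list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words)

# Classic clumsy phishing copy trips the heuristic...
clumsy = "Yuor acount has been suspnded, please verfiy the link below"
# ...while fluent, AI-polished copy sails straight through it.
fluent = "Your account has been suspended. Please click the link below to restore access."

print(misspelling_ratio(clumsy))   # high ratio -> flagged
print(misspelling_ratio(fluent))   # near zero  -> passes the check
```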

In the third quarter of this year alone, VIPRE's tools identified a staggering 233.9 million malicious emails. Among these, 110 million contained malicious content and 118 million carried malicious attachments. A further 150,000 emails displayed "previously unknown behaviors," indicating that hackers are continually refining their techniques to improve their success rates.

Phishing and spam persist as favored attack methods in every hacker's arsenal. They are cheap to produce and deploy and, with a bit of luck, can reach a wide audience of potential victims. Companies are advised to educate their staff about the risks of phishing and to scrutinize every incoming email, regardless of the sender's apparent legitimacy.
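As one concrete example of that scrutiny, the sketch below (a simplified Python illustration, with a hypothetical sample message) flags links whose displayed text names a different domain than the underlying href, a common phishing tell that survives even grammatically perfect email copy.

```python
# A simplified sketch of one email-scrutiny step: flag links whose
# visible text shows one domain while the href points somewhere else.
# The sample HTML is hypothetical; real mail needs a full MIME parser.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = "".join(self._text).strip()
            real = urlparse(self._href).netloc
            # Flag when the displayed text looks like a domain
            # that the actual destination does not match.
            if "." in shown and real and shown not in real:
                self.mismatches.append((shown, real))
            self._href = None

body = '<p>Verify here: <a href="http://evil.example.net/login">paypal.com</a></p>'
auditor = LinkAuditor()
auditor.feed(body)
print(auditor.mismatches)  # [('paypal.com', 'evil.example.net')]
```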

OpenAI Faces Lawsuit for Exploiting User Data to Train ChatGPT, DALL-E


OpenAI, the acclaimed artificial intelligence research company, was recently hit with a class-action lawsuit in the United States for allegedly stealing vast amounts of personal data to train its AI chatbot ChatGPT and image generator DALL-E.

According to the lawsuit, filed in the Northern District of California, OpenAI secretly harvested "massive amounts of personal data" from people's social media pages and private conversations, and even collected medical information, to train its AI models, violating multiple privacy regulations.

The lawsuit accuses OpenAI of violating privacy and ethics

According to the lawsuit, OpenAI chose to "pursue profit at the expense of privacy, security, and ethics" by scouring the internet for troves of sensitive personal data, which it fed into the large language models (LLMs) and other deep learning algorithms used to create ChatGPT and DALL-E.

While semi-public information such as social media posts was allegedly gathered, more sensitive information such as keystrokes, personally identifiable information (PII), financial data, biometrics, patient records, and browser cookies was also allegedly harvested.

The lawsuit also claims that OpenAI has access to large amounts of medical data from unwitting patients, aided by healthcare practitioners' eagerness to integrate a still-immature chatbot into their practices. When a patient describes their medical issues to ChatGPT, that information is fed back into the chatbot's LLM.

Actual health records are also at risk 

One of the plaintiffs says she used a tool called Have I Been Trained to discover that private clinical photographs, taken to document treatment for a genetic condition, had been extracted from her medical record and added to Common Crawl, a data repository that boasts it "can be accessed and analyzed by anyone." According to the lawsuit, her images were monetized without her knowledge by becoming part of OpenAI's product offerings.

Perhaps most shockingly, the suit claims OpenAI took photographs of children from the internet and used them to train DALL-E, its well-known image generator. According to reports, this data has made DALL-E popular for all the wrong reasons.

Netizens and patients worried about their data privacy

According to the lawsuit, internet users and medical patients have a reasonable expectation that their information "would not be collected by any third party looking to compile and use all of [their] information and data for commercial purposes."

The case raises serious concerns about the ethics of AI development and the use of personal data to train AI models. It underscores the need for greater transparency and accountability in how companies developing AI technologies use personal data.

As artificial intelligence evolves rapidly, we must have open and honest conversations about the ethical implications of its development and use. This case serves as a warning that we must protect our personal information and ensure it is not misused.