Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label AI Security. Show all posts

Look Out For This New Emerging Threat In The World Of AI

 



As per a recent discovery, a team of researchers has surfaced a groundbreaking AI worm named 'Morris II,' capable of infiltrating AI-powered email systems, spreading malware, and stealing sensitive data. This creation, reminiscent of the notorious computer worm from 1988, poses a significant threat to users relying on AI applications such as Gemini Pro, ChatGPT 4.0, and LLaVA.

Developed by Ben Nassi, Stav Cohen, and Ron Bitton, Morris II exploits vulnerabilities in Generative AI (GenAI) models by utilising adversarial self-replicating prompts. These prompts trick the AI into replicating and distributing harmful inputs, leading to activities like spamming and unauthorised data access. The researchers explain that this approach enables the infiltration of GenAI-powered email assistants, putting users' confidential information, such as credit card details and social security numbers, at risk.

Upon discovering Morris II, the responsible research team promptly reported their findings to Google and OpenAI. While Google remained silent on the matter, an OpenAI spokesperson acknowledged the issue, stating that the worm exploits prompt-injection vulnerabilities through unchecked or unfiltered user input. OpenAI is actively working to enhance its systems' resilience and advises developers to implement methods ensuring they don't work with potentially harmful inputs.

The potential impact of Morris II raises concerns about the security of AI systems, prompting the need for increased vigilance among users and developers alike. As we delve into the specifics, Morris II operates by injecting prompts into AI models, coercing them into replicating inputs and engaging in malicious activities. This replication extends to spreading the harmful prompts to new agents within the GenAI ecosystem, perpetuating the threat across multiple systems.

To counter this threat, OpenAI emphasises the importance of implementing robust input validation processes. By ensuring that user inputs undergo thorough checks and filters, developers can mitigate the risk of prompt-injection vulnerabilities. OpenAI is also actively working to fortify its systems against such attacks, underscoring the evolving nature of cybersecurity in the age of artificial intelligence.

In essence, the emergence of Morris II serves as a stark reminder of the digital culture of cybersecurity threats within the world of artificial intelligence. Users and developers must stay vigilant, adopting best practices to safeguard against potential vulnerabilities. OpenAI's commitment to enhancing system resilience reflects the collaborative effort required to stay one step ahead of these risks in this ever-changing technological realm. As the story unfolds, it remains imperative for the AI community to address and mitigate such threats collectively, ensuring the continued responsible and secure development of artificial intelligence technologies.


Security Trends to Monitor in 2024

 

As the new year unfolds, the business landscape finds itself on the brink of a dynamic era, rich with possibilities, challenges, and transformative trends. In the realm of enterprise security, 2024 is poised to usher in a series of significant shifts, demanding careful attention from organizations worldwide.

Automation Takes Center Stage: In recent years, the integration of Artificial Intelligence (AI) and Machine Learning (ML) technologies has become increasingly evident, setting the stage for a surge in automation within the cybersecurity domain. As the threat landscape evolves, the use of AI and ML algorithms for automated threat detection is gaining prominence. This involves the analysis of vast datasets to identify anomalies and predict potential cyber attacks before they materialize.

Endpoint protection is experiencing heightened sophistication, with AI playing a pivotal role in proactively identifying and responding to real-time threats. Notably, Apple's introduction of declarative device management underscores the industry's shift towards automation, where AI integration enables endpoints to autonomously troubleshoot and resolve issues. This marks a significant step forward in reducing equipment downtime and achieving substantial cost savings.

Navigating the Dark Side of Generative AI: In 2024, the risks associated with the rapid adoption of generative AI technologies are coming to the forefront. The use of AI coding bots for code generation gained substantial traction in 2023, reaching a point where companies, including tech giant Samsung, had to impose bans on certain models like ChatGPT due to their role in writing code within office environments.

Despite the prevalence of large language models (LLMs) for code generation, concerns are rising about the integrity of the generated code. Companies, in their pursuit of agility, may deploy AI-generated code without thorough scrutiny for potential security flaws, posing a tangible risk of data breaches with severe consequences. Additionally, the year 2024 is anticipated to witness a surge in AI-driven cyber attacks, with attackers leveraging the technology to craft hyper-realistic phishing scams and automate social engineering endeavours.

Passwordless Authentication- Paradigm Shift: The persistent discourse around moving beyond traditional passwords is expected to materialize in a significant way in 2024. Biometric authentication, including fingerprint and face unlock technologies, is gaining familiarity as a promising candidate for a more secure and user-friendly authentication system.

The integration of passkeys, combining biometrics with other factors, offers several advantages, eliminating the need for users to remember passwords. This approach provides a secure and versatile user verification method across various devices and accounts. Major tech players like Google and Apple are actively introducing their own passkey solutions, signalling a collective industry push toward a password-less future. The developments in biometric authentication and the adoption of passkeys suggest that 2024 could be a pivotal year, marking a widespread shift towards more secure and user-friendly authentication methods.

Overall, the landscape of enterprise security beckons with immense potential, fueled by advancements in automation, the challenges of generative AI, and the imminent shift towards passwordless authentication. Businesses are urged to stay vigilant, adapt to these transformative trends, and navigate the evolving cybersecurity landscape for a secure and resilient future.

What are the Privacy Measures Offered by Character AI?


In the era where virtual communication has played a tremendous part in people’s lives, it has also raised concerns regarding its corresponding privacy and data security. 

When it comes to AI-based platforms like Character AI, or generative AI, privacy concerns are apparent. Online users might as well wonder if someone other than them could have access to their chats with Character AI. 

Here, we are exploring the privacy measures that Character AI provides.

Character AI Privacy: Can Other People See a User’s Chats?

The answer is: No, other people can not have access to the private conversations or chats that a user may have had with the character in Character AI. Strict privacy regulations and security precautions are usually in place to preserve the secrecy of user communications. 

Nonetheless, certain data may be analyzed or employed in a combined, anonymous fashion to enhance the functionality and efficiency of the platform. Even with the most sophisticated privacy protections in place, it is always advisable to withhold sensitive or personal information.

1. Privacy Settings on Characters

Character AI gives users the flexibility to alter the characters they create visibility. Characters are usually set to public by default, making them accessible to the larger community for discovery and enjoyment. Nonetheless, the platform acknowledges the significance of personal choices and privacy issues

2. Privacy Options for Posts

Character AI allows users to post as well. Users can finely craft a post, providing them with a plethora of options to align with the content and sharing preferences.

Public posts are available to everyone in the platform's community and are intended to promote an environment of open and sharing creativity. 

Private posts, on the other hand, offer a more private and regulated sharing experience by restricting content viewing to a specific group of recipients. With this flexible approach to post visibility, users can customize their content-sharing experience to meet their own requirements.

3. Moderation of Community-Visible Content 

Character AI uses a vigilant content monitoring mechanism to keep a respectful and harmonious online community. When any content is shared or declared as public, this system works proactively to evaluate and handle it.

The aim is to detect and address any potentially harmful or unsuitable content, hence maintaining the platform's commitment to offering a secure and encouraging environment for users' creative expression. The moderation team puts a lot of effort into making sure that users can collaborate and engage with confidence, unaffected by worries about the suitability and calibre of the content in the community.

4. Consulting the Privacy Policy

Users who are looking for a detailed insight into Character AI’s privacy framework can also check its Privacy Policy document, which caters for their requirements. The detailed document involves a detailed understanding of the different attributes of data management, user rights and responsibilities, and the intricacies of privacy settings.

To learn more about issues like default visibility settings, data handling procedures, and the scope of content moderation, users can browse the Privacy Policy. It is imperative that users remain knowledgeable about these rules in order to make well-informed decisions about their data and privacy preferences.

Character AI's community norms, privacy controls, and distinctive features all demonstrate the company's commitment to privacy. To safeguard its users' data, it is crucial that users interact with these privacy settings, stay updated on platform regulations, and make wise decisions. In the end, how users use these capabilities and Character AI's dedication to ethical data handling will determine how secure the platform is.  

Google Workspace Unveils AI-Powered Security

 

Google LLC announced today a set of new artificial intelligence-powered cyber defence controls, the majority of which will be deployed to its Workspace cloud platform later this year. Data loss prevention, often known as DLP, and data privacy controls are among the topics that are covered. 

Many of these involve a series of automated updates that use Google Drive's AI engine to continuously analyse data input. Administrators for Google's Workspace, which the company claims is used by 9 million organisations, can establish incredibly specific context-aware policy controls, such as looking for certain device locations. 

The DLP technology has already been available in Google services such as Chat and Chrome browsers, and it will be extended to Gmail later this year. The purpose is to assist business information technology managers in defending against phishing scams that may steal data and account information. 

Another set of features includes upgrades to Sovereign Controls in Workspace, which Google introduced last year. These include providing client-side encryption to mobile versions of Google Calendar, Gmail, and Meet to prevent third-party data access. 

,Another feature allows users to browse, modify, or convert Microsoft Excel files into GSheets. In addition, Google is collaborating with Thales SA, Stormshield, and Flowcrypt to keep encryption keys in their own repositories. Google has not and will not store any of the encryption keys on its own servers. 

A last set of tools can be used to combat phishing and other attacks. Many Workspace account administrators may need to set up additional authorisation factors for their accounts later this year. According to Google, it will begin with its top resellers and enterprise customers. It will also demand multiparty approvals for specific high-risk actions, such as updating authentication settings, later this year. 

Finally, the company said that clients will be able to integrate their Workspace activity and alert logs into Chronicle, Google's threat data ingestion and anomaly detection service. For example, Andy Wen, director of product management for Google Workspace, stated during a press conference that a bad actor could see the following two events: a search for active cryptocurrency wallets followed by the creation of a mail forwarding rule to an external account. This situation could be flagged as suspicious by Chronicle for further examination.

AI Image Generators: A Novel Cybersecurity Risk

 

Our culture could be substantially changed by artificial intelligence (AI) and there is a lot to look forward to if the AI tools we already have are any indication of what is to come.

A number of things are also worrying us. AI is specifically being weaponized by cybercriminals and other threat actors. AI picture generators are not impervious to misuse, and this is not just a theoretical worry. We have covered the top 4 ways threat actors use AI image generators to their advantage in this article, which can pose a severe security risk. 

Social engineering

Social engineering, including making phoney social media profiles, is one clear way threat actors use AI image generators. A scammer might create fake social media profiles using some of these tools that produce incredibly realistic photos that exactly resemble real photographs of actual individuals. Unlike real people's photos, AI-generated photos cannot be located via reverse image search, and the cybercriminal need not rely on a small number of images to trick their target—by utilising AI, they may manufacture as many as they want, building a credible online identity from scratch. 

Charity fraud 

Millions of people all across the world gave clothing, food, and money to the victims of the deadly earthquakes that hit Turkey and Syria in February 2023. 

A BBC investigation claims that scammers took advantage of this by utilising AI to produce convincing photos and request money. One con artist used AI to create images of ruins on TikTok Live and asked viewers for money. Another posted an AI-generated image of a Greek firefighter rescuing a hurt child from ruins and requested his followers to donate Bitcoin. 

Disinformation and deepfakes 

Governments, activist organisations, and think tanks have long issued warnings about deepfakes. AI picture producers add another element to this issue with how realistic their works are. Deep Fake Neighbour Wars is a comedy programme from the UK that pokes fun at strange celebrity pairings. 

This may have consequences in the real world, as it almost did in March 2022 when an internet hoax video purporting to be Ukrainian President Volodymyr Zelensky ordered Ukrainians to surrender spread, according to NPR. But that's just one instance; there are innumerable other ways a threat actor may use AI to distribute fake news, advance a false narrative, or ruin someone's reputation. 

Advertising fraud 

In 2022, researchers at TrendMicro found that con artists were utilising AI-generated material to produce deceptive adverts and peddle dubious goods. They produced photos that implied well-known celebrities were using particular goods, and they then employed those photos in advertising campaigns. 

One advertisement for a "financial advicement opportunity," for instance, featured Tesla's creator and CEO, billionaire Elon Musk. The AI-generated footage featured made it appear as though Musk was endorsing the product, which is likely what convinced unwary viewers to click the ads. Of course, Musk never actually did. 

Looking forward

Government regulators and cybersecurity specialists will likely need to collaborate in the future to combat the threat of AI-powered crimes. But, how can we control AI and safeguard common people without impeding innovation and limiting online freedoms? For many years to come, that issue will be a major concern. 

Do all you can to safeguard yourself while you wait for a response, such as thoroughly checking any information you find online, avoiding dubious websites, using safe software, keeping your gadgets up to date, and learning how to make the most of artificial intelligence.