Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label Artifcial Intelligence. Show all posts

Google Gemini Exploit Enables Covert Delivery of Phishing Content

 


An AI-powered automation system in professional environments, such as Google Gemini for Workspace, is vulnerable to a new security flaw. Using Google’s advanced large language model (LLM) integration within its ecosystem, Gemini enables the use of artificial intelligence (AI) directly with a wide range of user tools, including Gmail, to simplify workplace tasks. 

A key feature of the app is the ability to request concise summaries of emails, which are intended to save users time and prevent them from becoming fatigued in their inboxes by reducing the amount of time they spend in it. Security researchers have, however identified a significant flaw in this feature which appears to be so helpful. 

As Mozilla bug bounty experts pointed out, malicious actors can take advantage of the trust users place in Gemini's automated responses by manipulating email content so that the AI is misled into creating misleading summaries by manipulating the content. As a result of the fact that Gemini operates within Google's trusted environment, users are likely to accept its interpretations without question, giving hackers a prime opportunity. This finding highlights what is becoming increasingly apparent in the cybersecurity landscape: when powerful artificial intelligence tools are embedded within widely used platforms, even minor vulnerabilities can be exploited by sophisticated social engineers. 

It is the vulnerability at the root of this problem that Gemini can generate e-mail summaries that seem legitimate but can be manipulated so as to include deceptive or malicious content without having to rely on conventional red flags, such as suspicious links or file attachments, to detect it. 

An attack can be embedded within an email body as an indirect prompt injection by attackers, according to cybersecurity researchers. When Gemini's language model interprets these hidden instructions during thesummarisationn process, it causes the AI to unintentionally include misleading messages in the summary that it delivers to the user, unknowingly. 

As an example, a summary can falsely inform the recipient that there has been a problem with their account, advising them to act right away, and subtly direct them to a phishing site that appears to be reliable and trustworthy. 

While prompt injection attacks on LLMs have been documented since the year 2024, and despite the implementation of numerous safeguards by developers to prevent these manipulations from occurring, this method continues to be effective even today. This tactic is persisting because of the growing sophistication of threat actors as well as the challenge of fully securing generative artificial intelligence systems that are embedded in critical communication platforms. 

There is also a need to be more vigilant when developing artificial intelligence and making sure users are aware of it, as traditional cybersecurity cues may no longer apply to these AI-driven environments. In order to find these vulnerabilities, a cybersecurity researcher, Marco Figueroa, identified them and responsibly disclosed them through Mozilla's 0Din bug bounty program, which specialises in finding vulnerabilities in generative artificial intelligence. 

There is a clever but deeply concerning method of exploitation demonstrated in Figueroa's proof-of-concept. The attack begins with a seemingly harmless e-mail sent to the intended victim that appears harmless at first glance. A phishing prompt disguised in white font on a white background is hidden in a secondary, malicious component of the message, which conceals benign information so as to avoid suspicion of the message.

When viewed in a standard email client, it is completely invisible to the human eye and is hidden behind benign content. The malicious message is strategically embedded within custom tags, which are not standard HTML elements, but which appear to be interpreted in a privileged manner by Gemini's summarization function, as they are not standard HTML elements. 

By activating the "Summarise this email" feature in Google Gemini, a machine learning algorithm takes into account both visible and hidden text within the email. Due to the way Gemini handles input wrapped in tags, it prioritises and reproduces the hidden message verbatim within the summary, placing it at the end of the response, as it should. 

In consequence, what appears to be a trustworthy, AI-generated summary now contains manipulative instructions which can be used to entice people to click on phishing websites, effectively bypassing traditional security measures. A demonstration of the ease with which generative AI tools can be exploited when trust in the system is assumed is demonstrated in this attack method, and it further demonstrates the importance of robust sanitisation protocols as well as input validation protocols for prompt sanitisation. 

It is alarming how effectively the exploitation technique is despite its technical simplicity. An invisible formatting technique enables the embedding of hidden prompts into an email, leveraging Google Gemini's interpretation of raw content to capitalise on its ability to comprehend the content. In the documented attack, a malicious actor inserts a command inside a span element with font-size: 0 and colour: white, effectively rendering the content invisible to the recipient who is viewing the message in a standard email client. 

Unlike a browser, which renders only what can be seen by the user, Gemini process the entire raw HTML document, including all hidden elements. As a consequence, Gemini's summary feature, which is available to the user when they invoke it, interprets and includes the hidden instruction as though it were part of the legitimate message in the generated summary.

It is important to note that this flaw has significant implications for services that operate at scale, as well as for those who use them regularly. A summary tool that is capable of analysing HTML inline styles, such as font-size:0, colour: white, and opacity:0, should be instructed to ignore or neutralise these styles, which render text visually hidden. 

The development team can also integrate guard prompts into LLM behaviour, instructing models not to ignore invisible content, for example. In terms of user education, he recommends that organisations make sure their employees are aware that AI-generated summaries, including those generated by Gemini, serve only as informational aids and should not be treated as authoritative sources when it comes to urgent or security-related instructions. 

A vulnerability of this magnitude has been discovered at a crucial time, as more and more tech companies are increasingly integrating LLMs into their platforms to automate productivity. In contrast to previous models, where users would manually trigger AI tools, the new paradigm is a shift to automated AI tools that will run in the background instead.

It is for this reason that Google introduced the Gemini side panel last year in Gmail, Docs, Sheets, and other Workspace apps to help users summarise and create content within their workflow seamlessly. A noteworthy change in Gmail's functionality is that on May 29, Google enabled automatic email summarisation for users whose organisations have enabled smart features across Gmail, Chat, Meet, and other Workspace tools by activating a default personalisation setting. 

As generative artificial intelligence becomes increasingly integrated into everyday communication systems, robust security protocols will become increasingly important as this move enhances convenience. This vulnerability exposes an issue of fundamental inadequacy in the current guardrails used for LLM, primarily focusing on filtering or flagging content that is visible to the user. 

A significant number of AI models, including the Google Gemini AI model, continue to use raw HTML markup, making them susceptible to obfuscation techniques such as zero-font text and white-on-white formatting. Despite being invisible to users, these techniques are still considered valid input to the model by the model-thereby creating a blind spot for attackers that can easily be exploited by attackers. 

Mozilla's 0Din program classified the issue as a moderately serious vulnerability by Mozilla, and said that the flaw could be exploited by hackers to harvest credential information, use vishing (voice-phishing), and perform other social engineering attacks by exploiting trust in artificial intelligence-generated content in order to gain access to information. 

In addition to the output filter, a post-processing filter can also function as an additional safeguard by inspecting artificial intelligence-generated summaries for signs of manipulation, such as embedded URLs, telephone numbers, or language that implies urgency, flagging these suspicious summaries for human review. This layered defence strategy is especially vital in environments where AI operates at scale. 

As well as protecting against individual attacks, there is also a broader supply chain risk to consider. It is clear that mass communication systems, such as CRM platforms, newsletters, and automated support ticketing services, are potential vectors for injection, according to researcher Marco Figueroa. There is a possibility that a single compromised account on any of these SaaS systems can be used to spread hidden prompt injections across thousands of recipients, turning otherwise legitimate SaaS services into large-scale phishing attacks. 

There is an apt term to describe "prompt injections", which have become the new email macros according to the research. The exploit exhibited by Phishing for Gemini significantly underscores a fundamental truth: even apparently minor, invisible code can be weaponised and used for malicious purposes. 

As long as language models don't contain robust context isolation that ensures third-party content is sandboxed or subjected to appropriate scrutiny, each piece of input should be viewed as potentially executable code, regardless of whether it is encoded correctly or not. In light of this, security teams should start to understand that AI systems are no longer just productivity tools, but rather components of a threat surface that need to be actively monitored, measured, and contained. 

The risk landscape of today does not allow organisations to blindly trust AI output. Because generative artificial intelligence is being integrated into enterprise ecosystems in ever greater numbers, organisations must reevaluate their security frameworks in order to address the emerging risks that arise from machine learning systems in the future. 

Considering the findings regarding Google Gemini, it is urgent to consider AI-generated outputs as potential threat vectors, as they are capable of being manipulated in subtle but impactful ways. A security protocol based on AI needs to be implemented by enterprises to prevent such exploitations from occurring, robust validation mechanisms for automated content need to be established, and a collaborative oversight system between development, IT, and security teams must be established to ensure this doesn't happen again. 

Moreover, it is imperative that AI-driven tools, especially those embedded within communication workflows, be made accessible to end users so that they can understand their capabilities and limitations. In light of the increasing ease and pervasiveness of automation in digital operations, it will become increasingly essential to maintain a culture of informed vigilance across all layers of the organisation to maintain trust and integrity.

AI Skills Shortage Deepens as Enterprise Demand Grows Faster Than Talent Supply

 

The shortage of skilled professionals in artificial intelligence is becoming a major concern for enterprises, as organizations race to adopt the technology without a matching increase in qualified talent. The latest Harvey Nash Digital Leadership report, released by Nash Squared in May, highlights a sharp rise in demand for AI skills across industries—faster than any previous tech trend tracked in the last 16 years. 

Based on responses from over 2,000 tech executives, the report found that more than half of IT leaders now cite a lack of AI expertise as a key barrier to progress. This marks a steep climb from just 28% a year ago. In fact, AI has jumped from the sixth most difficult skill to hire for to the number one spot in just over a year. Interest in AI adoption continues to soar, with 90% of surveyed organizations either investing in or piloting AI solutions—up significantly from 59% in 2023. Despite this enthusiasm, a majority of companies have not yet seen measurable returns from their AI projects. Many remain stuck in early testing phases, unable to deploy solutions at scale. 

Numerous challenges continue to slow enterprise AI deployment. Besides the scarcity of skilled professionals, companies face obstacles such as inadequate data infrastructure and tight budgets. Without the necessary expertise, organizations struggle to transition from proof-of-concept to full integration. Bev White, CEO of Nash Squared, emphasized that enterprises are navigating uncharted territory. “There’s no manual for scaling AI,” she explained. “Organizations must combine various strategies—formal education, upskilling of tech and non-tech teams, and hands-on experimentation—to build their AI capabilities.” She also stressed the need for operational models that naturally embed AI into daily workflows. 

The report’s findings show that the surge in AI skill demand has outpaced any other technology shift in recent memory. Sectors like manufacturing, education, pharmaceuticals, logistics, and professional services are all feeling the pressure to hire faster than the talent pool allows. Supporting this trend, job market data shows explosive growth in demand for AI roles. 

According to Indeed, postings for generative AI positions nearly tripled year-over-year as of January 2025. Unless companies prioritize upskilling and talent development, the widening AI skills gap could undermine the long-term success of enterprise AI strategies. For now, the challenge of turning AI interest into practical results remains a steep climb.

How OpenAI’s New AI Agents Are Shaping the Future of Coding

 


OpenAI is taking the challenge of bringing into existence the very first powerful AI agents designed specifically to revolutionise the future of software development. It became so advanced that it could interpret in plain language instructions and generate complex code, hoping to make it achievable to complete tasks that would take hours in only minutes. This is the biggest leap forward AI has had up to date, promising a future in which developers can have a more creative and less repetitive target while coding.

Transforming Software Development

These AI agents represent a major change in the type of programming that's created and implemented. Beyond typical coding assistants, which may use suggestions to complete lines, OpenAI's agents produce fully formed, functional code from scratch based on relatively simple user prompts. It is theoretically possible that developers could do their work more efficiently, automating repetitive coding and focusing more on innovation and problem solving on more complicated issues. The agents are, in effect, advanced assistants capable of doing more helpful things than the typical human assistant with anything from far more complex programming requirements.


Competition from OpenAI with Anthropic

As OpenAI makes its moves, it faces stiff competition from Anthropic-an AI company whose growth rate is rapidly taking over. Having developed the first released AI models focused on advancing coding, Anthropic continues to push OpenAI to even further refinement in their agents. This rivalry is more than a race between firms; it is infusing quick growth that works for the whole industry because both companies are setting new standards by working on AI-powered coding tools. As both compete, developers and users alike stand to benefit from the high-quality, innovative tools that will be implied from the given race.


Privacy and Security Issues

The AI agents also raise privacy issues. Concerns over the issue of data privacy and personal privacy arise if these agents can gain access to user devices. Secure integration of the agents will require utmost care because developers rely on the unassailability of their systems. Balancing AI's powerful benefits with needed security measures will be a key determinant of their success in adoption. Also, planning will be required for the integration of these agents into the current workflows without causing undue disruptions to the established standards and best practices in security coding.


Changing Market and Skills Environment

OpenAI and Anthropic are among the leaders in many of the changes that will remake both markets and skills in software engineering. As AI becomes more central to coding, this will change the industry and create new sorts of jobs as it requires the developer to adapt toward new tools and technologies. The extensive reliance on AI in code creation would also invite fresh investments in the tech sector and accelerate broadening the AI market.


The Future of AI in Coding

Rapidly evolving AI agents by OpenAI mark the opening of a new chapter for the intersection of AI and software development, promising to accelerate coding, making it faster, more efficient, and accessible to a wider audience of developers who will enjoy assisted coding towards self-writing complex instructions. The further development by OpenAI will most definitely continue to shape the future of this field, representing exciting opportunities and serious challenges capable of changing the face of software engineering in the foreseeable future.




Want to Make the Most of ChatGPT? Here Are Some Go-To Tips

 







Within a year and a half, ChatGPT has grown from an AI prototype to a broad productivity assistant, even sporting its text and code editor called Canvas. Soon, OpenAI will add direct web search capability to ChatGPT, putting the platform at the same table as Google's iconic search. With these fast updates, ChatGPT is now sporting quite a few features that may not be noticed at first glance but are deepening the user experience if one knows where to look.

This is the article that will teach you how to tap into ChatGPT, features from customization settings to unique prompting techniques, and not only five must-know tips will be useful in unlocking the full range of abilities of ChatGPT to any kind of task, small or big.


1. Rename Chats for Better Organisation

A new conversation with ChatGPT begins as a new thread, meaning that it will remember all details concerning that specific exchange but "forget" all the previous ones. This way, you can track the activities of current projects or specific topics because you can name your chats. The chat name that it might try to suggest is related to the flow of the conversation, and these are mostly overlooked contexts that users need to recall again. Renaming your conversations is one simple yet powerful means of staying organised if you rely on ChatGPT for various tasks.

To give a name to a conversation, tap the three dots next to the name in the sidebar. You can also archive older chats to remove them from the list without deleting them entirely, so you don't lose access to the conversations that are active.


2. Customise ChatGPT through Custom Instructions

Custom Instructions in ChatGPT is a chance to make your answers more specific to your needs because you will get to share your information and preferences with the AI. This is a two-stage personalization where you are explaining to ChatGPT what you want to know about yourself and, in addition, how you would like it to be returned. For instance, if you ask ChatGPT for coding advice several times a week, you can let the AI know what programming languages you are known in or would like to be instructed in so it can fine-tune the responses better. Or, you should be able to ask for ChatGPT to provide more verbose descriptions or to skip steps in order to make more intuitive knowledge of a topic.

To set up personal preferences, tap the profile icon on the upper right, and then from the menu, "Customise ChatGPT," and then fill out your preferences. Doing this will enable you to get responses tailored to your interests and requirements.


3. Choose the Right Model for Your Use

If you are a subscriber to ChatGPT Plus, you have access to one of several AI models each tailored to different tasks. The default model for most purposes is GPT-4-turbo (GPT-4o), which tends to strike the best balance between speed and functionality and even supports other additional features, including file uploads, web browsing, and dataset analysis.

However, other models are useful when one needs to describe a rather complex project with substantial planning. You may initiate a project using o1-preview that requires deep research and then shift the discussion to GPT-4-turbo to get quick responses. To switch models, you can click on the model dropdown at the top of your screen or type in a forward slash (/) in the chat box to get access to more available options including web browsing and image creation.


4. Look at what the GPT Store has available in the form of Mini-Apps

Custom GPTs, and the GPT Store enable "mini-applications" that are able to extend the functionality of the platform. The Custom GPTs all have some inbuilt prompts and workflows and sometimes even APIs to extend the AI capability of GPT. For instance, with Canva's GPT, you are able to create logos, social media posts, or presentations straight within the ChatGPT portal by linking up the Canva tool. That means you can co-create visual content with ChatGPT without having to leave the portal.

And if there are some prompts you often need to apply, or some dataset you upload most frequently, you can easily create your Custom GPT. This would be really helpful to handle recipes, keeping track of personal projects, create workflow shortcuts and much more. Go to the GPT Store by the "Explore GPTs" button in the sidebar. Your recent and custom GPTs will appear in the top tab, so find them easily and use them as necessary.


5. Manage Conversations with a Fresh Approach

For the best benefit of using ChatGPT, it is key to understand that every new conversation is an independent document with its "memory." It does recall enough from previous conversations, though generally speaking, its answers depend on what is being discussed in the immediate chat. This made chats on unrelated projects or topics best started anew for clarity.

For long-term projects, it might even be logical to go on with a single thread so that all relevant information is kept together. For unrelated topics, it might make more sense to start fresh each time to avoid confusion. Another way in which archiving or deleting conversations you no longer need can help free up your interface and make access to active threads easier is


What Makes AI Unique Compared to Other Software?

AI performs very differently from other software in that it responds dynamically, at times providing responses or "backtalk" and does not simply do what it is told to do. Such a property leads to some trial and error to obtain the desired output. For instance, one might prompt ChatGPT to review its own output as demonstrated by replacing single quote characters by double quote characters to generate more accurate results. This is similar to how a developer optimises an AI model, guiding ChatGPT to "think" through something in several steps.

ChatGPT Canvas and other features like Custom GPTs make the AI behave more like software in the classical sense—although, of course, with personality and learning. If ChatGPT continues to grow in this manner, features such as these may make most use cases easier and more delightful.

Following these five tips should help you make the most of ChatGPT as a productivity tool and keep pace with the latest developments. From renaming chats to playing around with Custom GPTs, all of them add to a richer and more customizable user experience.


Is Google Spying on You? EU Investigates AI Data Privacy Concerns



Google is currently being investigated in Europe over privacy concerns raised about how the search giant has used personal data to train its generative AI tools. The subject of investigation is led by Ireland's Data Protection Commission, which ensures that the giant technical company adheres to strict data protection laws within the European Union. This paper will establish whether Google adhered to its legal process, such as obtaining a Data Protection Impact Assessment (DPIA), before using people's private information to develop its intelligent machine models.

Data Collection for AI Training Causes Concerns

Generative AI technologies similar to Google's brand Gemini have emerged into the headlines because these tend to create fake information and leak personal information. This raises the question of whether Google's AI training methods, necessarily involving tremendous amounts of data through which such training must pass, are GDPR-compliant-its measures to protect privacy and rights regarding individuals when such data is used for developing AI.

This issue at the heart of the probe is if Google should have carried out a DPIA, which is an acronym for Data Protection Impact Assessment-the view of any risks data processing activities may have on the rights to privacy of individuals. The reason for conducting a DPIA is to ensure that the rights of the individuals are protected simply because companies like Google process humongous personal data so as to create such AI models. The investigation, however, is specifically focused on how Google has been using its model called PaLM2 for running different forms of AI, such as chatbots and enhancements in the search mechanism.

Fines Over Privacy Breaches

But if the DPC finds that Google did not comply with the GDPR, then this could pose a very serious threat to the company because the fine may amount to more than 4% of the annual revenue generated globally. Such a company as Google can raise billions of dollars in revenue every year; hence such can result in a tremendous amount.

Other tech companies, including OpenAI and Meta, also received similar privacy-based questions relating to their data practices when developing AI.

Other general issues revolve around the processing of personal data in this fast-emerging sphere of artificial intelligence.

Google Response to Investigation

The firm has so far refused to answer questions over specific sources of data used to train its generative AI tools. A company spokesperson said Google remains dedicated to compliance with the GDPR and will continue cooperating with the DPC throughout the course of the investigation. The company maintains it has done nothing illegal. And just because a company is under investigation, that doesn't mean there's something wrong with it; the very act of inquiring itself forms part of a broader effort to ensure that companies using technology take account of how personal information is being used.

Data Protection in the AI Era

DPC questioning of Google is part of a broader effort by the EU regulators to ensure generative AI technologies adhere to the bloc's high data-privacy standards. As concerns over how personal information is used, more companies are injecting AI into their operations. The GDPR has been among the most important tools for ensuring citizens' protection against misuse of data, especially during cases involving sensitive or personal data.

In the last few years, other tech companies have been prosecuted with regard to their data-related activities in AI development. Recently, the developers of ChatGPT, OpenAI, and Elon Musk's X (formerly Twitter), faced investigations and complaints under the law of GDPR. This indicates the growing pressure technological advancement and the seriousness in the protection of privacy are under.

The Future of AI and Data Privacy

In developing AI technologies, firms developing relevant technology need to strike a balance between innovation and privacy. The more innovation has brought numerous benefits into the world-search capabilities and more efficient processes-the more it has opened risks to light by leaving personal data not so carefully dealt with in most cases.

Moving forward, the regulators, including the DPC, would be tracking the manner in which the companies like Google are dealing with the data. It is sure to make rules much more well-defined on what is permissible usage of personal information for developing the AI that would better protect individuals' rights and freedoms in this digital age.

Ultimately, the consequences of this study may eventually shape how AI technologies are designed and implemented in the European Union; it will certainly inform tech businesses around the world.


Council of Europe Lunches First AI Treaty


The Council of Europe has launched the first legally binding international treaty on artificial intelligence (AI) to align AI usage with the principles of human rights, democracy, and the rule of law. Known as the Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law (CETS No. 225), the treaty was opened for signature during a conference of Council of Europe Ministers of Justice held in Vilnius, Lithuania.  

Countries including the UK, Israel, the US, the European Union (EU), and Council of Europe member states such as Norway, Iceland, and Georgia have signed the treaty, underscoring its broad appeal. 

In her remarks, Council of Europe Secretary General Marija Pejčinović Burić emphasized the importance of ensuring AI adheres to existing legal and ethical standards. "We must ensure that the rise of AI upholds our standards, rather than undermining them," she said. Burić expressed hope that more countries will follow suit, ratifying the treaty so it can enter into force swiftly. 

The treaty offers a comprehensive legal framework for regulating AI throughout its lifecycle, from development to deployment. It encourages technological innovation while simultaneously addressing concerns surrounding public safety, privacy, and data protection. 

The signatories are also obligated to guard against potential misuse of AI technologies, particularly in areas such as misinformation and biased decision-making. Key safeguards outlined in the treaty include the protection of human rights, particularly concerning data privacy and non-discrimination; the safeguarding of democratic processes by preventing AI from eroding public trust in institutions; and the regulation of AI risks to uphold the rule of law. 

The Framework Convention was adopted by the Council of Europe Committee of Ministers on May 17, 2024, and will take effect three months after at least five signatories, including three Council of Europe member states, ratify it. The UK’s Lord Chancellor and Justice Secretary Shabana Mahmood signed the treaty, calling it a critical step in ensuring AI is harnessed responsibly without compromising core democratic values.

Breaking the Silence: The OpenAI Security Breach Unveiled

Breaking the Silence: The OpenAI Security Breach Unveiled

In April 2023, OpenAI, a leading artificial intelligence research organization, faced a significant security breach. A hacker gained unauthorized access to the company’s internal messaging system, raising concerns about data security, transparency, and the protection of intellectual property. 

In this blog, we delve into the incident, its implications, and the steps taken by OpenAI to prevent such breaches in the future.

The OpenAI Breach

The breach targeted an online forum where OpenAI employees discussed upcoming technologies, including features for the popular chatbot. While the actual GPT code and user data remained secure, the hacker obtained sensitive information related to AI designs and research. 

While Open AI shared the information with its staff and board members last year, it did not tell the public or the FBI about the breach, stating that doing so was unnecessary because no user data was stolen. 

OpenAI does not regard the attack as a national security issue and believes the attacker was a single individual with no links to foreign powers. OpenAI’s decision not to disclose the breach publicly sparked debate within the tech community.

Breach Impact

Leopold Aschenbrenner, a former OpenAI employee, had expressed worries about the company's security infrastructure and warned that its systems could be accessible to hostile intelligence services such as China. The company abruptly fired Aschenbrenner, although OpenAI spokesperson Liz Bourgeois told the New York Times that his dismissal had nothing to do with the document.

Similar Attacks and Open AI’s Response

This is not the first time OpenAI has had a security lapse. Since its launch in November 2022, ChatGPT has been continuously attacked by malicious actors, frequently resulting in data leaks. A separate attack exposed user names and passwords in February of this year. 

In March of last year, OpenAI had to take ChatGPT completely down to fix a fault that exposed customers' payment information to other active users, including their first and last names, email IDs, payment addresses, credit card info, and the last four digits of their card number. 

Last December, security experts found that they could convince ChatGPT to release pieces of its training data by prompting the system to endlessly repeat the word "poem."

OpenAI has taken steps to enhance security since then, including additional safety measures and a Safety and Security Committee.

Tech Giants Face Backlash Over AI Privacy Concerns






Microsoft recently faced material backlash over its new AI tool, Recall, leading to a delayed release. Recall, introduced last month as a feature of Microsoft's new AI companion, captures screen images every few seconds to create a searchable library. This includes sensitive information like passwords and private conversations. The tool's release was postponed indefinitely after criticism from data privacy experts, including the UK's Information Commissioner's Office (ICO).

In response, Microsoft announced changes to Recall. Initially planned for a broad release on June 18, 2024, it will first be available to Windows Insider Program users. The company assured that Recall would be turned off by default and emphasised its commitment to privacy and security. Despite these assurances, Microsoft declined to comment on claims that the tool posed a security risk.

Recall was showcased during Microsoft's developer conference, with Yusuf Mehdi, Corporate Vice President, highlighting its ability to access virtually anything on a user's PC. Following its debut, the ICO vowed to investigate privacy concerns. On June 13, Microsoft announced updates to Recall, reinforcing its "commitment to responsible AI" and privacy principles.

Adobe Overhauls Terms of Service 

Adobe faced a wave of criticism after updating its terms of service, which many users interpreted as allowing the company to use their work for AI training without proper consent. Users were required to agree to a clause granting Adobe a broad licence over their content, leading to suspicions that Adobe was using this content to train generative AI models like Firefly.

Adobe officials, including President David Wadhwani and Chief Trust Officer Dana Rao, denied these claims and clarified that the terms were misinterpreted. They reassured users that their content would not be used for AI training without explicit permission, except for submissions to the Adobe Stock marketplace. The company acknowledged the need for clearer communication and has since updated its terms to explicitly state these protections.

The controversy began with Firefly's release in March 2023, when artists noticed AI-generated imagery mimicking their styles. Users like YouTuber Sasha Yanshin cancelled their Adobe subscriptions in protest. Adobe's Chief Product Officer, Scott Belsky, admitted the wording was unclear and emphasised the importance of trust and transparency.

Meta Faces Scrutiny Over AI Training Practices

Meta, the parent company of Facebook and Instagram, has also been criticised for using user data to train its AI tools. Concerns were raised when Martin Keary, Vice President of Product Design at Muse Group, revealed that Meta planned to use public content from social media for AI training.

Meta responded by assuring users that it only used public content and did not access private messages or information from users under 18. An opt-out form was introduced for EU users, but U.S. users have limited options due to the lack of national privacy laws. Meta emphasised that its latest AI model, Llama 2, was not trained on user data, but users remain concerned about their privacy.

Suspicion arose in May 2023, with users questioning Meta's security policy changes. Meta's official statement to European users clarified its practices, but the opt-out form, available under Privacy Policy settings, remains a complex process. The company can only address user requests if they demonstrate that the AI "has knowledge" of them.

The recent actions by Microsoft, Adobe, and Meta highlight the growing tensions between tech giants and their users over data privacy and AI development. As these companies navigate user concerns and regulatory scrutiny, the debate over how AI tools should handle personal data continues to intensify. The tech industry's future will heavily depend on balancing innovation with ethical considerations and user trust.