AI Minefield: Risks of Gen AI in Your Personal Sphere

Many consumers are captivated by generative AI, adopting the new technology for a variety of personal and professional purposes.

However, many overlook its serious privacy implications.

Is Generative AI all sunshine and rainbows?

Consumer AI products, such as OpenAI's ChatGPT, Google's Gemini, Microsoft's Copilot, and the new Apple Intelligence, are widely available and growing. However, these programs differ in how they use and retain user data, and in many cases users are unaware of how their data is or may be used.

This is where being an informed consumer becomes critical. According to Jodi Daniels, chief executive and privacy expert at Red Clover Advisors, which advises businesses on privacy issues, the granularity of what you can control varies from one technology to the next. Daniels explained that there is no uniform opt-out that covers them all.

Privacy concerns

The rise of AI technologies, and their incorporation into so much of what customers do on their personal computers and cellphones, makes these problems much more pressing. A few months ago, for example, Microsoft introduced its first Surface PCs with a dedicated Copilot button on the keyboard for rapid access to the chatbot, fulfilling a promise made several months previously. 

Apple, for its part, presented its AI vision last month, which centered around numerous smaller models that operate on the company's devices and chips. Company officials have spoken publicly about the significance of privacy, which can be an issue with AI models.

Here are several ways consumers can protect their privacy in the new era of generative AI.

1. Use opt-outs provided by OpenAI and Google

Each generative AI tool has its own privacy policy, which may include opt-out options. Gemini, for example, lets users choose a retention period and delete certain data, among other activity controls.

ChatGPT allows users to opt out of having their data used for model training. To do so, click the profile icon in the bottom-left corner of the page, open Settings, select Data Controls, and disable the toggle labeled "Improve the model for everyone." According to a FAQ on OpenAI's website, new conversations will not be used to train ChatGPT's models while this setting is off.

2. Opt-in, but for good reasons

Companies are incorporating generative AI into personal and professional tools such as Microsoft Copilot, and it is worth opting in only when you have a good reason to. Copilot for Microsoft 365, for example, integrates with Word, Excel, and PowerPoint to assist users with tasks such as analysis, idea development, and organization.

Microsoft claims that it does not share consumer data with third parties without permission, nor does it utilize customer data to train Copilot or other AI features without consent. 

Users can, however, opt in if they wish by signing in to the Power Platform admin center, selecting Settings, then Tenant settings, and enabling data sharing for Dynamics 365 Copilot and Power Platform Copilot AI features, which allows that data to be shared and stored.

3. Gen AI search: Setting retention period

Consumers may not think twice before searching for information with AI, treating it like a search engine for generating information and ideas. However, searching for certain kinds of information with gen AI can intrude on a person's privacy, so there are best practices for using such tools. Hoffman-Andrews recommends setting a short retention period for the generative AI tool.

And, if possible, delete chats once you've gathered the information you need. Companies still keep server logs, but deleting chats can help lessen the chance of a third party who gains access to your account seeing them, he explained. It may also reduce the likelihood of sensitive information becoming part of the model's training data. "It really depends on the privacy settings of the particular site."

Investing in AI? Don’t Forget the Cyber Locks! VCs Advice.


The OpenAI Data Breach: A Wake-Up Call for Seed VCs

Security breaches are common in today's artificial intelligence (AI) and machine learning (ML) industry. However, when a prominent player like OpenAI falls victim to such an incident, it sends shockwaves through the tech community. This blog post delves into the recent OpenAI data breach and explores its impact on seed venture capitalists (VCs).

The Incident

OpenAI, known for its cutting-edge research in AI and its development of powerful language models, recently disclosed a security breach. Hackers gained unauthorized access to some of OpenAI’s internal systems, raising concerns about data privacy and security. While OpenAI assured users that no sensitive information was compromised, the incident highlights the vulnerability of AI companies to cyber threats.

Seed VCs on High Alert

Seed VCs, who invest in early-stage startups, should pay close attention to this breach. Here’s why:

Dependency on AI Companies

Seed VCs often collaborate with AI companies, providing funding and mentorship. As AI technologies become integral to various industries, VCs increasingly invest in startups leveraging AI/ML. The OpenAI breach underscores the need for due diligence when partnering with such firms.

Data Privacy Risks

Startups working with AI models generate and handle vast amounts of data. Seed VCs must assess the data security practices of their portfolio companies. A breach could harm the startup and impact the VC’s reputation and relationships with other investors.

Intellectual Property Concerns

Seed VCs invest in innovative ideas and technologies. If a startup’s IP is compromised due to lax security practices, it affects the VC’s investment. VCs should encourage startups to prioritize security and protect their intellectual assets.

Mitigating Risks: Seed VCs can take proactive steps

1. Due Diligence: Before investing, thoroughly evaluate a startup’s security protocols. Understand how they handle data, who has access, and their response plan in case of a breach.

2. Collaboration with AI Firms: Engage in open conversations with AI companies about security measures. VCs can influence best practices by advocating for robust security standards.

3. Education: Educate portfolio companies about security hygiene. Regular audits and training sessions can help prevent breaches.

OpenAI Hack Exposes Hidden Risks in AI's Data Goldmine


A recent security incident at OpenAI serves as a reminder that AI companies have become prime targets for hackers. Although the breach, which came to light following comments by former OpenAI employee Leopold Aschenbrenner, appears to have been limited to an employee discussion forum, it underlines the steep value of data these companies hold and the growing threats they face.

The New York Times detailed the hack after Aschenbrenner labelled it a “major security incident” on a podcast. However, anonymous sources within OpenAI clarified that the breach did not extend beyond an employee forum. While this might seem minor compared to a full-scale data leak, even superficial breaches should not be dismissed lightly. Unverified access to internal discussions can provide valuable insights and potentially lead to more severe vulnerabilities being exploited.

AI companies like OpenAI are custodians of incredibly valuable data. This includes high-quality training data, bulk user interactions, and customer-specific information. These datasets are crucial for developing advanced models and maintaining competitive edges in the AI ecosystem.

Training data is the cornerstone of AI model development. Companies like OpenAI invest vast amounts of resources to curate and refine these datasets. Contrary to the belief that these are just massive collections of web-scraped data, significant human effort is involved in making this data suitable for training advanced models. The quality of these datasets can impact the performance of AI models, making them highly coveted by competitors and adversaries.

OpenAI has amassed billions of user interactions through its ChatGPT platform. This data provides deep insights into user behaviour and preferences, much more detailed than traditional search engine data. For instance, a conversation about purchasing an air conditioner can reveal preferences, budget considerations, and brand biases, offering invaluable information to marketers and analysts. This treasure trove of data highlights the potential for AI companies to become targets for those seeking to exploit this information for commercial or malicious purposes.

Many organisations use AI tools for various applications, often integrating them with their internal databases. This can range from simple tasks like searching old budget sheets to more sensitive applications involving proprietary software code. The AI providers thus have access to critical business information, making them attractive targets for cyberattacks. Ensuring the security of this data is paramount, but the evolving nature of AI technology means that standard practices are still being established and refined.

AI companies, like other SaaS providers, are capable of implementing robust security measures to protect their data. However, the inherent value of the data they hold means they are under constant threat from hackers. The recent breach at OpenAI, despite being limited, should serve as a warning to all businesses interacting with AI firms. Security in the AI industry is a continuous, evolving challenge, compounded by the very AI technologies these companies develop, which can be used both for defence and attack.

The OpenAI breach, although seemingly minor, highlights the critical need for heightened security in the AI industry. As AI companies continue to amass and utilise vast amounts of valuable data, they will inevitably become more attractive targets for cyberattacks. Businesses must remain vigilant and ensure robust security practices when dealing with AI providers, recognising the gravity of the risks and responsibilities involved.


Breaking the Silence: The OpenAI Security Breach Unveiled

In April 2023, OpenAI, a leading artificial intelligence research organization, faced a significant security breach. A hacker gained unauthorized access to the company’s internal messaging system, raising concerns about data security, transparency, and the protection of intellectual property. 

In this blog, we delve into the incident, its implications, and the steps taken by OpenAI to prevent such breaches in the future.

The OpenAI Breach

The breach targeted an online forum where OpenAI employees discussed upcoming technologies, including features for the popular chatbot. While the actual GPT code and user data remained secure, the hacker obtained sensitive information related to AI designs and research. 

While OpenAI shared the information with its staff and board members last year, it did not tell the public or the FBI about the breach, stating that doing so was unnecessary because no user data was stolen.

OpenAI does not regard the attack as a national security issue and believes the attacker was a single individual with no links to foreign powers. OpenAI’s decision not to disclose the breach publicly sparked debate within the tech community.

Breach Impact

Leopold Aschenbrenner, a former OpenAI employee, had expressed worries about the company's security infrastructure and warned that its systems could be accessible to hostile intelligence services, such as those of China. The company abruptly fired Aschenbrenner, although OpenAI spokesperson Liz Bourgeois told the New York Times that his dismissal had nothing to do with the document.

Similar Attacks and OpenAI's Response

This is not the first time OpenAI has had a security lapse. Since its launch in November 2022, ChatGPT has been continuously attacked by malicious actors, frequently resulting in data leaks. A separate attack exposed user names and passwords in February of this year. 

In March of last year, OpenAI had to take ChatGPT down entirely to fix a bug that exposed customers' payment information to other active users, including their first and last names, email addresses, payment addresses, credit card type, and the last four digits of their card number.

Last December, security experts found that they could convince ChatGPT to release pieces of its training data by prompting the system to endlessly repeat the word "poem."

OpenAI has taken steps to enhance security since then, including additional safety measures and a Safety and Security Committee.

AI-Generated Exam Answers Outperform Real Students, Study Finds

 

In a recent study, university exams taken by fictitious students using artificial intelligence (AI) outperformed those by real students and often went undetected by examiners. Researchers at the University of Reading created 33 fake students and employed the AI tool ChatGPT to generate answers for undergraduate psychology degree module exams.

The AI-generated responses scored, on average, half a grade higher than those of actual students. Remarkably, 94% of the AI essays did not raise any suspicion among markers, with only a 6% detection rate, which the study suggests is likely an overestimate. These findings, published in the journal Plos One, highlight a significant concern: "AI submissions robustly gained higher grades than real student submissions," indicating that students could use AI to cheat undetected and achieve better grades than their honest peers.

Associate Professor Peter Scarfe and Professor Etienne Roesch, who led the study, emphasized the need for educators globally to take note of these findings. Dr. Scarfe noted, "Many institutions have moved away from traditional exams to make assessment more inclusive. Our research shows it is of international importance to understand how AI will affect the integrity of educational assessments. We won’t necessarily go back fully to handwritten exams - but the global education sector will need to evolve in the face of AI."

In the study, the AI-generated answers and essays were submitted for first-, second-, and third-year modules without the knowledge of the markers. The AI students outperformed real undergraduates in the first two years, but in the third-year exams, human students scored better. This result aligns with the idea that current AI struggles with more abstract reasoning. The study is noted as the largest and most robust blind study of its kind to date.

Academics have expressed concerns about the impact of AI on education. For instance, Glasgow University recently reinstated in-person exams for one course. Additionally, a study reported by the Guardian earlier this year found that most undergraduates used AI programs to assist with their essays, but only 5% admitted to submitting unedited AI-generated text in their assessments.

From Siri to 5G: AI’s Impact on Telecommunications

The integration of artificial intelligence (AI) has significantly transformed the landscape of mobile phone networks. From optimizing network performance to enhancing user experiences, AI plays a pivotal role in shaping the future of telecommunications. 

In this blog post, we delve into how mobile networks embrace AI and its impact on consumers and network operators.

1. Apple’s AI-Powered Operating System

Apple, a tech giant known for its innovation, recently introduced “Apple Intelligence,” an AI-powered operating system. The goal is to make iPhones more intuitive and efficient by integrating AI capabilities into Siri, the virtual assistant. Users can now perform tasks more quickly, receive personalized recommendations, and interact seamlessly with their devices.

2. Network Optimization and Efficiency

Telecom companies worldwide are leveraging AI to optimize mobile phone networks. Here’s how:

  • Dynamic Frequency Adjustment: Network operators dynamically adjust radio frequencies to optimize service quality. AI algorithms analyze real-time data to allocate frequencies efficiently, ensuring seamless connectivity even during peak usage.
  • Efficient Cell Tower Management: AI helps manage cell towers more effectively. During low-demand periods, operators can power down specific towers, reducing energy consumption without compromising coverage.

3. Fault Localization and Rapid Resolution

AI-driven network monitoring has revolutionized fault localization. For instance:

  • Korea Telecom’s Quick Response: In South Korea, Korea Telecom uses AI algorithms to pinpoint network faults within minutes. This rapid response minimizes service disruptions and enhances customer satisfaction.
  • AT&T’s Predictive Maintenance: AT&T in the United States relies on predictive AI models to anticipate network issues. By identifying potential problems before they escalate, they maintain network stability.

4. AI Digital Twins for Real-Time Monitoring

Network operators like Vodafone create AI digital twins—virtual replicas of real-world equipment such as masts and antennas. These digital twins continuously monitor network performance, identifying anomalies and suggesting preventive measures. As a result, operators can proactively address issues and maintain optimal service levels.
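To make the monitoring idea concrete, here is a minimal sketch of the kind of anomaly check a digital twin might run over a stream of antenna metrics. It is not Vodafone's system; the metric, window size, and threshold are illustrative assumptions.

# Minimal sketch of anomaly detection on a stream of antenna metrics.
# Hypothetical data and threshold; real digital twins use far richer models.
from collections import deque
import math

def rolling_zscore_alerts(readings, window=20, threshold=3.0):
    """Yield (index, value, z) for readings that deviate strongly from the recent rolling mean."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mean = sum(recent) / window
            variance = sum((x - mean) ** 2 for x in recent) / window
            std = math.sqrt(variance) or 1e-9  # guard against a zero std
            z = (value - mean) / std
            if abs(z) > threshold:
                yield i, value, z
        recent.append(value)

# Example: simulated signal strength (dBm) with one injected fault.
signal = [-70.0 + 0.1 * (i % 5) for i in range(100)]
signal[60] = -95.0
for idx, val, z in rolling_zscore_alerts(signal):
    print(f"Possible fault at sample {idx}: {val} dBm (z = {z:.1f})")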

5. Data Explosion and the Role of 5G

The proliferation of AI generates massive data. Consequently, investments in 5G Standalone (SA) networks have surged. Here’s why:

  • Higher Speeds and Capacity: 5G SA networks offer significantly higher speeds and capacity compared to the older 4G system. This is essential for handling the data influx from AI applications.
  • Edge Computing: 5G enables edge computing, where AI processing occurs closer to the user. This reduces latency and enhances real-time applications like autonomous vehicles and augmented reality.

6. Looking Ahead: The Quest for 6G

Despite 5G advancements, experts predict that AI’s demands will eventually outstrip its capabilities. Anticipating this, researchers are already exploring 6G technology, expected around 2028. 6G aims to provide unprecedented speeds, ultra-low latency, and seamless connectivity, further empowering AI-driven applications.

Researchers Find ChatGPT’s Latest Bot Behaves Like Humans

 

A team led by Matthew Jackson, the William D. Eberle Professor of Economics in the Stanford School of Humanities and Sciences, used psychology and behavioural economics tools to characterise the personality and behaviour of ChatGPT's popular AI-driven bots in a paper published in the Proceedings of the National Academy of Sciences on June 12. 

The study found that the most recent version of the chatbot, version 4, was statistically indistinguishable from its human counterparts. And where the bot's choices did deviate from the most common human behaviours, it tended to be more cooperative and altruistic.

“Increasingly, bots are going to be put into roles where they’re making decisions, and what kinds of characteristics they have will become more important,” stated Jackson, who is also a senior fellow at the Stanford Institute for Economic Policy Research. 

In the study, the research team presented a widely known personality test to ChatGPT versions 3 and 4 and asked the chatbots to describe their moves in a series of behavioural games that can predict real-world economic and ethical behaviours. The games included pre-determined exercises in which players had to select whether to inform on a partner in crime or how to share money with changing incentives. The bots' responses were compared to those of over 100,000 people from 50 nations. 

The study is one of the first in which an artificial intelligence source has passed a rigorous Turing test. A Turing test, named after British computing pioneer Alan Turing, can consist of any job assigned to a machine to determine whether it performs like a person. If the machine seems to be human, it passes the test. 

Chatbot personality quirks

The researchers assessed the bots' personality qualities using the OCEAN Big-5, a popular personality exam that evaluates respondents on five fundamental characteristics that influence behaviour. In the study, ChatGPT's version 4 performed within normal ranges for the five qualities but was only as agreeable as the lowest third of human respondents. The bot passed the Turing test, but it wouldn't have made many friends. 
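The "lowest third" comparison is, at heart, a percentile calculation against the human sample. A toy sketch with invented scores (not the study's data) shows the idea:

# Toy illustration of the percentile comparison described above.
# Human scores and the bot's score are invented for this example.
import numpy as np

rng = np.random.default_rng(0)
human_agreeableness = rng.normal(loc=3.8, scale=0.6, size=100_000)  # 1-5 scale
bot_agreeableness = 3.4  # hypothetical score for the chatbot

percentile = (human_agreeableness < bot_agreeableness).mean() * 100
print(f"The bot is more agreeable than {percentile:.0f}% of human respondents")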

Version 4 was an improvement over version 3 on these measures. The previous version, with which many internet users may have interacted for free, was only as agreeable as the bottom fifth of human respondents. Version 3 was likewise less open to new ideas and experiences than all but a handful of the most stubborn people.

Human-AI interactions 

Much of the public's concern about AI stems from their failure to understand how bots make decisions. It can be difficult to trust a bot's advice if you don't know what it's designed to accomplish. Jackson's research shows that even when researchers cannot scrutinise AI's inputs and algorithms, they can discover potential biases by meticulously examining outcomes. 

As a behavioural economist who has made significant contributions to our knowledge of how human social structures and interactions influence economic decision-making, Jackson is concerned about how human behaviour may evolve in response to AI.

“It’s important for us to understand how interactions with AI are going to change our behaviors and how that will change our welfare and our society,” Jackson concluded. “The more we understand early on—the more we can understand where to expect great things from AI and where to expect bad things—the better we can do to steer things in a better direction.”

Meta Addresses AI Chatbot's YouTube Training Data Assertion

 


Eventually, artificial intelligence systems like ChatGPT will run out of the tens of trillions of words people have written and shared on the web, the raw material that makes them smarter. In a new study released on Thursday, researchers at Epoch AI estimate that, if the industry keeps relying on public data, tech companies will exhaust the available training data for AI language models sometime between 2026 and 2032.

Meta's AI chatbot, however, is more forthcoming about its training data than Meta itself. It is widely known that Meta, formerly known as Facebook, has been trying to move into the generative AI space since last year, aiming to keep up with the public interest sparked by the launch of OpenAI's ChatGPT in late 2022. In April of this year, Meta AI was expanded to include a chat and image-generation feature across all of its apps, including Instagram and WhatsApp. However, little has been disclosed to date about how Meta AI was trained.

Business Insider put a series of questions to Meta AI about the data it was trained on and how Meta obtained that data. The chatbot told Business Insider that it had been trained on a large dataset of transcriptions from YouTube videos. It also said that Meta has its own web scraper bot, referred to as "MSAE" (Meta Scraping and Extraction), which scrapes a huge amount of information off the web for training AI systems. Meta had never previously disclosed this scraper.

YouTube's terms of service do not allow its data to be collected with bots and scrapers or used without YouTube's permission, and OpenAI has recently come under scrutiny for purportedly using such data. Asked whether Meta AI's answers about its scraper and training data were correct, a Meta spokesperson would not confirm them and suggested the chatbot may be wrong.

The spokesperson explained that generative AI requires a large amount of data to be trained effectively, so data from a wide variety of sources is used, including publicly available information online as well as annotated data. Meta AI said that, as part of its initial training, transcriptions of 3.7 million YouTube videos were compiled by a third party, and it confirmed that it did not use its own scraper bot to scrape YouTube videos directly. In response to further questions about its YouTube training data, Meta AI said another dataset with transcriptions from 6 million YouTube videos, also compiled by a third party, was part of its training set.

Besides the 1.5 million YouTube transcriptions and subtitles included in its training dataset, the chatbot said two more sets of YouTube subtitles were added, one with 2.5 million subtitles and another with 1.5 million, along with transcriptions of 2,500 YouTube videos showcasing TED Talks. According to Meta AI, all of these datasets were collected and compiled by third parties. The chatbot also said the company takes steps to ensure it does not gather copyrighted information, though it indicated that Meta scrapes the web on an ongoing basis in some form.

In response to several queries, Meta AI displayed sources including NBC News, CNN, and The Financial Times, among others, although in most cases it does not cite sources for its responses unless specifically asked to. According to BI's reporting, new paid deals would give Meta AI access to more training data, which could improve its results in the future. Meta AI also said it abides by the robots.txt protocol, a set of guidelines that website owners can use to ostensibly prevent bots from scraping their pages for AI training.
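For context on how robots.txt works, the sketch below uses Python's standard urllib.robotparser to test a hypothetical policy against two AI crawlers named in this post. The directives are illustrative, not any particular site's file, and compliance is voluntary on the crawler's side.

# Checking a hypothetical robots.txt policy against known AI crawlers.
from urllib.robotparser import RobotFileParser

robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

for bot in ["GPTBot", "CCBot", "SomeOtherBot"]:
    allowed = parser.can_fetch(bot, "https://example.com/articles/")
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")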

Meta built the chatbot on its large language model, Llama. Although Llama 3 was released in April, around the time Meta AI was expanded, Meta has yet to release an accompanying paper for the new model or disclose its training data. A Meta blog post said only that the huge set of 15 trillion tokens used to train Llama 3 came from "publicly available sources." Web scrapers such as OpenAI's GPTBot, Google's GoogleBot, and Common Crawl's CCBot can effectively extract almost all content that is accessible on the web.

The content is stored in extensive datasets that are incorporated into large language models (LLMs) and frequently reproduced by generative AI tools such as ChatGPT. Multiple ongoing lawsuits address the issue of proprietary and copyrighted material being used without permission by the world's biggest technology companies. The United States Copyright Office is expected to issue new guidance later this year on permissible uses of such content by AI companies.

Risks of Generative AI for Organisations and How to Manage Them

 

Employers should be aware of the potential data protection issues before experimenting with generative AI tools like ChatGPT. Given the wave of privacy and data protection laws passed in the US, Europe, and other countries in recent years, you can't simply feed human resources data into a generative AI tool. After all, employee data, including performance, financial, and even health data, is often highly sensitive.

Obviously, this is an area where companies should seek legal advice. It's also a good idea to consult with an AI expert regarding the ethics of utilising generative AI (to ensure that you're acting not only legally, but also ethically and transparently). But, as a starting point, here are two major factors that employers should be aware of. 

Feeding personal data

As I previously stated, employee data is often highly sensitive and personal. It is precisely the type of data that, depending on your jurisdiction, is usually subject to the most stringent forms of legal protection.

This makes it highly dangerous to feed such data into a generative AI tool. Why? Because many generative AI technologies use the information provided to fine-tune the underlying language model. In other words, the tool may use the data you provide for training purposes and may eventually expose that information to other users. Suppose, for example, you use a generative AI tool to generate a report on employee salaries based on internal employee information. In the future, the tool could draw on that data to generate responses for other users outside your organisation. Personal information could easily be absorbed by the generative AI tool and reused.

This isn't as shady as it sounds. Many generative AI programmes' terms and conditions explicitly state that data provided to the AI may be used for training and fine-tuning, or disclosed when users ask for examples of previously submitted queries. So, before you agree to a tool's terms of service, make sure you understand exactly what you're signing up to. Experts urge that any data given to a generative AI service be anonymised and stripped of personally identifiable information, a step frequently referred to as "de-identifying" the data.
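As a rough illustration of that de-identification step, the sketch below masks obvious personal identifiers before a prompt ever leaves your environment. The regex patterns are assumptions for illustration; production systems should rely on dedicated PII-detection tooling rather than a handful of regexes.

# Minimal de-identification sketch: mask obvious PII before sending text
# to an external generative AI service. Patterns are illustrative only.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d\b"),
    "NATIONAL_ID": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g. US SSN format
}

def deidentify(text: str) -> str:
    """Replace recognised identifiers with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise: Jane Doe (jane.doe@example.com, +44 7700 900123) earns £52,000."
print(deidentify(prompt))
# -> Summarise: Jane Doe ([EMAIL], [PHONE]) earns £52,000.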

Risks of generative AI outputs 

There are risks associated with the output or content developed by generative AIs, in addition to the data fed into them. In particular, there is a possibility that the output from generative AI technologies will be based on personal data acquired and handled in violation of data privacy laws. 

For example, suppose you ask a generative AI tool to provide a report on average IT salaries in your area. The tool may have scraped personal data from the internet without authorisation, in violation of data protection rules, and then served it to you. Employers who use personal data provided by a generative AI tool may be held liable for data protection violations. For now this is a legal grey area, with the generative AI provider likely bearing most or all of the responsibility, but the risk remains.

Cases like this are already appearing. Indeed, one lawsuit claims that ChatGPT was trained on "massive amounts of personal data," such as medical records and information about children, that was accessed without consent. You do not want your organisation to become unwittingly involved in litigation like this. Essentially, this is an "inherited" risk of violating data protection regulations: the exposure may originate with the provider, but it is a risk nonetheless.

The way forward

Employers must carefully evaluate the data protection and privacy consequences of utilising generative AI and seek expert assistance. However, don't let this put you off adopting generative AI altogether. Generative AI, when used properly and within the bounds of the law, can be an extremely helpful tool for organisations.

Shadow IT Surge Poses Growing Threat to Corporate Data Security

 


A recent survey by cybersecurity firm Splunk, previously reported by CFO Dive, found that 93% of cybersecurity leaders have deployed generative artificial intelligence in their organizations, yet 34% of those implementing the technology have not taken steps to minimize its security risks.

In the coming years, digital transformation and cloud migration will become increasingly commonplace in every sector of the economy, raising the amount of data businesses must store, process, and manage. External threats such as hacking, phishing, and ransomware receive a great deal of attention, but it is equally critical for companies to manage their data internally to keep it secure.

Shadow data is information that an organization has neither approved nor oversees. Employees using applications, services, or devices their employer has not approved is a feature (or a bug?) of the modern workplace, and shadow data can come from any of these sources: a personal cloud storage account, an unofficial collaboration tool, or an unsanctioned SaaS application.

The biggest challenge is that shadow data is generally not accounted for in organizations' security and compliance frameworks, leaving a glaring blind spot in data protection strategies. As a Splunk report puts it, "Such thoughtful policies can help minimize data leakage and new vulnerabilities, but they cannot necessarily prevent a complete breach."

According to a study by Cyberhaven, AI adoption has been so rapid that knowledge workers now put more corporate data into AI tools on an average Saturday or Sunday than they did during the middle of the workweek a year earlier. This suggests workers are adopting AI tools on their own, early in the adoption cycle, before the IT department has formally been asked to purchase them.

The result is so-called "shadow AI": employees using AI tools through personal accounts that the company has not sanctioned, often without anyone being aware of it. AI use in the workplace is gaining traction; the amount of corporate data workers put into AI tools jumped 485% from March 2023 to March 2024, and the trend is accelerating. In March 2024, 23.6% of tech workers used AI tools for their work, the highest rate of any industry.

By comparison, an estimated 4.7% of employees in the financial sector, 2.8% in pharmaceuticals, and 0.6% in manufacturing use AI tools. The use of risky "shadow AI" accounts is growing as end users outpace corporate IT: 73.8% of ChatGPT use at work goes through non-corporate accounts.

Unlike enterprise versions of ChatGPT, these consumer accounts may incorporate whatever information you share into the public models. The percentage of non-corporate accounts is even higher for Gemini (94.4%) and Bard (95.9%). AI products from the big three, OpenAI, Google, and Microsoft, accounted for 96.0% of AI use at work. And AI-generated material is already being used in potentially risky ways.

In March 2024, 3.4% of research and development materials were created with AI tools, a potential risk if patented material is included. Meanwhile, 3.2% of source-code insertions were generated by AI outside traditional coding tools (which come with enterprise-approved coding copilots), which can introduce vulnerabilities into development.

In graphics and design, 3.0% of content is generated using AI, which raises the risk of producing trademarked material. Shadow data is invisible to IT administrators, security teams, and the protocols designed to ensure security. Because it exists outside the networks and systems approved for data protection, it easily bypasses whatever protection measures are in place.

Leaving data unmonitored raises the risk of a breach or leak and makes compliance with regulations such as GDPR and HIPAA harder. Without visibility, an organization cannot effectively manage all of its data assets, resulting in lost efficiency and a risk of data redundancy. Shadow data thus poses a range of security risks, including unauthorized access to sensitive data, data security breaches, and the exfiltration of sensitive information.

Shadow data is also a threat from a compliance standpoint, because it typically receives only minimal protection, leaving gaps in data security. There is an additional risk of data loss when data is stored in unofficial locations, since it may not be backed up or protected against accidental deletion. The surge in Shadow IT poses significant risks to organizations, with potential repercussions that include financial penalties, reputational damage, and operational disruptions.

It is crucial to understand the distinctions between Shadow IT and Shadow Data to effectively address these threats. Shadow IT refers to the unauthorized use of tools and technologies within an organization. These tools, often implemented without the knowledge or approval of the IT department, can create substantial security and compliance challenges. Conversely, shadow data pertains to the information assets that these unauthorized tools generate and manage.

This data, regardless of its source or storage location, introduces its own set of risks and requires separate strategies for protection. Addressing Shadow IT necessitates robust control and monitoring mechanisms to manage the use of unauthorized technologies. This involves implementing policies and systems to detect and regulate non-sanctioned IT tools, ensuring that all technological resources align with the organization's security and compliance standards. 
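One concrete way to surface shadow AI use is to scan outbound web or proxy logs for traffic to known AI-tool domains. The sketch below assumes a simple CSV log format and an illustrative domain list; real deployments would plug into their proxy or CASB tooling instead.

# Sketch: flag potential "shadow AI" traffic in an outbound proxy log.
# The log format and domain list are assumptions for illustration.
import csv
from collections import Counter

AI_TOOL_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def shadow_ai_report(log_path):
    """Count requests per (user, domain) for domains on the watch list.
    Expects a CSV with at least 'user' and 'domain' columns."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in AI_TOOL_DOMAINS:
                hits[(row["user"], domain)] += 1
    return hits

# Example usage (hypothetical file name):
# for (user, domain), count in shadow_ai_report("proxy_log.csv").most_common(10):
#     print(f"{user} -> {domain}: {count} requests")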

On the other hand, managing shadow data requires a focus on identifying and safeguarding the data itself. This involves comprehensive data governance practices that protect sensitive information, ensuring it is secure, regardless of how it is created or stored. Effective management of shadow data demands a thorough understanding of where this data resides, how it is accessed, and the potential vulnerabilities it may introduce. Recognizing the nuanced differences between Shadow IT and Shadow Data is essential for developing effective governance and security strategies. 

By clearly delineating between the tools and the data they produce, organizations can better tailor their approaches to mitigate the risks associated with each. This distinction allows for more targeted and efficient protection measures, ultimately enhancing the organization's overall security posture and compliance efforts.

OpenAI and Stack Overflow Partnership: A Controversial Collaboration

The Partnership Details

OpenAI and Stack Overflow are collaborating through OverflowAPI access to provide OpenAI users and customers with the correct and validated data foundation that AI technologies require to swiftly solve an issue, allowing engineers to focus on critical tasks. 

OpenAI will additionally share validated technical knowledge from Stack Overflow directly in ChatGPT, allowing users to quickly access trustworthy, credited, correct, and highly technical expertise and code backed by millions of developers who have contributed to the Stack Overflow platform over the last 15 years.

User Protests and Concerns

However, several Stack Overflow users were concerned about this partnership since they felt it was unethical for OpenAI to profit from their content without authorization.

Following the news, some users wished to erase their responses, including those with the most votes. However, Stack Overflow does not generally allow the deletion of posts if the question has any answers.

Ben, Epic Games' user interface designer, stated that he attempted to change his highest-rated responses and replace them with a message criticizing the relationship with OpenAI.

"Stack Overflow won't let you erase questions with accepted answers and high upvotes because this would remove knowledge from the community," Ben posted on Mastodon.

Instead, he changed his top-rated responses to a protest message. Within an hour, the moderators had restored the original answers and banned Ben's account for seven days.

Ben then uploaded a screenshot showing that Stack Overflow had suspended his account after rolling back his modified posts to the original responses.

Stack Overflow’s Stance

In an email that Ben shared, Stack Overflow moderators clarified that users are not able to remove posts because doing so negatively impacts the community as a whole.

"It is not appropriate to remove posts that could be helpful to others unless there are particular circumstances. The basic principle of Stack Exchange is that knowledge is helpful to others who might encounter similar issues in the future, even if the post's original author can no longer use it," Stack Exchange moderators replied to users by email.

GDPR Considerations

Article 17 of the GDPR rules grants users in the EU the “right to be forgotten,” allowing them to request the removal of personal data.

However, Article 17(3) states that websites have the right not to delete data necessary for “exercising the right of freedom of expression and information.”

Stack Overflow cited this provision when explaining why it does not allow users to remove posts.

The partnership between OpenAI and Stack Overflow has sparked controversy, with users expressing concerns about data usage and freedom of expression. Stack Overflow's decision to suspend users who altered their answers in protest highlights the challenge of balancing privacy rights and community knowledge.

Is ChatGPT Secure? Risks, Data Safety, and Chatbot Privacy Explained

 

Perhaps you've used ChatGPT to make your life easier when drafting an essay or doing research. The chatbot's ability to take in massive volumes of data, break it down in seconds, and answer in natural language is incredibly valuable. But does that convenience come at a cost, and can you rely on ChatGPT to safeguard your secrets? It's an important question to ask, because many of us let our guard down around chatbots and computers in general. So, in this article, we will ask and answer a simple question: Is ChatGPT safe?

Is ChatGPT safe to use?

Yes, in the sense that ChatGPT will not bring any direct harm to you or your laptop. Web browsers and smartphone operating systems such as iOS use a safety mechanism called sandboxing, which means ChatGPT can't access the rest of your device. You don't have to worry about your system being hacked or infected with malware when you use the official ChatGPT app or website.

Having said that, ChatGPT has the potential to be harmful in other ways, such as privacy and secrecy. We'll go into more detail about this in the next section, but for now, remember that your conversations with the chatbot aren't private, even if they only surface when you log into your account. 

The final aspect of safety worth analysing is the overall existence of ChatGPT. Several IT giants have criticised modern chatbots and their developers for aggressively advancing without contemplating the potential risks of AI. Computers can now replicate human speech and creativity so perfectly that it's nearly impossible to tell the difference. For example, AI image generators may already generate deceptive visuals that have the potential to instigate violence and political unrest. Does this imply that you shouldn't use ChatGPT? Not necessarily, but it's an unsettling insight into what the future may hold. 

How to safely use ChatGPT

Even though OpenAI claims to store user data on American soil, we can't presume its systems are secure. We've seen even higher-profile organisations suffer security breaches, regardless of their location or affiliations. So, how can you use ChatGPT safely? We've compiled a short list of tips:

Don't share any private information that you don't want the world to know about. This includes trade secrets, proprietary code from the company for which you work, credit card data, and addresses. Some organisations, like Samsung, have prohibited their staff from using the chatbot for this reason. 

Avoid using third-party apps and instead download the official ChatGPT app from the App Store or Play Store. Alternatively, you can access the chatbot through a web browser. 

If you do not want OpenAI to use your conversations for training, you can turn off data collection by switching off the toggle under Settings > Data controls > Improve the model for everyone.

Set a strong password for your OpenAI account so that others cannot see your ChatGPT chat history. Periodically delete your conversation history. In this manner, even if someone tries to break into your account, they will be unable to view any of your previous chats.

Assuming you follow these guidelines, you should not be concerned about using ChatGPT to assist with everyday, tedious tasks. After all, the chatbot enjoys the backing of major industry players such as Microsoft, and its underlying language model also powers other chatbots such as Microsoft Copilot.

Harnessing AI and ChatGPT for Eye Care Triage: Advancements in Patient Management

 

In a groundbreaking study conducted by Dr. Arun Thirunavukarasu, a former University of Cambridge researcher, artificial intelligence (AI) emerges as a promising tool for triaging patients with eye issues. Dr. Thirunavukarasu's research highlights the potential of AI to revolutionize patient management in ophthalmology, particularly in identifying urgent cases that require immediate specialist attention. 

The study, conducted in collaboration with Cambridge University academics, evaluated the performance of ChatGPT 4, an advanced language model, in comparison to expert ophthalmologists and medical trainees. Remarkably, ChatGPT 4 exhibited a scoring accuracy of 69% in a simulated exam setting, outperforming previous iterations of the program and rival language models such as ChatGPT 3.5, Llama, and Palm2. 

Utilizing a vast dataset comprising 374 ophthalmology questions, ChatGPT 4 demonstrated its capability to analyze complex eye symptoms and signs, providing accurate recommendations for patient triage. When compared to expert clinicians, trainees, and junior doctors, ChatGPT 4 proved to be on par with experienced ophthalmologists in processing clinical information and making informed decisions. 
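The comparison behind the 69% figure is essentially an exam-marking loop: put each question to the model, extract its answer, and score it against the key. A toy sketch of that loop is below; the question data and answer-extraction rule are assumptions, not the study's materials.

# Toy sketch of scoring a model on multiple-choice ophthalmology questions.
# The sample questions and the always-'A' stand-in model are illustrative.
def score_model(questions, ask_model):
    """questions: list of dicts with 'prompt' and 'correct' (a letter such as 'A').
    ask_model: callable that returns the model's chosen letter as text."""
    correct = 0
    for q in questions:
        answer = ask_model(q["prompt"]).strip().upper()[:1]
        if answer == q["correct"]:
            correct += 1
    return correct / len(questions)

sample = [
    {"prompt": "Question 1 text ... A) ... B) ... C) ...", "correct": "B"},
    {"prompt": "Question 2 text ... A) ... B) ... C) ...", "correct": "A"},
]
print(f"Accuracy: {score_model(sample, lambda prompt: 'A'):.0%}")  # 50% on this toy set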

Dr. Thirunavukarasu emphasizes the transformative potential of AI in streamlining patient care pathways. He envisions AI algorithms assisting healthcare professionals in prioritizing patient cases, distinguishing between emergencies requiring immediate specialist intervention and those suitable for primary care or non-urgent follow-up. 

By leveraging AI-driven triage systems, healthcare providers can optimize resource allocation and ensure timely access to specialist services for patients in need. Furthermore, the integration of AI technologies in primary care settings holds promise for enhancing diagnostic accuracy and expediting treatment referrals. ChatGPT 4 and similar language models could serve as invaluable decision support tools for general practitioners, offering timely guidance on eye-related concerns and facilitating prompt referrals to specialist ophthalmologists. 

Despite the remarkable advancements in AI-driven healthcare, Dr. Thirunavukarasu underscores the indispensable role of human clinicians in patient care. While AI technologies offer invaluable assistance and decision support, they complement rather than replace the expertise and empathy of healthcare professionals. Dr. Thirunavukarasu reaffirms the central role of doctors in overseeing patient management and emphasizes the collaborative potential of AI-human partnerships in delivering high-quality care. 

As the field of AI continues to evolve, propelled by innovative research and technological advancements, the integration of AI-driven triage systems in clinical practice holds immense promise for enhancing patient outcomes and optimizing healthcare delivery in ophthalmology and beyond. Dr. Thirunavukarasu's pioneering work exemplifies the transformative impact of AI in revolutionizing patient care pathways and underscores the imperative of embracing AI-enabled solutions to address the evolving needs of healthcare delivery.

Assessing ChatGPT Impact: Memory Loss, Student Procrastination

 


In a study published in the International Journal of Educational Technology in Higher Education, researchers concluded that students are more likely to turn to ChatGPT, a generative artificial intelligence tool, when overwhelmed with academic work. The study also found that ChatGPT use is correlated with procrastination, memory loss, and a decline in academic performance, as well as concern about the future.

Generative AI's impact on education is profound, both in how widely it is used and in its potential drawbacks. Even though advanced AI programs have been publicly available for only a short while, they have already raised a great deal of concern. In the past few years AI has created a range of dangers, from people passing off AI-generated work as their own to AI impersonating celebrities without their consent.

Legislators are finding it hard to keep up. AI programs like ChatGPT, meanwhile, appear to have negative psychological effects on students, including memory loss, a side effect that had not previously been documented. The study found that students who use AI software such as ChatGPT are more likely to perform poorly academically, suffer memory loss, and procrastinate more frequently.

An estimated 32% of university students already use the AI chatbot ChatGPT every week, and it can generate convincing answers to simple text prompts. New research suggests that university students who use ChatGPT to complete assignments fall into a vicious circle: they don't give themselves enough time for their assignments, so they rely on ChatGPT to complete them, and their ability to remember facts gradually weakens over time.

The study found that students who had heavy workloads and a lot of time pressure were more likely to use ChatGPT than those under less pressure, while students who were more sensitive to rewards were less likely to rely on it. The researchers also found a correlation between how conscientious a student is about the quality of their work and how much they use ChatGPT. Students who frequently used ChatGPT procrastinated more than students who rarely used it.

This study was conducted in two phases, allowing the researchers to better understand these dynamics. As part of the study, a scale was developed and validated to assess university students' use of ChatGPT as an academic tool. After an initial set of 12 items had been generated, expert evaluations of content validity reduced them to 10.

A final set of eight items was then selected through exploratory factor analysis and reliability testing, yielding an effective measure of the extent to which ChatGPT is used for academic purposes. The researchers then conducted three surveys of students to determine who is most likely to use ChatGPT and how readily users experience its consequences.
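The reliability-testing step typically means computing an internal-consistency statistic such as Cronbach's alpha for the scale. A minimal sketch follows, with a fabricated response matrix rather than the study's data:

# Minimal Cronbach's alpha computation for a Likert-style scale.
# The response matrix below is fabricated purely for illustration.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five respondents answering an eight-item ChatGPT-usage scale (1-5).
responses = [
    [4, 5, 4, 4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4, 5, 5, 4],
    [1, 2, 1, 1, 2, 1, 2, 1],
    [3, 3, 4, 3, 3, 3, 3, 3],
]
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")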

To investigate whether ChatGPT has any beneficial effects, the researchers asked a series of questions. Their hypothesis was that students who feel overwhelmed by their workload rely on AI in a bid to save time. From the results, it might be concluded that ChatGPT is a tool used mainly by students who are already struggling academically.

The advancement of artificial intelligence can be amazing, as exemplified by its recent use to recreate Marilyn Monroe's personality, but the dangers of systems approaching super-intelligence cannot be ignored. Artificial intelligence is becoming more advanced every day, and at the end of the research, the researchers found that heavy use of ChatGPT was linked to detrimental outcomes for the participants.

Heavy ChatGPT use was reported to be associated with memory loss and a lower overall GPA among these students. The study's authors recommend that educators assign activities, assignments, or projects that ChatGPT cannot complete, so that students stay actively engaged in critical thinking and problem-solving. This, they argue, can help mitigate ChatGPT's adverse effects on students' learning and mental capabilities.

From Personal Computer to Innovation Enabler: Unveiling the Future of Computing

 


Until now, the use of artificial intelligence (AI) has been largely invisible, automating processes and improving performance in the background. Generative AI, with its unprecedented adoption curve, is transforming the way humans interact with technology through natural language and conversation, and it represents an irreversible paradigm shift that will change the face of technology for a lifetime.

According to one study of generative artificial intelligence's economic potential, if it is adopted to its full potential it could add about 4 trillion dollars to global GDP and about 31 billion pounds to UK GDP within the next decade.

Non-generative AI and other forms of automation are predicted to add a further $11 trillion to global GDP. With great change comes great opportunity, so caution is justified, but so is excitement.

So far, most users have encountered generative AI only via web browsers and the cloud, an online-only, corporate-owned platform that carries inherent challenges around reliability, speed, and privacy. Running generative AI on the PC itself would give users the full advantages without those disadvantages.

Artificial intelligence is redefining the role of the personal computer as dramatically and fundamentally as the internet did because it empowers individuals to become creators of technology rather than just consumers of it. The AI PC gives people the opportunity to become creative technologists.

By leveraging their engineering strengths, manufacturers can build powerful new machines optimized for local AI models while still supporting hybrid cloud computing, so users can work offline when needed. Local inference and data processing are also faster and more efficient, bringing lower latency, stronger data privacy and security, better energy efficiency, and lower access costs.
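As a concrete (and purely illustrative) example of local inference, the sketch below generates text entirely on-device using the llama-cpp-python bindings. The model path is a placeholder, and any locally hosted runtime would serve the same purpose.

# Sketch: local, offline text generation with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a GGUF model file on disk;
# the path below is a placeholder, not a real artifact.
from llama_cpp import Llama

llm = Llama(model_path="models/small-chat-model.gguf", n_ctx=2048)

result = llm(
    "Summarise the key risks of pasting customer data into cloud chatbots.",
    max_tokens=128,
)
print(result["choices"][0]["text"])
# Prompts, outputs, and the model weights all stay on local storage;
# nothing in this flow leaves the machine.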

With this kind of technology, a user's PC can become an intelligent personal companion that keeps their information secure while performing tasks for them, because on-device artificial intelligence can access the same specific emails, reports, and spreadsheets they do. As AI improves the performance and functionality of PCs, research into how to use it will continue to accelerate. AI-powered cameras, automatically optimized for better video calls with AI-enhanced noise reduction, voice enhancement, and framing, will improve the hybrid work experience.

Combined with enterprise-grade security and privacy, this turns the PC into a personal companion offering personalized generative AI, with users able to trust that their information stays safe while tasks are carried out for them.

GPT-4 has been shown to enhance performance significantly: in one widely cited study, users completed tasks over 25% faster, produced output rated around 40% higher in quality by human evaluators, and finished about 12% more tasks. If users can integrate their internal company information and personalized working data into such a companion, while still being able to quickly analyze vast amounts of public information, the combination offers the best and most relevant of both worlds.

Security Flaws Discovered in ChatGPT Plugins

Recent research has surfaced serious security vulnerabilities within ChatGPT plugins, raising concerns about potential data breaches and account takeovers. These flaws could allow attackers to gain control of organisational accounts on third-party platforms and access sensitive user data, including Personally Identifiable Information (PII).

According to Darren Guccione, CEO and co-founder of Keeper Security, the vulnerabilities found in ChatGPT plugins pose a significant risk to organisations as employees often input sensitive data, including intellectual property and financial information, into AI tools. Unauthorised access to such data could have severe consequences for businesses.

In November 2023, OpenAI introduced a ChatGPT feature called GPTs, which function similarly to plugins and present similar security risks, further complicating the situation.

In a recent advisory, the Salt Security research team identified three main types of vulnerabilities within ChatGPT plugins. Firstly, vulnerabilities were found in the plugin installation process, potentially allowing attackers to install malicious plugins and intercept user messages containing proprietary information.

Secondly, flaws were discovered within PluginLab, a framework for developing ChatGPT plugins, which could lead to account takeovers on third-party platforms like GitHub.

Lastly, OAuth redirection manipulation vulnerabilities were identified in several plugins, enabling attackers to steal user credentials and execute account takeovers.
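
For context, OAuth redirection manipulation typically works because a service accepts whatever redirect URI it is handed instead of checking it against the URIs registered in advance. The sketch below shows the kind of allowlist check whose absence enables the attack; the hostnames and paths are invented for illustration and are not taken from any specific plugin.

```python
# A minimal sketch of the check whose absence enables OAuth redirection manipulation:
# only send authorisation codes or tokens to redirect URIs registered in advance.
# Hostnames and paths below are invented for illustration.
from urllib.parse import urlparse

REGISTERED_REDIRECTS = {
    ("plugin.example.com", "/oauth/callback"),  # assumed registration data
}

def is_allowed_redirect(redirect_uri: str) -> bool:
    """Accept only exact, pre-registered HTTPS host/path pairs."""
    parsed = urlparse(redirect_uri)
    return parsed.scheme == "https" and (parsed.hostname, parsed.path) in REGISTERED_REDIRECTS

print(is_allowed_redirect("https://plugin.example.com/oauth/callback"))   # True
print(is_allowed_redirect("https://attacker.example.net/steal?code=x"))   # False
```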

Yaniv Balmas, vice president of research at Salt Security, emphasised the growing popularity of generative AI tools like ChatGPT and the corresponding increase in efforts by attackers to exploit these tools to gain access to sensitive data.

Following coordinated disclosure practices, Salt Labs worked with OpenAI and third-party vendors to promptly address these issues and reduce the risk of exploitation.

Sarah Jones, a cyber threat intelligence research analyst at Critical Start, outlined several measures that organisations can take to strengthen their defences against these vulnerabilities. These include:


1. Implementing permission-based installation: 

This involves ensuring that only authorised users can install plugins, reducing the risk of malicious actors installing harmful plugins; a minimal allowlist-style check is sketched after this list.

2. Introducing two-factor authentication: 

By requiring users to provide two forms of identification, such as a password and a unique code sent to their phone, organisations can add an extra layer of security to their accounts.

3. Educating users on exercising caution with code and links: 

It's essential to train employees to be cautious when interacting with code and links, as these can often be used as vectors for cyber attacks.

4. Monitoring plugin activity constantly: 

By regularly monitoring plugin activity, organisations can detect any unusual behaviour or unauthorised access attempts promptly.

5. Subscribing to security advisories for updates:

Staying informed about security advisories and updates from ChatGPT and third-party vendors allows organisations to address vulnerabilities and apply patches promptly.
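
As a concrete illustration of point 1, an organisation could gate installation behind both a role check and an allowlist of vetted plugins. The role names and plugin names below are hypothetical placeholders, not references to any real plugin catalogue.

```python
# A sketch of permission-based plugin installation for an organisation, assuming it
# maintains its own allowlist. Role names and plugin names are hypothetical placeholders.
APPROVED_PLUGINS = {"corporate-search", "expense-helper"}
ADMIN_ROLES = {"it-admin", "security"}

def can_install(user_role: str, plugin_name: str) -> bool:
    """Only administrators may install, and only plugins that have been vetted."""
    return user_role in ADMIN_ROLES and plugin_name in APPROVED_PLUGINS

print(can_install("it-admin", "corporate-search"))  # True
print(can_install("developer", "unvetted-plugin"))  # False
```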

As organisations increasingly rely on AI technologies, it becomes crucial to address and mitigate the associated security risks effectively.


OpenAI Bolsters Data Security with Multi-Factor Authentication for ChatGPT

OpenAI has recently rolled out a new security feature aimed at addressing one of the primary concerns surrounding the use of generative AI models such as ChatGPT: data security. In light of the growing importance of safeguarding sensitive information, OpenAI's latest update introduces an additional layer of protection for ChatGPT and API accounts.

The announcement, made through an official post by OpenAI, introduces the option of enabling multi-factor authentication (MFA), most commonly encountered as two-factor authentication (2FA). The feature is designed to fortify security and thwart unauthorized access attempts.

For those unfamiliar with multi-factor authentication, it's essentially a security protocol that requires users to provide two or more forms of verification before gaining access to their accounts. By incorporating this additional step into the authentication process, OpenAI aims to bolster the security posture of its platforms. Users are guided through the process via a user-friendly video tutorial, which demonstrates the steps in a clear and concise manner.

To initiate the setup process, users simply need to navigate to their profile settings by clicking on their name, typically located in the bottom left-hand corner of the screen. From there, it's just a matter of selecting the "Settings" option and toggling on the "Multi-factor authentication" feature.

Upon activation, users may be prompted to re-authenticate their account to confirm the changes or redirected to a dedicated page titled "Secure your Account." Here, they'll find step-by-step instructions on how to proceed with setting up multi-factor authentication.

The next step involves utilizing a smartphone to scan a QR code using a preferred authenticator app, such as Google Authenticator or Microsoft Authenticator. Once the QR code is scanned, users will receive a one-time code that they'll need to input into the designated text box to complete the setup process.
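
Under the hood, that QR code simply encodes a shared secret which the authenticator app uses to generate time-based one-time passwords (TOTP). The sketch below, using the open-source pyotp library, shows the mechanism; the secret and account name are generated purely for illustration and have nothing to do with a real OpenAI account.

```python
# A minimal sketch of what the authenticator-app step does, using the open-source
# "pyotp" library (pip install pyotp). The secret and account name are generated
# purely for illustration and are not tied to any real OpenAI account.
import pyotp

secret = pyotp.random_base32()      # the shared secret encoded in the QR code
totp = pyotp.TOTP(secret)

# The provisioning URI is what the QR code actually contains.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleService"))

code = totp.now()                      # the six-digit code the app displays
print("Current code:", code)
print("Verified:", totp.verify(code))  # the server-side check during setup and login
```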

It's worth noting that multi-factor authentication adds an extra layer of security without introducing unnecessary complexity. In fact, many experts argue that it's a highly effective deterrent against unauthorized access attempts. As ZDNet's Ed Bott aptly puts it, "Two-factor authentication will stop most casual attacks dead in their tracks."

Given the simplicity and effectiveness of multi-factor authentication, there's little reason to hesitate in enabling this feature. Moreover, when it comes to safeguarding sensitive data, a proactive approach is always preferable. 

Microsoft and OpenAI Reveal Hackers Weaponizing ChatGPT

In a digital landscape fraught with evolving threats, the marriage of artificial intelligence (AI) and cybercrime has become a potent concern. Recent revelations from Microsoft and OpenAI underscore the alarming trend of malicious actors harnessing large language models (LLMs) to bolster their cyber operations. 

The collaboration between these tech giants has shed light on the exploitation of AI tools by state-sponsored hacking groups from Russia, North Korea, Iran, and China, signalling a new frontier in cyber warfare. According to Microsoft's latest research, groups like Strontium, also known as APT28 or Fancy Bear, notorious for their role in high-profile breaches including the hacking of Hillary Clinton’s 2016 presidential campaign, have turned to LLMs to gain insights into sensitive technologies. 

Their utilization spans from deciphering satellite communication protocols to automating technical operations through scripting tasks like file manipulation and data selection. This sophisticated application of AI underscores the adaptability and ingenuity of cybercriminals in leveraging emerging technologies to further their malicious agendas. The Thallium group from North Korea and Iranian hackers of the Curium group have followed suit, utilizing LLMs to bolster their capabilities in researching vulnerabilities, crafting phishing campaigns, and evading detection mechanisms. 

Similarly, Chinese state-affiliated threat actors have integrated LLMs into their arsenal for research, scripting, and refining existing hacking tools, posing a multifaceted challenge to cybersecurity efforts globally. While Microsoft and OpenAI have yet to detect significant attacks leveraging LLMs, the proactive measures undertaken by these companies to disrupt the operations of such hacking groups underscore the urgency of addressing this evolving threat landscape. Swift action to shut down associated accounts and assets coupled with collaborative efforts to share intelligence with the defender community are crucial steps in mitigating the risks posed by AI-enabled cyberattacks. 

The implications of AI in cybercrime extend beyond the current landscape, prompting concerns about future use cases such as voice impersonation for fraudulent activities. Microsoft highlights the potential for AI-powered fraud, citing voice synthesis as an example where even short voice samples can be utilized to create convincing impersonations. This underscores the need for preemptive measures to anticipate and counteract emerging threats before they escalate into widespread vulnerabilities. 

In response to the escalating threat posed by AI-enabled cyberattacks, Microsoft spearheads efforts to harness AI for defensive purposes. The development of a Security Copilot, an AI assistant tailored for cybersecurity professionals, aims to empower defenders in identifying breaches and navigating the complexities of cybersecurity data. Additionally, Microsoft's commitment to overhauling software security underscores a proactive approach to fortifying defences in the face of evolving threats. 

The battle against AI-powered cyberattacks remains an ongoing challenge as the digital landscape continues to evolve. The collaborative efforts between industry leaders, innovative approaches to AI-driven defence mechanisms, and a commitment to information sharing are pivotal in safeguarding digital infrastructure against emerging threats. By leveraging AI as both a weapon and a shield in the cybersecurity arsenal, organizations can effectively adapt to the dynamic nature of cyber warfare and ensure the resilience of their digital ecosystems.

ChatGPT Evolved with Digital Memory: Enhancing Conversational Recall

ChatGPT is getting a major upgrade: users will be able to get more customised and helpful replies because the chatbot can store details from previous conversations in memory. The memory feature is currently being tested with a small number of both free and premium users. Adding memory to ChatGPT is an important step towards reducing the amount of repetition in conversations. 

Users often have to re-explain their preferences, such as how they like their emails formatted, every time they ask ChatGPT for help. With memory enabled, however, the bot can remember those past choices and apply them again automatically. 

OpenAI, the artificial intelligence company behind ChatGPT, is currently testing a version of the chatbot that can remember users' previous interactions. According to the company's website, that information can then be used by the bot in future conversations.  

Although AI bots are very good at assisting with a variety of questions, one of their biggest drawbacks is that they do not remember who users are or what they asked previously. This is by design, for privacy reasons, but it prevents the technology from becoming a true digital assistant that can genuinely help users. 

OpenAI is now addressing this problem by adding a memory feature to ChatGPT. With this feature, the bot can retain important personal details from previous conversations and apply them, in context, to the current one. 
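
Conceptually, this kind of memory can be pictured as a per-user store of facts that gets folded into the context of later conversations. The toy sketch below illustrates the idea only; it is not OpenAI's implementation, and the user and facts are invented.

```python
# A toy illustration of the idea behind conversational memory (not OpenAI's
# implementation): remembered facts are stored per user and folded into the
# context of later conversations.
from collections import defaultdict

memory = defaultdict(list)  # user_id -> list of remembered facts

def remember(user_id: str, fact: str) -> None:
    memory[user_id].append(fact)

def forget_all(user_id: str) -> None:
    memory.pop(user_id, None)  # analogous to clearing memory in settings

def build_context(user_id: str, new_message: str) -> str:
    facts = "\n".join(f"- {fact}" for fact in memory[user_id]) or "- (nothing remembered yet)"
    return f"Known about this user:\n{facts}\n\nUser says: {new_message}"

remember("alice", "Prefers short, bullet-point email drafts.")
print(build_context("alice", "Draft a reply to the vendor about the delayed shipment."))
```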

The new memory feature will also be available to GPT builders, who can enable it or leave it disabled. To interact with a memory-enabled GPT, users need to have Memory turned on, but their memories are not shared with builders. Each GPT has its own memory, so memories are not shared between ChatGPT and individual GPTs, or between one GPT and another. 

Additionally, ChatGPT has introduced a new feature called Temporary Chat, which lets users converse without using Memory: such chats will not appear in their chat history and will not be used to train OpenAI's models.

Think of it as the conversational equivalent of opening an incognito tab to look up a weird symptom without being chased around YouTube by fungal cream ads afterwards; it can be used as an alternative to a normal chat. Despite all of the benefits on offer, there are also issues that must be addressed to make the feature safe and effective. 

As part of the upgrade, the company also stated that users will be able to control what information is retained and what is fed back into the system for training. OpenAI says the system has been trained not to automatically remember certain sensitive topics, such as health data, and that users will be able to manage what the software remembers. 

According to the company, users can simply tell the bot not to remember something and it will comply. A Manage Memory tab in the settings allows more detailed adjustments, and users who find the whole concept unappealing can turn the feature off completely. 

For now, this is a beta service rolling out to a "small number" of ChatGPT free users this week; the company will share plans for a broader release in the future.