

Three Ways AI-Powered Patch Management is Influencing Cybersecurity's Future

 

Approaches to patch management that aren't data-driven are breaches waiting to happen. Too often, security teams don't prioritise patching until after a breach occurs, which gives attackers time to weaponize CVEs that are several years old.

Evolving attacker tradecraft now includes contextual knowledge of which CVEs are most easily exploited. As a result, manual patch management and endpoints overloaded with agents leave behind unsecured attack surfaces and exploitable memory conflicts.

Attackers continue to hone their skills while weaponizing vulnerabilities with cutting-edge methods and tools that can elude detection and undermine manual patch management systems.

According to CrowdStrike's 2023 Global Threat Report, up to 71% of all detections indexed by the CrowdStrike Threat Graph were malware-free intrusions. Unpatched security flaws were to blame for 47% of breaches, yet 56% of organisations still remediate security vulnerabilities manually.

Consider this if you need any additional evidence that relying on manual patching techniques is ineffective: 20% of endpoints are still not up to date on all patches after remediation, making them vulnerable to breaches once more.

A prime example of how AI can be used in cybersecurity is automating patch management by drawing on various datasets and integrating it into a risk-based vulnerability management (RBVM) platform. The most advanced AI-based patch management systems can interpret vulnerability assessment telemetry and rank risks by patch type, system, and endpoint. Risk-based scoring is where nearly every vendor in this sector is advancing AI and machine learning most quickly.
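
To make the idea concrete, here is a minimal sketch of the kind of risk-based scoring such platforms automate. It is not any vendor's actual algorithm; the CVE records, the weights, and the asset_criticality field are illustrative assumptions.

```python
# Illustrative only: a toy risk-based patch prioritisation score.
# The findings, weights, and criticality values below are made up.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float               # severity score, 0-10
    epss: float               # estimated probability of exploitation, 0-1
    asset_criticality: float  # business importance of the affected endpoint, 0-1
    internet_facing: bool

def risk_score(f: Finding) -> float:
    """Blend severity, exploit likelihood, and business context into one number."""
    score = (f.cvss / 10) * 0.4 + f.epss * 0.4 + f.asset_criticality * 0.2
    if f.internet_facing:
        score *= 1.25  # exposed assets get a boost
    return round(min(score, 1.0), 3)

findings = [
    Finding("CVE-2021-44228", 10.0, 0.97, 0.9, True),   # e.g. Log4Shell-like case
    Finding("CVE-2023-0000", 6.1, 0.02, 0.3, False),    # fictitious low-risk example
]

# Patch the highest-risk findings first.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.cve_id, risk_score(f))
```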

AI- and machine learning-based vulnerability risk rating and scoring give security teams the knowledge they need to prioritise and automate patching operations. The following three examples highlight how AI-driven patch management is revolutionising cybersecurity: 

Real-time detection 

To overpower perimeter-based endpoint protection, attackers rely on machine-speed exploitation of unpatched vulnerabilities and flaws. Supervised machine learning techniques trained on attack data identify these attack patterns and add them to the algorithms' knowledge base. With machine identities now outnumbering human identities by a factor of 45, attackers hunt for endpoints, systems, and other assets that are not up to date on patches.
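
As a rough illustration of the supervised approach described above (not any vendor's production model), the sketch below trains a classifier on labelled endpoint telemetry. The feature columns and sample data are invented, and it assumes NumPy and scikit-learn are installed.

```python
# Toy supervised detector for endpoint telemetry; features and labels are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Columns: failed_logins/hr, new_processes/hr, outbound_MB/hr, patch_age_days
benign = rng.normal([2, 40, 50, 10], [1, 10, 20, 5], size=(500, 4))
attack = rng.normal([30, 120, 400, 180], [10, 30, 100, 60], size=(50, 4))

X = np.vstack([benign, attack])
y = np.array([0] * len(benign) + [1] * len(attack))  # 1 = known attack pattern

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```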

In a recent interview, Ivanti's Srinivas Mukkamala described how he sees patch management evolving into a more automated process, with AI copilots supplying more contextual intelligence and better forecast accuracy. 

“With more than 160,000 vulnerabilities currently identified, it is no wonder that IT and security professionals overwhelmingly find patching overly complex and time-consuming,” Mukkamala explained. “This is why organizations need to utilize AI solutions … to assist teams in prioritizing, validating and applying patches. The future of security is offloading mundane and repetitive tasks suited for a machine to AI copilots so that IT and security teams can focus on strategic initiatives for the business.” 

Automating remediation decisions 

Machine learning algorithms continuously analyse and learn from telemetry data to increase prediction accuracy and automate remediation decisions. The rapid evolution of the Exploit Prediction Scoring System (EPSS) machine learning model, developed with the combined expertise of 170 professionals, is one of the most promising developments in this field.

The EPSS is designed to help security teams manage the rising tide of software vulnerabilities and spot the most dangerous ones. The model, now in its third iteration, performs 82% better than its predecessors. 
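
EPSS scores are published by FIRST and can be pulled programmatically, which is how patch tools fold them into prioritisation. The sketch below queries what I understand to be FIRST's public EPSS API and ranks a few CVEs by score; treat the endpoint URL and the response fields ("epss", "percentile") as assumptions to check against the current API documentation, and note that it needs the requests library and network access.

```python
# Fetch EPSS scores for a few CVEs and rank them (endpoint/fields assumed per FIRST's public API).
import requests

cves = ["CVE-2021-44228", "CVE-2017-0144", "CVE-2019-0708"]
resp = requests.get(
    "https://api.first.org/data/v1/epss",
    params={"cve": ",".join(cves)},
    timeout=10,
)
resp.raise_for_status()

for row in sorted(resp.json().get("data", []), key=lambda r: float(r["epss"]), reverse=True):
    print(f'{row["cve"]}: EPSS {float(row["epss"]):.3f} (percentile {float(row["percentile"]):.2f})')
```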

“Remediating vulnerabilities by faster patching is costly and can lead astray the most active threats,” writes Gartner in its report Tracking the Right Vulnerability Management Metrics (client access required). “Remediating vulnerabilities via risk-based patching is more cost-effective and targets the most exploitable, business-critical threats.” 

Contextual understanding of endpoint assets 

Another noteworthy aspect of AI-based patch management innovation is the speed with which providers are enhancing their usage of AI and machine learning to discover, inventory, and patch endpoints that require updates. Each vendor's approach is unique, but they all strive to replace the outmoded, error-prone, manual inventory-based method. Patch management and RBVM platform suppliers are rushing out new updates that improve prediction accuracy and the capacity to determine which endpoints, machines, and systems need to be patched.

Bottom line

The first step is to automate patch management updates. Following that, patch management systems and RBVM platforms are integrated to improve application-level version control and change management. Organisations will gain more contextual intelligence as supervised and unsupervised machine learning algorithms help models spot potential anomalies early and fine-tune their risk-scoring accuracy. Many organisations are still playing catch-up when it comes to patch management. To realise the technology's full potential, they must use it to manage the entire patch lifecycle.

AI Knife Detection System Fails at Hundreds of US Schools

 

A security company that provides AI weapons scanners to schools is facing new doubts about its technology after a student was assaulted with a knife that the $3.7 million system failed to identify.

Last Halloween, Ehni Ler Htoo was walking in a corridor of his school in Utica, New York, when another student approached and attacked him with a knife. The victim's lawyer told the BBC that the 18-year-old received multiple stab wounds to his head, neck, face, shoulder, back, and hand. 

Despite a multimillion-dollar weapons detection system made by a company called Evolv Technology, the knife used in the attack was carried into Proctor High School undetected. 

Evolv claims that its scanner "combines powerful sensor technology with proven artificial intelligence" to detect weapons rather than just detecting metal. The system issues an alert when it discovers a concealed weapon such as a knife, bomb, or gun. The company previously promised that its scanners could help create "weapons-free zones" and has openly asserted that its equipment is highly accurate. 

According to Peter George, the company's chief executive, its systems "have the signatures for all the weapons that are out there." Knives, explosives, and firearms are among the weapons that the system can locate, according to earlier news releases. 

A BBC investigation last year found that, in testing, Evolv's scanner missed 42% of large knives across 24 walk-throughs, showing that the technology could not reliably detect large blades. 

Major American stadiums as well as the Manchester Arena in the United Kingdom use the system. The testers said Evolv should warn prospective customers about this limitation. Despite this, the company has been growing in the education sector and currently claims to be present in hundreds of schools across the US. 

Stabbing incident

The Utica Schools Board purchased the weapons scanning system from Evolv in March 2022 for 13 schools. It was installed over the summer break.

The student who attacked Ehni Ler Htoo was seen on CCTV entering Proctor High School and passing through the Evolv weapons detectors on October 31.

"When we viewed the horrific video, we all asked the same question. How did the student get the knife into the school?" stated Brian Nolan, Superintendent of Utica Schools.

The knife used in the stabbing was more than 9in (22.8cm) long. The attack prompted the Utica school system to conduct an internal investigation.

"Through investigation it was determined the Evolv Weapon Detection System… was not designed to detect knives," Mr Nolan added. 

Ten metal detectors have taken the place of the scanners at Proctor High School. The remaining 12 schools in the district, though, are still using the scanners.

According to Mr. Nolan, the district cannot afford to remove Evolv's system from its remaining schools. Since the attack, three more knives have been found on students at other schools in the district where the Evolv systems are still in use. 

One of the knives measured 7 inches. Another had a bent blade with finger holes. The third was a pocket knife. According to Mr. Nolan, none of them was flagged by the weapons scanners; all were found because staff members reported them. 

Evolv's stance 

The language on Evolv's website was altered following the stabbing. 

Until October of last year, Evolv's homepage boasted of creating "Weapons-Free Zones". The company later changed that wording to "Safe Zones", and after a further revision it now reads "Safer Zones". 

The company asserts that its system locates firearms using cutting-edge AI technology. However, its detractors claim that not enough is understood about the system's operation or how well this technology detects various kinds of weaponry. 

Evolv has overstated the effectiveness of the device, according to Conor Healy of IPVM, a company that evaluates security technology. 

"There's an epidemic of schools buying new technology based on audacious marketing claims, then finding out it has hidden flaws, often millions of dollars later. Evolv is one of the worst offenders. School officials are not technical experts on weapons detection, and companies like Evolv profit from their ignorance."

Ethical Issues Mount as AI Takes Bigger Decision-Making Role in Multiple Sectors

 

Even if we don't always acknowledge it, artificial intelligence (AI) has ingrained itself so deeply into our daily lives that it's difficult to resist. 

While ChatGPT and the use of algorithms in social media have received a lot of attention, law is a crucial area where AI has the potential to make a difference. Even though it may seem far-fetched, we must now seriously consider the possibility of AI determining guilt in courtroom proceedings. 

This matters because it raises the question of whether trials that rely on AI can still be conducted fairly. The EU has passed legislation to control the use of AI in criminal law.

Algorithms intended to support fair trials are already in use in North America, including the Pre-Trial Risk Assessment Instrument (PTRA), the Public Safety Assessment (PSA), and COMPAS. A report produced by the House of Lords in November 2022 examined the use of AI technologies in the UK criminal justice system. 

Empowering algorithms

On the one hand, it would be intriguing to see how AI could significantly improve justice over time, for example by lowering the cost of court services or handling judicial proceedings for minor offences. AI systems are subject to strict rules and can avoid common psychological pitfalls. Some may even argue that they are more impartial than human judges.

Algorithms can also produce data that can be used by lawyers to find case law precedents, streamline legal processes, and assist judges. 

On the other hand, routine automated judgements made by algorithms might result in a lack of originality in legal interpretation, which might impede or halt the advancement of the legal system. 

The artificial intelligence (AI) technologies created for use in a trial must adhere to a variety of European legal texts that set out requirements for upholding human rights. Among them are the 2018 European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, adopted by the European Commission for the Efficiency of Justice (CEPEJ), and other instruments adopted in previous years to create an effective framework for the use and limits of AI in criminal justice. We also need effective supervision mechanisms, though, including committees and human judges. 

Controlling and regulating AI is difficult and spans many different legal areas, including labour law, consumer protection law, competition law, and data protection legislation. The General Data Protection Regulation (GDPR), for instance, which includes the fundamental principles of justice and accountability, applies directly to decisions made by machines.

The GDPR has rules to stop people from being subject to decisions made entirely by machines with no human input. This principle has also been discussed in other legal disciplines. The problem is already here; in the US, "risk-assessment" technologies are used to support pre-trial determinations of whether a defendant should be freed on bond or detained pending trial.

Sociocultural reforms in mind? 

Given that law is a human science, it is important that AI technologies support judges and solicitors rather than replace them. Justice, like contemporary democracies, rests on the separation of powers: the guiding principle that establishes a clear division between the legislative branch, which creates laws, and the judicial branch, the system of courts that applies them. This is intended to guard against tyranny and protect civil liberties. 

By questioning human laws and the decision-making process, the use of AI in courtroom decisions may upend the balance of power between the legislative and the judiciary. As a result, AI might cause a shift in our values. 

Additionally, as all forms of personal data may be used to predict, analyse, and affect human behaviour, using AI may redefine what is and is not acceptable activity, sometimes without any nuance.

It is also easy to envisage AI evolving into a collective intelligence. Collective AI has already quietly emerged in the world of robotics: drones, for instance, can communicate with one another in order to fly in formation. In the future, we can imagine an increasing number of machines interacting with one another to carry out various jobs. 

The development of an algorithm for fair justice may indicate that we value an algorithm's abilities above those of a human judge. We could even be willing to put our own lives in this tool's hands. Maybe one day we'll develop into a civilization like that shown in Isaac Asimov's science fiction book series The Robot Cycle, where robots have intelligence on par with people and manage many facets of society. 

Many individuals are afraid of a world where important decisions are left up to new technology, maybe because they think that it might take away what truly makes us human. However, AI also has the potential to be a strong tool for improving our daily lives. 

Intelligence is not a state of perfection or flawless rationality in human reasoning. For instance, mistakes play a significant part in human activity. They enable us to advance in the direction of real solutions that advance our work. It would be prudent to keep using human thinking to control AI if we want to expand its application in our daily lives.

ClearML Launches First Generative AI Platform to Overcome Enterprise ChatGPT Challenges

 

Earlier this week, ClearML, the leading open-source, end-to-end solution for unleashing AI in the enterprise, released ClearGPT, which it describes as the world's first secure, industry-grade generative AI platform. ClearGPT lets organisations deploy and use modern LLMs safely and at scale. 

This innovative platform is designed to fit the specific needs of an organisation, including its internal data, special use cases, and business processes. It operates securely on its own network and offers full IP, compliance, and knowledge protection. 

With ClearGPT, businesses can use AI to drive innovation, productivity, and efficiency at a massive scale, as well as to develop new internal and external products faster, outsmart the competition, and generate new revenue streams. This allows them to capitalise on the creativity of ChatGPT-like LLMs. 

Many companies recognise ChatGPT's potential but are unable to utilise it within their own enterprise security boundaries due to its inherent limitations, including security, performance, cost, and data governance difficulties.

ClearGPT removes these obstacles, and the risks of using LLMs to spur business innovation, by addressing the following enterprise issues. 

Security & compliance: Businesses rely on open APIs to access generative AI models and xGPT solutions, which exposes them to privacy risks and data leaks, jeopardising their ownership of intellectual property (IP) and highly sensitive data exchanged with third parties. You can maintain data security within your network using ClearGPT while having complete control and no data leakage. 

Performance and cost: ClearGPT offers enterprise customers unmatched model performance with live feedback and customisation at lower running costs than rival xGPT solutions, where GPT performance is a static black box. 

Governance: Other solutions can't be used to limit access to sensitive information within an organisation. With ClearGPT's role-based access and data governance across business units, you can uphold privacy and access control within the company while still adhering to legal requirements. 

Data: Avoid letting xGPT solutions possess or divulge your company's data to rivals. With ClearGPT's comprehensive corporate IP protection, you can preserve company knowledge, produce AI models, and keep your competitive edge. 

Customization and flexibility: These two features are lacking in other xGPT solutions. Gain unrivalled capabilities with human reinforcement feedback loops and a constant flow of fresh data, giving you AI that minimises model and multimodal bias while learning and adapting to each enterprise's unique DNA. Businesses can quickly adapt and deploy any open-source LLM with the help of ClearGPT. 

Enterprises can now explore, generate, analyse, search, correlate, and act upon predictive business information (internal and external data, benchmarks, and market KPIs) in a way that is safer, more legal, more efficient, more natural, and more effective than ever before with the help of ClearGPT. Enjoy an out-of-the-box platform for enterprise-grade LLMs that is independent of the type of model being used, without the danger of costly, time-consuming maintenance. 

“ClearGPT is designed for the most demanding, secure, and compliance-driven enterprise environments to transform their AI business performance, products, and innovation out of the box,” stated Moses Guttmann, Co-founder and CEO of ClearML. “ClearGPT empowers your existing enterprise data engineering and data science teams to fully utilize state-of-the-art LLM models agnostically, removing vendor lock-ins; eliminating corporate knowledge, data, and IP leakage; and giving your business a competitive advantage that fits your organization’s custom AI transformation needs while using your internal enterprise data and business insights.”

Homeland Security Employs AI to Analyze Social Media of Citizens and Refugees

 

The Customs and Border Protection (CBP) division of the US Department of Homeland Security (DHS) is using intrusive AI-powered systems to screen visitors coming into and leaving the nation, according to a document obtained by Motherboard through a freedom of information request this week. 

According to the document, CBP keeps track of US citizens, migrants, and asylum seekers and, in some instances, uses artificial intelligence (AI) to link people's social media posts to their Social Security numbers and location data. 

AI-Powered government surveillance tool 

Babel X is the name of the monitoring technology the agency uses. Users can enter details about someone they want to learn more about, such as a target's name, email address, or phone number.

The algorithm then provides a wealth of additional information about that person, including what they may have posted on social media, their employment history, and any related IP addresses. 

Babel X, created by a company called Babel Street, is allegedly AI-enabled and combines publicly and commercially available data in more than 200 languages.

In fact, Babel Street announced plans to purchase AI text analysis business Rosette in November of last year. The company said that this would aid its Babel X tool with "identity resolution," which might improve national security and the battle against financial crime. 

Freedom activists concerned 

According to the document released by CBP, Babel data will be used/captured/stored in support of CBP targeting, vetting, operations, and analysis, and will be kept on the agency's computer systems for 75 years. 

According to senior staff attorney at the Knight First Amendment Institute Carrie DeCell, "the US government's ever-expanding social media dragnet is certain to chill people from engaging in protected speech and association online."

“And CBP’s use of this social media surveillance technology is especially concerning in connection with existing rules requiring millions of visa applicants each year to register their social media handles with the government. As we’ve argued in a related lawsuit, the government simply has no legitimate interest in collecting and retaining such sensitive information on this immense scale." 

Patrick Toomey, the ACLU's deputy project director for the national security project, told Motherboard that the document "raises a number of questions about what specific purposes CBP is using social media monitoring for and how that monitoring is actually conducted" in addition to providing important new information. 

How AI is Helping Threat Actors to Launch Cyber Attacks

 

Artificial intelligence offers great promise, and while many technologists are excited about it, hackers are also looking to the technology to aid their illicit activities. AI is a fascinating field, but it can also make us nervous. So how might AI support online criminals? 

Social engineering 

Every week, social engineering, a form of cybercrime, claims countless victims and is a big issue worldwide. In this technique, the victim is coerced into complying with the attacker's demands through manipulation, frequently without being aware that they are the target. 

By creating the text that appears in fraudulent communications like phishing emails and SMS, AI could aid in social engineering attempts. It wouldn't be impossible, even with today's level of AI development, to instruct a chatbot to create a compelling or persuasive script, which the cybercriminal could then employ against their victims. People have taken notice of this threat and are already worried about the dangers that lie ahead.

AI could also help hostile communications appear more formal and professional by correcting typos and grammatical errors, which are frequently cited as potential indicators of malicious activity. Being able to write social engineering content more clearly and convincingly is therefore an advantage for cybercriminals. 

Analysing stolen data

Data is worth as much as gold. Sensitive information is currently regularly sold on dark web markets, and some dangerous actors are willing to pay a very high price for the information if it is sufficiently valuable. 

But data must first be stolen for it to appear on these marketplaces. Small-scale data theft is certainly possible, particularly when an attacker targets individual victims, but larger hacks can yield sizable databases. The cybercriminal then has to decide which information in the database is actually worth anything. 

If the process of identifying valuable information were sped up with AI, a malicious actor would spend far less time deciding what is worth selling or, alternatively, exploiting directly. Since learning is the foundation of artificial intelligence, an AI-powered tool could one day make it easy to pick out the sensitive information that is most valuable. 

Malware writing 

Given how sophisticated the technology is, some people will not be surprised to learn that malware can be created using artificial intelligence. A combination of the words "malicious" and "software", malware refers to the various types of harmful programmes used in hacking. 

Malware must first be written, though, before it can be used. Not all cybercriminals are skilled programmers, and some simply don't want to spend the time learning to write new programmes. This is where AI may prove useful. 

In early 2023, it was discovered that ChatGPT could be used to create malware for nefarious purposes. OpenAI's wildly popular chatbot is built on powerful AI infrastructure and can perform many useful tasks, yet it is also being abused by hackers. 

In one particular instance, a user on a hacking forum claimed that ChatGPT had been used to write a Python-based malware programme. With ChatGPT, writing malicious software could be efficiently automated, making it easier for novice cybercriminals with limited technical knowledge to operate. 

To be clear, ChatGPT (or at least its most recent version) is only capable of producing simple, occasionally buggy malware programmes rather than sophisticated code that poses a serious hazard. That does not preclude AI being used to create malicious software, though. Given that a modern AI chatbot can already write simple malicious programmes, it might not be long before we start to see more dangerous malware coming from AI systems. 

Bottom line 

Artificial intelligence has been and will continue to be abused by cybercriminals, as is the case with most technological advances. Given the dubious capabilities AI already has, it is impossible to predict how hackers will advance their attacks with this technology in the near future. Cybersecurity companies may also lean on AI more heavily to combat these same threats, but only time will tell how this plays out.

This AI Tool Can Crack Your Password in Sixty Seconds; Here's How to Protect Yourself

 

Even though ChatGPT may be the AI that everyone is thinking about right now, chatbots aren't the only AI tool that has emerged in recent times. DALL•E 2 and Runway Gen 2 are just two examples of AI picture and video creators. Sadly, some AI password crackers exist as well, such as PassGAN. 

PassGAN is actually not that new, at least not in the grand scheme of things. The most recent GitHub update was six years ago, and it made its debut back in 2017. In other words, this isn't a brand-new hacking tool developed in response to the ChatGPT revolution. But when it was recently put to the test by cybersecurity research company Home Security Heroes, the results were startling. PassGAN can break any — yes, any — seven-character password in six minutes or less, according to the Home Security Heroes study. It can quickly crack passwords of seven characters or fewer, regardless of whether they contain symbols, capital letters, or numbers. 

Modus operandi 

PassGAN combines "password" with "generative adversarial network" (GAN), much as ChatGPT combines "chat" with "generative pre-trained transformer" (GPT). In essence, a GAN is the deep learning model the AI is built on, just as GPT is for ChatGPT.

In this case, the model's objective is to generate password guesses based on the real-world passwords it has been given as input. To train PassGAN, Home Security Heroes used the RockYou dataset, which resulted from the 2009 RockYou data breach and is a popular resource for studies like this. The company fed PassGAN the dataset, and the model then generated passwords in an attempt to correctly guess sample passwords. 

In the end, a wide range of passwords could be broken quickly: after training PassGAN on the RockYou dataset, Home Security Heroes had an AI tool, trained on actual passwords, that could crack many passwords almost instantly. 

Should I be alarmed about PassGAN?

The good news is that, for the time being at least, you don't really need to panic about PassGAN. Security Editor for Ars Technica Dan Goodin claimed in an opinion piece that PassGAN was "mostly hype." This is because while the AI tool can fairly easily crack passwords, it doesn't do it any more quickly than other non-AI password crackers. 

For example, Goodin quotes Yahoo Senior Principal Engineer Jeremi Gosney, who claimed that standard password-cracking methods could quickly accomplish similar results and crack 80% of the passwords used in the RockYou breach. For his part, Gosney characterised the study's findings as "neither impressive nor exciting." And after taking a closer look at the results, you might not be as impressed as you were when you first heard that "50% of common passwords can be cracked in less than a minute." Those passwords rarely mix capital letters, lowercase letters, digits, and symbols; most are made up only of numbers and are seven characters or fewer. 

This means that all it takes to defeat PassGAN is a password of at least 11 characters made up of a mixture of uppercase and lowercase letters, numbers, and symbols. Do that, and you have a password that PassGAN would need 365 years to figure out. Stretch it to 12 characters and the estimate jumps to 30,000 years. And the best password managers make it simple to create these kinds of passwords. 
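
The arithmetic behind those estimates is simply keyspace size divided by guessing speed. The back-of-the-envelope sketch below shows why length and character variety matter so much; the guesses-per-second figure is an arbitrary assumption, not Home Security Heroes' test setup, so the absolute numbers differ while the scaling behaviour is the point.

```python
# Rough brute-force cost: keyspace = charset_size ** length, time = keyspace / guess_rate.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
GUESSES_PER_SECOND = 1e11  # arbitrary assumption for a well-resourced cracking rig

def years_to_exhaust(length: int, charset_size: int) -> float:
    return charset_size ** length / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for label, length, charset in [
    ("7 chars, digits only", 7, 10),
    ("7 chars, upper+lower+digits+symbols", 7, 94),
    ("11 chars, upper+lower+digits+symbols", 11, 94),
    ("12 chars, upper+lower+digits+symbols", 12, 94),
    ("15 chars, lowercase only", 15, 26),
]:
    print(f"{label}: ~{years_to_exhaust(length, charset):,.2f} years")
```

Each extra character multiplies the work by the size of the character set, which is why the estimates jump so sharply from minutes to millennia.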

But let's say you don't want to use a password manager because you don't trust that they won't be vulnerable to data breaches, like the LastPass compromise in August 2022. It's a legitimate concern. Fortunately, using a passphrase, a password created by combining several words, will likely still be enough to fool PassGAN. Home Security Heroes estimates that it would still take PassGAN 890 years on average to crack a 15-character password made up entirely of lowercase letters. Add just one capital letter and that timeline jumps to a staggering 47 million years, long after our AI overlords have already taken over the world. 

However, always keep in mind that no password is ever completely secure. Despite your best efforts, data breaches can still leave you exposed, and through pure dumb luck a password cracker might guess your password sooner than expected. But as long as you follow best practices for password security, you have little to worry about from PassGAN or any other rogue actor.

ChatGPT's Cybersecurity Threats and How to Mitigate Them

 

The development of ChatGPT (Generative Pre-trained Transformer) technology marks the beginning of a new age in communication. This ground-breaking technology provides incredibly personalised interactions that can produce responses in natural language that are adapted to the user's particular context and experience. 

Although this technology is extremely powerful, it also poses serious cybersecurity threats that must be addressed in order to safeguard users and their data. In this article, we'll cover five of the most prevalent cybersecurity issues around ChatGPT, along with some top security tips. 

Data leak 

When using ChatGPT technology, data leakage is a common worry. Data from ChatGPT systems can be exposed or stolen all too easily, whether through poor configuration or malicious actors. To guard against this threat, strong access controls must be put in place so that only authorised users can reach the system and its resources. Regular monitoring of all system activity is also necessary to quickly identify any suspicious behaviour or incidents. 

Finally, creating frequent backups of all the data kept in the system will ensure that, even if a breach does happen, you can swiftly recover anything that is lost. An insecure interface can also expose users to attacks, so make sure your ChatGPT platform's front end is secure and consistently updated with the most recent security patches. 

Bot hack 

A bot takeover occurs when a malicious actor manages to take control of ChatGPT and exploit it for their own ends, either by guessing a user's password or by taking advantage of weaknesses in the code. While ChatGPT bots are excellent for automating specific tasks, they can also serve as a point of entry for remote attackers. To guard against this threat, strong authentication procedures and regular software patching are crucial. 

For instance, to keep your passwords secure, update them frequently and use multi-factor authentication wherever it is available. It's also critical to stay current on security patches and fix any newly identified software vulnerabilities. 

Unauthorised access

Install security features like strong password requirements and two-factor authentication to ensure that only authorised users can access the system. This is particularly crucial because ChatGPT can be used to craft highly convincing phishing messages. Consider a scenario where you are using ChatGPT to communicate with your clients and one of them unintentionally clicks on a malicious link. 

Once inside the system, the attacker could cause damage or steal information. You can reduce the chance of this happening by requiring all users to set strong passwords and use two-factor authentication. You should also routinely audit user accounts to make sure no unauthorised users are accessing the system. 
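
For illustration, here is a minimal sketch of adding a second factor with time-based one-time passwords (TOTP). It assumes the third-party pyotp library is installed and is not tied to any particular ChatGPT integration; in a real deployment the secret would be stored per user and the code would come from their authenticator app.

```python
# Minimal TOTP second factor using pyotp (pip install pyotp).
import pyotp

# At enrolment: generate and store one secret per user (often shown as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI for an authenticator app:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleChatbot"))

# At login: verify the 6-digit code submitted alongside the password.
code = totp.now()  # stand-in for the code the user would type from their app
print("Second factor accepted:", totp.verify(code, valid_window=1))
```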

Limitations and information overload

The sheer volume of information that ChatGPT generates can at times overwhelm some systems. Make certain your system has the resources to handle high traffic volumes without becoming overloaded. To help manage the data-overload problem, also consider applying analytics tools and other artificial intelligence technologies. 

Privacy & confidentiality issues  

Systems using ChatGPT may not be sufficiently secured, making them susceptible to privacy and confidentiality problems. To keep user data private, be sure to encrypt any sensitive data stored on the server and to use a secure communication protocol (SSL/TLS). Also restrict who can access and use the data, for example by requiring authentication for access requests. 
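
As a concrete example of encrypting sensitive data at rest, here is a minimal sketch using the Fernet recipe from the widely used cryptography library (assumed to be installed). Key storage and rotation are left out; in practice the key should live in a proper secrets manager rather than being generated inline.

```python
# Encrypt a chat transcript at rest with Fernet (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, load this from a secrets manager
fernet = Fernet(key)

transcript = b"user: my account number is 12345678"
ciphertext = fernet.encrypt(transcript)   # store only this, never the plaintext
restored = fernet.decrypt(ciphertext)     # decrypt only for authenticated callers

assert restored == transcript
print("encrypted record:", ciphertext[:32], b"...")
```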

Bottom line 

These are only some of the most prevalent cybersecurity risks related to ChatGPT technology; many other hazards must be considered when building or using this kind of platform. 

Working with a knowledgeable team of cybersecurity experts can ensure that potential risks are dealt with before they become a problem. To keep your data secure and safeguard your company's reputation, invest in reliable cybersecurity solutions. Taking the necessary steps today will save time and money in the future.

Sundar Pichai Promises the Release of an Upgraded Bard AI Chatbot Soon

 

Sundar Pichai, CEO of Alphabet and Google, has announced that the company will soon offer more capable AI models in response to criticism of its ChatGPT rival, Bard. 

According to Pichai, Bard is like a "souped-up Civic" that has been competing with "more powerful automobiles", but Google has "more capable models" that will be made available in the coming days.

He made these comments in an interview with the NYT's Hard Fork podcast. "We knew when we were putting Bard out we wanted to be careful," Pichai stated. "Since this was the first time we were putting out, we wanted to see what type of queries we would get. We obviously positioned it carefully." 

More powerful PaLM (Pathways Language Model) versions of the Bard chatbot will be released "over the course of next week," the Google CEO added. That should mean Bard improves significantly in several areas, including reasoning and coding.

A calculated approach 

Pichai's general attitude was a mix of caution over trying out what Bard could achieve and enthusiasm regarding where it might ultimately lead. These "very, very strong technologies" may be tailored to businesses and individuals, according to Pichai.

The Google CEO also addressed questions about data protection and the rapid advancement of AI engines like Bard and ChatGPT. The development of artificial intelligence should be put on hold for six months, according to some of the biggest names in technology. 

Pichai said in the podcast that he supports these kinds of debates and wants to see governments enact laws, arguing that AI is too important an area not to regulate, and too important not to regulate well, and adding that he was glad these discussions were starting now. 

This most recent podcast interview exemplifies the multitude of important questions surrounding AI at the moment, including how it will affect data protection, the types of professions it may eliminate, the effect it will have on publishers if Google and Bing become one-stop shops, and so forth. 

To be fair to Pichai, he handled those issues in a very thoughtful manner, but that does not necessarily mean that all of our concerns about AI will be allayed. We're in the midst of a significant change in the way we live our lives and access information online. 

Pichai acknowledged that the technology "has the capacity to bring harm in a deep sense" but is also "going to be incredibly beneficial". While it's important to recognise this, businesses like Google are more motivated by financial success than by any sense of moral obligation.

Users' Private Info Accidentally Made Public by ChatGPT Bug

 

After taking ChatGPT offline on Monday, OpenAI has revealed additional information, including the possibility that some users' financial information may have been compromised. 

A bug in the redis-py library led to a caching problem that may have allowed certain active users to see the last four digits and expiration date of another user's credit card, along with their first and last name, email address, and payment address, the company said in a post. Users might also have seen snippets of other people's chat histories. 

It's not the first time that cache problems have allowed users to view each other's data; in a famous instance, on Christmas Day in 2015, Steam users were sent pages containing data from other users' accounts. It is quite ironic that OpenAI devotes a lot of attention and research to determining the potential security and safety repercussions of its AI, yet it was taken by surprise by a fairly well-known security flaw. 

The firm claimed that 1.2 percent of ChatGPT Plus subscribers who used the service on March 20 between 4AM and 1PM ET may have been impacted by the payment information leak. 

According to OpenAI, there are two situations in which payment information might have been exposed to an unauthorised user. During that window, if a user visited the My account > Manage subscription page, they might have seen information belonging to another ChatGPT Plus customer who was actively using the service. The company also says that some subscription confirmation emails sent during the incident went to the wrong recipient and contained the last four digits of another user's credit card number. 

The company says it has no evidence that either of these things actually happened before March 20, though it concedes that both are possible. Users who may have had their payment information exposed have been contacted by OpenAI. 

It appears that caching was at the root of how this all came about. The short version is that the company uses a programme called Redis to cache user information. In some cases, cancelling a Redis request could result in corrupted data being returned for a subsequent request, which wasn't supposed to happen. Typically the programme would fetch the data, see that it wasn't what it had requested, and raise an error.

But if the other user was requesting the same type of data, for example trying to view their account page when the stray data was someone else's account information, the software decided everything looked fine and displayed it to them. 

Users were being served cached material that was originally meant for someone else but never reached them because of a cancelled request, which is why they could see other users' payment information and conversation history. For the same reason, the bug only affected people who were actively using the system: the software wouldn't be caching any data for users who weren't. 
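
To make that failure mode easier to picture, here is a deliberately simplified, Redis-free simulation of the general class of bug described above: a shared connection whose requests and replies fall out of step when a request is cancelled after being sent but before its reply is read. It illustrates the concept only and is not OpenAI's or redis-py's actual code.

```python
# Simplified illustration: a cancelled request leaves a stale reply queued on a
# shared connection, so the next caller reads data belonging to someone else.
from collections import deque

class SharedConnection:
    """Pretend server connection that answers requests strictly in order."""
    def __init__(self, backend):
        self.backend = backend
        self.pending = deque()        # keys requested so far, in order

    def send(self, key):
        self.pending.append(key)

    def read_reply(self):
        key = self.pending.popleft()  # replies come back in request order
        return self.backend[key]

backend = {"alice:profile": "alice@example.com", "bob:profile": "bob@example.com"}
conn = SharedConnection(backend)

# Alice's request is sent, then cancelled before anyone reads its reply.
conn.send("alice:profile")
# ...cancellation happens here; Alice's reply is left queued on the connection...

# Bob's request now receives the reply that was meant for Alice.
conn.send("bob:profile")
print("bob receives:", conn.read_reply())  # -> alice@example.com (another user's data)
```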

What made matters worse was that, on the morning of March 20, OpenAI made a change to its servers that unintentionally increased the number of Redis requests being cancelled, raising the likelihood that the bug would return someone else's cached data.

According to OpenAI, the fault, which only affected a very specific version of Redis, has been fixed, and the Redis team has been "great collaborators." It also says it is changing its own software and procedures to ensure something similar doesn't happen again. The changes include adding "redundant checks" to confirm that the data being served actually belongs to the user making the request and reducing the likelihood that its Redis cluster will throw errors under heavy load.