
AI's Role in Averting Future Power Outages

 

Amidst an ever-growing demand for electricity, artificial intelligence (AI) is stepping in to mitigate power disruptions.

Aseef Raihan vividly recalls a chilling night in February 2021 in San Antonio, Texas, during winter storm Uri. As temperatures plunged to -19°C, Texas faced an unprecedented surge in electricity demand to combat the cold. 

However, the state's electricity grid faltered, with frozen wind turbines, snow-covered solar panels, and precautionary shutdowns of nuclear reactors leading to widespread power outages affecting over 4.5 million homes and businesses. Raihan's experience of enduring cold nights without power underscored the vulnerability of our electricity systems.

The incident in Texas highlights a global challenge as countries witness escalating electricity demands due to factors like the rise in electric vehicle usage and increased adoption of home appliances like air conditioners. Simultaneously, many nations are transitioning to renewable energy sources, which pose challenges due to their variable nature. For instance, electricity production from wind and solar sources fluctuates based on weather conditions.

To bolster energy resilience, countries like the UK are considering the construction of additional gas-powered plants. Moreover, integrating large-scale battery storage systems into the grid has emerged as a solution. In Texas, significant strides have been made in this regard, with over five gigawatts of battery storage capacity added within three years following the storm.

However, the effectiveness of these batteries hinges on their ability to predict optimal charging and discharging times. This is where AI steps in. Tech companies like WattTime and Electricity Maps are leveraging AI algorithms to forecast electricity supply and demand patterns, enabling batteries to charge during periods of surplus energy and discharge when demand peaks. 
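As a rough illustration of how such forecasts can drive dispatch decisions, here is a minimal sketch in Python. The hourly figures, battery capacity, and threshold rule are invented for illustration; this is not how WattTime or Electricity Maps actually operate.

# Illustrative sketch: schedule a grid battery from hourly supply/demand forecasts.
# Forecast values, capacities, and thresholds are invented; real systems use far
# richer models and market signals.

forecast = [
    # (hour, forecast_supply_mw, forecast_demand_mw)
    (0, 1200, 900),
    (6, 1100, 1000),
    (12, 1500, 1100),   # midday solar surplus
    (18, 1000, 1300),   # evening peak
]

capacity_mwh = 100.0      # hypothetical battery size
charge_rate_mwh = 25.0    # max energy moved per step
state_of_charge = 50.0

def plan_step(supply_mw: float, demand_mw: float, soc: float) -> tuple[str, float]:
    """Charge when supply exceeds demand, discharge when demand exceeds supply."""
    surplus = supply_mw - demand_mw
    if surplus > 0 and soc < capacity_mwh:
        return "charge", min(charge_rate_mwh, capacity_mwh - soc)
    if surplus < 0 and soc > 0:
        return "discharge", min(charge_rate_mwh, soc)
    return "hold", 0.0

for hour, supply, demand in forecast:
    action, energy = plan_step(supply, demand, state_of_charge)
    state_of_charge += energy if action == "charge" else -energy if action == "discharge" else 0.0
    print(f"hour {hour:02d}: {action:9s} {energy:5.1f} MWh -> SoC {state_of_charge:5.1f} MWh")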

Additionally, AI is enhancing the monitoring of electricity infrastructure, with companies like Buzz Solutions employing AI-powered solutions to detect damage and potential hazards such as overgrown vegetation and wildlife intrusion, thus mitigating the risk of power outages and associated hazards like wildfires.

AI Integration in Cybersecurity Challenges

 

In the ongoing battle against cyber threats, government and corporate heads are increasingly turning to artificial intelligence (AI) and machine learning (ML) for a stronger defense. However, the companies are facing a trio of significant hurdles. 

Firstly, the reliance on an average of 45 distinct cybersecurity tools per company presents a complex landscape. This abundance leads to gaps in protection, configuration errors, and a heavy burden of manual labor, making it challenging to maintain robust security measures. 

Additionally, the cybersecurity sector grapples with a shortage of skilled professionals. This scarcity makes it difficult to recruit, train, and retain experts capable of managing the array of security tools effectively. 

Furthermore, valuable data remains trapped within disparate cybersecurity tools, hindering comprehensive risk management. This fragmentation prevents companies from harnessing insights that could enhance their overall cybersecurity posture. 

The key to maximizing AI for cybersecurity lies in platformization, which streamlines integration and interoperability among security solutions. This approach addresses challenges faced by CISOs, such as tool complexity and data fragmentation. 

Platformization: Maximizing AI for Cybersecurity Integration Explore how platformization revolutionizes cybersecurity by fostering seamless integration and interoperability among various security solutions. 

Unified Operations: Enforcing Consistent Policies Across Security Infrastructure Delve into the benefits of unified management and operations, enabling organizations to establish and enforce policies consistently across their entire security ecosystem. 

Enhanced Insights: Contextual Understanding and Real-Time Attack Prevention Learn how integrating data from diverse sources provides a deeper understanding of security events, facilitating real-time detection and prevention of advanced threats. 

Data Integration: Fueling Effective AI with Comprehensive Datasets Discover the importance of integrating data from multiple sources to empower AI models with comprehensive datasets, enhancing their performance and effectiveness in cybersecurity. 

Strategic Alignment: Modernizing Security to Combat Evolving Threats Examine the imperative for companies to prioritize aligning their security strategies and modernizing legacy systems to effectively mitigate the ever-evolving landscape of cyber threats. 

Unveiling Zero-Day Vulnerabilities: AI enhances detection by analyzing code and behavior for key features like API calls and control flow patterns. 

Harnessing Predictive Insights: AI predicts future events by learning from past data, using models like regression or neural networks. 

Empowering User Authentication: AI strengthens authentication by analyzing behavior patterns, using methods like keystroke dynamics, to go beyond passwords. 
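To make the last point concrete, here is a minimal, hypothetical sketch of keystroke-dynamics scoring. The timing values and threshold are invented, and real systems use far richer behavioural models; this is an illustration of the idea, not a production authentication method.

# Hypothetical sketch of keystroke-dynamics scoring: compare the inter-key timing
# pattern of a login attempt with a user's enrolled profile. All numbers are invented.
from statistics import mean

enrolled_gaps_ms = [112, 95, 140, 88, 130, 105]   # typical gaps between keystrokes at enrollment
attempt_gaps_ms  = [118, 99, 151, 84, 126, 111]   # gaps observed during this login

def anomaly_score(profile: list[float], attempt: list[float]) -> float:
    """Mean absolute deviation between the attempt and the enrolled profile."""
    return mean(abs(p - a) for p, a in zip(profile, attempt))

score = anomaly_score(enrolled_gaps_ms, attempt_gaps_ms)
THRESHOLD_MS = 20.0   # illustrative cut-off; real systems tune this per user

if score <= THRESHOLD_MS:
    print(f"score {score:.1f} ms: timing pattern consistent with enrolled user")
else:
    print(f"score {score:.1f} ms: step-up authentication recommended")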

In cybersecurity, AI is proving useful in many ways, from quickly spotting unusual activity to stopping new kinds of attacks. However, companies must pair these tools with proper training and disciplined processes if they are to prevent malicious activity on their networks.

Enterprise AI Adoption Raises Cybersecurity Concerns

 




Enterprises are rapidly embracing Artificial Intelligence (AI) and Machine Learning (ML) tools, with transactions skyrocketing by almost 600% in less than a year, according to a recent report by Zscaler. The surge, from 521 million transactions in April 2023 to 3.1 billion monthly by January 2024, underscores a growing reliance on these technologies. However, heightened security concerns have led to a 577% increase in blocked AI/ML transactions, as organisations grapple with emerging cyber threats.

The report highlights the evolving tactics of cyber attackers, who now exploit AI tools such as large language models (LLMs) to infiltrate organisations covertly. Adversarial AI, designed to bypass traditional security measures, poses a particularly stealthy threat.

Concerns about data protection and privacy loom large as enterprises integrate AI/ML tools into their operations. Industries such as healthcare, finance, insurance, services, technology, and manufacturing are at risk, with manufacturing leading in AI traffic generation.

To mitigate risks, many Chief Information Security Officers (CISOs) are blocking a record number of AI/ML transactions, although this is seen as a short-term solution. ChatGPT and OpenAI are among the most commonly blocked AI tools, while Bing.com and Drift.com are among the most frequently blocked domains.

However, blocking transactions alone may not suffice in the face of evolving cyber threats. Leading cybersecurity vendors are exploring novel approaches to threat detection, leveraging telemetry data and AI capabilities to identify and respond to potential risks more effectively.

CISOs and security teams face a daunting task in defending against AI-driven attacks, necessitating a comprehensive cybersecurity strategy. Balancing productivity and security is crucial, as evidenced by recent incidents like vishing and smishing attacks targeting high-profile executives.

Attackers increasingly leverage AI in ransomware attacks, automating various stages of the attack chain for faster and more targeted strikes. Generative AI, in particular, enables attackers to identify vulnerabilities and exploit them with greater efficiency, posing significant challenges to enterprise security.

Taking into account these advancements, enterprises must prioritise risk management and enhance their cybersecurity posture to combat the dynamic AI threat landscape. Educating board members and implementing robust security measures are essential in safeguarding against AI-driven cyberattacks.

As institutions deal with the complexities of AI adoption, ensuring data privacy, protecting intellectual property, and mitigating the risks associated with AI tools become paramount. By staying vigilant and adopting proactive security measures, enterprises can better defend against the growing threat posed by these cyberattacks.

EU AI Act to Impact US Generative AI Deployments

 



In a move set to reshape the scope of AI deployment, the European Union's AI Act, slated to come into effect in May or June, aims to impose stricter regulations on the development and use of generative AI technology. The Act, which categorises AI use cases based on associated risks, prohibits certain applications like biometric categorization systems and emotion recognition in workplaces due to concerns over manipulation of human behaviour. This legislation will compel companies, regardless of their location, to adopt a more responsible approach to AI development and deployment.

For businesses venturing into generative AI adoption, compliance with the EU AI Act will necessitate a thorough evaluation of use cases through a risk assessment lens. Existing AI deployments will require comprehensive audits to ensure adherence to regulatory standards and mitigate potential penalties. While the Act provides a transition period for compliance, organisations must gear up to meet the stipulated requirements by 2026.

This isn't the first time US companies have faced disruption from overseas tech regulations. Similar to the impact of the GDPR on data privacy practices, the EU AI Act is expected to influence global AI governance standards. By aligning with EU regulations, US tech leaders may find themselves better positioned to comply with emerging regulatory mandates worldwide.

Despite the parallels with GDPR, regulating AI presents unique challenges. The rollout of GDPR witnessed numerous compliance hurdles, indicating the complexity of enforcing such regulations. Additionally, concerns persist regarding the efficacy of fines in deterring non-compliance among large corporations. The EU's proposed fines for AI Act violations range from 7.5 million to 35 million euros, but effective enforcement will require the establishment of robust regulatory mechanisms.

Addressing the AI talent gap is crucial for successful implementation and enforcement of the Act. Both the EU and the US recognize the need for upskilling to attend to the complexities of AI governance. While US efforts have focused on executive orders and policy initiatives, the EU's proactive approach is poised to drive AI enforcement forward.

For CIOs preparing for the AI Act's enforcement, understanding the tools and use cases within their organisations is imperative. By conducting comprehensive inventories and risk assessments, businesses can identify areas of potential non-compliance and take corrective measures. It's essential to recognize that seemingly low-risk AI applications may still pose significant challenges, particularly regarding data privacy and transparency.
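To make the inventory-and-risk-assessment step more tangible, here is a minimal, hedged sketch of triaging an internal AI inventory against the Act's risk tiers. The use cases and tier assignments are simplified illustrative assumptions, not legal guidance; real classification requires case-by-case legal review.

# Illustrative first-pass triage of an AI inventory against the EU AI Act's risk tiers.
# Tier assignments below are simplified examples, not legal advice.

RISK_TIERS = ("prohibited", "high", "limited", "minimal")

# Hypothetical inventory: each entry maps an internal use case to a provisional tier.
inventory = {
    "workplace emotion recognition": "prohibited",   # banned use cited in the Act
    "credit-scoring model":          "high",         # high-risk category needing audits
    "customer support chatbot":      "limited",      # transparency obligations
    "internal spam filter":          "minimal",      # little or no added obligation
}

def triage(entries: dict[str, str]) -> dict[str, list[str]]:
    """Group use cases by tier so compliance work can be prioritized."""
    grouped: dict[str, list[str]] = {tier: [] for tier in RISK_TIERS}
    for use_case, tier in entries.items():
        grouped[tier].append(use_case)
    return grouped

for tier, cases in triage(inventory).items():
    print(f"{tier:10s}: {cases}")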

Companies like TransUnion are taking a nuanced approach to AI deployment, tailoring their strategies to specific use cases. While embracing AI's potential benefits, they exercise caution in deploying complex, less explainable technologies, especially in sensitive areas like credit assessment.

As the EU AI Act reshapes the regulatory landscape, CIOs must proactively adapt their AI strategies to ensure compliance and mitigate risks. By prioritising transparency, accountability, and ethical considerations, organisations can navigate the evolving regulatory environment while harnessing the transformative power of AI responsibly.



Cyber Extortion Stoops to a New Low: Fake Attacks and Whistleblowing

Cyber Extortion

Recently, a car rental company in Europe fell victim to a fake cyberattack: the hacker used ChatGPT to make the supposedly stolen data look legitimate. Why would threat actors claim a fabricated attack? To understand that, we need to look at how the cyber extortion business works.

Mapping the Evolution of Cyber Extortion

Threat actors have been refining their ransomware attacks for years. Traditional ransomware encrypted the victim's data, and attackers demanded a ransom in exchange for the decryption key. This technique started to fail as businesses learned to restore data from backups.

To counter this, attackers built malware that also compromised backups. Victims started paying, even though the FBI recommends against paying ransoms.

The attackers soon realized they would need something foolproof to blackmail victims. They made ransomware that stole data without encryption. Even if victims had backups, attackers could still extort using stolen data, threatening to leak confidential data if the ransom wasn't paid.

Making matters worse, attackers began "milking" victims to profit further from the stolen data, selling it to other threat actors who would launch repeated attacks (double and triple extortion). Not even victims' families and customers were safe; attackers went as far as blackmailing plastic surgery patients at breached clinics.

Extortion: Poking and Pressure Tactics

Regulators and law enforcement organizations cannot ignore this when billions of dollars are on the line. The State Department is offering a $10 million reward for information on the leaders of the Hive ransomware group, a scenario straight out of a Wild West film.

Regulatory bodies also require businesses to disclose "all material" information connected to cyberattacks. Failing to comply can mean civil lawsuits, criminal prosecution, hefty fines and penalties, cease-and-desist orders, and the cancellation of securities registration.

Cyber-swatting is another strategy used by ransomware perpetrators to exert pressure. Extortionists have used swatting attacks to threaten hospitals, schools, members of the C-suite, and board members. Artificial intelligence (AI) systems are used to mimic voices and alert law enforcement to fictitious reports of a hostage crisis, bomb threat, or other grave accusation. EMS, fire, and police are called to the victim's house with heavy weapons.

What Businesses Can Do To Reduce The Risk Of Cyberattacks And Ransomware

What was once a straightforward phishing email has developed into a highly skilled cybercrime where extortionists use social engineering to steal data and conduct fraud, espionage, and infiltration. These are some recommended strategies that businesses can use to reduce risks.

1. Educate Staff: It's critical to have a continuous cybersecurity awareness program that informs staff members on the most recent attacks and extortion schemes used by criminals.

2. Pay Attention To The Causes Rather Than The Symptoms: Ransomware is a symptom, not the cause. Examine the methods by which ransomware infiltrated the system. Phishing, social engineering, unpatched software, and compromised credentials can all lead to ransomware.

3. Implement Security Training: Technology and cybersecurity tools by themselves cannot combat social engineering, which exploits human nature. Employees can develop a security intuition by participating in hands-on training exercises and using phishing simulation platforms.

4. Use Phishing-Resistant MFA and a Password Manager: Require staff members to create lengthy, intricate passwords. To prevent password reuse, sign up for a paid password manager (not one built into your browser). Use MFA that is resistant to phishing attempts to lower the risk of corporate account takeovers and identity theft.

5. Ensure Employee Preparedness: Employees should be aware of the procedures to follow in the case of a cyberattack, as well as the roles and duties assigned to incident responders and other key players.


From Personal Computer to Innovation Enabler: Unveiling the Future of Computing

 


Until now, the use of artificial intelligence (AI) has been largely invisible, automating processes and improving performance in the background. Generative AI, with its unprecedented adoption curve, is transforming the way humans interact with technology through natural language and conversation; it represents an irreversible paradigm shift that will change the face of technology for a lifetime.

According to a study of generative artificial intelligence's economic potential, if it is adopted to its full potential, artificial intelligence could increase the global economy by about 4 trillion dollars and the UK economy by about 31 billion pounds in GDP within the next decade. 

Non-generative AI and other forms of automation are predicted to add a further $11 trillion to global GDP, contributing to these figures. With great change comes great opportunity, so both caution and excitement are justified.

Until now, most users have encountered generative AI only via web browsers and the cloud, a platform with inherent challenges around reliability, speed, and privacy, since it is online-only, open-access, and corporate-owned. Running generative AI on the PC itself would give users the full advantages without those disadvantages.

Artificial intelligence is redefining the role of the personal computer as dramatically and fundamentally as the internet did because it empowers individuals to become creators of technology rather than just consumers of it. The AI PC will give people the opportunity to become creative technologists.

By leveraging engineering strengths, manufacturers can build machines optimized for running local AI models while still supporting hybrid cloud computing, so local inference can work offline. Local inference and data processing are also faster and more efficient, delivering lower latency, stronger data privacy and security, better energy efficiency, and lower access costs.

This kind of technology can turn users' PCs into intelligent personal companions that keep their information secure while performing tasks for them, because on-device AI can access the same specific emails, reports, and spreadsheets the user does. As AI is used to improve the performance and functionality of PCs, research into these uses will continue to accelerate. AI-powered cameras that optimize themselves for better video calls, with AI-enhanced noise reduction, voice enhancement, and framing, will improve the hybrid work experience.

With enterprise-grade security and privacy, these PCs can offer personalized generative AI solutions that users can trust with their data.

Performance gains are also significant: users working with ChatGPT-4 have been shown to complete tasks over 25% faster, produce work rated 40% higher by human evaluators, and finish 12 per cent more tasks. If users can integrate their internal company information and personalized working data into such a companion, while still being able to quickly analyse vast amounts of public information, one can imagine the use cases that combining the two could unlock, delivering the best and most relevant of both worlds.

User Privacy: Reddit Discloses FTC Probe into AI Data Licensing Ahead of IPO


In a surprising turn of events, Reddit, the popular social media platform, has revealed that it is under investigation by the Federal Trade Commission (FTC) regarding its practices related to AI data licensing. The disclosure comes just before Reddit's highly anticipated initial public offering (IPO), raising important questions about user privacy and the responsible use of data in the age of artificial intelligence.

The Investigation 

The FTC's inquiry focuses on Reddit's handling of user-generated content, particularly its sale, licensing, or sharing with third parties to train AI models. While the details of the investigation remain confidential, the fact that it is non-public suggests that the agency is taking the matter seriously. As Reddit prepares to go public, this scrutiny could have significant implications for the company's reputation and future growth.

User Privacy at Stake

At the heart of this issue lies the delicate balance between innovation and user privacy. Reddit, like many other platforms, collects vast amounts of data from its users—posts, comments, upvotes, and more. This data is a goldmine for AI developers seeking to improve algorithms, personalize recommendations, and enhance user experiences. However, the challenge lies in ensuring that this data is used ethically and transparently.

Transparency Matters

Reddit's disclosure sheds light on the need for greater transparency in data practices. Users entrust platforms with their personal information, assuming it will be used responsibly. When data is shared with third parties, especially for commercial purposes, users deserve to know. Transparency builds trust, and any opacity in data handling can erode that trust.

Informed Consent

Did Reddit users explicitly consent to their content being used for AI training? The answer is likely buried deep within the platform's terms of service, a document few users read thoroughly. Informed consent requires clear communication about data usage, including how it benefits users and what risks are involved. The FTC's investigation will likely scrutinize whether Reddit met these standards.

The AI Black Box

AI models are often considered "black boxes." Users contribute data, but they rarely understand how it is transformed into insights or recommendations. When Reddit licenses data to third parties, users lose control over how their content is used. The investigation should prompt a broader conversation about making AI processes more transparent and accountable.

Balancing Innovation and Responsibility

Reddit's situation is not unique. Companies across industries grapple with similar challenges. AI advancements promise incredible benefits, from personalized content to medical breakthroughs, but they also raise ethical dilemmas. As we move forward, striking the right balance between innovation and responsibility becomes paramount.

Industry Standards

The FTC's investigation could set a precedent for industry standards. Companies must adopt clear guidelines for data usage, especially when AI is involved. These guidelines should prioritize user consent, data anonymization, and accountability.

User Empowerment

Empowering users is crucial. Platforms should provide accessible tools for users to manage their data, control permissions, and understand how their content contributes to AI development. Transparency dashboards and granular consent options can empower users to make informed choices.

Responsible AI Partnerships

When licensing data, companies should choose partners committed to ethical AI practices. Collaboration should align with user expectations and respect privacy rights. Responsible partnerships benefit both users and the AI ecosystem.

Private AI Chatbots Not Safe From Hackers, Even With Encryption


In little over a year, AI assistants have become part of our daily lives, gaining access to our most private information and worries.

Sensitive information, such as personal health questions and professional consultations, is entrusted to these digital companions. While providers utilize encryption to protect user interactions, new research raises questions about how secure AI assistants may be.

Understanding the attack on AI Assistant Responses

According to a new study, researchers have discovered an attack that can infer AI assistant responses with startling accuracy.

The method uses large language models to refine its results and exploits a side channel present in most major AI assistants, with the exception of Google Gemini.

According to Offensive AI Research Lab, a passive adversary can identify the precise subject of more than half of all recorded responses by intercepting data packets sent back and forth between the user and the AI assistant.

Recognizing Token Privacy

This attack centres on a side channel embedded in the way AI assistants transmit tokens.

Tokens, which are encoded word representations, make real-time response transmission possible. But because the tokens are delivered one after the other, they expose a flaw known as the "token-length sequence." Through this channel, attackers can infer response content and jeopardize user privacy.
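A simplified sketch of the idea follows. The record overhead and example tokens are invented for illustration, and the real attack described by the Offensive AI Research Lab further refines the recovered length sequence with a large language model.

# Simplified illustration of the token-length side channel: if each token is sent in
# its own encrypted record, ciphertext sizes reveal token lengths even though the
# content stays encrypted. Overhead and token text here are invented.

RECORD_OVERHEAD = 21  # hypothetical fixed bytes of header/padding per streamed record

def observed_packet_sizes(tokens: list[str]) -> list[int]:
    """What a passive observer on the network would see for token-by-token streaming."""
    return [len(tok.encode()) + RECORD_OVERHEAD for tok in tokens]

response_tokens = ["I", " have", " a", " rash", " on", " my", " arm"]
sizes = observed_packet_sizes(response_tokens)

# The eavesdropper never sees the words, but recovers the token-length sequence,
# which an attacker-side language model can then use to guess likely responses.
token_lengths = [size - RECORD_OVERHEAD for size in sizes]
print("ciphertext sizes:", sizes)
print("inferred token lengths:", token_lengths)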

The Token Inference Attack: Deciphering Encrypted Responses

Researchers use a token inference attack to refine intercepted data by using LLMs to convert token sequences into comprehensible language. 

Yisroel Mirsky, the director of the Offensive AI Research Lab at Ben-Gurion University in Israel, stated in an email that "private chats sent from ChatGPT and other services can currently be read by anybody."

By using publicly accessible conversation data to train LLMs, researchers can decrypt responses with remarkably high accuracy. This technique leverages the predictability of AI assistant replies to enable contextual decryption of encrypted content, similar to a known plaintext attack.

An AI Chatbot's Anatomy: Understanding Tokenization

AI chatbots use tokens as the basic building blocks for text processing, which direct the creation and interpretation of conversation. 

To learn patterns and probabilities, LLMs examine large datasets of tokenized text during training. According to Ars Technica, tokens also enable real-time communication between users and AI assistants, allowing responses to be streamed and adapted to contextual cues as they are generated.

Current Vulnerabilities and Countermeasures

An important vulnerability is the real-time token transmission, which allows attackers to deduce response content based on packet length. 

Sequential delivery reveals answer data, while batch transmission hides individual token lengths. Reevaluating token transmission mechanisms is necessary to mitigate this risk and reduce susceptibility to passive adversaries.
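As a hedged sketch of the kind of fix this implies, the example below pads each streamed record up to a fixed bucket size so that short tokens become indistinguishable on the wire; batching several tokens per record has a similar effect. The bucket size and overhead are arbitrary illustrative values, not a documented mitigation from any specific provider.

# Illustrative mitigation sketch: pad every streamed record up to a fixed bucket size
# (or send tokens in batches) so ciphertext sizes no longer reveal token lengths.
# Bucket size and overhead are arbitrary values for illustration.

RECORD_OVERHEAD = 21
BUCKET_BYTES = 32

def padded_record_size(token: str) -> int:
    body = len(token.encode())
    padded = ((body // BUCKET_BYTES) + 1) * BUCKET_BYTES   # round up to the next bucket
    return padded + RECORD_OVERHEAD

tokens = ["I", " have", " a", " rash", " on", " my", " arm"]
print([padded_record_size(t) for t in tokens])   # every short token now looks identical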

Protecting the Privacy of Data in AI Interactions

Protecting user privacy is still critical as AI helpers develop. Reducing security threats requires implementing strong encryption techniques and improving token delivery mechanisms. 

By fixing flaws and improving data security protocols, providers can maintain users' faith and trust in AI technologies.

Safeguarding AI's Future

A new age of human-computer interaction is dawning with the introduction of AI helpers. But innovation also means accountability. 

Providers need to give data security and privacy top priority as researchers uncover vulnerabilities. Hackers are out there, and before we know it, our private chats could end up in someone else's hands.

Expert Urges iPhone and Android Users to Brace for 'AI Tsunami' Threat to Bank Accounts

 

In an interview with Techopedia, Frank Abagnale, a renowned figure in the field of security, provided invaluable advice for individuals navigating the complexities of cybersecurity in today's digital landscape. Abagnale, whose life inspired the Steven Spielberg film "Catch Me If You Can," emphasized the escalating threat posed by cybercrime, projected to reach a staggering $10.5 trillion by 2025, according to Cybersecurity Ventures.

Addressing the perpetual intersection of technology and crime, Abagnale remarked, "Technology breeds crime. It always has and always will." He highlighted the impending challenges brought forth by artificial intelligence (AI), particularly its potential to fuel a surge in various forms of cybercrimes and scams. Abagnale cautioned against the rising threat of deepfake technology, which enables the fabrication of convincing multimedia content, complicating efforts to discern authenticity online.

Deepfakes, generated by AI algorithms, can produce deceptive images, videos, and audio mimicking real individuals, often exploited by cybercriminals to orchestrate elaborate scams and extortion schemes. Abagnale stressed the indispensability of education in combating social engineering tactics, emphasizing the importance of empowering individuals to recognize and thwart manipulative schemes.

One prevalent form of cybercrime discussed was phishing, a deceitful practice wherein attackers manipulate individuals into divulging sensitive information, such as banking details or passwords. Phishing attempts typically manifest through unsolicited emails or text messages, characterized by suspicious links, urgent appeals, and grammatical errors.

To fortify defenses against social engineering and hacking attempts, Abagnale endorsed the adoption of passkey technology, heralding it as a pivotal advancement poised to supplant conventional username-password authentication methods. Passkeys, embedded digital credentials associated with user accounts and applications, streamline authentication processes, mitigating vulnerabilities associated with passwords.

Abagnale underscored the ubiquity of passkey technology across various devices, envisioning its eventual displacement of traditional login mechanisms. This transition, he asserted, is long overdue and represents a crucial stride towards enhancing digital security.

Additionally, Techopedia shared practical recommendations for safeguarding online accounts, advocating for regular review and pruning of unused or obsolete accounts. They also recommended utilizing tools like "Have I Been Pwned" to assess potential data breaches and adopting a cautious approach towards hyperlinks, assuming every link to be potentially malicious until verified.

Moreover, users are advised to exercise vigilance in verifying the authenticity of sender identities and message content before responding or taking any action, mitigating the risk of falling victim to cyber threats.

Hyper-Personalization in Retail: Benefits, Challenges, and the Gen-Z Dilemma

Hyper-Personalization in Retail

Customers often embrace hyper-personalization, defined by customized product suggestions and AI-powered support. Surveys from Marigold, Econsultancy, Rokt, and The Harris Poll reveal that a sizable majority of consumers, including 88% of Gen Zers, view personalized services as positive additions to their online buying experiences.

Adopting hyper-personalization could increase customer engagement and loyalty, and benefit retailers' bottom lines. According to a survey conducted in the United States by Retail Systems Research (RSR) and Coveo, 70% of merchants believe personalized offers will increase sales, indicating a move away from mass market promotions.

Adopting Hyper-Personalization

Hyper-personalization has drawbacks despite its possible advantages, especially in terms of data security and customer privacy issues. Retailers confront the difficult challenge of striking a balance between personalization and respect for privacy rights, as 78% of consumers worldwide show increasing vigilance about their private data.

Privacy and data security issues

Strong data privacy policies are a top priority for retailers to reduce the hazards connected with hyper-personalization. By implementing data clean rooms, personally identifiable information is protected and secure data sharing with third parties is made possible. By following privacy rules and regulations, retailers can increase consumer confidence and trust.
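As a rough illustration of one clean-room building block (a simplified sketch, not any vendor's product), direct identifiers can be replaced with salted hashes and sensitive fields bucketed before records are shared, so partners can match customers without ever seeing raw personal data.

# Simplified sketch: replace direct identifiers with salted hashes before sharing,
# so a partner can match records without seeing raw PII. Salt handling and the
# record schema here are illustrative only.
import hashlib

SHARED_SALT = b"example-campaign-salt"   # in practice negotiated and rotated securely

def pseudonymize(email: str) -> str:
    return hashlib.sha256(SHARED_SALT + email.strip().lower().encode()).hexdigest()

purchase_record = {
    "customer": pseudonymize("jane.doe@example.com"),  # pseudonym, not the address itself
    "category": "footwear",
    "spend_band": "50-100",                             # bucketed instead of exact amount
}
print(purchase_record)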

Retailers should take proactive measures targeted at empowering customers and improving their experiences to take advantage of the opportunities provided by hyper-personalization while resolving its drawbacks.

Customers can take control of their communication preferences and the data they share by setting up preference centers. Retailers build trust and openness by allowing customers to participate in the customizing process, which eventually improves customer relations.

Measurement and tracking of customer sentiment are critical elements of effective hyper-personalization campaigns. Retailers should make sure that personalized experiences are well-received by their target audience and strengthen brand loyalty and trust by routinely assessing consumer feedback and satisfaction levels.

In the retail industry, hyper-personalization is a paradigm shift that offers never-before-seen chances for revenue development and customer engagement. However, data security, privacy issues, and customer preferences must all be carefully taken into account while implementing it. 

In the digital age, businesses can negotiate the challenges of hyper-personalization and yet provide great customer experiences by putting an emphasis on empowerment, transparency, and ethical data practices.


Google's 'Woke' AI Troubles: Charting a Pragmatic Course

 


Google CEO Sundar Pichai told employees in a note on Tuesday that he is working to fix Gemini, the AI tool rolled out last year, saying that some of the text and image responses generated by the model were "biased" and "completely unacceptable".

Following inaccuracies found in some of the historical depictions it generated, the company was forced to suspend the tool's ability to create images of people last week. After being hammered for almost a week over a chatbot widely criticised as "woke", Google apologised for missing the mark and acknowledged it had got it wrong.

Despite the momentum of the criticism, the focus keeps shifting: this week, the barbs were aimed at Google for Gemini's apparent reluctance to generate images of white people, and its text responses have drawn similar criticism.

Google's artificial intelligence (AI) tool Gemini has been subjected to intense criticism and scrutiny, much of it driven by ongoing cultural clashes between left-leaning and right-leaning perspectives. As Google's counterpart to the viral chatbot ChatGPT, Gemini has faced significant backlash, illustrating the difficulties of navigating AI bias.

The controversy centred on images generated by Gemini that depicted historical figures inaccurately, and it escalated as responses to text prompts struck some users as overly politically correct or absurd. Google quickly acknowledged that the tool had been "missing the mark" and halted it.

The fallout continued, however, as Gemini's output kept fuelling controversy. Over the past year there has been a sense of disempowerment among Googlers on the ethical AI team, as the company accelerated its rollout of AI products to keep up with rivals such as OpenAI.

Including people of colour in Gemini's images showed that the company was considering diversity, but it was also clear that Google had failed to account for all the scenarios in which users might wish to create images.

In the view of Margaret Mitchell, former co-head of Google's Ethical AI research group and chief ethics scientist at Hugging Face, the company is genuinely grappling with these ethical challenges. Four years ago, Google was barely paying more than lip service to awareness of skin-tone diversity, but it has made great strides since then.

"It is kind of like taking two steps forward and taking one step backwards," Mitchell said, adding that Google should be given recognition for paying attention to this at all. More broadly, there is concern among Google employees that the social media pile-on will make it even harder for the internal teams responsible for mitigating the real-world harms of their AI products, such as whether the technology hides systemic prejudices.

Those employees worry they will not be able to accomplish that task by themselves. A Google employee said the outrage generated by the AI tool unintentionally sidelining a group that is already overrepresented in most training datasets could spur some at Google to argue for fewer protections or guardrails on the AI's outputs, something that, taken to an extreme, could end up hurting society.

For now, the search giant is focused on damage control. Demis Hassabis, the head of Google DeepMind, reportedly said on Feb. 26 that the company plans to bring the Gemini image feature back online within the next few weeks.

Over the weekend, however, conservative personalities continued their attacks on Google, particularly over the text responses Gemini provides to users. On paper, Google holds a considerable lead in the AI race.

The company makes and supplies its own artificial intelligence chips, runs its own cloud network (a prerequisite for AI computation), has access to enormous amounts of data, and serves an enormous base of customers. Google recruits top-tier AI talent, and its work in artificial intelligence enjoys widespread acclaim. A senior executive at a competing technology giant said that watching Gemini's missteps feels like watching defeat snatched from the jaws of victory.

From Classrooms to Cyberspace: The AI Takeover in EdTech

 


Recently, the intersection of artificial intelligence (AI) and education technology (EdTech) has become one of the most significant areas of both concern and growth in education. The rapid adoption of AI-based EdTech tools, accelerated by the shift to online learning during the COVID-19 pandemic, is creating a unique set of challenges and opportunities for educators, students, and parents.

With the rise of these technologies in education, there has been a significant discussion about student data security and privacy and the effectiveness and ethical concerns associated with utilizing these technologies. 

The advent of artificial intelligence in EdTech has brought innovative solutions for personalizing learning and boosting student engagement, alongside substantial risks to data privacy and security. The use of AI algorithms to evaluate student work and manage educational content has underscored how crucial unbiased data and transparent technology use are.

Incidents of data breaches and unauthorized data use have exposed vulnerabilities in the sector and forced EdTech companies to re-evaluate their security measures and data-handling practices. At the same time, AI's ability to customize learning experiences to individual needs has made education technology popular because it can optimize educational outcomes.

By using AI algorithms to analyze vast datasets, education platforms can identify patterns and tailor content to the unique needs of students. Intelligent tutoring systems are one of the key areas where AI is making a significant impact on engaging and effective education.

Machine learning algorithms identify a student's strengths and weaknesses and provide targeted feedback and individualized learning plans based on the assessment results. In this way, students receive tailored support that helps them better understand and retain academic content.
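A minimal, hypothetical sketch of what that targeting can look like in code: estimate per-topic mastery from recent answers and draw the next exercise from the weakest topic. The topics, scores, and selection rule are invented for illustration and are far simpler than what real tutoring systems use.

# Hypothetical adaptive-tutoring rule: estimate mastery per topic from recent answers
# and assign the next exercise from the weakest topic. All data invented.

recent_answers = {
    "fractions":   [1, 0, 1, 0, 0],   # 1 = correct, 0 = incorrect
    "decimals":    [1, 1, 1, 0, 1],
    "percentages": [1, 1, 1, 1, 1],
}

def mastery(results: list[int]) -> float:
    return sum(results) / len(results)

scores = {topic: mastery(r) for topic, r in recent_answers.items()}
next_topic = min(scores, key=scores.get)

print("mastery estimates:", scores)
print("next exercise drawn from:", next_topic)   # targeted practice on the weakest area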

Moreover, AI is transforming the assessment landscape, bringing automated grading systems to the classroom and streamlining the evaluation process for educators. This saves educators valuable time and fosters a more interactive learning atmosphere by letting them focus on giving constructive feedback.

Stringent security protocols and transparent data practices are clearly essential, as significant incidents such as the Federal Trade Commission's lawsuit against Edmodo over improper data use have shown. These incidents have prompted a re-examination of how educators use AI-based EdTech tools and how they safeguard information about their students.

AI has the prospect of bringing many benefits to education, but integrating it into the system requires a balanced approach that utilizes the advantages of technology while minimizing any potential risks associated with the process. 

Educators and schools urgently need to establish strong data-protection agreements with EdTech vendors that ensure privacy laws are followed and make clear how personal data will be used and protected from unauthorized access. They also need to be transparent with stakeholders - parents, students, and teachers - about the use of AI tools, the data they collect, and the protective measures in place.

Navigating this complex terrain successfully requires regular reviews of privacy policies, ongoing monitoring of the effectiveness of EdTech tools, and continual engagement with the evolving AI landscape. By taking a proactive, informed approach, educators can use AI to enhance the learning experience and improve student outcomes while remaining vigilant about the security and privacy of student data.

As artificial intelligence (AI) becomes ever more integrated into classrooms and teaching practices, it is heralding an era of remarkable opportunities for innovation and personalized learning.

The ethical, privacy, and security implications of these technologies, however, call for a critical review of their use. As the educational landscape continues to evolve, a concerted effort by educators, policymakers, and technology providers will be crucial to realizing the potential of AI in educational settings responsibly and effectively.

Analysis: AI-Driven Online Financial Scams Surge

 

Cybersecurity experts are sounding the alarm about a surge in online financial scams, driven by artificial intelligence (AI), which they warn is becoming increasingly difficult to control. This warning coincides with an investigation by AAP FactCheck into cryptocurrency scams targeting the Pacific Islands.

AAP FactCheck's analysis of over 100 Facebook accounts purporting to be crypto traders reveals deceptive tactics such as fake profile images, altered bank notifications, and false affiliations with prestigious financial institutions.

The experts point out that Pacific Island nations, with their low levels of financial and media literacy and under-resourced law enforcement, are particularly vulnerable. However, they emphasize that this issue extends globally.

In 2022, Australians lost over $3 billion to scams, with a significant portion involving fraudulent investments. Ken Gamble, co-founder of IFW Global, notes that AI is amplifying the sophistication of scams, enabling faster dissemination across social media platforms and rendering them challenging to combat effectively.

Gamble highlights that scammers are leveraging AI to adapt to local languages, enabling them to target victims worldwide. While the Pacific Islands are a prime target due to their limited law enforcement capabilities, organized criminal groups from various countries, including Israel, China, and Nigeria, are behind many of these schemes.

Victims recount their experiences, such as a woman in PNG who fell prey to a scam after her relative's Facebook account was hacked, resulting in a loss of over 15,000 kina.

Dan Halpin from Cybertrace underscores the necessity of a coordinated global response involving law enforcement, international organizations like Interpol, public awareness campaigns, regulatory enhancements, and cross-border collaboration.

Halpin stresses the importance of improving cyber literacy levels in the region to mitigate these risks. However, Gamble warns that without prioritizing this issue, fueled by AI advancements, the situation will only deteriorate further.

IBM Signals Major Paradigm Shift as Valid Account Attacks Surge

 


According to IBM X-Force's findings, poor credential management is leaving enterprises unable to distinguish legitimate authentication from unauthorized access. Many cybersecurity products are simply not designed to detect the misuse of valid credentials by illegitimate operators, a major problem for organizations trying to spot such abuse.

Charles Henderson, head of IBM X-Force, added that these products do not flag that activity as illegitimate. Alongside widespread credential reuse and a vast repository of valid credentials for sale on the dark web, IBM noted that cloud account credentials make up almost 90% of the assets sold there, further fuelling the rise of identity-based attacks.

Credential reuse, Henderson said, can give threat actors the same result as a single sign-on provider, letting them access a large number of accounts at once. Because users reuse credentials across many different accounts, the credentials themselves become de facto single sign-on.

In 2023, the number of phishing campaigns linked to attacks declined by 44% from 2022 as threat actors flocked to valid credentials. Even so, phishing still accounted for almost one in three of the incidents X-Force resolved in 2023.

It is not a technology shift for threat actors, Henderson said; they are taking low-cost routes of entry to maximize their return on investment, a shift in business strategy rather than in technology. According to IBM's report, organizations still need to correct the mistakes cybersecurity experts have warned about for years.

Henderson believes the industry should be dealing with newer and bigger problems by now, but he does not seem discouraged. The great thing about the report, in his view, is that it simplifies what needs to be done, and nothing it highlights is insurmountable.

Henderson explained that focusing on the right things and prioritizing them will solve the authentication problem, and that once authentication is solved, another problem will follow.

However, every success reduces attackers' return on investment, making it harder for them to commit crimes. Upending the business model that governs cybercrime takes sustained effort, and that is exactly what companies are trying to do.

Indian Government Warns Social Media Platforms Over Deepfake Misinformation

In a strong statement directed at social media platforms, the government of India has emphasized the critical need for swift identification and removal of misinformation, including deepfakes, or risk facing legal consequences. This warning follows a deepfake scandal involving the esteemed Indian actor Akshay Kumar. 

The controversy erupted after a digitally manipulated video, allegedly portraying Kumar endorsing a gaming application, surfaced online. Despite the actor's explicit denial of any involvement in such promotions, the video circulated widely across social media platforms, fueling concerns over the spread of fabricated content. 

The government's stance underscores the growing threat posed by deepfakes, which are increasingly being used to spread false information and manipulate public opinion. With the rise of sophisticated digital manipulation techniques, authorities are urging social media companies to implement robust measures to combat the dissemination of deceptive content. 

Amid these deepfake cases, Minister of State for Electronics and Information Technology Rajeev Chandrasekhar told the Rajya Sabha that fake news and deepfake videos, created with increasingly sophisticated technology, are causing serious problems.

He reminded everyone of the rules requiring social media companies to remove such fake content quickly. If they do not, they face serious consequences, including legal action. The government wants these companies to take responsibility and keep the internet safe and trustworthy.

Further Minister added under the IT Rules, 2021, “they (intermediaries) lose their safe harbour protection under section 79 of the IT Act and shall be liable for consequential action or prosecution as provided under any law for the time being in force including the IT Act and the Indian Penal Code, including section 469 of the IPC”. 

Additionally, several months ago, deepfake videos featuring other famous Indian celebrities went viral on social media. In response, the Government of India issued an advisory to top social media platforms, stating that they must remove such content within 24 hours or face consequences under the provisions of the IT Rules. 

The advisory highlighted that Section 66D of the IT Act, 2000, prescribes punishment— including imprisonment for up to 3 years and a fine of up to Rs 1 lakh (1,205 US Dollars)—for individuals found guilty of cheating by impersonation through the use of computer resources. 

Let's Understand Deepfake AI Technology

Deepfake, a form of artificial intelligence (AI), has emerged as a potent tool capable of creating convincing hoax images, sounds, and videos. Combining the concepts of deep learning and fakery, the term "deepfake" embodies the manipulation of digital content with sophisticated algorithms. 

Utilizing machine learning algorithms, deepfake technology compiles fabricated images and sounds, seamlessly stitching them together to create realistic scenarios and individuals that never existed or events that never took place. 

However, the widespread use of deepfake technology is often associated with malicious intent. Nefarious actors harness this technology to propagate false information and propaganda, manipulating public perception with deceptive content. 

For instance, deepfake videos may depict world leaders or celebrities making statements they never uttered, a phenomenon commonly known as "fake news," which has the power to sway public opinion and disrupt societal trust. 

Recent Deepfake Incidents Shake Global Landscape 

In Pakistan, reports have surfaced of deepfake content being utilized to influence the outcome of the Prime Minister election. 

Meanwhile, in Hong Kong, a finance worker fell victim to a sophisticated deepfake scam, resulting in the fraudulent transfer of $25 million after fraudsters impersonated a company executive during a video conference call. 

Additionally, Iran-backed hackers disrupted streaming services in the UAE by disseminating deepfake news, underscoring the potential for such technology to be weaponized for cyber warfare.

Can Face Biometrics Prevent AI-Generated Deepfakes?


AI-Generated Deepfakes on the Rise

The emergence of AI-generated deepfakes that attack face biometric systems poses a serious threat to the reliability of identity verification and authentication. Gartner, Inc.'s prediction that by 2026, 30% of businesses will doubt these technologies' dependability underscores how urgently this new threat needs to be addressed.

Deepfakes, or synthetic images that accurately imitate genuine human faces, are becoming more and more powerful tools in the toolbox of cybercriminals as artificial intelligence develops. These entities circumvent security mechanisms by taking advantage of the static nature of physical attributes like fingerprints, facial shapes, and eye sizes that are employed for authentication. 

Moreover, the capacity of deepfakes to accurately mimic human speech introduces an additional level of intricacy to the security problem, potentially evading voice recognition software. This changing environment draws attention to a serious flaw in biometric security technology and emphasizes the necessity for enterprises to reassess the effectiveness of their present security measures.

According to Gartner researcher Akif Khan, significant progress in AI technology over the past ten years has made it possible to create artificial faces that closely mimic genuine ones. Because these deepfakes reproduce the facial features of real individuals, they open up new avenues for cyberattacks and can defeat biometric verification systems.

As Khan notes, these developments have significant ramifications. When organizations cannot determine whether the person seeking access is authentic or a highly convincing deepfake, they may quickly begin to doubt the integrity of their identity verification procedures. That ambiguity puts the security protocols many rely on in serious jeopardy.

Deepfakes introduce complex challenges to biometric security measures by exploiting static data—unchanging physical characteristics such as eye size, face shape, or fingerprints—that authentication devices use to recognize individuals. The static nature of these attributes makes them vulnerable to replication by deepfakes, allowing unauthorized access to sensitive systems and data.
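To illustrate why a static face match alone is fragile, here is a hedged sketch of the layered check commonly paired with face matching in the industry: combining the match score with a liveness or presentation-attack-detection (PAD) signal. The threshold values and score names are hypothetical, not any vendor's implementation.

# Hypothetical sketch of layering face matching with liveness / presentation-attack
# detection (PAD): a high match score alone is not enough to grant access.
# Thresholds and score values are invented for illustration.

def decide(match_score: float, liveness_score: float) -> str:
    MATCH_THRESHOLD = 0.80      # similarity between probe image and enrolled template
    LIVENESS_THRESHOLD = 0.90   # confidence the probe comes from a live person, not a deepfake
    if match_score >= MATCH_THRESHOLD and liveness_score >= LIVENESS_THRESHOLD:
        return "accept"
    if match_score >= MATCH_THRESHOLD:
        return "step-up verification"   # face matches, but liveness is in doubt
    return "reject"

print(decide(match_score=0.93, liveness_score=0.97))  # accept
print(decide(match_score=0.95, liveness_score=0.40))  # likely injected or synthetic face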

Deepfakes and challenges

Additionally, the technology underpinning deepfakes has evolved to replicate human voices with remarkable accuracy. By dissecting audio recordings of speech into smaller fragments, AI systems can recreate a person’s vocal characteristics, enabling deepfakes to convincingly mimic someone’s voice for use in scripted or impromptu dialogue.


AI Takes Center Stage: Microsoft's Bold Move to Unparalleled Scalability

In the world of artificial intelligence, Microsoft is making serious waves: its recent success in deploying the technology at scale has made it one of the field's leading players. With a market value estimated at around $3tn, the company's AI capabilities have become the envy of the industry. 

AI holds enormous transformative potential, and Microsoft is leading the way in harnessing it. The company's impressive growth not only demonstrates its own potential but also underlines how central artificial intelligence has become to our digital environment. 

There is no doubt that artificial intelligence is reshaping business, transforming everything from healthcare to finance and beyond. Microsoft's drive to deploy AI solutions at scale reflects its broader commitment to changing the way we live and work. 

The tech giant holds a large stake in OpenAI, the maker of the ChatGPT bot released in 2022, whose launch triggered a wave of optimism about what the technology could unlock. OpenAI, however, has not been without controversy. 

The New York Times, an American newspaper, is suing OpenAI for alleged copyright violations in the training of the system. Microsoft is also named as a defendant in the lawsuit, which argues the firms should be liable for "billions of dollars" in damages. 

To "learn" by analysing massive amounts of data sourced from the internet, ChatGPT and other large language models (LLMs) analyze a vast amount of data. It is also important for Alphabet to keep an eye on artificial intelligence, as it updated investors on Tuesday as well. 

For the October-December quarter, Alphabet reported a 13 per cent year-over-year rise in revenue and profits of nearly $20.7bn. According to Sundar Pichai, the company's CEO, AI investments are also helping to improve Google's search, cloud computing, and YouTube businesses. 

Although both companies have enjoyed gains this year, their workforces have continued to shrink. Google's headcount is down almost 5% from last year, and it announced another round of cuts earlier in the month. 

In the same vein, Microsoft announced plans to eliminate 1,900 jobs, around 9% of the staff in its gaming division. The move follows its acquisition of Activision Blizzard, the maker of World of Warcraft and Call of Duty.

Privacy Watchdog Fines Italy’s Trento City for Privacy Breaches in Use of AI


Italy's privacy watchdog has fined the northern city of Trento for failing to comply with data protection rules in its use of artificial intelligence (AI) in street surveillance projects. 

Trento is the first local administration in Italy to be sanctioned by the GPDP watchdog over its use of data from AI tools. The city has been fined 50,000 euros ($54,225) and urged to delete the data gathered in the two European Union-funded projects. 

The privacy watchdog, regarded as one of the EU's most proactive authorities in assessing AI platform compliance with the bloc's data protection rules, temporarily banned the well-known chatbot ChatGPT in Italy. In 2021, it also found that a facial recognition system tested by the Italian Interior Ministry did not comply with privacy laws.

Rapid advances in AI across many industries have raised concerns about personal data security and privacy rights.

Following a thorough investigation of the Trento projects, the GPDP said in a statement that it had found "multiple violations of privacy regulations," while acknowledging that the municipality had acted in good faith.

It also said that the data collected in the projects had not been adequately anonymized and had been unlawfully shared with third-party entities. 

"The decision by the regulator highlights how the current legislation is totally insufficient to regulate the use of AI to analyse large amounts of data and improve city security," the city said in a statement.

Moreover, Italy's government, led by Prime Minister Giorgia Meloni, has pledged to put the AI revolution in the spotlight during its presidency of the Group of Seven (G7) major democracies.

EU lawmakers and governments reached a provisional agreement in December to regulate ChatGPT and other AI systems, bringing the technology a step closer to binding rules. One major source of contention is the use of AI for biometric surveillance.  

UK Cybersecurity Agency Issues Warning: AI to Enhance Authenticity of Scam Emails

The UK's cybersecurity agency has issued a warning that artificial intelligence (AI) advancements may make it challenging to distinguish between genuine and fraudulent emails, particularly those prompting users to reset passwords. The National Cyber Security Centre (NCSC), affiliated with the GCHQ spy agency, highlighted the increasing sophistication of AI tools, such as generative AI, which can create convincing text, voice, and images based on simple prompts.

According to the NCSC's assessment of AI's impact on cyber threats, it anticipates a significant rise in cyber-attacks over the next two years. Generative AI, coupled with large language models like those powering chatbots, is expected to complicate the identification of various attack types, including phishing, spoofing, and social engineering.

The agency emphasized that by 2025, assessing the legitimacy of emails or password reset requests would become challenging for individuals, regardless of their cybersecurity expertise. Ransomware attacks, which have affected institutions like the British Library and Royal Mail, are also projected to increase. The NCSC pointed out that AI's sophistication lowers the entry barrier for amateur cybercriminals, enabling them to paralyze computer systems, extract sensitive data, and demand cryptocurrency ransoms.

Generative AI tools are already being used to make approaches to potential victims more convincing, crafting fake "lure documents" free of the spelling and grammar mistakes that typically give phishing attacks away. While generative AI is not expected to improve the effectiveness of ransomware code itself, it will help attackers identify and research potential targets.

In 2022, the UK reported 706 ransomware incidents, compared to 694 in 2021, according to the Information Commissioner's Office. The NCSC warned that state actors likely possess enough malware to train AI models capable of creating new code that can evade security measures.

The report acknowledged AI's dual role, stating that it can also serve as a defensive tool by detecting attacks and designing more secure systems. In response to the rising threat of ransomware, the UK government introduced new guidelines, the "Cyber Governance Code of Practice," urging businesses to prioritize information security alongside financial and legal management.

Despite these measures, cybersecurity experts, including Ciaran Martin, the former head of the NCSC, have called for stronger actions. Martin emphasized the need for a fundamental shift in approaching ransomware threats, suggesting stronger rules on ransom payments and abandoning unrealistic notions of retaliatory measures.

New AI System Aids Early Detection of Deadly Pancreatic Cancer Cases

New research has unveiled a novel AI system designed to improve detection of the most prevalent type of pancreatic cancer. Identifying the disease is difficult because the pancreas is obscured by surrounding organs, making tumors hard to spot. Moreover, symptoms rarely appear in the early stages, so diagnoses often come only after the cancer has spread, when the chances of a cure are much lower.

To address this, a collaborative effort between MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Limor Appelbaum from Beth Israel Deaconess Medical Center produced an AI system aimed at predicting the likelihood of an individual developing pancreatic ductal adenocarcinoma (PDAC), the predominant form of the cancer. This AI system, named PRISM, demonstrated superior performance compared to existing diagnostic standards, presenting the potential for future clinical applications in identifying candidates for early screening or testing, ultimately leading to improved outcomes.

The researchers aspired to construct a model capable of forecasting a patient's risk of PDAC diagnosis within the next six to 18 months, facilitating early detection and treatment. Leveraging existing electronic health records, the PRISM system comprises two AI models. The first model, utilizing artificial neural networks, analyzes patterns in data such as age, medical history, and lab results to calculate a personalized risk score. The second model, employing a simpler algorithm, processes the same data to generate a comparable score.
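As a rough, purely illustrative sketch of that two-model setup (this is not the actual PRISM code, and it runs on synthetic stand-in features rather than real electronic health records), one model below is a small feed-forward neural network and the other a plain logistic regression, both producing a risk score from the same tabular inputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for EHR-derived features: age, one lab value,
# a count of relevant prior diagnoses, and a smoking flag.
rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.normal(62, 12, n),        # age in years
    rng.normal(100, 20, n),       # illustrative lab result
    rng.poisson(1.5, n),          # number of relevant prior diagnoses
    rng.integers(0, 2, n),        # smoking history (0/1)
])
# Toy outcome: risk rises with age, the lab value, diagnoses, and smoking.
logit = -9.0 + 0.05 * X[:, 0] + 0.02 * X[:, 1] + 0.4 * X[:, 2] + 0.8 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model 1: a small feed-forward neural network.
neural_net = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
# Model 2: a simpler, more transparent algorithm on the same features.
simple_model = make_pipeline(StandardScaler(), LogisticRegression())

neural_net.fit(X_train, y_train)
simple_model.fit(X_train, y_train)

# Each model turns a patient's record into a personalized risk score;
# patients above a chosen cut-off would be flagged for early screening.
patient = np.array([[71, 128, 3, 1]])
print("neural-net risk score:  ", neural_net.predict_proba(patient)[0, 1])
print("simple-model risk score:", simple_model.predict_proba(patient)[0, 1])
```

In a real system the inputs would be the kinds of fields the researchers describe, such as age, medical history, and lab results drawn from de-identified records, and the cut-off for flagging a patient would be tuned against clinical targets rather than chosen arbitrarily.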

The team fed anonymized data from 6 million electronic health records, including 35,387 PDAC cases, from 55 U.S. healthcare organizations into the models. By evaluating PDAC risk every 90 days, the neural network identified 35% of eventual pancreatic cancer cases as high risk six to 18 months before diagnosis, signifying a notable advancement over existing screening systems. With pancreatic cancer lacking routine screening recommendations for the general population, the current criteria capture only around 10% of cases.

While the AI system shows promise in early detection, experts caution that the model's impact depends on its ability to identify cases early enough for effective treatment. Michael Goggins, a pancreatic cancer specialist at Johns Hopkins University School of Medicine, emphasizes the importance of early detection and acknowledges the potential improvement offered by the PRISM system.

The study, while retrospective, sets the groundwork for future investigations involving real-time data and outcome assessments. The research team acknowledges potential challenges related to the generalizability of AI models across different healthcare organizations, emphasizing the need for diverse datasets. PRISM holds promise for deployment in two ways: selectively recommending pancreatic cancer testing for specific patients and initiating broader screenings using blood or saliva tests for asymptomatic individuals. Limor Appelbaum envisions the transition of such models from academic literature to clinical practice, emphasizing their life-saving potential.